Tag: salary system

  • Care Matters, or How To Distribute Autonomy and Not Break Things in the Process

    At Lunar Logic, we have no formal managers, and anyone can make any decision. This introduction is typically enough to pique people’s curiosity (or, rather, trigger their disbelief).

    One of the most interesting aspects of such an organizational culture is the salary system.

    Since we can all decide about salaries, ours and our colleagues’ alike, it naturally follows that we know the whole payroll. Oh my, can that trigger a flame war.

    Transparent Salaries

    I wrote about our experiments with open salaries at Lunar in the past. At least one of those posts got hot on Hacker News—my “beloved” place for respectful discussions.

    As you may guess, not all remarks were supportive.

    Comments about transparent salaries from Hacker News

    My favorite, though?

    IT WILL FAIL. Salaries are not open for a reason. It is against human nature.

    No. Can’t do. Because it is “against human nature.” Sorry, Lunar, I guess. You’re doomed.

    On a more serious note, many comments mentioned that transparent salaries may/will piss people off.

    The thing they missed was that transparency and autonomy must always move together. You can’t just pin the payroll to a wall near a water cooler. It will, indeed, trigger only frustration.

    By the same token, you can’t let people decide about salaries if they don’t know who earns what. What kind of decisions would you end up with?

    So, whatever the system, it has to enable salary transparency and give people influence over who earns what.

    Cautionary Tale

    Several years back, I had an opportunity to consult for a company that was doing open salaries. Their problem? Selfishness.

    In their system, everyone could periodically decide on their raise (within limits). However, after each round of raises, the company went into the red. All the profits they were making, and more, went to increased salaries.

    The following months were spent recovering from the situation and regaining profitability, only to repeat the cycle again next time.

    Their education efforts had only a marginal effect. Some people were convinced, but seeing colleagues aim for the maximum possible raise, most yielded to the trend.

    The cycle perpetuated itself.

    So what went wrong? After all, they followed the rulebook. They merged autonomy with transparency. And not only with salaries. The company’s profit and loss statements were transparent, too.

    It’s just that people didn’t care.

    Care

    Over the years, when I spoke about distributed autonomy, I struggled to nail down one aspect of it. When we get people involved in decision-making, we want them to feel responsible for the outcomes of their decisions.

    The problem is that people interpret the word differently. I was once on the sidelines of a discussion about responsibility versus accountability. People were arguing about which one was intrinsic and which was extrinsic.

    As the only non-native English speaker in the room, I checked the dictionary definitions. Funny thing, both sides were wrong.

    Still, I’d rather go with how people understand the term (living language) rather than with dictionary definitions.

    So, what I mean when I refer to being responsible for the outcomes of one’s decisions is this intrinsic feeling.

    I can’t make someone feel responsible/accountable for the outcomes of their call. At most, I can express my expectations and trigger appropriate consequences.

    To dodge the semantic discussion altogether, I picked the word agency instead.

    The only problem is that it translates awfully to my native Polish. Frustrated, I started a chat with my friend, and he was like, “Isn’t the thing you describe just care?”

    He nailed it.

    Care strongly suggests intrinsic motivation, and “caring for decision’s outcomes” is a perfect frame.

    How Do You Get People to Care?

    The story of the company with self-set salaries (and many a comment in the Hacker News thread) reveals a lack of care for the organization.

    “As long as I get my fat raise, I don’t care if the company goes under.”

    So, how do you change such perspectives?

    Care, not unlike trust, is a two-way relationship. If one side doesn’t care for the other, it shouldn’t expect anything else in return. And similarly to trust, one builds care in small steps.

    Imagine what would happen if Amazon adopted open salaries for its warehouse workers. Would you expect them to have any restraint? I didn’t think so. But then, all Amazon shows these people is how it doesn’t give a damn about them.

    And that can’t be changed in one quick move, with Jeff Bezos giving a pep talk about making Amazon “Earth’s best employer” (yup, he did that).

    First, it’s the facts, not words, that count. Second, it would be a hell of a leap for any company, let alone a behemoth employing way more than a million people.

    As I’m writing this, I realize that taking care of people’s well-being is a prerequisite for them to care about the company. And that, in turn, is required in order to distribute autonomy.

    The Role of Care

    The trigger to write this post was a conversation earlier today. We’re organizing a company off-site, and I was asked for my take on paying for something from the company’s pocket.

    Unsurprisingly, the frame of the question was, “Can we spend 250 EUR on something?”

    Now, a little bit of context may help here. Last year was brutal for us business-wise. Many people made concessions to keep us afloat. Given all that, my personal take was that if I had 250 EUR to spend, I’d rather spend it differently.

    But that wasn’t my answer.

    My answer was:

    • Everybody knows our P&L
    • Everybody knows the invoices we issued last month
    • Everybody knows the costs we have to cover this month
    • Everybody knows the broader context, including people’s concessions
    • We have autonomy
    • Go ahead, make your decision

    In the end, we’re doing a potluck-style collection.

    Sure, it was just a 250 EUR decision. That’s the canonical case of a decision that cannot sink a company. But the end of that story is exactly why I’m not worried about putting decisions worth a hundred or a thousand times as much in our people’s hands.

    We’ve never gone under because we’ve given ourselves too many selfish raises, even though we could have. The answer to why lies in how we deal with those small-scale things.

    After all, care is as much a prerequisite for distributed autonomy as alignment is.


    This is the third part of a short series of essays on autonomy and alignment. Published so far:

    Feel free to subscribe/follow here, on Bluesky, or LinkedIn for updates.
    I also run the Pre-Pre-Seed Substack, which is dedicated to discussing early-stage products.

  • The Case for Subjective Assessment System

    I admit I designed or helped to design 5 assessment systems in my career. No, I’m not proud. Frankly, I’m pretty sure the first 3 brought net negative value, i.e., they created more harm than value.

    So when we set out to reinvent our salary system, I said publicly (and more than once): “An assessment system? Over my dead body.”

    Fast forward 7 years, and it was time to eat my own words. While the change to transparent salaries was a big thing, and literally no one would go back to what we had before, we saw issues piling up.

    Interestingly, among the issues were:

    • Raises not happening frequently enough
    • Too little money spent on salary increases

    And yes, that was all in a system where anyone could self-set their own salary.

    Long story short, the best way out we could devise involved introducing an assessment system.

    The Fallacy of Assessment

    Assessment systems are a standard in organizations, big and small. There are broadly accepted good practices, like including perspectives from everyone around (so-called 360 assessments) or focusing on facts (outcomes, observable behaviors, etc.).

    The bottom line is the aspiration to improve the objectivity of any given assessment.

    And that’s fool’s gold.

    The fallacy of assessment systems in a professional context is that there’s no such thing as objectivity. There can’t be.

    As Marcus Buckingham and Ashley Goodall eloquently explain in Nine Lies About Work, we’re fundamentally incapable of answering questions like “How good are Pawel’s software development skills?” or “How good of a leader is Pawel?”

    That holds true even if we pretend to assess observable artifacts, like Pawel’s code or his behaviors during staff meetings. We could observe but a fraction of what a person employs to deliver the outcomes.

    To make things worse, even when we consciously focus on observing Pawel’s visible behaviors and the outcomes of his work, we will only see a tiny part of it. After all, we have our own work, too, not to mention all the other people on the team we need to take care of as well.

    Curiously enough, the person closest to objectively assessing anything about Pawel is Pawel himself. He knows the most about his trials and tribulations, as well as his triumphs. He knows when he thrives and when he struggles.

    Except we intuitively dismiss his assessment as subjective.

    Thinking, Fast and Slow

    One of the fundamental observations Daniel Kahneman made in his Thinking, Fast and Slow was that whenever our brain faces a difficult question, it subconsciously substitutes a similar but simpler-to-answer one.

    Another one is that we typically make snap decisions and only then look for arguments supporting our choice (also actively dismissing those that would go against it).

    Couple these two with our assessment example and the challenge described above. The question about Pawel’s development skills is difficult. The most honest answer I can give is that I don’t really know, and the best I could do would be to spend a lot of time trying to inform myself better, but a) I don’t want to do this, and b) it would only improve my answer by a thin margin.

    But here’s a simple question that I can answer instantaneously. What is my opinion about Pawel’s development skills? Yes, the change may look subtle, but it makes all the difference.

    The new question doesn’t force me to consider facts, outcomes, and observable behaviors. It goes straight for my judgment, which, of course, may take data into account. It may also include all the prejudice, bias, hearsay, and other sources of misinformation.

    Obviously, it is explicitly subjective.

    Yet, our lazy brain answers the simple subjective question while pretending it deals with a difficult and more objective one. Oh, and once it has the answer, it follows up by justifying it with all the supporting arguments it can find.

    All the efforts to make the assessment system “more objective” are then thwarted by our brains.

    No wonder so many people consider their assessment systems unfair.

    The Way Out

    Following Buckingham and Goodall’s advice, one might ask what would happen if we ditched the pretense of objectivity and embraced subjectivity. While I can’t give a general answer, here’s a story of what happened at Lunar.

    The starting point was that we had a 360 assessment in place. We designed it around observable behaviors. The measured satisfaction with the assessment (and, more broadly, the salary system) was a whopping 80%.

    If it ain’t broke, don’t fix it, they say. So, naturally, we went and fixed it anyway. We redefined our categories of flexibility, experience, effectiveness, and people skills with straightforward and subjective questions, such as:

    • I would always want [that person] to be my leader.
    • When a team requires multiple organizational and technical skills, I would always want [that person] to be on the team.
    • Whenever there’s a role no one is willing to take, I would always count on [that person] to take the responsibility.
    • etc.

    We answer them on a scale from “I strongly disagree” to “I strongly agree.” Also, the “always” qualifier is important as it stretches the answers across the scale.
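    The post doesn’t say how those answers become comparable numbers, so here is a minimal, purely illustrative sketch. The 1–5 mapping, the “neutral” midpoint, and the plain average are all my assumptions; the text only specifies the scale’s endpoints.

```python
# Hypothetical sketch: turning Likert-scale answers into a numeric score.
# The mapping and the averaging are assumptions for illustration only;
# the post does not describe its actual scoring mechanics.

SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def person_score(answers):
    """Average the numeric values of the answers one person received."""
    values = [SCALE[a] for a in answers]
    return sum(values) / len(values)

print(person_score(["agree", "strongly agree", "neutral"]))  # prints 4.0
```

    Any monotone mapping would do; the point is only that subjective answers, once on a common scale, can be aggregated and compared across people and across assessment rounds.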

    Initially, there were about a dozen such questions. We wanted to ensure that all the aspects of the existing solution were covered.

    After another iteration of the “old” assessments, I suggested we experimentally try the new approach and compare the results. It should have been an easy sell.

    That’s when all hell broke loose.

    I didn’t appreciate how much resistance there would be to trying something new, even though it wasn’t supposed to affect anything, at least not until we had validated the outcomes.

    The new approach was considered explicitly subjective (as opposed to the perceived objectivity of the old one). People expected its outcomes would undermine the fairness of the payroll. They feared we would lose a lot of sophistication and detail in assessments, as the new questions were necessarily broad and generic; for example, we stopped mentioning any specific technologies we work with. Then, there was the concern that people would start playing favorites (I like you, so sure, I would love you to be my leader).

    The Experiment

    No amount of discussion could get everyone on board, but at least I could play the “let’s try it” card. If we didn’t like the outcome, nothing would change, after all.

    What did we learn?

    It was easier to answer the questions. We got significantly more answers in the new scheme (up from 47% to 65% of all possible responses).

    It also took much less time to answer the subjective questions than when we pretended to be objective (about half the time spent on the activity, despite providing more responses).

    The best thing?

    The results correlated almost perfectly with the old system. The correlation coefficient was 0.98. That’s math for “these series are as identical as it gets.”

    We literally generated the same results (or, in terms of quantity, better) with half the effort.

    Many still felt uncomfortable with a blatant admission that we use individual opinions as assessments. Nonetheless, the experiment’s outcome spoke for itself.

    We have been doing it all along; we’ve just cultivated an illusion of objectivity.

    Summary

    Two years down the line, no one looks back. We reduced the set of questions to just five. And if I wanted to get super-radical, we could stick with just one: “I would always want [the person] to be my leader.” This single response correlates most with all the categories we used to assess.

    When you consider it, it makes perfect sense. There are many interdisciplinary traits and skills we’d expect from an ideal leader. We’d want them to support us personally and professionally. We’d turn to them with problems, both technical and interpersonal. We’d look to them for guidance and challenge. We’d want them to make the team better. Be fair to everyone. And many more.

    If someone does well on all those accounts (and more), it is only fair to expect them to shine in a traditional skill/trait-based assessment, too.

    Still, we want to explicitly stress a few other aspects of our work as well. Even so, that leaves us with only five questions, which, in most cases, we can easily answer off the top of our heads.

    I won’t say that we somehow started loving assessments. No one does that. But we:

    • get the same quality
    • but better quantity
    • by spending much less time on the activity
    • and are similarly satisfied with the system

    What’s not to love?