As I write these words, I’m on my way home from Lean Agile Scotland. While summarizing the event, Chris McDermott mentioned a few themes, two of them being organizational culture and experimentation.
Experimentation is definitely my thing. I am into organizational culture too. I should have been happy when Chris rightly pointed to both as themes of the event. Yet at that very moment, alarm lights went off in my head.
We refer a lot to safe to fail experiments. We talk about antifragile or resilient environments. And then we quickly turn to organizational culture.
The term culture hacking pops up frequently.
And I’m scared.
The reason is that in most cases there is no safe to fail experiment when we talk about organizational culture. Culture is an outcome of everyone’s behaviors. It is ultimately about people. In other words, an experiment on the culture, or a culture hack if you will, means changing people’s behaviors.
If you mess it up, more often than not, there’s no coming back. We may introduce a new factor that would influence how people behave. However, removing that factor does not bring the old behaviors back. Not only that though. Often there’s no simple way to introduce another factor that would bring back the old status quo.
There’s a study which showed that introducing a fine for showing up late at a daycare to pick up a child resulted in more parents being late, as they felt the fine excused their behavior. This was quite an unexpected outcome of the experiment. However, the even more interesting part is that removing the fine did not affect parents’ behaviors at all – they kept showing up late more frequently than before the experiment.
It’s natural. Our behaviors are an outcome of the constraints of the environment and of our experience, knowledge, and wisdom.
We affect behaviors by changing the constraints. The change is not mechanistic, though. We can’t exactly predict what’s going to happen. At the same time, the change affects our experience, knowledge, and wisdom, and thus irreversibly changes the bottom line.
I can give you a simple example. When we decided to go transparent with salaries at Lunar Logic, it was a huge cultural experiment. What I knew from the very beginning, though, was that there was no coming back. Sure, we could make salaries “non-transparent” again. Would that erase what people learned about everyone’s salary? No. Would it change the fact that they now look at each other through the perspective of that knowledge?
It might even have affected the way they look at the company in a negative way, as suddenly some of the authority they’d gained would have been taken away. In other words, even from that perspective, they’d have been better off if such an experiment hadn’t been run at all than if it had been tried and rolled back.
I’m all for experimentation. I definitely do prefer safe to fail experiments. I am however aware that there are whole areas where such experiments are impossible most of the time, if not all of the time.
Culture is one such area. It doesn’t mean that we shouldn’t be experimenting with culture. It’s just that we should be aware of the stakes. If you’re just flailing around with your culture hacks, there will be casualties. Having an experimentation mindset is a lousy excuse.
I guess part of my pet peeve with understanding tools and methods is exactly this. When we introduce a new constraint – and a method or a tool is a constraint – we invariably change the environment and thus influence the culture. Sometimes irreversibly.
It gets even trickier when the direct goal of the experiment is to change the culture. Without understanding what we’re doing, it’s highly likely that such a culture hack will backfire. Each time I run an experiment on culture, I assume the change will be irreversible and then ask myself once again: do I really want to run it?
If not, I simply don’t mess with the culture.
3 comments
Hi Pawel,
Great post. I understand your reservations. As you know, I have declared war on personal and interpersonal inertia, so I run change experiments all the time, at a very rapid pace (specifically by applying PopcornFlow). I’d like to make a couple of observations.
I might be wrong (I haven’t read the study yet), but I sense that the ‘parent fine’ experiment may have been applied ‘to’ people (parents = victims) rather than ‘co-designed’ with them. That’s okay (often you don’t have the luxury of involving people directly), but it’s likely a risk factor to consider.
“Being late” may or may not have been perceived as a big problem by the parents themselves.
Parents evidently gladly paid a small fee for the option to be late.
Deterrents hardly work even in societies where the death penalty is at stake, so a small fee is laughable at best. My dentist routinely calls me the day before and sends me text reminders prior to an appointment. That’s a preventive rather than a contingent action (can’t say the same for the parents). Maybe a simple text message would be something to consider next, at least for ‘risky parents’.
But, as you also said, “rolling back” may or may not be feasible. Once you probe a complex system, you have likely influenced it in irreversible ways. I never expect nor promise a return to the status quo if things don’t work as expected. Like a virus, to change is to mutate. Some of it is evolution and improvement. After all, if we didn’t have a problem in the first place, why would we even care to change the status quo? I do, however, expect to set a date and promise to “revisit” our actions. “Rolling back” may still be an option. But, for sure, there are other possibilities to consider, such as:
1. Persist with the experiment a little longer (if we are not sure or need more time)
2. Fully commit to the strategy we explored with this experiment (if we are happy to do so)
3. Launch other experiments based on the new knowledge
4. Explore or re-evaluate other options
5. Revisit the problem itself (e.g. “is it still a problem?”, “is it the right problem?”)
Also, the second experiment, with the transparent salaries, was a bold move. Assuming that the problem is shared and people care, I’d consider it a coherent option to explore. In PopcornFlow, I delay commitment by “exploring” options (directions/strategies) and “committing” to experiments instead :-) An experiment would be a small step in a given direction. For example – and I’m really thinking out loud here – given that direction, you could have reduced scope and limited it to a subset of people (a team? people hired in the last 2 years? people hired for more than x years?) or maybe just to this year’s/quarter’s bonus distribution (if 1. it exists, and 2. it’s not uniformly distributed, as sometimes happens in orgs). In fact, you could have introduced transparent bonuses to test whether your assumptions about the current salary distribution were, in fact, good assumptions. Yes, if things don’t go as expected, much less is at stake, and you can try even more things before going full steam.
I’m just playing anyway. For you, my friend, it’s too late :-)
Hi Pawel,
I read the salary post just now, so I see that you are, indeed, going through incremental stages. Great. Like Mike, I also wonder “what problem are we trying to solve?”, “Guys, is it a shared problem or is it just my opinionated opinion?” and, “If we all agree that this is a problem, is this a problem we should be addressing now?”.
Anyway, great job.
Lunar Logic looks like an awesome place to learn and evolve.
@Claudio – The open salaries experiment didn’t have a subset or a partial experiment in it that I’d consider viable. The options you explore are worth considering (e.g. Jimdo went with open salaries within a single team as a lead experiment).
However, in our context it would have required triggering another experiment prior to this one: either introducing teams or introducing bonuses. Neither was something we wanted to do, nor would either have left us unchanged.
There was an opt-in mechanism designed in. In other words, everyone was invited to the experiment but no one was forced in. We also did a lot of groundwork to explain the motivations and consequences of doing it. By no means was it a gung-ho approach. At the same time, I would lie if I said I knew the outcomes.
And I knew there was no going back and that it wasn’t safe to fail. That’s why we made it as fail-safe as possible.
Interestingly enough, what I keep saying about how it went is that I knew it would be much easier than everybody thought – and in reality it was even easier than I thought.
I guess the deliberate approach to the experiment paid off. It was, though, nothing like a culture hack in the meaning most people attach to those words.