Author: Pawel Brodzinski

  • Pitfalls of Kanban Series: Stalled Board

    One of the signals that something may be wrong with a Kanban implementation is a board design that doesn’t change over time. Of course, rapid changes to the board are more likely during the early stages of the implementation, but even in mature teams a board that looks exactly like it did a year ago is something of a warning light. For very fresh Kanban teams I would expect the board design to differ month to month.

    Actually, a stalled board is more a symptom than a problem in its own right. The root cause is likely that the team has stopped improving their process. On one hand, it is common that the potential for change is significantly higher at the beginning and diminishes over time. On the other, I’ve yet to see a perfect team that doesn’t need to improve their process at all.

    In such cases, what can one do to catalyze opportunities to discuss the board design?

    One idea that comes in very handy is to watch for situations in which a team member takes an index card to update its status and, for whatever reason, struggles to find the right place on the board for it. Maybe there isn’t a relevant stage in the value stream, or the work currently being done isn’t in line with the rest of the process, or the task is unusual, or what have you. This kind of situation is a great trigger for a discussion on how well the board is aligned with what the team really does and how the work is done.

    Another idea is to dedicate a retrospective to discussing the board alone. Such a constrained retro is a natural opportunity to look for board-related issues or improvements. I’m thinking of the class of issues that might not be painful enough to be raised as the biggest problems on a regular basis; at the same time, we know that tiny changes in board design or WIP limits can introduce tremendous changes in people’s behavior.

    There is also a bigger gun – a significant board face-lift. Following the idea Jim Benson shared during his ACE Conference keynote, teams find it easy to describe and define about 80% of the processes they follow. The rest seems vague, and that’s always the tricky part of value stream mapping. This is, by the way, totally aligned with my experience with such exercises – I’ve yet to meet a team that can define the way they work instantly and without arguing.

    Of course, introducing visualization helps to sort this out, although it’s not that rare that we fall into the trap of idealizing our flow or simply getting used to whatever is on the board.

    Then you can always use the ultimate weapon and redesign your board from scratch. People will probably have in mind the board you’ve just wiped clean and will mimic it to some extent. Even so, odds are that if you start the discussion from the very beginning – the moment when work items of different classes of service arrive at the team – some new insight will pop up, helping you to improve the board.

    On a side note: it is also a good moment to discuss what exactly you put on an index card. I treat it as an integral part of board design, as you will need one design for standard-sized work items and a different one for highly variable projects on a portfolio board.

    Read the whole Kanban pitfalls series.

  • Scott Berkun on Consultants and Practitioners

    Continuing the discussion on the differing perspectives of consultants and practitioners, I asked Scott Berkun a few questions on the subject. I chose Scott because he has recently been juggling both roles: while publishing his next book – Mindfire: Big Ideas for Curious Minds – he spent a year and a half in something like a regular job at WordPress.com.

    Not only was I curious about Scott’s views on the subject, but I also think we can learn a lot from him, especially those of us who are considering combining both roles. So here are a few gems of knowledge gleaned from Scott.

    Scott, you’ve recently left Automattic, where you worked for some time, and it prompted me to ask you a few questions about your spell there. The difference between the insider and outsider, or practitioner and consultant, perspective is something that has drawn my interest for some time. You decided to try living both lives concurrently, which gives you a unique perspective on the subject.

    Reading your blog and your tweets over time, my impression was that your enthusiasm for having a regular job while pursuing your career as a writer and consultant was diminishing. Was that only an impression, or is there something more to it?

    The plan was always to stay at WordPress.com for about a year. It’s a great place to work and it was hard to leave. Any complaining I did was probably just to help convince myself I needed to leave, which was hard to do as I enjoyed it so much. I stayed there for 18 months, 6 months longer than I’d planned.

    What was the biggest challenge of having two such different careers at the same time?

    Having two careers sucks. I don’t recommend it. My success in writing depends on full commitment. I can write books because I have no excuses not to. I succeed by focus. It’s the primary thing I’m supposed to do. Having two jobs divided my energy and I don’t have the discipline needed to make up for the gap. It also changed my free time. I noticed immediately the amount of reading I did dropped dramatically. I used to read about a book every week or so. That dropped to a book every few months. Having two jobs meant my brain demanded idle time which came at the expense of reading. I felt like I was working all the time, which isn’t healthy for anyone.

    And what was your biggest lesson from this time?

    The next book is about my experience working at WordPress.com and what I learned will be well documented there. Professionally I learned creating culture is the most powerful thing a leader does, and WordPress.com has done that exceedingly well.

    Do you think that combining consultancy and a regular job is doable in the long run?

    I don’t know why anyone would want to work that much in the same field, honestly. For anyone who thinks I’m good at managing teams, or writing books, a huge reason why is the other interests and experiences I’ve had in my life that have nothing to do with leadership or software or writing.

    Do you plan to get another job at some point in the future? Why?

    As long as I’m paid to speak to people who are leaders and managers, it’s wise for me to periodically go back to working in an organization where I’m leading and managing people. It forced me to test how much of my own advice I actually practice, and refreshed my memory on what the real challenges are. Any guru or expert who hasn’t done the thing they’re lecturing others about in years should have their credibility questioned. I figure once a decade or so it’s a necessary exercise for any guru with integrity.

    Why should we consider moving to (or staying in) a consultancy role?

    When I first quit to be on my own I did a lot of consulting. As soon as the books started doing well and I had more requests to speak, I did less and less of it. I do it rarely now. Consultancy can be liberating as you are called in to play a specific role on a short time frame. If you like playing that specific role and like change (since who you work with changes with each new project), consultancy can make you happy. It pays well if you are well known enough to find clients.

    Why should we consider moving to (or staying in) regular jobs?

    Consultants rarely have much impact. Advice is easy to ignore. Consulting can be frustrating and empty for the consultant, even if you are paid well. Anyone serious about ideas and making great things knows they have to have their own skin in the game to achieve a dream. You can’t do that from the consulting sidelines. In a regular job at least there is the pretense of ownership. Everyone should be an entrepreneur at least once in their life: you can only discover what you are capable of, or not, when you free yourself from the constraints of other people.

  • Practitioner versus Consultant

    Whenever I’m acting as a coach, a facilitator or a consultant for a team, one thing strikes me every time – how much being a practitioner helps me perform in the role. And when I say a practitioner, I mean doing work similar to what the teams do on a daily basis, not only coaching, consulting or facilitating. It’s like doing my regular stuff, except it is a bit different. But then, I’m solving similar problems every day, am I not?

    Personally, I could imagine myself being a full-time consultant, although I believe I’d lose something that way. On one hand, full-time consultants are exposed to a wider range of environments because, well, that is what they do – they visit different organizations and work with them. On the other, consultants come and consultants go – most of the time they don’t hang around to see the final results of their work. After all, it isn’t their responsibility to make the change stick.

    However, when I think about the consultant versus practitioner perspective, the biggest thing that keeps me on the practitioner’s side of the fence is the fear of disconnection. At this moment, whenever I’m “selling” you something, it has likely been verified in the organization I work (or have worked) for. Been there, seen that, done that. You can trust me.

    It’s not that I read some trendy book or that my company is selling training in a given method. It’s not that I spent a lot of time at conferences listening to all those published authors, thought leaders and whatnot who are extremely knowledgeable but are also long gone from real jobs – you know, the ones that produce something tangible.

    I really touch the crap. And live with it. So whenever I’m wrong there’s no one else but me to clean up the mess.

    So there are two things I’m thinking about here. One question is for the consultants reading this blog (I know there are quite a few of you): how do you cope with the issue of disconnection? Or is it just a non-issue?

    The other question is for those of you who are considering hiring some help to sort things out in your organization: would you prefer a consultant or a practitioner, and why?

    I’d be glad to hear as many voices as possible, so if you are considering commenting on the post but aren’t really sure, please do – you’ll earn my infinite gratitude. And you definitely want it, because it is exchangeable for a beer when you meet me.

  • Cadences and Iterations

    Often, when I work with teams that are familiar with Scrum, they find the concept of a cadence new. That is surprising, because they already use cadences – they just do it in a specific, fixed way.

    Let’s start with what most Scrum teams do, or should do. They build their products in sprints, or iterations. At the beginning of each sprint they have a planning session: they groom the backlog, choose the stories that will be built in the iteration, estimate them, etc. In short, they replenish their to-do queue.

    When the sprint ends, the team deploys and demos the product to the client, or to a stakeholder acting as the client. Whoever is the target of the team’s product knows they can expect a new version after each timebox. This way, there is a regular frequency of releases.

    Finally, at the very end of the iteration, the team runs a retrospective to discuss issues and improve. They summarize what happened during the sprint and set goals for the next one. Again, there is a rhythm of retrospectives.

    Then, the next sprint starts with a planning session and the whole cycle starts again.

    It looks like this.

    All the practices – planning, releases and retros – have exactly the same rhythm, set by the length of the timebox. A cadence is exactly this rhythm.

    However, you can think of each practice separately. Some of us have gotten used to the frequency of planning, releases and retrospectives being exactly the same, but when you think about it, that is just an artificial constraint introduced by Scrum.

    Would it be possible to plan every second iteration? Well, yes, why not? If someone can tell in advance what they want to get, it shouldn’t be a problem.

    Would it be a problem if we planned more often, then? For many Scrum teams it would. However, what would happen if we planned too few stories for the iteration and were done halfway through the sprint? We’d probably pull more stories from the backlog. Isn’t that planning? In other words, as long as we respect the boundaries set by the team, wouldn’t it be possible to plan more frequently?

    You can ask the same questions about the other practices. One thing I hear repeatedly is that more mature teams change the frequency of retrospectives – they just don’t need one at the end of every single sprint. Another strategy is the ad-hoc retro, which usually makes retrospectives more frequent than the timebox. The same goes for continuous delivery, which has you deploying virtually all the time.

    And this is where the concept of cadence comes in handy. Instead of talking about a timebox, which fixes the timing of planning, releases and retrospectives, you start talking about a cadence of planning, a cadence of releasing and a cadence of retrospectives separately.

    At the beginning you will likely start with what you have at the moment, meaning the frequencies are identical and synchronized. But bearing in mind that these are different things, you can perfectly well tweak them in whatever way makes sense in your context.

    If you have the comfort of an on-site product owner or product manager, why should you replenish your to-do queue only once per sprint? Wouldn’t it be better if the team worked on smaller batches of work, delivering value faster and shortening their feedback loops?

    On the other hand, if the team seems mature, the frequency of retros can be relaxed a bit, especially if you see little value coming out of such frequent retros.

    At the same time, releases can be decided ad hoc, based on the value of the stories the team has built, or the client’s readiness to verify what has been built, or yesterday’s weather in California.

    Depending on the policies you choose to set cadences for your practices, it may look like this.

    Or completely different, because it’s going to be adjusted to the specific way your team works.

    Anyway, it is likely that the ideal cycles of planning, releases and retrospectives aren’t exactly the same, so keeping the cadences of all of these identical (and calling them an iteration or timebox) is probably suboptimal.

    What’s more, when you think in terms of cadences, they don’t necessarily need to be fixed. As long as they are somewhat predictable, they can be entirely ad hoc. Actually, in some cases it is far better to have a specific practice triggered by an event rather than by the clock. For example, a good moment to replenish the to-do queue is when it gets empty, and a good moment to release is when the product is ready, which may even be a few times a day.
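    The contrast between time-based and event-based triggers can be sketched in a few lines of code. This is purely illustrative – the function names and the 14-day sprint length are my own assumptions, not part of any framework:

```python
def should_replenish_timeboxed(day, sprint_length=14):
    """Time-based cadence: replenish only on the first day of each sprint."""
    return day % sprint_length == 0

def should_replenish_on_event(todo_queue):
    """Event-based cadence: replenish whenever the to-do queue runs empty."""
    return len(todo_queue) == 0

print(should_replenish_timeboxed(14))        # True: a new sprint starts
print(should_replenish_timeboxed(7))         # False: mid-sprint, keep waiting
print(should_replenish_on_event([]))         # True: the queue just ran dry
print(should_replenish_on_event(["story"]))  # False: there is still work to pull
```

    The event-based version is the more responsive of the two: the practice fires exactly when it is needed, rather than on a calendar boundary.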

    Note: don’t treat this as a rant against iterations. There are good reasons to use them, especially when a team lacks discipline around specific practices, be it running retros or deploying regularly. If sprints work for you, that’s great. Although even then, running a little experiment wouldn’t hurt, would it?

  • Trap of Estimation

    So we had this project that was supposed to end by the end of July. Unfortunately, the simple burnup chart we used to track progress looked rather grim – it consistently showed the very beginning of September as the completion date. A month late.

    Suddenly, one day it started looking almost perfect – end of July! Yay!

    Wait, wait, wait. What? I mean, what the hell happened in a single day that we suddenly recovered from a month-long slip? Something stinks here.

    After a while of digging we came up with the answer. The team had invested some time in re-estimating all the work, including work that was already done. Now, how the heck does that affect a burnup chart?

    It turned out the chart’s y axis, where the work items were plotted, showed the sum of estimates rather than just the number of tasks. That means changing estimates in retrospect affects the recorded scope and percent complete of the project – and therefore the predicted completion date as well.
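    To see how this plays out, here is a minimal sketch of a linear burnup projection. All the numbers are hypothetical, invented to mirror the situation described above:

```python
from datetime import date, timedelta

def projected_finish(start, today, done, total):
    """Naive linear burnup: extrapolate the completion date from progress so far."""
    rate = done / (today - start).days      # work "completed" per day
    return today + timedelta(days=round((total - done) / rate))

start, today = date(2012, 3, 1), date(2012, 6, 1)

# Progress measured as a sum of time estimates: 500 of 1000 units done.
print(projected_finish(start, today, done=500, total=1000))   # 2012-09-01

# Re-estimating the *finished* work upward (500 -> 800 units) inflates
# "done" without any real progress -- and the projection lands in July.
print(projected_finish(start, today, done=800, total=1300))   # 2012-07-29
```

    Nothing about the actual work changed between the two calls; only the accounting did.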

    This is called creative accounting and some people went to jail because of that, you know.

    My first question is whether such re-estimation changes the real status of the project in terms of how much functionality is ready, what is done, how many bugs are fixed or lines of code written, or any other creative, crazy or dumb measure you can come up with to say how much work has been done. Or does it change how much work remains to be done?

    No! Double no, actually. It’s just a trick to tell us we aren’t screwed up that badly. Actually, I accept that we might have been OK in the first place and the chart was wrong. That would be awesome. But fixing the chart this way, one, doesn’t change the status of the work at all and, two, covers up the real issue, making it harder to address.

    What is the real issue, then? Well, there are a couple of them. First, using time-based estimates to show how much work remains is asking for trouble. Unless you are a freaking magician and can get your estimates right 5 months before you even start working on a task, that is. If you’re just a plain human, like me, and you assume your estimates are wrong, using them as the basis for tracking project progress seems rather dumb to me.

    It would be much better to count features or, if they vary a lot in size, count the weight of features. Say an S is 3 times smaller than an M, which in turn is 3 times smaller than an L, or something like that. By the way, as you gather historical data you can calibrate these factors by learning from past facts.
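    As a sketch, tracking progress by weighted feature counts might look like this. The 1/3/9 weights follow the “each size is roughly 3x the previous” rule of thumb above; in practice you would calibrate them from your own historical data:

```python
# T-shirt-size weights: a starting assumption, to be calibrated over time.
WEIGHTS = {"S": 1, "M": 3, "L": 9}

def progress(features):
    """Given (size, is_done) pairs, return (done_weight, total_weight)."""
    total = sum(WEIGHTS[size] for size, _ in features)
    done = sum(WEIGHTS[size] for size, is_done in features if is_done)
    return done, total

# Hypothetical backlog: two small and one medium feature finished so far.
backlog = [("S", True), ("M", True), ("L", False), ("M", False), ("S", True)]
done, total = progress(backlog)
print(f"{done}/{total} weighted units complete")   # 5/17 weighted units complete
```

    Unlike hour estimates, the size buckets are coarse enough to be assigned reliably at the start of a project, which is exactly when the benchmark is set.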

    Second, even if you decided to use estimates to judge how much work remains, what makes you think that fixing estimates in retrospect moves you forward an inch in terms of the next project you’re going to run? Do you expect to know exactly, in advance, how much time it will take to build features in future projects? Because that is exactly the kind of knowledge you’re applying now to “fix” your estimates in the current project.

    I would prefer a discussion on how to judge the scope better at the beginning of projects, because that is going to be your benchmark. For this, precise estimates are almost useless. I will likely be pretty close in telling how many features we have to build. It’s going to be trickier to say which of them will be small, medium or large. But I refuse to guess how many freaking hours each and every feature will take to build, because such effort is utterly futile. It just so happens that I’ve forgotten to bring my damn crystal ball with me, so sorry, that’s not going to work.

    This is where estimation leads us into a trap. Knowing exactly how much time each work item has taken, it is easy to track progress in retrospect – an average 8-year-old could connect the dots. However, unless you’re a bloody superhero who will have such data at the beginning of the next project, don’t treat it as a viable method of tracking progress.

    Use whatever data will be available, in high quality, at the beginning of a project: the number of features, perhaps sized in some way if your team has some experience with sizing and you understand the variability of the work.

    Anyway, whatever you do, just don’t change the benchmark in retrospect, as that will mess up your data and cover up the real problem, which is that you should improve the way you set the benchmark in the first place.

    By the way: if you happen to work on a time and materials basis, you can safely ignore this whole post, you lucky bastard. Actually, I doubt you even made it this far anyway.

  • Visualization Should Be Alive

    I had a great discussion recently. The starting point was information on a Kanban board – as far as I knew it wasn’t up to date and, as it turned out later, not without a reason. The way the team’s situation was visualized was sort of tricky.

    We used the situation to discuss in detail what was happening and how we should visualize it. Anyway, one thing struck me in retrospect – the less the visualization changes, the fewer chances we have to start such discussions.

    A good (or rather a bad) example is my portfolio Kanban board. Considering that I try to visualize projects of very different sizes there, it’s not uncommon for the board to see few changes over a long period. On one hand, this is acceptable and even expected. On the other, there aren’t enough “call for action” situations where people are expected to do something, like moving stickies – the situations that trigger important discussions.

    This is also why I prefer Kanban boards built of fairly small work items over those filled with huge tasks. The latter just aren’t that lively. They tend to stall.

    And when visualization stalls, its value diminishes. People develop a specific kind of blindness. They start treating their information radiator as just another piece of furniture, which results in the board being exactly that – a piece of furniture. Not useful in terms of improving the team’s work.

    So remember this: as long as you expect visualization to generate value, it should live. If it doesn’t, think about how you can make it livelier. You won’t regret it.

  • On Feedback

    I’m not a native English speaker, which basically means my English is far from perfect. Not a surprise, eh? Anyway, it sometimes happens that one of the native speakers I’m talking with corrects me or points out one of the mistakes I keep making.

    And I’m really thankful for that.

    I’m thankful because most of the time such feedback comes instantly, so I can refer back to the mistake and at least try to correct it somehow.

    This is what happened recently when one of my friends pointed out one of the pronunciation mistakes I keep making. It worked. It did because the feedback loop was short. It worked even better because it was critical feedback. I didn’t get praise for all the words I pronounce correctly. It was just a short message: “you’re doing this wrong.”

    Of course, it is up to me to decide whether I want to do something about it. Nevertheless, I can hardly think of any positive feedback I could receive that would be that helpful.

    When you think about it, this contradicts what we often hear about delivering feedback. It isn’t uncommon to be taught that we should focus on the positives, because that is how we “build” people rather than “destroy” them. Even more, delivering positive feedback is way more pleasant and, for most people, easier as well. It is tempting to avoid the critical part.

    While we are on the subject of feedback loops, I have one obvious association. Agile, at its core, is about feedback loops – short ones. We have iterations so we deliver working software fast and receive feedback from clients. Or even better, we have steady flow so we don’t wait until the end of the sprint to learn about the very next feature we complete. We build (and possibly deploy) continuously so we know whether what we’ve built even works. And of course we have unit tests that tell us how our code behaves against predefined criteria.

    It is all about feedback loops, right?

    Of course we expect to learn that whatever we’ve built is the thing clients wanted, our code hasn’t broken the build and all the tests are green. However, on occasion, something will be less than perfect. A feature won’t work exactly the way the client expected, a build will explode, a bunch of tests will go red, or the pronunciation of a word will be creepy.

    Are we offended by this feedback?

    Didn’t think so. What’s more, it helps us improve. It is timely, specific and… critical. So why, oh why, are we so reluctant to share critical feedback?

    It would be a far more harmful strategy to wait a long time before closing a feedback loop, no matter what the feedback is. Would it really tell you anything if I pointed out the two-line change you made 4 months ago that broke a couple of unit tests? Meaningless, isn’t it? By the way: this is why I don’t fancy performance reviews, even though I see the point of doing them in specific environments.

    Whenever you think about sharing feedback with people, think about the feedback you get from your build process or tests – it doesn’t matter that much whether it is positive or critical; what makes the difference is that it is quick and factual.

    You can hardly go wrong with timely and factual feedback, no matter whether it is supportive or not.

  • Pitfalls of Kanban Series: Wishful Thinking

    I find it pretty common that teams adopting Kanban try to draw the ideal process on their boards – not the one they really follow, but the one they’d like to. This is thinking carried over from prescriptive methods: since we have this ideal process we want to implement, let’s just draw it so we know where we are heading.

    The bad news is that Kanban in general, and the Kanban board specifically, doesn’t work that way. You may draw pretty much any process on your Kanban board – a better or a worse one, too detailed or too generic, or simply different. But if it isn’t the process the team actually follows, the data you get from the board won’t reflect reality.

    The end result is pretty simple: with a board that is not up to date, people make their everyday project decisions based on a lie.

    What’s more, drawing the ideal process on the board instead of the one the team really follows brings additional pains. People are confused when working with the board: “I’m supposed to do a code review, but I haven’t and won’t – yet I should put an index card here because our process on the board says so.”

    As a result, it is much harder to show the value of the Kanban board, people lose interest in updating it, and the whole Kanban implementation quickly deteriorates.

    The first step in dealing with the problem is admitting your process is less than ideal. Pretty often that means admitting your process is simply crappy. As funny as it may sound, teams find it really hard to get past this step. We wish we were better, and this wishful thinking blinds us.

    Then it’s time to adjust the board so it reflects the way the team actually works. It may be painful. I’ve seen teams throw out the code review stage. I’ve seen teams throw out all the testing stages. That didn’t mean they didn’t want to review code or test; it meant they weren’t able to do it consistently for the majority of the features they built.

    Note: at this stage, pressure from the top may appear. How come you aren’t testing? That’s outrageous! That just cannot be! Well, it is, so better get used to it for the time being, because drawing the ideal process on the Kanban board is about as useful as drawing unicorns there. If such pressure appears, you definitely want to resist it.

    The final stage is continuous, evolutionary improvement. If you track down the root causes of a suboptimal process, you will likely find that your flow is unbalanced. If you want to balance the flow, slack time and WIP limits should be your best friends. Treat them seriously: don’t violate the limits, and don’t get tempted to use slack time to build new features.

    This change won’t be fast, but at least the odds are it will be successful. Drawing the results of your wishful thinking on the Kanban board will fail for sure.

    Read the whole Kanban Pitfalls series.

  • Kanban and Behavioral Change

    One of my favorite, and most surprising, things I’ve learned about Kanban over the years is how it steers behavioral change in mature teams. So it shouldn’t be a surprise that at the Kanban Leadership Retreat (#klrat) I ended up facilitating a session covering this area.

    Those of you familiar with the #klrat format will understand when I say I didn’t have any specific expected outcome for the session. I wanted to start by exchanging stories and see how it went. Maybe we would be able to observe some patterns of behavioral change steered by Kanban and learn from them.

    Fast forward to what we ended up with. For me it is still work in progress, so you will see more on the subject soon. Anyway, I pushed my thinking forward in terms of how we can stimulate our teams to improve.

    One thing you instantly notice in the picture of the session summary (click to enlarge) is that despite the fact that we were gathering unrelated stories, we started building a chain of dependencies. Another observation is how frequently visualization pops up in the different examples.

    What I believe we will be able to build is a sort of graph showing what kinds of behavioral change can be influenced, or even incentivized, by adopting specific practices. What’s more, I don’t want to keep it purely about Kanban. Although at #klrat the Kanban context was set by design, I’m pretty sure we can, and should, go beyond it.

    A few highlights from the session, based on the stories we shared:

    • Visualization, combined with WIP limits and daily meetings around the board, improves general team-wide knowledge of what’s happening. In the short term this influences how people deal with everyday tasks, encouraging them to get involved in work done by others, thus removing personal queues. As a result, it pulls people toward generalization, as they’re involved in many different tasks.
    • Visualization and measuring flow improve the understanding of work. They make people focus on pain points and incentivize them to improve problematic areas. As a result, the team takes more responsibility for how the work is done.
    • Rich visualization, along with slack available to people, results in better decisions and better use of slack time. This builds on the notion that richer visualization yields more meaningful data, so whenever people decide what to do – especially when they aren’t constrained, e.g. when they’re using slack – the quality and potential outcome of those decisions improve. The final effect is building the team’s collective intelligence, both in terms of what they do and how they do it.
    • Making visualization fun fuels viral adoption of the method. This builds on the fact that people like having fun with the tools they use. More fun means more people trying to find out what this cool thing is and how to use it. Eventually you get more people willing to introduce the method and a better attitude toward the adoption.
    • Measuring flow (again) and understanding and visualizing (again) the nature of work can have impact in multiple ways: creating an incentive to change, building trust in a team and getting rid of 100% utilization. The simple fact that we were able to attribute so many effects to an improved understanding of how we work is a strong indicator that we do have problems with that. We might be onto something here.
    • Avoiding 100% utilization, and the slack time generated this way, may be a tool for building leadership within the team (people are free to decide what gets done) and at the same time can improve the fun factor (people can choose to work on fun stuff). In both cases we strengthen our teams and develop people.

    By the way: if you add a real story to each of these highlights you may imagine the experience we exposed ourselves to.

    In retrospect, one thing definitely worth stressing is how little we seem to know about the nature of our work. Actually, if you think about it, Kanban deals with this issue in a very comprehensive way.

    Visualization vastly improves the availability of information, so we know better what is happening. Explicit policies force us to define, or agree on, the way we do our work. Without explicit discussion we often believe we know exactly how the work is done, even when that’s clearly not true. Then we have flow management, with a focus on measures that can significantly change our perception of the work. It is common that an outsider, basing their view on a handful of simple measures, can surprise a team by showing them the specifics of their own work.

    If that weren’t enough, we get WIP limits, which steer changes in the nature of the work and teach us an important lesson about the nature of work in general.

    Anyway, tools aside, the lesson is to invest more effort in understanding how we work. It will pay off.

  • Slacker Manifesto

    We are professionals, we take pride in our work and we pursue continuous improvement. Most of all, we learn.

    One thing we’ve learned is that pursuing 100% utilization is a myth, and a harmful one. Another thing we’ve learned is the value of slack time. Building on that, we hereby declare ourselves Slackers.

    And here is our Manifesto (you can sign it by leaving a comment if you’d like).

    Slacker Manifesto

    On occasions we do nothing. Or something different. Or else.

    Because this means that our teams are more effective.

    It also means that we are freaking awesome.

    Signatories:
    Full list of signatories can be found here — in a stand-alone copy of Slacker Manifesto.

    Big thanks to Kate Terlecka and Andrzej Lorenz, who greatly influenced the creation of this masterpiece.