Tag: learning

  • Practices, Principles, Values

    I was never a fan of recipes. Even less so when I heard that I had to apply them by the book. What I found over the years was that books rarely, if ever, describe a context close enough to mine. This means that specific solutions aren’t applicable in the same way they were in the original source.

    This is why I typically look for more abstract knowledge and treat more context-dependent advice as inspiration rather than actual advice.

    From what I see, that’s not a common attitude. I am surprised how frequently at conferences I hear the complaint that sessions weren’t practical enough simply because no recipe was included. This is only a symptom, though.

    The root cause is a more general way of thinking about and approaching problems, something we see over and over again when we look at all sorts of transformations and change programs.

    People copy the most visible, obvious, and frequently least important practices.

    Jeffrey Pfeffer & Robert Sutton

    Our bias toward practices is there for a reason. After all, we’ve heard the success stories: what Toyota did to take the lead in the automotive industry, the early successes of companies adopting Agile methods. There were plenty of recipes in those stories. After all, that’s what we see first when we look at organizations.

    Iceberg

    The tricky part is that practices, techniques, tools and methods are just the tip of the iceberg. On one hand, that tip is exactly what we see when we look at the sea. On the other, there’s the part ten times bigger that sits below the waterline. The underwater part is there, and it is exactly what keeps the tip above the water.

    In other words, if we took just the visible tip of the iceberg and put it back in the water, the result wouldn’t be nearly as impressive.

    Practices, Principles, Values

    This metaphor is very relevant to how organizational changes happen. What we keep hearing about in experience reports and success stories is just a small part of the whole context. Unless we understand what’s hidden below the waterline, copying the visible part doesn’t make any sense.

    Principles

    Behind any practice there is a principle. If we are talking about visualization, we are implicitly talking about providing transparency and improving the understanding of work, too. Providing transparency is not a practice; it is a principle that can be embodied by a whole lot of practices.

    The interesting part is that there are principles behind practices, but there are also principles embraced by an organization. If these two sets aren’t aligned, applying a specific practice won’t work.

    Let me illustrate that with a story. There was a team of software architects in a company where Kanban was being rolled out across multiple teams. In that specific team there was huge resistance even at the earliest stage, which is simply visualizing work.

    What was happening under the hood was that the transparency provided by visualization was a threat to the people on the team. They were simply accomplishing very little. Most of their time was spent in meetings, discussions, and the like. Transparency threatened their sense of safety; hence the resistance.

    Without understanding the deeper context, though, one would wonder what the heck was happening and why the method wouldn’t yield the same results as in another environment.

    Values

    The part that goes even deeper is values. When talking about values, one thing typically comes to mind: all sorts of vision and mission statements. This is where we will find the values a company cares about. To be more precise: the values an organization claims to care about.

    The problem with these is that very commonly there is a huge authenticity gap between the pretense and the everyday behaviors of leaders and people in an organization.

    One value that would be mentioned pretty much universally is quality. Every single organization cares about high quality, right? Well, so they say, at least.

    A good question is what values are expressed by everyday behaviors. If a developer hears that there’s no time to write unit tests and they’re supposed to build more features, or that no one really cares whether the build is green or red, what does that tell you about the real values of the company?

    In fact, the pretense almost doesn’t matter at all. It plays its role only in building up frustration among people who see the inauthenticity of the message. The values that matter are those illustrated by behaviors. In many cases we would realize that this means optimizing for utilization, disrespecting people, a lack of transparency, and so on.

    Again, this is important because we can find values behind practices. If we take Kanban as an example, we can use Mike Burrows’ take on Kanban values. Now, an interesting question is how these values are aligned with the values embraced by an organization.

    If they are not, the impact of introducing the method will be very limited, non-existent, or even negative. This is true for any method or practice out there.

    Mindfulness

    The bottom line is that we need to be mindful when applying practices, tools and methods. That goes really far: not only does it mean a deep initial understanding of the tools we use, but also an understanding of our own organization.

    This runs against the “fake it till you make it” attitude I frequently see. In fact, in a specific context “making it” may not even be possible, and without understanding the lower part of the iceberg we won’t be able to figure out what’s going wrong and why our efforts are futile.

    Paying attention to principles and values also enables learning. Without that we will simply copy the same tools we already know, no matter how applicable they are in a specific context. This, by the way, is what many agile coaches do.

    Mindful use of practice leads to learning; mindless use of practice leads to cargo cult.

  • Recipes Are (Almost) Useless

    One of the least useful pieces of advice you may ever get on management goes along the lines of: “we’ve done such and such and it worked freaking miracles for us, thus you should do the same.” In fact, all the ‘shoulds’ and ‘musts’ are a warning signal for me whenever I learn about someone doing something right.

    This, by the way, is surprisingly prevalent in many management books: a story of someone wildly successful who shares how they believe they achieved it. While I do appreciate a story and all the insider’s insights that help me understand a little bit more, it is only a story.

    Was it backed up by credible research? Was it successfully adapted in a number of other organizations? What are the critical assumptions it rests on, and how would the story play out in another context?

    It’s not that I would call each of these stories bullshit. Pretty much the opposite. I often find that many of the ideas shared are aligned with my experience. The issue is that on occasion I can track down some underlying prerequisites that aren’t even mentioned in the source. Does it mean a lack of good will? I don’t think so. I’d rather attribute it to fairly shallow knowledge.

    Unfortunately, this has a lot to do with what we, as consumers of books, articles or conference presentations, expect. It is oh so common for the basic expectation to be a recipe. Tell me how to build my startup. Tell me how to fix my effectiveness problem. Tell me how to grow an awesome team. Tell me how to change my organizational culture. Tell me how to scale up a method we are using.

    Yes, recipes sell well. They are sometimes dubbed “actionable content,” as opposed to theories, which are non-actionable. That is, unless someone undertakes the effort to understand them and adjust them to their own context.

    Over the years I’ve been enthusiastic about some methods and practices I discovered. I’ve been skeptical about many more. I’ve changed my mind about quite a bunch of them, too. An interesting realization is that the further a method is from a plain recipe, the more I tend to consider it useful in the long run. I guess one of the reasons I’ve stuck with Kanban for all these years is that I quickly realized how adaptive the method is and how much liberty I could take using it.

    This is also the reason I never became a fanboy of Scrum. While there’s obviously much value in the specific practices it offers in specific contexts, I was discouraged by the early “do it by the book or you aren’t doing Scrum” attitude promoted by the community.

    I digress though. The point I want to make is pretty simple.

    Recipes are useless.

    OK, OK. Almost useless.

    It doesn’t really matter whether we discuss project management techniques, software development practices or general management methods.

    It’s not without reason that pretty much any approach, once it becomes popular, ends up being not nearly as useful as early success stories reported. There are just too many possible contexts for any universally sound solution.

    What’s more, during its early days a method is typically handled by people who invest much time in understanding how and why it works. After all, there are no success stories yet, or very few of them, so a sane person handles such a thing cautiously. Then, eventually, it goes downhill. Some treat the method as a universal cure for all the pains of any organization, others sense a good certification business opportunity, and suddenly any understanding of the ‘whys’ and ‘hows’ is gone.

    One thing is what I believe in and how I approach the tools I add to my toolbox. Another is that I frequently get asked for advice. I guess this is inevitable when you do at least a bit of coaching and training. Anyway, in every such situation the challenge is to dodge the bullet and avoid giving a recipe as an answer. This, by the way, means that if I am answering your question with ‘shoulds’ and ‘musts’, you should kick me in the butt and ignore the answer altogether.

    A much better answer is to share a handful of alternatives along with all the assumptions I know of that made them work in the first place. Bonus points for supporting that with relevant research or models.

    In either case, the goal should be the ultimate understanding of why this or that thing would work in a specific context.

    Without that a success story is just that – a story.

    Don’t get me wrong. I like stories. We are storytellers after all. It’s just that a story by itself carries only so much value.

    The next time you hear a story, treat it for what it is. The next time someone offers you a recipe, treat it for what it is.

  • Why People Don’t Learn

    Josh Bradley, in a comment under one of my older posts, made me realize an interesting thing. Let me do the weirdest thing ever and quote myself a few times.

    “In general, people don’t care if you want to (and can) teach them something. They don’t want to learn.”

    Pawel Brodzinski, 2010

    “People are lazy. They don’t learn because it’s easier to leave things as they are.”

    Pawel Brodzinski, 2010

    “Theory X tells us that people are lazy and we need to supervise them, otherwise they’d do nothing. If you ask me, that’s total bullshit.”

    Pawel Brodzinski, 2013

    Now I feel so much better – someone has just quoted me. Wait, wasn’t that self-quotation? Oh well…

    The point is that three years later I seem to hold a completely opposite point of view. I used to think that people are inherently lazy, and now I consider that absurd. Embarrassing, isn’t it?

    Let me start with defending my younger self. On one level, the lazy, unwilling-to-learn attitude is as ubiquitous as it ever was. I still look at the vast majority of people and see the same dysfunction. People complain that their organizations don’t support their intrinsic urge to learn. At the same time they sit idly, watching learning opportunities pass by with a swooshing sound.

    The symptoms haven’t changed.

    What has changed is how much of a cause I ascribe to the people.

    I’m not a systems thinking junkie. I do consider people co-creators of the system they operate in. At the same time, though, they start from a given situation and can’t change it freely; thus the system constrains them on many accounts.

    How does this translate to laziness and reluctance to learn? Well, the questions we should ask are how the organization supports learning and what the rewards (or punishments) are when someone decides to invest their time in self-development.

    There are (many) companies which don’t support the personal development of their employees. This makes the game a whole lot more challenging. At the same time, I’m yet to see an organization where there are virtually no opportunities to learn.

    In fact, I think these two perspectives are inseparably connected. An organization that doesn’t support learning will discourage people with an urge to learn from staying there in the long run. What’s more, people who rarely give a damn about learning will thrive there, sustaining the existing culture. Obviously, the opposite is true as well.

    As Jim Benson said, “people build systems build people.” Both have to be in place to see a continuous learning culture flourish.

  • Retrospectives Reloaded

    I’ve read and heard a lot of advice on running better retrospectives. I’d even go so far as to say that if you speak at an agile event and want an instant hit, “how to run a good retro” should be very high on your list of potential topics.

    After all, this whole “getting better” thing seems really important to everyone, and improvement is almost always a struggle for teams. A retrospective is a nice package: it’s pretty self-explanatory, it has a good name, and it’s open-ended enough that you can adjust it to your needs.

    It’s not a new concept, though. I was doing post-mortems back when agile was just a word in the dictionary. Same goal, different package. Well, maybe not as sexy, and that’s exactly why you should jump on the retro bandwagon to win the crowds.

    Anyway, the problem with the vast majority of stuff I’ve heard or seen about running better retrospectives is simple: they are all recipes. You may apply them and they either work or they don’t. Even if you’re lucky and they happen to work once, they quickly wear out. After all, how many times do we laugh at the same old gag?

    Then we’re back to square one. How to revive the next retro one more time?

    It’s because we get the wrong advice on retrospectives over and over again. It shouldn’t be “try this” or “try that.” The real magic happens when the thing you do during a retro is fresh and everyone gets involved.

    Both things are often surprisingly easy. You can count on people’s engagement when you don’t push them too far out of their comfort zones. Singing in front of the team probably isn’t the best choice you could make, for example, but all sorts of drawing are usually safe.

    And being fresh? Just think about all the funny ideas you discuss by the water cooler. Playing out a scene, recording your own version of a movie classic, a concert, a cooking event, or cleaning a randomly chosen car in the parking lot are all such concepts. People were enthusiastic to sign up for these things. Wouldn’t they be just as enthusiastic if it were part of a retro?

    However, if you’re going to use a cooking event as an awesome idea for your next retro, you’ve understood nothing. Go read the post again from the beginning. I don’t want to give you any recipe. I’m simply pointing out the fact that every goddamn team on planet Earth has a ton of fresh ideas for their next retro. It’s enough to retrieve a single one of them.

    In fact, it’s even better. If you use one of those crazy ideas your team discussed over lunch a couple of days ago, odds are they will eagerly sign up for it.

    This is why I’m not going to be stunned by the next “better retros” session. Because, you know, I have such a session a few times a week. I just pay attention.

  • Radar Charts and Maturity of Kanban Implementations

    One of the outcomes of Hakan Forss’ session on the depth of Kanban practices at the Kanban Leadership Retreat was the use of radar charts to show the maturity of a Kanban implementation. The whole discussion started with the realization that different teams adopt Kanban practices in different orders, thus we need a tool to assess them somehow.

    Radar charts, or spider charts, seem to be good tools for visualizing how well a team is doing. However, when you start using them, interesting things pop up.

    Coming Up with Results

    First, how exactly do you tell how mature the adoption of a specific practice is? How far are we on a scale from 0 to 5 with visualization? Why? What about limiting work in progress? Etc.

    One of my teams decided to describe 0 as “doing nothing” and the max as “where we think we would like to be.” With such an approach, a radar chart can be treated as a motivational poster – it shows exactly how much we still have to do with our Kanban implementation. It also means that the team aims at a moving target – as time passes they will likely improve and thus set more ambitious goals.

    There is also a drawback to this approach. Such an assessment is very subjective and very prone to gaps in knowledge. If I think that everything there is to be done about WIP limits is to set those numbers in each column on the board and avoid violating them, I will easily hit the max on the “limiting WIP” axis. Then of course I’ll award myself the Optimist of the Week and Ignorant of the Month prizes, but that’s another story.

    On a side note: I pretty much expect that someone is going to come up with some kind of a poll with a bunch of questions that do the job for you and tell you how far you are with each practice. And, similarly to the Nokia Test, I think it will be a very mixed blessing with negatives outweighing positives.

    Finding Common Results

    The second issue is about gathering collective knowledge from a team. People will likely differ in their judgment: one will say that visualization is really mature, while another will state that there’s a lot more to be done there.

    The obvious strategy is to discuss the areas where the differences are the biggest. However, it’s not a fancy flavor of planning poker so, for heaven’s sake, don’t try to make everyone agree on the same number. It is subjective after all.

    One more interesting trick is putting all the results on a single radar chart, with the min and max values creating the borders of an area. This area tells you how your Kanban implementation is perceived.

    With such a graph, not only do you want this bagel spread as far out as possible, but also as thin as possible. The latter may even be the more important goal in the short term, as a wide spread of results means that team members understand the tool they use very differently.
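    A minimal sketch of that min/max aggregation, assuming hypothetical self-assessment scores on the 0–5 scale discussed earlier (the practice names and numbers are made up for illustration):

```python
# Hypothetical scores (0-5) from four team members for each practice.
scores = {
    "visualize": [4, 5, 3, 4],
    "limit WIP": [2, 1, 4, 2],
    "manage flow": [3, 3, 2, 4],
}

# For each axis, the min and max values form the borders of the area
# (the "bagel") on the radar chart.
area = {practice: (min(vals), max(vals)) for practice, vals in scores.items()}

# The width of the bagel on each axis shows how differently team
# members perceive the same practice.
spread = {practice: mx - mn for practice, (mn, mx) in area.items()}

# The widest axis is the best candidate for a team discussion.
print(max(spread, key=spread.get))
```

    Note that the widest axis, rather than the lowest score, is what points to where team members understand their tool most differently.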

    Comparing Results between Teams

    The third issue pops up when you compare graphs created by different teams. Let’s assume you have both issues above solved already and you have some kind of consistent way of judging maturity of Kanban practices. It is still very likely that different teams will follow different paths to Kanban adoption, thus their charts will differ. After all this is what launched the whole discussion in the first place.

    It means, however, that you may draw very interesting conclusions from comparing the results of different teams. You don’t try to say which team is better and which needs more work. You actually launch discussions on how people are doing things and why they think they are good (or bad) at them. You enable collaborative learning.

    As a bonus you can see patterns on a higher level. For example, people across the organization are doing pretty well with visualization, have very mixed outcomes in terms of managing flow and are not that good when it comes to limiting WIP. It can help you focus on specific areas with your coaching and training effort.

    Besides, it is funny to see what a personal kanban maturity radar chart can look like.

    To summarize, radar charts are nice visuals to show you where you are with your Kanban adoption, but they may, and should, be used as a communication enabler and a learning catalyst.

  • How Much Work In Progress Do You Have?

    One common pattern in adopting Kanban is that teams start with visualization alone and, for whatever reason, resist applying Work In Progress limits at the very beginning. While, and let me stress it, giving up on WIP limits drains most of the improvement power out of the system, I understand that many teams feel safer starting this way.

    If you are at such a point, or even a step earlier, when you’re just considering Kanban but haven’t yet started and are basically afraid of limits, I have a challenge for you. Well, even if you use WIP limits I have the very same challenge.

    First, think of the limits you might want to have.

    Second, measure how tasks flow through your process. It’s enough to write down the date when you start working on a task and the date when you’re done with it; the difference gives you the cycle time.

    Third, after some time, check how many tasks in progress you really had every day. In other words: check what your WIP was.
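    The second and third steps can be sketched in a few lines, assuming a hypothetical task log where each finished task records its start and end date (the dates below are made up):

```python
from datetime import date, timedelta

# Hypothetical task log: (start_date, end_date) per finished task.
tasks = [
    (date(2013, 5, 1), date(2013, 5, 3)),
    (date(2013, 5, 2), date(2013, 5, 8)),
    (date(2013, 5, 6), date(2013, 5, 7)),
]

# Cycle time per task: simply the difference between the two dates.
cycle_times = [(end - start).days for start, end in tasks]

# Daily WIP: for each day in the window, count tasks in progress.
first = min(start for start, _ in tasks)
last = max(end for _, end in tasks)
wip_per_day = {}
day = first
while day <= last:
    wip_per_day[day] = sum(1 for s, e in tasks if s <= day <= e)
    day += timedelta(days=1)

print(max(wip_per_day.values()))  # the worst WIP observed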

    Odds are you will be surprised.

    One of my teams followed the “let’s just start with visualization and we’ll see how it goes” path. We even discussed WIP limits, but eventually they weren’t applied. It is a functional team of 4 that juggles tasks which are pretty often blocked by their “clients,” i.e. beyond the team’s control. The process, besides the backlog and the done bucket, is very simple: there’s only one column – ongoing.

    The discussion ended with the idea of a limit of 8, considering there are some rather longish tasks mixed with quite a few short but urgent ones, e.g. of the “needs to be done today” sort, and of course frequent blockers. In other words, a rough limit of two tasks per person should account for all the potential issues.

    As I’ve mentioned, WIP limits weren’t initially set. Even a WIP limit of 8 looked too scary at that point. After a few months we came back to the discussion. Fortunately, this time we had hard data from a hundred days.

    Guess what the worst WIP was.

    Seven. Over the course of a hundred days there wasn’t a single case where the scary limit of 8 was reached, let alone violated. What’s more, there were only 5 days when WIP was higher than 6. In other words, setting a limit of 6 and keeping it would be no sweat. The challenge starts at 5, which sounds very reasonable for such a team.

    All of that considering that each and every blocked item was counted within the limit, as at the moment the team doesn’t gather data showing how long a task remains blocked.

    The lesson I took away is that we can and should challenge our WIP limits based on historical data. How often do we hit the WIP limits? How often do we violate them? If it appears that we have so much padding that we barely scratch the ceiling on rare occasions, it is time to discuss reducing the WIP limits. After all, it might mean that we are pursuing 100% utilization, which is bad.
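    Checking a candidate limit against recorded daily WIP is a one-liner kind of sketch (the numbers below are hypothetical, not the team’s actual data):

```python
# Hypothetical daily WIP readings over a stretch of days.
daily_wip = [4, 5, 3, 6, 5, 7, 4, 5, 6, 3]
candidate_limit = 6

# Days when work touched the ceiling, and days when it broke through.
days_at_or_above = sum(1 for w in daily_wip if w >= candidate_limit)
violations = sum(1 for w in daily_wip if w > candidate_limit)

# Rarely touching the ceiling suggests room to tighten the limit.
print(days_at_or_above, violations)
```

    If both counts stay near zero over a long stretch, the limit is mostly padding and a lower one is worth discussing.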

    If WIP limits are barely and rarely painful, they aren’t working.

  • Better Conferences or Better Learning?

    Bob Marshall recently published his ideas on how to improve conferences. Pretty radical ideas, I’d say. Basically, Bob proposes moving from traditional one-way communication to bi- or multi-directional conversations with expertise available on demand (read the whole post – it’s worth it). Similar points were shared by Jurgen Appelo in his writing as well.

    I’m no conference animal, even though I’ve helped a bit to organize a few such events and attended a few more. I’ve been through different formats: day-long workshops, few-hour tutorials, sessions anywhere between 30 and 90 minutes, open spaces, TED-like no-more-than-18-minute performances, lightning talks, pecha kuchas and whatnot.

    While I understand Bob’s desire to change knowledge consumption from a push model to a pull model, I find it hard to buy his ideas uncritically.

    There is one reason. A conference isn’t better because this or that format is generally better, but because the specific set of people attending the specific event learned a lot. In other words, thinking about an event we should think about how this specific set of attendees is going to learn, which is a function of how they expect to learn and how they are prepared to learn.

    One of the best events I ever attended was the Kanban Leadership Retreat. It was an unconference. It exploited many of the ideas Bob shares. From the perspective of an attendee who was willing to learn, even though they brought significant knowledge of the subject, it was great. The learning process was very multi-directional and pretty much everyone was both a teacher and a student.

    At the same time, on occasion I speak at events where such a format would fall flat on its face. It would, as the people attending generally expect knowledge to be pushed into their heads. You may laugh, but even that expectation covers a whole spectrum of behaviors. On one end there’s the mindless zombie who was sent to the event by their company (yet they can still learn something). On the other there’s TED, where you know close to nothing about the vast majority of subjects being discussed and actually expect expertise from the people on stage. Note: we’re still in “dear speaker, I know nothing of whatever you’re talking about” land. I know there is another dimension where you move from one-way learning to an everyone’s-a-teacher attitude.

    So basically my thought on the subject is: first, understand what the effective method of learning is for the very group you’re sharing your knowledge with. And yes, I’m talking here about the majority, or the average, if you’ll excuse such vast oversimplifications. I say so because we don’t measure the success of an event by the happiness of the most demanding person in the room. Even more, the most demanding person in the room probably shouldn’t be happy with the event, because that would usually come at the price of many others not catching up with the content.

    Having said that, I believe that, generally speaking, conferences should head the way Bob describes, as our focus is still on pushing knowledge, not pulling it. I wouldn’t be so quick to change all the events in a revolutionary way, though; I would rather look for opportunities to broaden the variety of methods attendees can use to learn.

    This is what better learning is all about. And better learning is what better conferences should be all about.

  • Learn. Adapt. Experiment. Repeat.

    One of the recurring themes in my discussions of different methods and practices we use in our professional lives is: understand why and how the thing works, so you can safely adjust it or substitute it with something else and get the same effect.

    A common example is stand-ups. Why are stand-ups limited to a short time (15 minutes)? Why were they intended to be done standing and not sitting? Why do we answer three standard questions? And finally, how does all that help us?

    Can you answer these questions off the top of your head?

    I know, it isn’t rocket science whatsoever. Yet I know many leaders, and even more teams, that would struggle to answer them reasonably.

    Such an understanding of the tools we use isn’t crucial only because it means you can go beyond the by-the-book approach with the methods and practices you adopt. It is also a signal that you know and use the learn-adapt-experiment-repeat pattern. And that is a game-changer in terms of improving the way you and your team work.

    Let me share a story. I had a management retreat today, which was basically dedicated to discussing a handful of topics that are important to us. During the retreat’s summary, a piece of feedback I received a couple of times concerned the method we used for finishing discussions.

    Basically, we had a Kanban board to organize the subjects to discuss, and at any given moment at most one subject was “ongoing.” Now, if anyone out of the 14 people in the room felt that a discussion wasn’t adding value anymore, or was meandering toward something totally different, they put a small sticky on the subject’s index card. Once a card had 3 stickies the discussion was over and could continue later, meaning during a break or after the retreat, among the people interested.

    My goal was simply not to see a dozen people bored to death just because there are still 2 folks willing to keep discussing something deadly important to them. At the same time I didn’t want to cut a discussion in half just because its dedicated timeslot was over; thus, no timeslots whatsoever.

    Although no one taught me the method directly, I’d lie if I said I came up with the idea. Actually, a few days ago I read Benjamin Mitchell’s post about the two hands rule – a method one can use to cut irrelevant discussions during stand-ups.

    What I learned from Benjamin’s post wasn’t a stand-up-related technique. I learned the mechanism and understood how it worked. I didn’t dismiss the idea just because I don’t regularly attend any stand-ups these days.

    Eventually, just a few days later, it came in handy. It required some changes in the details, as forcing people to keep their hands up for 20 minutes could be considered mobbing, but at its heart it is exactly the same tool.

    What happened here is that I learned something new, adapted it as needed and experimented (I didn’t know how it would go). Finally, I learned something new again. It seems I’m already at the beginning of the next iteration of the pattern. And I have a new tool in my toolbox. One that comes in handy with things I regularly do.

    I’m two steps ahead. How about you? Are you there too, or are you still following the book?

  • Learn! Or How to Get a Better Job

    A couple of days ago I had a chance to speak at a local meetup. It probably won’t come as a surprise that I was speaking on Kanban. In fact, it was a test run of a presentation I was preparing for one of the big events. The point is, only a few people showed up.

    Don’t get me wrong, I’m not complaining. Actually, such events are always win-win. The speaker gets some valuable feedback and the audience attends a session for free, one they would otherwise have to pay for. My end of the deal worked out fine – I’ve already improved the session based on the feedback I received. However, I was somewhat surprised, in a negative way, that only a few people showed up.

    Well, maybe “surprised” isn’t the right word. If you asked me, I would say that most people don’t really care to exploit chances to learn, so they wouldn’t use this one either. People, in general, don’t want to learn. They don’t, even if they state otherwise. People, again in general, are lazy. They are, even if they deny it.

    So no, I didn’t expect wild crowds, even though I believe the message about the meetup reached quite a bunch of people. I see the same pattern whenever I, or my friends, are involved in organizing local community events.

    However, since I always consider the glass half-full, I see a good side of the situation too. If you happen to be part of this small bunch of people who actually care to exploit every occasion to learn, not only do you develop yourself but you also become a sought-after employee on the job market.

    I was discussing with one of my friends how he sees himself as an engineer. His point was that he wasn’t a rock star developer. My point was sort of similar. He wasn’t a rock star developer… yet.

    What I consider one of his biggest strengths is his urge to learn. He has no problem investing a couple of hours of an evening or a weekend to attend a local community event. He does this because, first, it is a chance to learn, and second, it is an occasion to meet interesting people and exchange experiences with them.

    What he basically does is consciously work on becoming a better professional than he is right now. So if you asked me about his value on the job market, I wouldn’t answer by talking about what he knows at the moment, but about what kind of potential the guy has and how he is using it. Give me the choice between him and another developer who is very skilled but has a regular “I don’t give a damn” approach, and it would be a no-brainer for me when it comes to choosing who I want to work with.

    Sometimes I hear complaints about various trainings or presentations people attend. It wasn’t that stunningly mind-blowing, or the trainer could have been better, or two-thirds of the content wasn’t new at all, or whatever else. Now, let me stress it: in my whole life I’ve never been at a training, conference, or meetup where I wasn’t able to learn anything at all. Yes, it is true that sometimes you learn from negative examples, meaning the only thing you get is knowledge of how not to do things. But it is still a lesson, and a valuable one!

    So even though I expect people not to give a damn, I’m still surprised that it is so. If I asked all these people whether they want their careers to be just a bit better, they would agree in a second. And yet they do nothing to improve the situation they’re in.

    If I counted all the hours I’ve voluntarily spent on learning, including all the ramblings I share on this blog, it would be a hell of a lot of time. And believe me, I don’t regret a minute spent on it, even though I learned many things I don’t use at the moment. And no, no one paid me for that. It was just an investment on my side. An investment which pays off, as I’m a better professional today than I was yesterday. Or so I hope.

    This basically means that if you happen to hire me, you don’t just buy what I am today; you also get all the potential I’m striving to exploit. I look for the same thing when I hire. I look at who you can become in a couple of years, not only what you’re worth now.

    Why am I writing all this? I do it to make you move your butt, look for occasions to learn, and exploit them! Yes, I have a selfish motivation as well. Next time I do something in the local community, I want to see more faces popping up. I want to see more people who strive to learn, since it means there are more people I want to hire. And now that you ask, yes, I consider it win-win.

  • Experiment!

    I have a question for you: when was the last time you did an experiment on the work you do? I mean, if you are a developer, when did you last try something new: a new practice, a new way of doing things, maybe a new technology to deal with a problem? If you are a project manager, when did you try to do your stuff differently? Maybe it was a different way of running a retrospective, or moving a planning meeting to the cafeteria, or something completely different?

    If you don’t have the answer at hand think about it for a while. It is kind of important.

    If you think there is an expected answer, you’re right. The expected answer is “today.” I expect you to experiment all the time. I expect you to challenge the way you work constantly. I expect you to put the rules into question every now and then.

    It’s a funny observation – as I go through different questions on PMSE, I often see this pattern: “I follow this, this, and that, and I have this issue.” Every time, one of the answers is: “challenge your rules.” You can bet this kind of answer will appear in a matter of hours.

    This doesn’t come as a surprise to me. This is the lesson I get over and over again. What you know is wrong. Well, it’s wrong in the sense that it isn’t the best, or the most optimal, way of doing things. So if you want to keep your saw sharp, you need to constantly look for ways to improve your toolbox.

    Is there a better way to do it than experimenting?

    So please do one thing today: run an experiment on your toolbox, your rules, or the way you build things. Change something and see how it goes.

    Then, make it a habit.

    And if you ask what kind of experiments I did today, there were many of them: I was experimenting with our approach to training, with the tools we use to build applications, with Kanban at the portfolio level, and with team organization. And these are only the things I touched on in some way today.