Author: Pawel Brodzinski

  • Splitting Huge Tasks

    Occasionally I deal with an issue small enough that it barely deserves a full-blown blog post, yet it is hard to pack into the 140 characters of a tweet. However, when I find myself advising on such an issue for yet another time, it is a clear signal that sharing an idea of how to deal with it might be useful. So, following an experimentation mindset, I’m going to try short posts addressing these kinds of issues and see how they are received.

    A pretty common problem is splitting tasks. For example, a typical task for a team takes something between 4 hours and a couple of days. And then there is this gargantuan task that takes 3 months. Actually, 3 months, 5 days and 3 hours. It is, however, quite a coherent work item. On its merits, it does make sense to treat it as a single task.

    On a side note: for whatever reason, this happens more often in non-software development teams.

    The problem is, it heavily affects any metrics you may gather. Sometimes it affects metrics to the point where analyzing them doesn’t make much sense anymore. If you include this huge task in your metrics, they all go mad. If you don’t, you basically hide the fact that a part of the team was working on something that isn’t accounted for at all. So the question is: should you accept it and move on, or do something with the task?
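    To see how badly a single outlier skews things, here’s a tiny sketch with made-up numbers (none of these come from a real team):

```python
# Made-up cycle times, in hours, for a team's completed tasks.
typical = [4, 8, 16, 24, 40, 6, 12, 48, 20, 10]

# The gargantuan task: 3 months, 5 days and 3 hours of work time,
# assuming ~8-hour days and ~30-day months just for illustration.
gargantuan = 3 * 30 * 8 + 5 * 8 + 3

mean_without = sum(typical) / len(typical)
mean_with = (sum(typical) + gargantuan) / (len(typical) + 1)

print(f"mean cycle time without the huge task: {mean_without:.1f} h")
print(f"mean cycle time with it:               {mean_with:.1f} h")
```

    One task more than quadruples the average, which is why the metrics “go mad”: a single outlier dominates everything else.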

    I’m not orthodox about it, but I’d rather split the task into smaller ones. Usually this is the point where new issues arise; for example, the task can be split, but only into pieces so small that measuring them separately adds way too much hassle. An alternative is to group these tiny pieces into batches of a size that does make sense for the team.

    Anyway, I’d still go with splitting the task, even if the division is somewhat artificial. The knowledge you gain from the metrics is worth the effort.

    In short: when in doubt – split.

  • Radar Charts and Maturity of Kanban Implementations

    One of the outcomes of Hakan Forss’ session on the depth of Kanban practices at the Kanban Leadership Retreat was the use of radar charts to show the maturity of a Kanban implementation. The whole discussion started with the realization that different teams adopt Kanban practices in different orders, so we need a tool to somehow assess them.

    Radar charts, or spider charts, seem to be good tools for visualizing how well a team is doing. However, when you start using them, interesting things pop up.

    Coming Up with Results

    First, how exactly do you tell how mature the adoption of a specific practice is? How far along are we, on a scale from 0 to 5, with visualization? Why? What about limiting work in progress? And so on.

    One of my teams decided to describe 0 as “doing nothing” and the max as “where we think we would like to be.” With such an approach, a radar chart can be treated as a motivational poster: it shows exactly how much we still have to do with our Kanban implementation. It also means that the team aims at a moving target; as time passes they will likely improve and thus set more ambitious goals.

    There is also a drawback to this approach. Such an assessment is very subjective and very prone to gaps in knowledge. If I think that everything there is to be done about WIP limits is to set those numbers in each column on the board and avoid violating them, I will easily hit the max on the “limiting WIP” axis. Then of course I’ll award myself the Optimist of the Week and Ignorant of the Month prizes, but that’s another story.

    On a side note: I pretty much expect that someone is going to come up with some kind of a poll with a bunch of questions that do the job for you and tell you how far you are with each practice. And, similarly to the Nokia Test, I think it will be a very mixed blessing with negatives outweighing positives.

    Finding Common Results

    The second issue is about gathering collective knowledge from a team. People will likely differ in their judgment: one will say that visualization is really mature, while another will state that there’s a lot more to be done there.

    The obvious strategy is to discuss the areas where the differences are the biggest. However, it’s not a fancy flavor of planning poker so, for heaven’s sake, don’t try to make everyone agree on the same number. It is subjective after all.

    One more interesting trick is putting all the results on a single radar chart, with the min and max values forming the borders of an area. This area will tell you how your Kanban implementation is perceived.

    With such a graph, not only do you want this bagel to spread as far as possible, you also want it to be as thin as possible. The latter may be an even more important goal in the short term, as a wide spread of results means that team members understand the tool they use very differently.
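    A minimal sketch of that envelope, with hypothetical 0–5 self-assessments from three team members (practices and numbers are made up):

```python
# Hypothetical per-practice maturity scores (0-5) from three team members.
scores = {
    "visualization": [4, 5, 3],
    "limiting WIP": [2, 5, 1],
    "managing flow": [3, 3, 4],
}

# The min and max per practice form the borders of the "bagel" area.
envelope = {p: (min(s), max(s)) for p, s in scores.items()}

# A wide band means team members understand that practice very differently.
spread = {p: hi - lo for p, (lo, hi) in envelope.items()}
widest = max(spread, key=spread.get)
print(f"discuss '{widest}' first (spread of {spread[widest]})")
```

    Here “limiting WIP” would be the first area to discuss: the spread of 4 points signals very different understandings, not necessarily low maturity.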

    Comparing Results between Teams

    The third issue pops up when you compare graphs created by different teams. Let’s assume you have both issues above already solved and you have some kind of consistent way of judging the maturity of Kanban practices. It is still very likely that different teams will follow different paths to Kanban adoption, so their charts will differ. After all, this is what launched the whole discussion in the first place.

    It means, however, that you may draw very interesting conclusions from comparing the results of different teams. You don’t try to say which team is better and which needs more work. You actually launch discussions on how people are doing things and why they think they are good (or bad) at them. You enable collaborative learning.

    As a bonus, you can see patterns at a higher level. For example, people across the organization are doing pretty well with visualization, have very mixed outcomes in terms of managing flow, and are not that good when it comes to limiting WIP. This can help you focus your coaching and training efforts on specific areas.

    Besides, it is funny to see what a personal kanban maturity radar chart can look like.

    To summarize, radar charts are nice visuals to show you where you are with your Kanban adoption, but they can, and should, also be used as a communication enabler and a learning catalyst.

  • Feedback Culture

    This is a rant. I’m sorry.

    We have our mouths full of feedback. We are eager to get feedback on our work. We consider sharing feedback as a crucial part of the work of any leader. Feedback this. Feedback that.

    Yeah, that’s all true. Except we’re missing one part.

    When it comes to leaving our comfort zones, we instantly start sucking at sharing feedback. We suck big time. You don’t like how the folks from the PR team dealt with a recent initiative, right? After all, you’ve just told me that. So why don’t you just go and tell them? Brilliant, isn’t it?

    It’s pretty easy, you know. You use your mouth to construct these things called words and you build sentences out of words. And then the magic happens – you can transmit the message using sentences. Voila!

    That’s easy. Really. Just remember to be honest. Share the message in a straightforward way. Don’t judge. You will manage. I believe in you.

    Don’t get me wrong. I’m not freaking out over a single situation. I see this as a pattern. Actually, whenever I see any question regarding feedback, my default answer is “honest and straightforward.” The problem is, this answer doesn’t seem to be very popular. Beating around the bush, or simply “don’t tell them anything” types of answers, seem to be the standard behavior for many.

    So why, oh why, are you surprised that you don’t get much quality feedback? After all, you too are contributing to building this sick organization that is just afraid to share any. It’s simple: if no one shares feedback, no one receives it either. It doesn’t multiply like freaking lemmings or something.

    And while we are on this topic, well, it’s not only how you (don’t) share feedback; it’s also how you receive it. Next time someone wants to share something critical about you or your work, try this: STFU and listen. The other person has just moved their butt out of their comfort zone to tell you something they think is important. The least you can do is let them do their part. But you should do better: listen and try to learn something from it. A simple “thank you” seems proper too.

    You may even disagree with the merits of the feedback, but it isn’t some kind of odd negotiation or something. No one is trying to win a discussion with you. No one is attacking you. So spare me the drama and don’t get all defensive. It helps neither you nor the other guy.

    Most of all, it definitely does nothing good for the feedback culture you may be trying to introduce into your organization. Not to mention building trust.

    If you really want to build an open feedback culture in your company, start sharing and stop being a jerk, I mean defensive, when you receive feedback. If your organization doesn’t appreciate this, think again whether it is the right organization to be with.

    Now that you asked, yes, such an attitude means that you become vulnerable in front of your superiors, peers and colleagues. And yes, it is a crucial part of building trust. I don’t know how it is in your case but I wouldn’t like to work for an organization that is incapable of building trust. Would you?

  • Pitfalls of Kanban Series: Stalled Board

    One of the signals that something may be wrong with a Kanban implementation is when the Kanban board’s design doesn’t change over time. Of course, rapid changes of the board are more likely during the early stages of a Kanban implementation, but even in mature teams a board that looks exactly like it did a year ago is sort of a warning light. For very fresh Kanban teams I would expect the board design to differ month to month.

    Actually, a stalled board is more a symptom than a problem on its own. The root cause is likely that the team has stopped improving their process. On one hand, it is common that the potential for change is significantly higher at the beginning and diminishes over time. On the other, I’ve yet to see a perfect team that doesn’t need to improve their process at all.

    In such cases, what can one do to catalyze opportunities to discuss the board design?

    One idea that comes in very handy is to watch for situations in which a team member takes an index card to update its status and, for whatever reason, struggles to find the right place on the board to put the card. It may be because there isn’t a relevant stage in the value stream, or the work currently being done isn’t in line with the rest of the process, or a task is somehow specific, or what have you. This kind of situation is a great trigger to start a discussion on the board’s alignment with what the team really does and how the work is done.

    Another idea is to dedicate a retrospective just to discussing the board. Such a constrained retro can be a natural opportunity to look for board-related issues or improvements in this area. I’m thinking of a class of issues that might not be painful enough to be brought up as the biggest problems to solve on a regular basis; at the same time, we know that tiny changes in the board design or WIP limits can introduce tremendous changes in people’s behavior.

    There is also a bigger gun: a significant board face-lift. According to the idea that Jim Benson shared during his ACE Conference keynote, teams find it easy to describe and define about 80% of the processes they follow. The rest seems vague, and that’s always the tricky part when you think about value stream mapping. This is, by the way, totally aligned with my experience with such exercises: I’ve yet to meet a team that can define the way they work instantly and without arguing.

    Of course, introducing visualization helps to sort this out, although it’s not that rare that we fall into the trap of idealizing our flow or just getting used to whatever is on the board.

    Then you can always use the ultimate weapon and redesign your board from scratch. People will probably have in mind the board you’ve just wiped out and will mimic it to some point. Anyway, odds are that if you start the discussion from the very beginning, the moment when work items of different classes of service arrive at the team, some new insight will pop up, helping you to improve the board.

    On a side note: it is also a good moment to discuss what exactly you put on an index card. I treat it as an integral part of board design, as you will need one design for standard-sized work items and a different one for highly variable projects on a portfolio board.

    Read the whole Kanban pitfalls series.

  • Scott Berkun on Consultants and Practitioners

    Continuing the discussion on the differing perspectives of consultants and practitioners, I have asked Scott Berkun a few questions on the subject. I chose Scott because for the past few months he has been juggling both options: while publishing his next book, Mindfire: Big Ideas for Curious Minds, he spent a year and a half in something like a regular job at WordPress.com.

    Not only was I curious about Scott’s views on the subject, but I also think we can learn a lot from him, especially those of us who are considering coupling both roles. So here are a few gems of knowledge gleaned from Scott.

    Scott, you’ve recently left Automattic, where you worked for some time, and it has prompted me to ask you a few questions about your spell there. The difference between the insider and outsider, or practitioner and consultant, perspectives is something that has drawn my interest for some time now. You’ve decided to try living both lives concurrently, which gives you a unique perspective on the subject.

    Reading your blog and your tweets over time, my impression was that your enthusiasm for having a regular job while pursuing your career as a writer and a consultant was diminishing. Was that only an impression, or is there something more to it?

    The plan was always to stay at WordPress.com for about a year. It’s a great place to work and it was hard to leave. Any complaining I did was probably just to help convince myself I needed to leave, which was hard to do as I enjoyed it so much. I stayed there for 18 months, 6 months longer than I’d planned.

    What was the biggest challenge of having two so different careers at the same time?

    Having two careers sucks. I don’t recommend it. My success in writing depends on full commitment. I can write books because I have no excuses not to. I succeed by focus. It’s the primary thing I’m supposed to do. Having two jobs divided my energy and I don’t have the discipline needed to make up for the gap. It also changed my free time. I noticed immediately the amount of reading I did dropped dramatically. I used to read about a book every week or so. That dropped to a book every few months. Having two jobs meant my brain demanded idle time which came at the expense of reading. I felt like I was working all the time, which isn’t healthy for anyone.

    And what was your biggest lesson from this time?

    The next book is about my experience working at WordPress.com and what I learned will be well documented there. Professionally I learned creating culture is the most powerful thing a leader does, and WordPress.com has done that exceedingly well.

    Do you think that coupling consultancy and a regular job is doable in the long run?

    I don’t know why anyone would want to work that much in the same field, honestly. For anyone who thinks I’m good at managing teams, or writing books, a huge reason why is the other interests and experiences I’ve had in my life that have nothing to do with leadership or software or writing.

    Do you plan to get another job at some time in future again? Why?

    As long as I’m paid to speak to people who are leaders and managers, it’s wise for me to periodically go back to working in an organization where I’m leading and managing people. It forced me to test how much of my own advice I actually practice, and refreshed my memory on what the real challenges are. Any guru or expert who hasn’t done the thing they’re lecturing others about in years should have their credibility questioned. I figure once a decade or so it’s a necessary exercise for any guru with integrity.

    Why should we consider moving to (or staying in) a consultancy role?

    When I first quit to be on my own I did a lot of consulting. As soon as the books started doing well and I had more requests to speak, I did less and less of it. I do it rarely now. Consultancy can be liberating as you are called in to play a specific role on a short time frame. If you like playing that specific role and like change (since who you work with changes with each new project), consultancy can make you happy. It pays well if you are well known enough to find clients.

    Why should we consider moving to (or staying in) a regular job?

    Consultants rarely have much impact. Advice is easy to ignore. Consulting can be frustrating and empty for the consultant, even if you are paid well. Anyone serious about ideas and making great things knows they have to have their own skin in the game to achieve a dream. You can’t do that from the consulting sidelines. In a regular job at least there is the pretense of ownership. Everyone should be an entrepreneur at least once in their life: you can only discover what you are capable of, or not, when you free yourself from the constraints of other people.

  • Practitioner versus Consultant

    Whenever I’m acting as a coach, a facilitator or a consultant for a team, there’s one thing that strikes me every time: how much being a practitioner helps me perform in the role. And when I say a practitioner, I mean doing work similar to what the teams do on a daily basis, not only coaching, consulting or facilitating. It’s like doing my regular stuff, except it is a bit different. But then, I’m solving similar problems every day, am I not?

    Personally, I could imagine myself being a full-time consultant, although I believe I’d lose something this way. On one hand, full-time consultants are exposed to a wider range of environments as, well, this is what they do: they visit different organizations and work with them. On the other, consultants come, consultants go; most of the time they don’t hang around to see the final results of their work. After all, it isn’t their responsibility to make the change stick.

    However, when I think about the consultant versus practitioner perspective, the biggest thing that still keeps me on the practitioner’s side of the fence is the fear of disconnection. At this moment, whenever I’m “selling” you something, it has likely been verified in the organization I work (or have worked) for. Been there, seen that, done that. You can trust me.

    It’s not that I read some trendy book or that my company is selling training in some method. It’s not that I spent a lot of time at conferences listening to all those published authors, thought leaders and whatnot who are extremely knowledgeable but are also long gone from real jobs, you know, the ones that produce something tangible.

    I really touch the crap. And live with it. So whenever I’m wrong there’s no one else but me to clean up the mess.

    So what I’m thinking about here are two things. One question is for the consultants reading the blog (I know there are quite a few of you): how are you coping with the issue of disconnection? Or maybe it is just a non-issue?

    Another question would be for those of you who are considering hiring some help to sort things out in your organization: would you prefer a consultant or a practitioner and why?

    I’d be glad to hear as many voices as possible, so if you are considering commenting on the post but aren’t really sure, please do; you’ll earn my infinite gratitude. And you definitely want it, because it is exchangeable for a beer when you meet me.

  • Cadences and Iterations

    Often, when I’m working with teams that are familiar with Scrum, they find the concept of cadence new. This is surprising, as they are already using cadences; they just do it in a specific, fixed way.

    Let’s start with what most Scrum teams do, or should do. They build their products in sprints, or iterations. At the beginning of each sprint they have a planning session: they groom the backlog, choose the stories that will be built in the iteration, estimate them, etc. In short, they replenish their to-do queue.

    When the sprint ends, the team deploys and demos their product to the client, or to a stakeholder who acts as the client. Whoever is the target for the team’s product knows they can expect a new version after each timebox. This way there is a regular frequency of releases.

    Finally, at the very end of the iteration, the team runs a retrospective to discuss issues and improve. They summarize what happened during the sprint and set goals for the next one. Again, there is a rhythm of retrospectives.

    Then, the next sprint starts with a planning session and the whole cycle starts again.

    It looks like this.

    All the practices – planning, releases and retros – have exactly the same rhythm, set by the length of the timebox. A cadence is exactly this rhythm.

    However, you can think about each of these practices separately. Some of us have gotten used to the frequency of planning, releases and retrospectives being exactly the same, but when you think about it, that is just an artificial constraint introduced by Scrum.

    Would it be possible to plan every second iteration? Well, yes, why not? If someone can tell in advance what they want to get, it shouldn’t be a problem.

    Would it be a problem if we planned more often, then? For many Scrum teams it would. However, what would happen if we planned too few stories for the iteration and were done halfway through the sprint? We’d probably pull more stories from the backlog. Isn’t that planning? In other words, as long as we respect the boundaries set by the team, wouldn’t it be possible to plan more frequently?

    You can ask the same questions about the other practices. One thing I hear repeatedly is that more mature teams change the frequency of retrospectives. They just don’t need them at the end of every single sprint. Another strategy is ad-hoc retros, which usually makes them more frequent than the timeboxes. The same goes for continuous delivery, which has you deploying virtually all the time.

    And this is where the concept of cadence comes in handy. Instead of talking about a timebox, which fixes the timing of planning, releases and retrospectives, you start talking about a cadence of planning, a cadence of releasing and a cadence of retrospectives separately.

    At the beginning you will likely start with what you have at the moment, meaning that the frequencies are identical and synchronized. Bearing in mind that these are different things, you can freely tweak them in a way that makes sense in your context.

    If you have the comfort of a product owner or product manager on-site, why should you replenish your to-do queue only once per sprint? Wouldn’t it be better if the team worked on smaller batches of work, delivering value faster and shortening their feedback loops?

    On the other hand, if the team seems mature, the frequency of retros can be loosened a bit, especially if you see little value coming out of such frequent retros.

    At the same time, releases can be decided ad hoc, based on the value of the stories the team has built, or on the client’s readiness to verify what has been built, or on the weather in California yesterday.

    Depending on the policies you choose to set cadences for your practices, it may look like this.

    Or completely different. Because it’s going to be adjusted to the specific way of working of your team.

    Anyway, it is likely that the ideal cadences of planning, releases and retrospectives aren’t exactly the same, so keeping all of them identical (and calling it an iteration or a timebox) is probably suboptimal.

    What’s more, when thinking about cadences, you don’t necessarily need them to be fixed. As long as they are somewhat predictable, they can totally be ad hoc. Actually, in some cases it is way better to have a specific practice triggered on an event basis rather than a time basis. For example, a good moment to replenish the to-do queue is when it gets empty; a good moment to release is when the product is ready, which may even be a few times a day; etc.
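    The event-triggered replenishment idea can be sketched in a few lines. This is only a toy model, assuming a hypothetical team that finishes roughly one story a day and pulls batches of three:

```python
from collections import deque

queue = deque()
replenishment_days = []

def replenish(day):
    """Event-triggered: pull a small batch from the backlog."""
    queue.extend(["story"] * 3)
    replenishment_days.append(day)

for day in range(1, 15):
    if not queue:        # the trigger: the to-do queue ran dry
        replenish(day)
    queue.popleft()      # the team finishes about one story a day

print(replenishment_days)  # → [1, 4, 7, 10, 13]
```

    The cadence that emerges (every three days here) isn’t fixed in advance; it follows from the batch size and the pace of work, yet it stays predictable.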

    Note: don’t treat this as a rant against iterations. There are good reasons to use them, especially when a team lacks discipline with specific practices, be it running retros or deploying regularly. If sprints work for you, that’s great. Although even then, running a little experiment wouldn’t hurt, would it?

  • Trap of Estimation

    So we had this project which was supposed to end by the end of July. Unfortunately, the simple burnup chart we used to track progress looked rather grim: it was consistently showing the very beginning of September as the completion date. A month late.

    Suddenly, one day it started looking almost perfectly – end of July! Yay!

    Wait, wait, wait. What? I mean, what the hell happened on a single day that we suddenly recovered from a month-long slip? Something stinks here.

    After a while of digging we came up with the answer. The team had invested some time in re-estimating all the work, including work that was already done. Now, how the heck does that affect a burnup chart?

    It turns out that the chart’s y axis, where the work items were shown, tracked the sum of estimates and not just the number of tasks. That means changing estimates in retrospect affects the scope and the percent complete of the project. It also means that such changes can affect the predicted completion date.

    This is called creative accounting and some people went to jail because of that, you know.
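    The arithmetic behind the trick is easy to sketch; all the numbers below are hypothetical:

```python
def projected_duration(done_effort, total_effort, days_elapsed):
    """Naive burnup projection: total scope divided by the burn rate."""
    burn_rate = done_effort / days_elapsed   # effort completed per day
    return total_effort / burn_rate          # projected project length in days

# 60 days in: 200 h done out of an estimated 500 h of total scope.
before = projected_duration(done_effort=200, total_effort=500, days_elapsed=60)

# Re-estimate the finished work from 200 h up to 260 h "because now we
# know better"; the total scope grows by the same 60 h.
after = projected_duration(done_effort=260, total_effort=560, days_elapsed=60)

print(round(before), "->", round(after))  # → 150 -> 129
```

    Three weeks of slippage vanish from the chart, while the amount of remaining work is exactly the same.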

    My first question is whether such re-estimation changes the real status of a project: how much functionality is ready, what is done, how many bugs are fixed or lines of code written, or any other creative, crazy or dumb measure you can come up with to say how much work has been done. Or does it change how much more work there is to be done?

    No! Double no, actually. It’s just a trick to tell us we aren’t screwed up that much. Actually, I accept that we might have been OK in the first place and the chart was wrong. That would be awesome. But fixing the chart this way, one, doesn’t change the status of the work in any way and, two, just covers up the real issue, making it harder to address.

    What is the real issue then? Well, there are a couple of them. First, using time-based estimates to show how much work is to be done is asking for trouble. Unless you are a freaking magician and can get your estimates right 5 months before you even start working on a task, that is. If you’re just a plain human, like me, and you assume your estimates are wrong, using them as the basis for tracking project progress seems sort of dumb to me.

    It would be much better to count features or, if they vary a lot in size, to count the weight of features. Say size S is 3 times smaller than M, which in turn is 3 times smaller than L, or something like that. By the way, as you gather historical data you can pretty much tune these factors, learning from past facts.
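    The weighting scheme can be sketched directly. The 1/3/9 factors below just follow the “three times smaller” rule of thumb from the paragraph above; the features themselves are made up, and you’d tune the factors against your own historical data:

```python
# Rough weights: an M is 3x an S, an L is 3x an M.
WEIGHTS = {"S": 1, "M": 3, "L": 9}

features = ["S", "M", "M", "L", "S", "S", "M", "L"]  # the project's scope
done = features[:5]                                  # what's finished so far

total = sum(WEIGHTS[f] for f in features)
burned = sum(WEIGHTS[f] for f in done)
print(f"{burned}/{total} of the weight done ({100 * burned / total:.0f}%)")
```

    Unlike hour-based estimates, the sizes are cheap to assign up front and there is far less temptation to “fix” them in retrospect.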

    Second, even if you decided to go with estimates to judge how much work is to be done, what makes you think that fixing estimates in retrospect moves you forward an inch in terms of the next project you’re going to run? Do you expect to know exactly, in advance, how much time it will take to build features in future projects? Because that is exactly the kind of knowledge you’re applying now to “fix” your estimates in the current project.

    I would much prefer a discussion on how to judge the scope better at the beginning of a project, because that is going to be your benchmark. For this, precise estimates are almost useless. I will likely be pretty close when telling how many features we have to build. It’s going to be trickier to say which of them will be small, medium or large. But I refuse to guess how many freaking hours each and every feature will take to build, because such an effort is utterly futile. It just so happens that I’ve forgotten to bring my damn crystal ball, so sorry, that’s not going to work.

    This is how estimation leads us into a trap. Knowing exactly how much time each work item has taken, it is easy to track progress in retrospect. The average 8-year-old could connect the dots. However, unless you’re a bloody superhero who will have such data at the beginning of your next project, don’t treat it as a viable method of tracking progress.

    Use whatever data will be available in high quality at the beginning of a project: the number of features, maybe sized in some way if your team has some experience in sizing and you understand the variability of the work.

    Anyway, whatever you do, just don’t change the benchmark in retrospect, as it’s going to mess up your data and cover up the real problem, which is that you should improve the way you set the benchmark in the first place.

    By the way: if you happen to work on a time and materials basis, you can safely ignore this whole post, you lucky bastard. Actually, I doubt you even made it this far anyway.

  • Visualization Should Be Alive

    I had a great discussion recently. The starting point was information on a Kanban board: based on my knowledge it wasn’t up to date and, as it turned out later, not without a reason. The way the team’s situation was visualized was sort of tricky.

    We used the situation to discuss in detail what was happening and how we should visualize it. Anyway, one thing struck me in retrospect: the less a visualization changes, the fewer chances we have to start such discussions.

    A good (or rather a bad) example is my portfolio Kanban board. Considering that I try to visualize projects of different sizes there, it’s not uncommon that, in the long run, there are few changes on the board. On one hand, this is acceptable and even expected. On the other, there aren’t enough “call to action” situations when people are expected to do something, like moving stickies. The situations that trigger important discussions.

    This is also why I prefer Kanban boards built of rather small work items over those filled with huge tasks. The latter just aren’t that lively. They tend to stall.

    And when visualization stalls, its value diminishes. People tend to develop a specific kind of blindness. They start treating their information radiator as just another piece of furniture, which results in the board becoming exactly that: a piece of furniture. Not useful for improving the team’s work.

    So remember this: as long as you expect visualization to generate value, it should live. If it doesn’t, think about how you can make it livelier. You won’t regret it.

  • On Feedback

    I’m not a native English speaker, which basically means my English is far from perfect. Not a surprise, eh? Anyway, it sometimes happens that one of the native speakers I’m talking with corrects me or specifically points out one of the mistakes I keep making.

    And I’m really thankful for that.

    I’m thankful because most of the time such feedback happens instantly, so I can refer back to the mistake and at least try to correct it somehow.

    This is what happened recently when one of my friends pointed out one of the pronunciation mistakes I keep making. It worked. It did because the feedback loop was short. It worked even better because it was critical feedback. I didn’t get praise for all the words I pronounce correctly. It was just a short message: “you’re doing this wrong.”

    Of course, it is up to me to decide whether I want to do something about it. Nevertheless, I can hardly think of any positive feedback I could receive that would be as helpful.

    When you think about it, this contradicts what we often hear about delivering feedback. It isn’t uncommon that we are taught to focus on the positives, because this is how we “build” people rather than “destroy” them. What’s more, delivering positive feedback is way more pleasant, and for most people easier as well. It is tempting to avoid the critical part.

    While we are on feedback loops, I have one obvious association. Agile at its core is about feedback loops, and short ones. We have iterations so we deliver working software fast and receive feedback from clients. Or even better, we have a steady flow, so we don’t wait till the end of a sprint to get that knowledge about the very next feature we complete. We build (and possibly deploy too) continuously, so we know whether what we’ve built even works. And of course we have unit tests that tell us how our code behaves against predefined criteria.

    It is all about feedback loops, right?

    Of course, we expect to learn that whatever we’ve built is the thing the clients wanted, our code hasn’t broken the build, and all the tests are green. However, on occasion, something will be less than perfect. A feature will not work exactly the way a client expected, a build will explode, a bunch of tests will go red, or the pronunciation of a word will be creepy.

    Are we offended by this feedback?

    Didn’t think so. What’s more, it helps us improve. It is timely, specific and… critical. So why, oh why, are we so reluctant to share critical feedback?

    It would be a way more harmful strategy to wait long before closing a feedback loop, no matter what the feedback is. Would it really tell you anything if I pointed out a two-line change in code you made 4 months ago that broke a couple of unit tests? Meaningless, isn’t it? By the way: this is why I don’t fancy performance reviews, even though I see the point of doing them in specific environments.

    Whenever you think of sharing feedback with people, think about the feedback you get from your build process or tests: it doesn’t matter that much whether it is positive or critical; what makes the difference is that it is quick and factual.

    You can hardly go wrong with timely and factual feedback, no matter whether it is supportive or not.