Pawel Brodzinski on Software Project Management

The Kanban Story: Coarse-Grained Estimation

Recently I told you about the screwed-up way we chose to measure lead time in our team. Unfortunately, I also promised to share some insight into how we use lead times to get (hopefully reliable) estimates. So here it goes.

Simplifying things a bit (but only a bit): we measure development and deployment time and call it lead time (and don’t feel bad about that at all). So how do I answer our customers when they ask when something will be ready?

That’s pretty easy. If we’re talking about a single feature and there is no other absolutely-top-priority task, I take a look at the board, track down the developer who will be first to complete his current task, and discuss with him when he expects to finish. Then I know when we can start working on the new, super-important feature ordered by the client. Then it’s enough to add two weeks (our average lead time) and make some effort to look oh-so-tired, as if I’d just completed the terribly difficult task of estimation. Oh, and of course I need to tell the customer when they’re going to get the darn feature too.

This, however, happens pretty rarely. We try to keep our MMFs (Minimal Marketable Features) true to their name, which means they are usually small. This also means the situation described above, where a client wants just one feature from us, is pretty much non-existent. That’s why you may have noticed I didn’t take the size of the feature into consideration in that scenario. In real life we usually talk about bigger pieces of functionality. What we do then is split the scope into small MMFs and use two magic parameters to get our result.

One of the parameters you already know – the average lead time. However, one more is needed. It takes 13 days for a feature to get from the left side of the board to the right, but how many features are on the board at the same time? I took a crystal orb and it told me that on average we have 4.5 MMFs on the board. OK, I actually did a simple analysis of our completed MMFs over a longer period and got this result, but doesn’t a crystal orb sound so much better?

Now, the trick I call math. On average we can do 4.5 MMFs in 13 days. Since I have, let’s say, 10 features on my plate, I have to queue them. If I worked with iterations it would take 2.2 iterations (10/4.5), which basically means 3 iterations. But since we don’t use time-boxing, I can take 2.2, multiply it by my 13-day lead time, and get something around 30 days. Now, I don’t start with an empty board, so I have to add some time to allow my team to finish their current tasks. On average that would be half of the lead time, so we should be ready in, say, 37 days.
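
For the curious, here is the same arithmetic as a minimal Python sketch. The numbers are the ones from this post; the helper function and its name are made up for illustration, not part of any tool we actually use.

    # Coarse-grained estimate: push N features through a board with a known
    # average lead time and a known average number of MMFs in flight.
    def estimate_days(features, avg_lead_time_days, avg_wip):
        board_passes = features / avg_wip               # 10 / 4.5 ~= 2.2
        queue_days = board_passes * avg_lead_time_days  # ~29 days
        warmup_days = avg_lead_time_days / 2            # the board isn't empty at start
        return queue_days + warmup_days

    print(estimate_days(10, 13, 4.5))  # ~35.4; rounding up along the way gives ~37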

And yes, this is a rough estimate, so I’d probably go with 40 days to avoid being blamed for delivering an estimate which looked precise (37 looks very precise) but was just coarse-grained.

That’s basically what I went through before telling my CEO we’re going to complete the management site for the product we work on in 3 months. Well, actually I haven’t told him yet, but 3 months it is. No less, no more.

Although this post may look loosely coupled with the Kanban Story, it definitely belongs there. So go read the whole story.

The Kanban Story: Measuring Lead Time

During the AgileCE conference I had a discussion with Robert Dempsey about measuring lead time. I had never really thought much about the way we count lead time in our team, and the talk with Robert triggered some doubts.

What We Measure

As you already know, on our Kanban board we have a backlog, a todo queue, several steps describing our development and deployment process, and finally a done station.

OK, so when do we stamp the starting date for a feature? We do it when the card goes from the todo queue into the design station, which is the very first column in our development process.

When do we stamp the ending date then? You may take your best guess and, um… you’ll be wrong. No, not when the sticky note is moved to the done column. We actually mark the ending date when a feature makes its way to the live column, which is the third station from the right.

And this is different from what you may have heard from the Kanban gurus out there. Don’t blame me – I’ve already told you thought-leaders don’t know it all.

What we measure is the time which passes from the moment we start actual work on a feature to the moment it goes live.
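
If you wanted to automate that bookkeeping, it could look something like this minimal Python sketch. The column names match our board; the Card class and its methods are made up for illustration.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Card:
        moves: dict = field(default_factory=dict)  # first entry date per column

        def move_to(self, column, day):
            self.moves.setdefault(column, day)

        def lead_time_days(self):
            # Start stamp: entering "design". End stamp: entering "live".
            return (self.moves["live"] - self.moves["design"]).days

    card = Card()
    card.move_to("design", date(2010, 5, 3))    # actual work starts
    card.move_to("testing", date(2010, 5, 12))  # intermediate moves don't matter
    card.move_to("live", date(2010, 5, 17))     # the feature goes live
    print(card.lead_time_days())  # 14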

What We Don’t Measure

What is left out then? The first and most important thing is the time a feature spends in the todo queue waiting for some developer to become free and start working on it. If you were trained by the father of Kanban – David Anderson – you’ve probably heard something different, but stay with me, I have a good explanation. Or so I guess.

Another thing left outside is the last part of our process. There is a documentation station where we (surprise, surprise) update documentation. This is done after pushing the new version to production.

It looks like we cut something off both ends to make our lead times look better, doesn’t it? Anyway, what we gather as lead time doesn’t really describe the time which passes from the moment of the decision to build a feature to the moment it is done-done. Thus another question arises.

Why, Oh Why?

The way we measure lead time came about naturally, but the chat with Robert forced me to make up some explanation to justify our practice.

Time Spent in Todo Queue

We left the time spent in the todo queue out basically because the content of this station changes pretty often. Sometimes a feature lives there just for a day or so, only to go back to the backlog when priorities change. And believe me, they do change every now and then. Sometimes a feature stays in the todo queue for a longer time, as it keeps being pushed down to second or third place because, well, priorities change.

There is another reason too. The basic argument for adding time spent in the todo queue to lead time is that you should be able to tell your customer how long it will take from day 0 (when they tell you they want the feature) to the moment they get it in production. It is pretty rare that developers are able to start working on a new feature immediately, so it is natural that some delay appears while a feature waits for a free developer.

I’m not convinced though. Actually, a lot depends on additional circumstances. The feature the client is asking for may be the only high-priority thing to do at the moment, but that’s pretty unlikely. If the client asks for just one feature, lead time will be different than if they asked for a list of ten features. If you have a limit of 3 in the todo queue, you would need to put 7 features in the backlog anyway, and the time they spend in the backlog won’t be measured.

If you have a few clients and need to balance your workload among them, it becomes even more complicated, since the product owner (or whoever your priority-setter is) has to decide which feature can be put at the top of the todo queue, which can occupy second or third place, and which has to go into the backlog.

Basically, in any situation but the simplest, measuring time spent in the todo queue wouldn’t help us much, and that’s why we decided to exclude it from lead time.

Time Spent on Documentation

With documentation the situation is a bit different. Documentation isn’t a real part of our product. We create it for internal reasons – to make the life of our sysadmin easier and to avoid problems if he were hit by a bus. This means that, from the client’s perspective, our product is complete as soon as it goes live. Even though we have to do some housekeeping later, it doesn’t affect our ability to deliver.

Theoretically it might be a problem if the documentation phase were time-consuming and stole time we’d prefer to spend on something else, namely deployment or testing. However, looking at the team’s track record, it isn’t a problem, so we can safely throw documentation out of lead time.

The next thing is using lead time to create estimates, but that’s a subject for another post.

If you liked this post you should like the whole Kanban Story too.

Why We Don’t Write Test Cases Anymore

Almost a year ago I shared advice to use test cases. Not because they are crucial during testing or dramatically improve the quality of the product (they aren’t), but because of the value you get when you create them.

A confession (and yes, you’d have guessed it anyway if you read the title): we don’t write test cases anymore.

We stopped using them, and the reason isn’t on the list of shortcomings of test cases. Actually, I was aware of those shortcomings a year ago and was all “test cases are great” anyway. What has changed then?

We dropped test cases as a side effect of implementing Kanban, though you can perfectly well use both if you like. In our case, one of the effects of switching to Kanban was that the pieces of functionality pushed (pulled, actually) into development became smaller. Before the switch we had pretty big features which were split into several (8-15) detailed user stories. After the switch we have much smaller features, which would make 2 or 3 detailed user stories if we hadn’t dropped writing user stories altogether.

And the reason for making features smaller was simple – smaller features, smoother and more flexible workflow.

Initially we attached test cases to features, not user stories, because pretty often one testing flow went through a few different user stories. I told you they were detailed. When the standard feature size went down, we realized there was much less value in preparing test cases.

Once again: the main value of creating test cases is thinking about specific usage scenarios and looking for places forgotten during design. The more complex the feature, the bigger the chances something is screwed up. With small features, test cases lost much of their value for us, since we could locate most problems instantly and fix them without any additional technique. And yet the effort needed to create and maintain test cases was still significant. So we dropped the practice.

It looks like my advice is: make your features smaller, and then you’ll be able to drop user stories and test cases. And I must say it does make sense, at least for me. What do you think?

Which Engineering Practices You Should Use

XP is a “software development through the eyes of an engineer” kind of methodology. It focuses heavily on engineering practices.

On the contrary, neither Scrum nor Kanban seems to care much about best software development practices. But wait – if you read a bit about Kanban, you’ll quickly find advice to focus on your engineering practices too, as Kanban (or Scrum) alone is not enough.

Actually, I can’t recall any project management approach which says: “when it comes to code, do whatever – this whole programming thing doesn’t really matter.”

So we’re back here again – a set of best software development practices is important. Yet there are plenty of them, so how do you choose the right ones?

You may choose the set of tools you believe is most valuable. However, if you don’t have the comfort of choosing the toolbox first and then looking for people who are familiar with the tools (or at least willing to learn how to wield them), you’re likely to fail.

Every change is difficult, and developers tend to be pretty stubborn. Yes, they will do this whole code review thing if they really have to – what’s the big deal anyway – but don’t expect decent results unless developers believe it is a valuable technique. They will probably hate it as much as filling in data in a time tracking app, which isn’t even close to what you wanted to achieve, right?

And this brings me to another approach: let engineers choose which engineering practices they want to employ. Let them argue. Let them look for consensus. Help them facilitate the discussion if necessary, but don’t enforce any specific technique. Throw in a few ideas, but if they don’t catch on, don’t reach for your magic power: “I can force you to do so.” If you’re a team leader or (even worse) a manager, it’s not you who will be doing this darn thing every single day for the next couple of years, so just shut up, OK?

The best set of engineering practices is the one which will actually be adopted by engineers. And yes, this means it will change over time. The more mature the team, the more practices people are able to adopt.

The same rule works in other areas too. Product management? Well, don’t you have a product owner or something? Let her decide. Testing procedures? Shouldn’t you agree to whatever your QA guys want?

When it comes to a discussion on standards, a manager should take a step back and let the decision be made by the people who will be affected.

There’s one trick here, however. If you happen to work with inexperienced or just average people, the consensus may be “let’s just hack some code instead of wasting time on this stuff – we’re Cowboy Coding Wizards and nothing can beat our code.” But then you have a bigger problem than deciding which best practices your team should use. You’d better start evangelizing your team a bit or look for another job, whichever looks easier.

There’s another trick too. What do you do if you have hundreds or thousands of developers? Well, different toolboxes should emerge in different teams, and it would be pretty stupid to try to standardize all of them. “What if nothing emerges despite multiple teams working on software development?” you may ask. Well, running away while screaming would be a pretty good option there, I guess.

You didn’t really expect to see The Big List of the Best Engineering Practices Every Team Should Adopt here, did you?

Trust Isn’t Measurable

I have a question for you. And yes, this is one of those dumb black-or-white questions which don’t take into consideration that the world is just gray.

If you had to choose between a vendor you trust more and one which costs less, what would your choice be?

I pretty much expect most of us would say we’d choose the trusted one. However, what I see every day is people doing the opposite. They tend to base their decisions heavily on price.

Of course the question is flawed, since it assumes everything else is equal, which is never true. However, the message I’m trying to send here is that, despite what we say, we tend to make our decisions based on things we are able to measure. We can easily say this offer is $10,000 cheaper than the other; we can easily say this schedule is a month shorter than that one, etc.

Unfortunately, we can’t say that our trust in company A is at 5 and in company B at 7 (whatever 5 and 7 mean). Personally, I would probably be able to state that I trust one vendor somewhat more than another, but that would be totally personal, and your opinion about those companies would likely differ a lot. And even if we both agreed, we would have a hard time trying to describe what exactly “somewhat more trust” means and why it is worth ten thousand dollars more to our decision-makers.

And that’s why I’m not really surprised we tend to act differently than we say we would. The reason is simple.

Trust isn’t measurable.

Every time we face the task of comparing a few things, we tend to rely on aspects we can measure, and that’s where trust falls short.

Luckily, sometimes we are able to forget about this whole comparison thing and decide we just want to do business with a trusted partner. Even if they would turn out more expensive if we took the effort to compare their offer to others – which we don’t do anyway because, well, we decided to go with these trusted folks in the first place.

With trust in place, business relationships tend to be significantly better. And yes, I can explain it. More trust means more transparency. More transparency means more information shared. More information shared means better knowledge of the situation. Better knowledge of the situation means better planning. Better planning means better outcomes. And better outcomes usually strengthen the business relationship.

I would choose trust over price. If I stated I’d do it every single time, I would be lying (actually, I have), but when it’s my own call, or I’m strong enough to defend the decision, trust trumps price.

Should You Encourage People to Learn?

A very interesting discussion followed one of my recent posts, about people not willing to learn. There were a few different threads there, but the one brought up by David Moran definitely deserves its own post.

David pointed out that it is a manager’s responsibility to create learning opportunities and incentives for people to exploit them.

At first I wanted to agree with that. But after a while I started going through the different teams and people I’ve worked with. I recalled multiple situations when opportunities were just waiting, but somehow barely anyone was willing to exploit them. The rest preferred to do nothing.

I believe most of the time the problem is not a lack of opportunities but a lack of will. Now the question is: should a manager or a leader create incentives for the people around them? If so, what kind of incentives should they be?

First of all, I don’t believe in any kind of extrinsic incentive aimed at encouraging people to learn. If you set a certification or a passed exam as a prerequisite for promotion, people will get the certification just to get the promotion. They won’t treat it as a chance to learn but as one more task on their ‘getting promoted’ checklist. You get what you measure. If you measure the number of certificates, you will get a lot of them.

The results are even worse when you create a negative incentive, i.e. you don’t get the bonus money you’d already earned unless you submit your monthly article to the knowledge base (seen that). What you get then, in the majority of cases, is just a load of crap which looks a bit like a knowledge base article. After all, no one will read it anyway, so why bother?

What options do you have then? Well, you can simply talk with people, encouraging them to learn. “You may find this conference interesting.” “Taking a language course would be great for you.” “I’d appreciate that certification.” Unfortunately, this usually works with people who are self-learners in the first place and don’t really need an incentive – the opportunity is enough (and they probably find opportunities by themselves anyway). The rest will most likely agree with you but still do nothing.

You may of course promote self-learners over the rest, and most of us probably do, since people who feel an urge to learn are generally considered great professionals. Unfortunately, this mechanism isn’t completely obvious and is pretty hard to measure (how would you measure a self-learning attitude?), so its educational value is close to zero.

Coming back to the point: I don’t think it is a manager’s responsibility to build incentives for people to learn. I think the role of a leader ends somewhere between supporting everyone’s efforts to learn and creating opportunities. Besides, if learning is enforced, it won’t build any significant value.

And yes, it is a manager’s role to have a knowledgeable and ever-learning team, but forcing people to learn is neither the only nor the best available approach.

The Kanban Story: Kanban Boosters

During my talk at AgileCE I mentioned three things as the biggest Kanban boosters in our case: co-location, a no-meeting culture, and best engineering practices.

One of the comments I heard about this part was: “Pawel, these things aren’t Kanban-related – they would work in any environment.”

Well, I’ve never said they’re exclusive to Kanban. My point is: Kanban is a pretty simple approach – it really tells you to do just a few simple things, leaving most other aspects open. This means you may (and should) get some help from other techniques and practices which set additional constraints or organize other areas of your process. And the boosters I mentioned work exactly this way.

Co-location allows you to keep your Kanban process as simple as possible. You don’t need any software to visualize the current state of the project – an old-school, hardware whiteboard is enough. It also helps team members exchange information, which is always important, but with minimal formal information exchange in place it plays an even more crucial role.

A no-meeting culture brings a “do it now” attitude and saves a lot of time usually wasted on meetings. And we discuss things more often than we would otherwise, because launching a discussion is totally easy: you just start talking.

Best engineering practices help us keep code quality under control. That’s something basically omitted by Kanban itself, so you need other tools to deal with it.

Now, you can perfectly well take your Scrum (or whatever kind of) team and use the same techniques, and it would most likely work. Oh, it would – as long as your team isn’t too big (co-location and a no-meeting culture don’t scale up very well) and you don’t have a list of engineering practices enforced by your approach anyway (as in XP).

So no, these concepts aren’t exclusive to Kanban. They just work in specific environments. Yours either fits or it doesn’t. It doesn’t really matter whether your process name starts with S or K or X or P or whichever letter sponsors your day.

After all, when I think about Kanban I don’t isolate the method from the rest of our environment – that would be stupid. If Kanban works for us, it is because the whole setup is good enough, not just its Kanban part. And these boosters are a part of the story.

Read the whole Kanban Story.

Why We Don’t Write User Stories Anymore

There was a time when we wrote user stories to describe requirements. I’d say they worked fairly well for us. But we don’t do it anymore.

We used user stories as a technique for describing bigger chunks of functionality. There would be one bigger sub-project or module with more than 10 user stories attached (usually closer to 20) and a handful of non-functional requirements. During development we often worked through several stories at once, as the technical design didn’t map directly to the stories. The stories were more of an input to a design session and a base for test cases than stand-alone bites of functionality.

Then we switched to Kanban. One of the consequences was that we reduced the size of the average feature going into development. It no longer had 15 stories attached, but it wasn’t a single-story task either. If we were still writing user stories for each Minimal Marketable Feature, we would probably have a few of them. My guess is 2 or 3 most of the time.

At this level, stories become pretty artificial. I mean, if you think about 2 stories connected with one feature, i.e. the administrator can configure this magic widget and the user can use this magic widget to do, well, the magic, you can pretty much tell these stories intuitively in your head. Writing them down becomes overkill.

Besides that, I think the often-cited role of user stories in making usage scenarios completely clear is overrated. If you can talk with developers in a language closer to the code – and a functionality description is much closer to the code than a user story – you’ll be better understood. The standard problem here was that the functionality description wasn’t precise and often became disconnected from the usage scenarios.

The answer to this problem is: make features as small as possible (but only as small as they still make a difference). Small features are easy to define, easy to understand (even for a developer), and easy to chew. It is pretty hard to screw them up.

There’s one more reason why I don’t consider user stories a must-have. If you happen to create software which will be used by other developers, or administrators at best – like some magic box with a lot of APIs and a command line interface as the only UI – you know what I’m talking about. If you write stories for this kind of software, you end up with a bunch of “as a developer I want to call the magic function which does the magic” stories. Doesn’t an API specification make more sense?
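
To make the contrast concrete, here is a sketch of what I mean. The widget and every name in it are made up for illustration:

    # The user-story version: "As a developer I want to call the magic
    # function which does the magic." Hard to build or test against.
    # Below, the API-spec version of the same requirement (names are hypothetical):

    def configure_widget(widget_id, options):
        """Store and return the effective configuration for a widget.

        Raises ValueError if widget_id is empty.
        """
        if not widget_id:
            raise ValueError("widget_id must be non-empty")
        # A real system would persist this somewhere; here we just echo it back.
        return {"widget_id": widget_id, **options}

    print(configure_widget("magic-widget", {"color": "blue"}))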

I’m not saying user stories are bad. They aren’t. But they don’t add value all the time and in every situation. This is just another practice which should be used only as long as it makes sense in your specific situation.

People Don’t Want to Learn

I attended a few meetings recently. They all had one thing in common: someone made an effort to create an opportunity for others to learn. It doesn’t really matter whether that means downloading Mike Cohn’s video or preparing and delivering a presentation in person. It is effort addressed to others. It’s like saying: “Hey, I found this presentation valuable and believe we have a lot to learn from it. I will find a room where we can watch and discuss it.”

And then just 5 out of a few dozen invited people come.

That’s because, in general, people don’t care if you want to (and can) teach them something. They don’t want to learn. Chances are you don’t agree that you’re like that. That’s fine. But in that case, face it: you’re in the minority.

If you belonged to the majority, you wouldn’t give a damn about your colleague inviting you to a local developers’ meet-up. You wouldn’t feel like watching a video from the major conference in your area of interest. And when I say majority, I mean something like 90-95% of people.

That’s right. I believe barely one tenth of people care to learn even when they can do it effortlessly. That is, by the way, refusing to become a better professional.

Yet at least a third, if not half, will complain about how limited their learning options are. How they can’t meet authorities in the workplace, or how they weren’t allowed to attend an overpriced course.

But there’s good news too. It’s pretty easy to stand out from the crowd – we just need to use the opportunities to learn that we already have.

Manual Testing Is a Must

The other day we were discussing different techniques aimed at improving code quality. Continuous integration, static code analysis, unit testing (with or without test-driven development), code review – we went through them all. At some point I sensed someone might feel that once they employed all these fine practices, their code would be ready to ship as soon as the build server showed all green. I just had to counter-strike:

“I’m not going to ship any code, no matter how many quality-improving techniques we use, unless some human tester puts their hands on it and blesses it as good. Not even a chance.”

I’ve seen this situation quite a number of times: a team of developers who are all into the best engineering practices, but who unfortunately also share the belief that this is enough to build top-notch applications. I have bad news for you: if the answer to a question about the quality assurance team is “we don’t need one,” I scream and run away. And I don’t buy any of your apps.

It just isn’t possible to build quality code with no manual testing at all. You may build something and throw it at your client, hoping they will do the testing for you. Sometimes you can even get away with this approach. Be warned though – you’re going to fail, the hard way, soon. The next customer may not like the idea of doing beta-tests for you – actually, most of them won’t – and then you’re screwed.

Manual and automated testing are fundamentally different. With automated testing, no matter whether we’re talking about unit, stress, regression, integration or end-to-end tests, you go through predefined tracks. You had to create the test in the first place, so it does exactly what you told it to do.

Humans, on the other hand, have minds of their own, which they happen to use in unpredictable ways. They may, for example, make up test scenarios on the fly, or sense the subtle scent of a big scary bug hiding around and follow their instinct to hunt it down. Every now and then they will abandon the beaten tracks to do something unexpected, i.e. find a bug which would otherwise be missed.
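
A toy example of what those predefined tracks mean in practice. The function and its bug are invented for illustration:

    # The test below is green, yet the function still breaks on an input
    # nobody thought to encode. An exploring human would poke at exactly
    # such edge cases.
    def word_count(text):
        return len(text.split(" "))

    def test_word_count():
        assert word_count("manual testing matters") == 3  # the predefined track

    test_word_count()        # passes
    print(word_count("  "))  # 3 (!) for a string of two spaces - oops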

Note: I’m not saying what kind of procedure you should follow in manual testing. Personally, I strongly believe in the value of exploratory testing, but any good tester will do the job, no matter what your manual testing procedure looks like.

Actually, this post was triggered by a recent discussion between a few bloggers, including Lisa Crispin, James Shore, Ron Jeffries, George Dinwiddie and Gojko Adzic. The discussion, while interesting and definitely worth reading, is a bit too academic for me. My take is that there is no universal set of techniques which yields indisputably the best results. While I like James’s arguments for why they dropped acceptance testing, and support his affection for exploratory testing, I share Lisa’s point that even in a stable environment our “best” approach will evolve, and so will the techniques we employ.

You may exchange acceptance testing for exploratory testing and that’s perfectly fine. As long as some fellow human puts their hands on the code before your users touch it, I’m good with that. I don’t care much which approach you choose or what you call it. It is far more important who does the job. An experienced quality engineer with no plan at all will do a far better job than a poor or inexperienced tester with the best plan.

Just remember: if you happen to skip manual testing altogether, I’m afraid your app may be good only for machines, as only machines tested it.
