Category: project management

  • Internal Audit: Lessons Learned

    Last week we had an internal audit in our team. If you automatically think “ISO 9001” when you hear “internal audit,” let me state up front that it had nothing to do with ISO. We have a few so-called quality requirements in the company which every team should follow, but they aren’t anything like ISO procedures.

    The reason I was interested in how the audit would go is that we work quite differently from the rest of the organization, so the quality requirements aren’t really adjusted to our specifics. I was also curious how our non-standard process would look in comparison to other teams.

    I caught a few things which are worth sharing.

    Small Features versus Big Features

    Our currency for functionality is the MMF (Minimal Marketable Feature). The term itself probably doesn’t tell you much, and I know different teams use different sizes for MMFs. In our case an average MMF can be coded by a single developer in about a week, so they are pretty small.

    In other projects, on the other hand, the typical currency for functionality is the requirement, which is significantly bigger and, as such, less detailed. I’d say that on average it is a few times bigger than an MMF.

    Where’s the difference? I mean, besides the size itself. One very visible difference is that we do significantly less paperwork. With features small enough to be easily analyzed and decomposed by a product owner, a developer and a tester, we don’t need to write low-level design documents, test cases or user stories. We can spend more time doing the real work instead of creating a pile of documents which tend to become outdated as soon as you hit the save button.

    Frequent Deployment versus Not-So-Frequent Deployment

    I guess there aren’t many teams which still treat deployment as a one-time effort done at the end of the project. But still, frequent deployment is a painful thing, especially when developers work in a different team than QA engineers, and QA engineers work in a different team than release managers. The average project isn’t deployed biweekly. Not even close.

    By contrast, we deploy, on average, once every 9 days – and I mean calendar days here. Of course it didn’t look like that from the very beginning, and it cost us a bit to get there. I would say it took us about half a year to reach this pace.

    Where’s the difference? I could tell you about time-to-market, but whether you need time-to-market counted in days or in months really depends on the project. The cool thing is somewhere else. With such a short cycle time we don’t feel tempted to incur technical debt. We don’t leave unfixed bugs. We just don’t. After all, any discussion would be about a single low-priority bug, not a few dozen of them, and the cost of fixing one bug is so low that we don’t even start the discussion in the first place. Even if we hit a dead end, redoing the whole darn feature doesn’t take a couple of months – it’s just a few days, so it’s much easier to accept.

    Everyone on the Same Page versus Formalized Process

    We’re a small team and we’re co-located, so we have fairly few communication issues. The process we follow is very simple and, even more importantly, was crafted by our team. We are all on the same page.

    Most of the teams in the company follow a more formalized process. There are different functional teams which are meant to cooperate, there are specific documents that people working on projects expect from each other, and so on. On top of that there’s some office politics too, and the formalized process gets used as a weapon.

    Where’s the difference? This one was pretty much a surprise for me, but it turns out our process comes across as mature and effective even though it isn’t recorded in any form. We just know what everyone should do. Whenever some obstacle appears in the way, people naturally try to help each other, as they see and understand what’s happening. And yes, I’m over the moon just because no one forced me to write the process description down. A formalized process, on the other hand, often feels artificial to the teams who have to follow it, and they sometimes focus on complying with the process (or fighting it) instead of building software.

    What Does It Mean For You?

    I’m not trying to say you should now automatically cut every feature you work on in half (at least), deploy three times more frequently than you do now and throw every procedure out. It would probably end in chaos, which your managers might not appreciate.

    I’m not trying to say it was easy to get where we are now. It took great people, some experience, a lot of experimenting and enough time to get better at what we started doing. A year ago I would have had a hard time pointing out how our approach might be better than what you could find in the rest of the company.

    And I’m not trying to say you should read this advice in the “this over that” manner known from the Agile Manifesto. It’s not only that our team is pretty different from the average team in the company; our product is also far from the standard projects we build in the organization, so I’m comparing pretty different environments. I wouldn’t even say that every single team in the company would benefit from adopting our approach, although a few of them definitely would.

    These are just things I learned while talking with Ola, who audited our team (and many other teams in the company). For me these lessons weren’t obvious. I mean, the connection between frequent deployment and short time-to-market is pretty clear to anyone who understands what “deployment” and “time-to-market” mean, but I had never really considered that a very short production cycle may help avoid technical debt too.

    And if you’re interested in what the audit result was, I’d say it was pretty good. I mean, everyone likes to hear some compliments from time to time, so we may want to ask for another audit soon.

  • Why We Don’t Write Test Cases Anymore

    Almost a year ago I shared some advice: use test cases. Not because they are crucial during testing or dramatically improve the quality of the product (they are not), but because of the value you get when you create them.

    A confession (and yes, you’ve guessed it already if you read the title): we don’t write test cases anymore.

    We stopped using them, and the reason isn’t on the list of shortcomings of test cases. Actually, I was aware of those shortcomings a year ago and I was all “test cases are great” anyway. What has changed, then?

    We dropped test cases as a side effect of implementing Kanban, though you can perfectly well use both if you like. In our case, one of the effects of switching to Kanban was that the pieces of functionality pushed (pulled, actually) into development got smaller. Before the switch we had pretty big features which were split into several (8-15) detailed user stories. After the switch we have much smaller features, which would make 2 or 3 detailed user stories if we hadn’t dropped writing user stories altogether.

    And the reason for making features smaller was simple – smaller features make for a smoother and more flexible workflow.

    Initially we attached test cases to features, not user stories, because pretty often one testing flow went through a few different user stories. I told you they were detailed. When the standard feature size went down, we realized there was much less value in preparing test cases.

    Once again: the main value of creating test cases is thinking through specific usage scenarios and looking for places forgotten during design. The more complex the feature, the bigger the chances something is screwed up. With small features, test cases lost much of their value for us, since we could locate most problems instantly and fix them without any additional technique. And yet the effort needed to create and maintain test cases was still significant. So we dropped the practice.

    It looks like my advice is: make your features smaller, and then you’ll be able to drop user stories and test cases. And I must say it does make sense, at least for me. What do you think?

  • Why We Don’t Write User Stories Anymore

    There was a time when we were writing user stories to describe requirements. I’d say they worked fairly well for us. But we don’t do this anymore.

    We used user stories as a technique for describing bigger chunks of functionality. A bigger sub-project or module would have more than 10 user stories attached (usually closer to 20) and a handful of non-functional requirements. During development we often went through several stories at once, as the technical design didn’t map directly to the stories. The stories were more of an input to design sessions and a base for test cases than stand-alone bites of functionality.

    Then we switched to Kanban. One of the consequences was that we reduced the size of the average feature going into development. It no longer had 15 stories attached, but it wasn’t a single-story task either. If we were still writing user stories for each Minimal Marketable Feature, we would probably have a few of them – my guess is 2 or 3 most of the time.

    At this level stories become pretty artificial. I mean, if you think about 2 stories connected to one feature, e.g. an administrator can configure this magic widget and a user can use this magic widget to do, well, the magic, you can pretty much tell these stories intuitively in your head. Writing them down becomes overkill.

    Besides that, I think the often-cited role of user stories in making usage scenarios completely clear is overrated. If you can talk with developers in a language closer to the code – and a functionality description is much closer to the code than a user story is – you’ll be better understood. The standard problem here was that the functionality description wasn’t precise and often became disconnected from the usage scenarios.

    The answer to this problem is: make features as small as possible (but only so small that they still make a difference). Small features are easy to define, easy to understand (even for a developer) and easy to chew. It’s pretty hard to screw them up.

    There’s one more reason why I don’t consider user stories a must-have. If you happen to create software which will be used by other developers or, at best, administrators – some magic box with a lot of APIs and a command line interface as its only UI – you know what I’m talking about. If you write stories for this kind of software, you end up with a bunch of “as a developer I want to call the magic function which does the magic” stories. Doesn’t an API specification make more sense?
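    To make the contrast concrete, here is a minimal sketch of what I mean by an API specification – all names are made up for illustration, it isn’t any real product’s API:

    ```python
    def do_magic(spell: str, *, retries: int = 3) -> dict:
        """Cast a registered spell and return its result.

        Args:
            spell: name of a previously registered spell.
            retries: how many times to retry on a transient failure.

        Returns:
            A dict with keys "status" ("ok" or "failed") and "output".

        Raises:
            KeyError: if no spell with that name is registered.
            TimeoutError: if the magic doesn't finish in time.
        """
    ```

    The signature, the types and the failure modes are exactly the things that “as a developer I want to call the magic function which does the magic” leaves out.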

    I’m not saying user stories are bad. They aren’t. But they don’t add value all the time and in every situation. This is just another practice which should be used only as long as it makes sense in your specific situation.

  • Manual Testing Is a Must

    The other day we were discussing different techniques aimed at improving code quality. Continuous integration, static code analysis, unit testing with or without test-driven development, code review – we went through them all. At some point I sensed someone might feel that once they employ all these fine practices, their code will be ready to ship as soon as everything goes green on the build server. I just had to counter-strike:

    “I’m not going to ship any code, no matter how many quality-improving techniques we use, unless some human tester puts their hands on it and blesses it as good. Not even a chance.”

    I’ve seen the situation quite a number of times: a team of developers all into the best engineering practices, but unfortunately at the same time sharing the belief that this is enough to build top-notch applications. I have bad news for you: if the answer to a question about the quality assurance team is “we don’t need one,” I scream and run away. And I don’t buy any of your apps.

    It just isn’t possible to build quality code with no manual testing at all. You may build something and throw it at your client, hoping they will do the testing for you. Sometimes you can even get away with this approach. Be warned, though – you’re going to fail the hard way soon. The next customer may not like the idea of doing beta tests for you – actually, most of them won’t – and then you’re screwed.

    Manual and automated testing are fundamentally different. With automated testing, no matter whether we’re talking about unit, stress, regression, integration or end-to-end tests, you go through predefined tracks. You had to create the test in the first place, so it does exactly what you told it to do.

    Humans, on the other hand, have minds of their own, which they happen to use in unpredictable ways. They may, for example, make up test scenarios on the fly, or they may catch the subtle scent of a big scary bug hiding nearby and follow their instinct to hunt it down. Every now and then they will abandon the beaten tracks to do something unexpected, i.e. find a bug which would otherwise be missed.
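    To show what I mean by predefined tracks, here’s a trivial sketch – a hypothetical function and test, nothing from any real codebase:

    ```python
    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """Hypothetical function under test."""
        return round(price * (1 - percent / 100), 2)

    class DiscountTest(unittest.TestCase):
        def test_ten_percent_discount(self):
            # The test walks exactly one predefined track. It will never
            # notice that, say, a negative percent silently raises the
            # price, because nobody told it to look there. A curious
            # human tester just might.
            self.assertEqual(apply_discount(100.0, 10), 90.0)

    if __name__ == "__main__":
        unittest.main()
    ```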

    Note: I’m not saying what kind of procedure you should follow in manual testing. Personally, I strongly believe in the value of exploratory testing, but any good tester will do the job, no matter what your manual testing procedure looks like.

    Actually, this post was triggered by a recent discussion between a few bloggers, including Lisa Crispin, James Shore, Ron Jeffries, George Dinwiddie and Gojko Adzic. The discussion, while interesting and definitely worth reading, is a bit too academic for me. My take is that there is no universal set of techniques which yields indisputably best results. While I like James’s arguments for dropping acceptance testing and support his affection for exploratory testing, I share Lisa’s point that even in a stable environment our “best” approach will evolve, and so will the techniques we employ.

    You may swap acceptance testing for exploratory testing and that’s perfectly fine. As long as some fellow human puts their hands on the code before your users touch it, I’m good with that. I don’t care much which approach you choose or what you call it. It is way more important who does the job. An experienced quality engineer with no plan at all will do a way better job than a poor or inexperienced tester with the best plan.

    Just remember: if you skip manual testing altogether, I’m afraid your app may be good only for machines, as only machines tested it.

  • Testers Shouldn’t Stick to Specs

    Jennifer Bedell wrote recently at PMStudent about testers goldplating projects. Definitely an interesting read. And there’s a pretty hot discussion under the post, too.

    I’d say the post is written in defense of scope. Jennifer argues that testers should test against the specification only, because when they step outside the specs it becomes goldplating. In other words, quality assurance should verify only whether a product is consistent with the requirements.

    That’s utterly wrong.

    It might work if we worked with complete and definite specifications. Unfortunately we don’t. And I mean we don’t ever work with one of those. They just don’t exist. What we work with is incomplete, vague and ambiguous.

    And yes, you can deliver software which sticks perfectly to the specs and at the same time is an unusable piece of crap.

    You should actually encourage testers to get out of the box (the specs) and wander freely through the application, trying the weirdest scenarios they can think of. Why? Because your beloved users won’t follow the specs. They won’t read them. They won’t even care that something by that name exists. A spec is an artificial artifact created to make the discussion between the product manager and developers easier. Users couldn’t care less about specs.

    This means they won’t use the application the way it was specified in the requirements, user stories or whatever you happen to create. They’ll use the app in all kinds of ways, even the weirdest ones. You don’t want to face that unprepared. So forget about the damn specs when testing and try everything you can think of (and some of what you can’t).

    I’ll go even further: forget about test cases too if you happen to have them. OK, forget about them during the first iteration of tests, because you have more than one, don’t you? The reason is exactly the same as above.

    And yes, I can imagine some extreme bugs being submitted because of this approach. Bugs which would take years to fix and are so unlikely to appear in real life that it would be plain stupid to fix them. This only means you should monitor the incoming stream of bugs, hand-pick the few you aren’t going to fix and throw them out.

    This isn’t a reason to consciously decrease quality by sticking to specs during testing.

  • Random Thoughts on Estimation

    There has been a lot of discussion on estimation recently. A lot of great arguments, but also a lot of good old mistakes we’ve already been through. This brought me to a few random thoughts on estimating techniques.

    1. An estimation technique which involves discussion between different people is better than one which simply uses their individual estimates as input.
    2. Using factual, recorded historical data is better than relying on experience alone (see the toy sketch at the end of this note).
    3. Smaller tasks can be estimated far better than big chunks of work.
    4. Every estimate starts with some kind of guess at the very beginning.

    I know these should be obvious, yet I’m surprised how often people:
    – forget about them
    – deny they are true

    Then they head towards wild guesses with some magic number applied, which may make some sense, but not when used instead of real estimation.
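    As a toy illustration of point 2 above (all numbers made up, using only Python’s standard statistics module), here is what “using recorded historical data” can mean in its simplest form – asking the log of similar past tasks instead of your gut:

    ```python
    import statistics

    # Hypothetical log of actual effort (in days) for past tasks of similar size.
    past_actuals = [3.5, 2.0, 4.0, 3.0, 5.5, 2.5, 3.0]

    # A reference-class estimate: the median of what really happened,
    # plus a pessimistic bound, instead of a single gut-feel number.
    likely = statistics.median(past_actuals)
    pessimistic = statistics.quantiles(past_actuals, n=10)[8]  # ~90th percentile

    print(f"likely: {likely:.1f} days, worst case: ~{pessimistic:.1f} days")
    ```

    The guess enters only once, at the very beginning, when you decide which past tasks count as similar.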

  • There Is No Single Flavor of Agile

    Agile is such a capacious term. Under the flag of agile we do different things. Scrum is agile. And XP is agile. Scrum-in-name-only is also agile. “We go with no plan whatsoever and that’s so damn agile” is agile too.

    Yes, you say the first two are true agile and the others are fakes just tainting The Holy Agile Name. That reminds me… wasn’t it exactly the same with waterfall and other bad, bad techniques? Weren’t they tainted by poor implementations which got everything wrong, so we had to come up with new, better methods? Well, just food for thought.

    Coming back to agile. With all these good and bad implementations I can safely say that there’s no single flavor of agile. There never was. I believe there was never meant to be only one. I recently read Alistair Cockburn’s notes on the writing of the agile manifesto, and one thing stuck with me:

    I came in through the doorway marked “Efficiency” not the doorway marked “Change” – because my background was in fixed-price fixed-scope projects. (…) Other people came to agile through the doorway marked “Change”, and that’s fine for them.

    So although agile gets billed most of time through Kent Beck’s “Embrace Change” moniker, I’m not happy encouraging people to just change stuff all the time – it’s more efficient to think for a while and try to make good decisions early – the world will supply enough change requests without us adding to the list by not thinking.

    Different experiences, different projects, different backgrounds. Different accents when talking about the Manifesto. I can’t say I didn’t expect that at all, but still. There was never one universal interpretation of the agile principles, yet every now and then we see people selling the only right and proper method of being agile. How come?

    On a side note: I wrote this piece a few days ago. In the meantime The Big Ugly Change came and kicked my ass big time. “The world will supply enough change requests.” Couldn’t agree more these days.

  • Is It Possible to Over-Communicate in a Project?

    While explaining yet another thing which I thought was obvious to everyone in the team but turned out not to have been clearly communicated, the question came back to me: is it possible to over-communicate in a project? I dropped the question on Twitter and expected answers like “Hell no!” or “Maybe it is possible, but no one has seen it yet.”

    The responses surprised me, though. The author of Projects with People saw problems in being too detailed for the audience or revealing facts too early. Well, what exactly does “too early” mean? When people already chatter about the subject at the water cooler, is it too early? When managers finally become aware of the chatter, is it still too early? Do we have to wait until management is ready to communicate the fact (which is always too late)?

    Actually, gossip is powerful and spreads fast. The only way to cut it off is to bring official communication on the subject as soon as possible. Hopefully before the gossiping starts. Which does mean early. Earlier than you’d think.

    Another thing is being too detailed, which can be considered unnecessary, or even clutter – an issue raised by Danie Vermeulen. If something doesn’t add value, it shouldn’t be communicated. But if we applied this strictly, we could never post any technical message on the project forum, since there would always be someone who isn’t really interested in which framework we’re going to use for dependency injection, or how we prevent SQL injection, or what the heck the difference between the two is. And how do you know what counts as clutter for whom, anyway?

    John Moore looks at the problem from a different perspective – over-communication can be bad when it hurts morale. I must say I agree with the argument up to a point. Some bad news isn’t necessarily related to people’s work (e.g. ongoing changes in the business team on the customer’s side) and may still change. In that case, keeping the information to yourself may be a good idea. However, if bad news is going to strike us either way, the earlier the better. One has to judge each case individually.

    Although I don’t see an easy way to deal with the above issues, they remain valid. I can actually agree it is possible to over-communicate, yet there’s no concrete border or clearly definable warning which yells “This email is too much! You’re over-communicating!” at you whenever you’re about to send an unnecessary message.

    The best summary came from Lech, who pointed out that the risk of over-communicating is lower than the risk of under-communicating. I’d even say much, much lower. How many projects with too much communication have you seen? One? Two? Personally, I’ve seen none. On the other hand, how many projects have suffered because of insufficient communication? I’ve seen dozens.

    In general we still communicate too little. Yes, we may over-communicate from time to time, but I accept that risk for the sake of dealing a bit better with insufficient communication, which is the real problem in our projects.

    What does it look like in your teams?

  • Sure Shot Method to Improve Quality of Your Estimates

    It’s the old story: we suck at estimating. Our schedules tend to be wrong not only because unexpected issues pop up, but mostly because we underestimate the effort needed to complete tasks. There’s one simple trick which improves the quality of estimates. It’s simple. It’s reliable. It’s good. On the minus side, you need some time to execute it.

    Split your tasks into small chunks.

    If a task takes more than a couple of days, it’s too big – split it. If it’s still too big, split it again. Repeat. By the way, that’s what agilists have arrived at over the years – user stories were initially meant to be a few weeks of work; now they’re usually a few days at most.

    Why does it work?

    • Smaller tasks are easier to process in our minds. As a result, we understand better what we’re going to do. As a result, we judge better what it will take. As a result, our estimates are better.

    • Smaller tasks limit uncertainty. In a smaller room there are fewer places to hide. The fewer places to hide, the fewer unpredicted issues there are. The fewer unpredicted issues, the more exact the estimates.

    • Small tasks mean small estimates. A month-long task should be read as a “pretty long, but it’s hard to say exactly how long” task. Bigger numbers are just less precise. Splitting tasks yields smaller chunks of work, smaller chunks of work get smaller estimates, and smaller estimates are more precise estimates – the toy simulation below makes this concrete.
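    There’s also a purely statistical effect hiding in that last point, which a quick simulation (made-up numbers, plain Python) can show: when the same work is estimated as ten small tasks instead of one big one, the individual errors partially cancel out in the sum.

    ```python
    import random

    random.seed(7)

    def noisy(days: float) -> float:
        # Toy model of an estimator who may be off by up to +/-50% either way.
        return days * random.uniform(0.5, 1.5)

    trials = 10_000
    # One 30-day chunk vs the same work split into ten 3-day tasks.
    big = [abs(noisy(30) - 30) / 30 for _ in range(trials)]
    split = [abs(sum(noisy(3) for _ in range(10)) - 30) / 30 for _ in range(trials)]

    print(f"average error, one 30-day estimate: {sum(big) / trials:.0%}")
    print(f"average error, ten 3-day estimates: {sum(split) / trials:.0%}")
    ```

    And that’s before counting the main effect described above – that small tasks are simply understood better in the first place.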

    As a bonus, during the task-splitting exercise you gain a better understanding of what should be done, and you catch early some problems which would otherwise surface much later and be more costly to fix.

    And yes, the only thing you need is some time.

  • Great Performances in Failed Projects

    It’s always a difficult situation. The last project was late, and I don’t mean a few days late. People did a very good job trying to rescue as much as they could, but by the halfway point you knew they wouldn’t make it on time. Then come the difficult discussions.

    – The project was late.
    – But we couldn’t make it on time even though we were fully engaged. You know it.
    – You didn’t tell me that at the beginning. So I suppose you thought we’d make it.
    – But it turned out differently. We did everything by the book and it didn’t work.
    – The result is late. I can’t judge the effort in complete disconnection from the result.

    How do you judge a project manager? The final result was below expectations. Commitment, on the other hand, went way above the expected level. The reasons for the failure can be objectively justified. Or can they?

    Something went completely wrong. Maybe the initial estimates were totally screwed, maybe there was an unexpected issue which couldn’t be predicted, or maybe we didn’t have enough information about how the customer would act during implementation. Who should take responsibility?

    It is said that while success has many fathers, failure is an orphan. There’s no easy answer, yet a manager has to come up with one.

    I tend to weigh how people acted (their commitment and effort) more heavily than the result (late delivery), but I treat them as interconnected measures. In other words, a great performer from a failed project will get better feedback than an underperformer from a stunning success. Here’s why:

    I prefer to have a committed team even when they don’t yet know how to deal well with the task. They’ll learn, and they’ll outgrow average teams which already know how to do the job.

    I wouldn’t like to encourage the hyena approach, where below-average performers try to join already-successful projects. It harms team chemistry.

    If there’s a failure, I (as the manager) am responsible for it in the first place. If I had done my job well, my team would probably have been closer to success.

    Punishing failure makes people play it safe. The team will care more about keeping the status quo than about trying to improve things.

    A lack of appreciation for extraordinary commitment kills future engagement. If I tried hard and no one saw it, I won’t do it next time.