Tag: project management

  • Manual Testing Is a Must

    The other day we were discussing different techniques aimed at improving code quality. Continuous integration, static code analysis, unit testing with or without test-driven development, code review – we went through them all. At some point I sensed that someone might believe that once they employ all these fine practices, their code will be ready to ship as soon as the build goes green on the build server. I just had to strike back:

    “I’m not going to ship any code, no matter how many quality-improving techniques we use, unless some human tester puts their hands on it and blesses it as good. Not even a chance.”

    I’ve seen the situation quite a number of times: a team of developers who are all into the best engineering practices, but who unfortunately share a belief that this is enough to build top-notch applications. I have bad news for you: if the answer to a question about the quality assurance team is “we don’t need one,” I scream and run away. And I don’t buy any of your apps.

    It just isn’t possible to build quality code with no manual testing at all. You may build something and throw it at your client, hoping they will do the testing for you. Sometimes you can even get away with this approach. Be warned though – you’re going to fail the hard way soon. The next customer may not like the idea of doing beta tests for you – actually, most of them won’t – and then you’re screwed.

    Manual and automated testing are fundamentally different. With automated testing, no matter whether we’re talking about unit, stress, regression, integration or end-to-end tests, you go through predefined tracks. You had to create the test in the first place, so it does exactly what you told it to do.
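
    To make the “predefined tracks” point concrete, here is a minimal sketch – a hypothetical function and a pytest-style test, both invented for illustration:

        # Hypothetical production code under test (invented for this example).
        def apply_discount(price: float, percent: float) -> float:
            return round(price * (1 - percent / 100), 2)

        # An automated test only ever walks the track laid out for it:
        # it checks the one scenario someone wrote down, nothing more.
        def test_apply_discount_happy_path():
            assert apply_discount(100.0, 10) == 90.0
            # It will never wander off to try percent=150 or a negative
            # price unless a human writes those scenarios down first.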

    On the other hand, humans pretty often have their own minds, which they happen to use in unpredictable ways. They may, for example, make up test scenarios on the fly, or they may sense the subtle scent of a big scary bug hiding around and follow their instinct to hunt it down. Every now and then they will abandon the beaten tracks to do something unexpected and find a bug which would otherwise be missed.

    Note: I’m not saying which procedure you should follow in manual testing. Personally I strongly believe in the value of exploratory testing, but any good tester will do the job, no matter what your manual testing procedure looks like.

    Actually, this post was triggered by a recent discussion between a few bloggers, including Lisa Crispin, James Shore, Ron Jeffries, George Dinwiddie and Gojko Adzic. The discussion, while interesting and definitely worth reading, is a bit too academic for me. My take is that there is no universal set of techniques which yields indisputably the best results. While I like James’s arguments for why they dropped acceptance testing and support his affection for exploratory testing, I share Lisa’s point that even in a stable environment our “best” approach will evolve, and so will the techniques we employ.

    You may exchange acceptance testing for exploratory testing and that’s perfectly fine. As long as some fellow human puts their hands on the code before you let your users touch it, I’m good with that. I don’t care much which approach you choose or what you call it. It is far more important who does the job. An experienced quality engineer with no plan at all will do a way better job than a poor or inexperienced tester with the best plan.

    Just remember: if you skip manual testing altogether, I’m afraid your app may be good only for machines, as only machines have tested it.

  • Testers Shouldn’t Stick to Specs

    Jennifer Bedell wrote recently at PMStudent about testers gold-plating projects. Definitely an interesting read. And there’s a pretty hot discussion under the post too.

    I’d say the post is written in defense of scope. Jennifer argues that testers should test against the specification only, because when they step outside the specs it becomes gold-plating. In other words, quality assurance should verify only whether a product is consistent with the requirements.

    That’s utterly wrong.

    It might work if we worked with complete and definite specifications. Unfortunately we don’t. And I mean we don’t ever work with one of these. They just don’t exist. What we work with is incomplete, vague and ambiguous.

    And yes, you can deliver software which sticks perfectly to the specs and at the same time is an unusable piece of crap.

    You should actually encourage testers to get out of the box (the specifications) and wander freely through the application, trying the weirdest scenarios they can think of. Why? Because your beloved users won’t follow the specs. They won’t read them. They won’t even care whether something by that name exists. A spec is an artificial artifact created to make discussion between product managers and developers easier. Users couldn’t care less about it.

    This means they don’t use the application the way it was specified in requirements, user stories or whatever you happen to create. They use the app in all kinds of ways, even the weirdest ones. You don’t want to face that unprepared. So forget about the damn specs when testing and try everything you can think of (and some things you can’t).

    I’ll go even further: forget about test cases too if you happen to have them. OK, forget about them during the first iteration of tests, because you have more than one, don’t you? The reason is exactly the same as above.

    And yes, I can imagine some extreme bugs being submitted because of this approach. Bugs which would take years to fix and are so unlikely to appear in real life that it would be plain stupid to fix them. This only means you should monitor the incoming stream of bugs, hand-pick the few you aren’t going to fix and throw them out.

    This isn’t a reason to consciously decrease quality by sticking to specs during testing.

  • The Kanban Story: First Issues

    So we were doing great, everyone was happy, and we were delivering on time, on budget and on scope. Except that’s not exactly how things really looked.

    The first two projects were late. Not by much, but still. It was expected, since these were our first estimates as a team, and at that time we decided not to do anything about it yet. The slips were reasonably small, which was a good sign as far as our developers’ estimation skills go. Nothing to be worried about.

    Another issue, however, appeared to be a real pain in the ass. One of the applications got stuck somewhere in the middle of the first iteration of tests. It looked like there was always something more important to do. Everyone was doing high-priority things and the application wasn’t touched, even with a stick. There was no mechanism which would add an incentive to finish things instead of leaving less-important tasks for later.

    We also had a problem with our Product Owner. The guy was trying to catch many parallel threads and organize the team’s work, but unfortunately he was pushing in a few too many new things. At the same time he was losing some nice ideas along the way. Before some of them could be implemented, the guy was coming back from another meeting with a client, bringing new, better, higher-priority tasks, and the old ones were fading into oblivion. For those of you who can’t wait to ask who this crappy Product Owner was, the answer is: yes, it was me.

    The general conclusion was that we needed more organization, not at the project level but at the team level. Something which would help us finish things and limit the chaos generated by a rapidly changing business environment. What was specific to our team was that we were running a number of very small projects simultaneously, so coordinating them was crucial to avoid falling into chaos.

    We were about to make our first process improvements…

    Read the whole story.

  • The Kanban Story: Initial Methodology

    You already know what our team looked like and what concerns I had about implementing plain old Scrum. You also know we chose a “take this from here and that from there” kind of approach to running our projects. Our base method was Scrum, but we were far, far away from adopting it the orthodox way.

    What we finally agreed on within the team can be described in several rules:

    1. Planning
    For each application we started by deciding what we wanted to see in the next version. We tried to limit the number of features but didn’t set any limits on how long development should take. The longest cycle was somewhere between three and four weeks.

    2. Requirements
    Once we knew what we wanted to do, we created user stories and non-functional requirements. Each session involved the whole team, even those who weren’t going to develop that specific application. As usually happens, the scope changed as we discussed different aspects of the planned work. We added (more often) or cut (less often) features. We built a coarse-grained architecture as we discussed non-functional requirements. At the end of the day we had a whiteboard full of stories which were the starting point for development.

    3. Tasks
    Since user stories were rather big, we split them into smaller pieces – development tasks. This changed the view from feature-wise to code-wise. Development tasks were intended as small pieces of work (preferably no bigger than 16 hours) which were complete wholes from a developer’s point of view. Ideally a single user story would have several tasks connected to it, and a single development task would be connected to only one user story or non-functional requirement. It wasn’t a strict rule, however, and sometimes we had a many-to-many relation, since a specific change in architecture (a single development task) could affect many usage scenarios (many user stories). Development tasks should make as much sense as possible to developers, so they were defined by the developers themselves – the ones who were going to do the job. Tasks were stored in a system which was used company-wide.

    4. Estimation
    At the very beginning we had no strict deadlines, but we were estimating anyway. There were two goals: to learn to estimate better while there weren’t huge consequences for being wrong, and to get into the habit of estimating all planned work. Our estimates were done against development tasks, not user stories. The main reason we went this way was that tasks were smaller, so estimates were naturally better than they would have been against user stories. A specific, pretty non-agile thing we did with estimation was the rule that each task was estimated by the developer who would be doing it. More on this in a future post.

    5. Measuring progress and work done
    Current progress could be verified by checking the actual state of the tasks connected with a specific app. Nothing fancy here. What was important was writing down how much time it took to complete each task. When completed, each task had two values – the estimated time needed to complete it and the time it really took. At the end of the day we knew how flawed our estimates were. The important thing here was the official announcement that no one was going to be punished in any way for poor estimates.
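
    For illustration only – a minimal Python sketch, not the company-wide system we actually used, with made-up task data – comparing the two values per task boils down to something like this:

        # Toy data: each completed task carries an estimate and the
        # time it really took. Names and numbers are invented.
        tasks = [
            {"name": "login form", "estimated_h": 8, "actual_h": 11},
            {"name": "db schema", "estimated_h": 16, "actual_h": 14},
            {"name": "csv export", "estimated_h": 4, "actual_h": 9},
        ]

        # Per-task accuracy: actual/estimated, where 1.0 is a perfect estimate.
        for t in tasks:
            ratio = t["actual_h"] / t["estimated_h"]
            print(f'{t["name"]}: estimated {t["estimated_h"]}h, '
                  f'actual {t["actual_h"]}h, ratio {ratio:.2f}')

        # The overall ratio shows how flawed the estimates were on average.
        total_est = sum(t["estimated_h"] for t in tasks)
        total_act = sum(t["actual_h"] for t in tasks)
        print(f"overall actual/estimated: {total_act / total_est:.2f}")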

    That was pretty much all. No stand-ups, since we were all sitting in one room and discussed issues and work progress whenever someone felt like it. No Scrum Master, since, well, we weren’t doing sprints, and within a specific project things were rather self-organized. A kind of Product Owner emerged, impersonated by me, as we needed a connector between the constantly changing business environment and the team working on different projects. You could say we needed one to bring some chaos to our work, which would be equally true.

    We were going to build a few small projects that way. At the same time we were prepared to improve the way we create our software as soon as we noticed flaws and found a way to fix them.

    Keep an eye on the whole Kanban Story.

  • The Kanban Story: The Beginning

    So I found myself in this small team, part of a bigger company but working pretty independently of the rest of the people there. A kind of start-upish environment. I knew every single person I worked with. Well, actually, I’d chosen most of them and hired them here. The team was great (if you guys are reading this: no, it is not a reason to come asking for a raise) and personally I didn’t feel an urge to strictly control everything. On the other hand, some order is always a nice thing.

    Initially I gave a lot of thought to project management practices and didn’t come up with a solution I was happy with.

    I rejected the PMP-based heavy-weight process which works for many projects in the organization. We didn’t need all the hassle it brings. At least not at that stage. A natural idea was Scrum, which I didn’t like either. Scrum is a pretty formalized methodology too, and I wanted to avoid formalisms as much as possible. With a lot of R&D work, plenty of prototyping and priorities changing all the time, I needed something more flexible.

    I didn’t want to be time-boxed, since we were planning to work simultaneously on a few independent small projects which would be hard to synchronize if we wanted to put them all into one team-wide sprint. Creating independent sprints for each project would have been complete overkill, since you don’t really need a Scrum sprint for a one-and-a-half-person project.

    Actually, I’d say we were at the point where we were too flexible for agile methods. Now, go burn me in flames for being an iconoclast.

    After some discussion we collectively agreed we wouldn’t adopt Scrum as a whole but would use some of the techniques found there, plus some other ideas we thought were good. In the next post in the series you’ll find more details on how we initially organized our development.

    Keep an eye on the whole story.

  • Random Thoughts on Estimation

    A lot of discussion on estimation recently. A lot of great arguments, but also a lot of good old mistakes we’ve already been through. This brought me to a few random thoughts on estimating techniques.

    1. An estimation technique which involves discussion between different people is better than one which simply uses their estimates as input.
    2. Using factual recorded historical data is better than relying on experience alone.
    3. Smaller tasks can be estimated way better than big chunks of work.
    4. Every estimate starts with some kind of guess at the very beginning.

    I know these should be obvious, yet I’m surprised how often people:
    – forget about them
    – deny they are true

    Then they head towards wild guesses with some magic number applied, which may make some sense, but not when used instead of real estimation.

  • There Is No Single Flavor of Agile

    Agile is such a capacious term. Under the flag of agile we do all sorts of different things. Scrum is agile. And XP is agile. Scrum-in-name-only is also agile. “We go with no plan whatsoever because that’s so damn agile” is agile too.

    Yes, you say the first two are true agile and the others are fakes just tainting The Holy Agile Name. That reminds me… wasn’t it exactly the same with waterfall and other bad, bad techniques? Weren’t they tainted by poor implementations which got everything wrong, so we had to come up with new, better methods? Well, just food for thought.

    Coming back to agile. With all these good and bad implementations, I can safely say there’s no single flavor of agile. There never was. I believe there was never intended to be only one. I recently read Alistair Cockburn’s notes on the writing of the agile manifesto and one thing stuck with me:

    I came in through the doorway marked “Efficiency” not the doorway marked “Change” – because my background was in fixed-price fixed-scope projects. (…) Other people came to agile through the doorway marked “Change”, and that’s fine for them.

    So although agile gets billed most of time through Kent Beck’s “Embrace Change” moniker, I’m not happy encouraging people to just change stuff all the time – it’s more efficient to think for a while and try to make good decisions early – the world will supply enough change requests without us adding to the list by not thinking.

    Different experiences, different projects, different backgrounds. Different emphases when talking about the Manifesto. I can’t say I didn’t expect that at all, but still. There was never one universal interpretation of the agile principles, yet every now and then we see people selling the only right and proper way of being agile. How come?

    On a side note: I wrote this piece a few days ago. In the meantime The Big Ugly Change came and kicked my ass big time. “The world will supply enough change requests.” Couldn’t agree more these days.

  • Is It Possible to Over-Communicate in a Project?

    While explaining yet another thing which I thought was obvious to everyone in the team but turned out not to have been clearly communicated, a question came back to me: is it possible to over-communicate in a project? I dropped the question on Twitter and expected answers like “Hell no!” or “Maybe it is possible, but no one has ever seen it.”

    The responses surprised me, though. The author of Projects with People saw problems in being too detailed for the audience or in revealing facts too early. Well, what exactly does “too early” mean? When people already chatter on the subject at the water cooler, is it too early? When managers finally become aware of the chatter, is it still too early? Do we have to wait until management is ready to communicate the fact (which is always too late)?

    Gossip is powerful and spreads fast. The only way to cut it off is to bring official communication on the subject as soon as possible. Hopefully before the gossiping starts. Which does mean early. Earlier than you’d think.

    Another thing is being too detailed, which can be considered unnecessary, or even clutter – an issue raised by Danie Vermeulen. If something doesn’t add value, it shouldn’t be communicated. But if we applied this strictly, we could never post any technical message on the project forum, since there would always be someone who isn’t really interested in which framework we’re going to use for dependency injection, or how we prevent SQL injection, and what the heck the difference between those two is. And how do you know what is clutter for whom, anyway?

    John Moore looks at the problem from a different perspective – over-communication can be bad when it hurts morale. I must say I agree with the argument up to a point. Some bad news isn’t necessarily related to people’s work (e.g. ongoing changes in the business team on the customer’s side) and may still change. In such cases, keeping the information to yourself may be a good idea. However, if bad news is going to strike us either way, the earlier the better. One has to judge each case individually.

    Although I don’t see an easy way to deal with the above issues, they remain valid. I can agree it is possible to over-communicate, yet there’s no concrete border or clearly definable warning which yells “This email is too much! You’re over-communicating!” at you whenever you’re about to send an unnecessary message.

    The best summary came from Lech, who pointed out that the risk of over-communicating is lower than the risk of under-communicating. I’d even say much, much lower. How many projects with too much communication have you seen? One? Two? Personally, I’ve seen none. On the other hand, how many projects have suffered because of insufficient communication? I’ve seen dozens.

    In general, we still communicate too little. Yes, we may over-communicate from time to time, but I accept that risk for the sake of dealing a bit better with insufficient communication, which is the real problem in our projects.

    How does it look in your teams?

  • Sure Shot Method to Improve Quality of Your Estimates

    That’s the old story: we suck at estimating. Our schedules tend to be wrong not only because there are unexpected issues, but mostly because we underestimate the effort needed to complete tasks. There’s one simple trick which improves the quality of estimates. It’s simple. It’s reliable. It’s good. On the minus side, you need some time to execute it.

    Split your tasks into small chunks.

    If a task takes more than a couple of days, it’s too big – split it. If it’s still too big, split it again. Repeat. By the way, that’s where agilists ended up over the years – the initial size of a user story was intended to be a few weeks of work; now it’s usually a few days at most.

    Why does it work?

    • Smaller tasks are easier to process in our minds. As a result we understand better what we’re going to do. As a result we judge better what means are needed. As a result our estimates are better.

    • Smaller tasks limit uncertainty. In a smaller room there are fewer places to hide. The fewer places to hide, the fewer unpredicted issues there are. The fewer unpredicted issues, the more exact the estimates.

    • Small tasks mean small estimates. A month-long task should be read as a “pretty long, but it’s hard to say exactly how long” task. Bigger numbers are just less precise. Splitting tasks yields smaller chunks of work. Smaller chunks of work get smaller estimates. Smaller estimates are more precise estimates. The toy simulation below shows the effect.

    As a bonus, during the task-splitting exercise you get a better understanding of what should be done, and you catch early some problems which would otherwise appear much later and be more costly to fix.
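
    Here is a toy Monte Carlo sketch in Python of the error-cancellation part of the argument. It assumes unbiased, independent estimation errors of up to ±50% (both the distribution and the numbers are arbitrary assumptions) and models only the statistics – the gains from better understanding come on top:

        import random

        random.seed(1)
        TRIALS = 10_000
        ERROR = 0.5  # each estimate misses by up to 50% either way

        def estimate(actual_hours: float) -> float:
            # An unbiased estimate, off by a random factor within +/-ERROR.
            return actual_hours * (1 + random.uniform(-ERROR, ERROR))

        big_miss = split_miss = 0.0
        for _ in range(TRIALS):
            # One big 160h chunk vs. the same work split into twenty 8h chunks.
            big_miss += abs(estimate(160) - 160)
            split_total = sum(estimate(8) for _ in range(20))
            split_miss += abs(split_total - 160)

        print(f"avg miss, one 160h task:   {big_miss / TRIALS:.1f}h")
        print(f"avg miss, twenty 8h tasks: {split_miss / TRIALS:.1f}h")
        # Independent errors on small tasks partially cancel: the split
        # schedule typically lands roughly 1/sqrt(20) as far off target.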

    And yes, the only thing you need is some time.

  • Great Performances in Failed Projects

    It’s always a difficult situation. The last project was late, and I don’t mean a few days late. People did a very good job trying to rescue as much as they could, but by the time you were halfway through you knew they wouldn’t make it on time. Then come the difficult discussions.

    – The project was late.
    – But we couldn’t have made it on time even though we were fully engaged. You know it.
    – You didn’t tell me that at the beginning. So back then, I suppose, you thought we’d make it.
    – But it turned out differently. We did everything by the book and it didn’t work.
    – The result is late. I can’t judge the effort in complete disconnection from the result.

    How do you judge a project manager? The final result was below expectations. Commitment, on the other hand, went way above the expected level. The reasons for failure can be objectively justified. Or can they?

    Something went completely wrong. Maybe the initial estimates were totally screwed, maybe it was an unexpected issue which couldn’t have been predicted, or maybe we didn’t have enough information about how the customer would act during implementation. Who should take responsibility?

    It is said that while success has many fathers, failure is an orphan. There’s no easy answer, yet a manager has to come up with one.

    I tend to weigh how people acted (their commitment and effort) more heavily than the result (late delivery), but I treat them as interconnected measures. In other words, a great performer from a failed project will get better feedback than an underperformer from a stunning success. Here’s why:

    I prefer to have a committed team, even when they don’t yet know how to deal with the task well. They’ll learn, and they’ll outgrow average teams which already know how to do the job.

    I wouldn’t like to encourage a hyena approach, where below-average performers try to join already successful projects. It harms team chemistry.

    If there’s a failure, I (as a manager) am responsible for it in the first place. If I had done my job well, my team would probably have been closer to success.

    Punishing failure makes people play it safe. The team will care more about keeping the status quo than about trying to improve things.

    A lack of appreciation for extraordinary commitment kills future engagement. If I tried hard and no one noticed, I won’t do it again.