Tag: testing

  • Manual Testing Is a Must

    The other day we were discussing different techniques aimed at improving code quality. Continuous integration, static code analysis, unit testing with or without test-driven development, code review – we’ve gone through them all. At some point I sensed someone might believe that once they employ all these fine practices, their code will be ready to ship as soon as the build goes green on the build server. I just had to counterattack:

    “I’m not going to ship any code, no matter how many quality-improving techniques we use, unless some human tester puts their hands on it and blesses it as good. Not even a chance.”

    I’ve seen the situation quite a number of times: a team of developers who are all into the best engineering practices, but who unfortunately share a belief that this is enough to build top-notch applications. I have bad news for you: if the answer to a question about the quality assurance team is “we don’t need one,” I scream and run away. And I don’t buy any of your apps.

    It just isn’t possible to build quality code with no manual testing at all. You may build something and throw it at your clients, hoping they will do the testing for you. Sometimes you can even get away with this approach. Be warned though – you’re going to fail, the hard way, soon. The next customer may not like the idea of doing beta tests for you – actually, most of them won’t – and then you’re screwed.

    Manual and automated testing are fundamentally different. With automated testing, no matter whether we’re talking about unit, stress, regression, integration, or end-to-end tests, you go through predefined tracks. You had to create the test in the first place, so it does exactly what you told it to do.
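
    To illustrate, here is a minimal sketch in Python (the apply_discount function and its values are made up for illustration): the test walks exactly one predefined track and verifies only what it was told to verify.

        import unittest

        def apply_discount(price_cents: int, percent: int) -> int:
            """Hypothetical function under test: apply a percentage discount."""
            return price_cents * (100 - percent) // 100

        class DiscountTest(unittest.TestCase):
            def test_ten_percent_off(self):
                # One predefined track: a single input/output pair.
                # The test says nothing about negative prices, percents
                # over 100, rounding of odd amounts, and so on.
                self.assertEqual(apply_discount(10_000, 10), 9_000)

        if __name__ == "__main__":
            unittest.main()

    All the judgment went in when the test was written; at run time the script can only confirm or deny the exact expectations encoded in it.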

    On the other hand, humans have minds of their own, which they happen to use in unpredictable ways. They may, for example, make up test scenarios on the fly, or they may sense the subtle scent of a big scary bug hiding around and follow their instinct to hunt it down. Every now and then they will abandon the beaten tracks to do something unexpected, that is, find a bug which would otherwise be missed.

    Note: I’m not saying what kind of procedure you should follow in manual testing. Personally I strongly believe in the value of exploratory testing, but any good tester will do the job, no matter what your manual testing procedure looks like.

    Actually, this post was triggered by a recent discussion between a few bloggers, including Lisa Crispin, James Shore, Ron Jeffries, George Dinwiddie and Gojko Adzic. The discussion, while interesting and definitely worth reading, is a bit too academic for me. My take is that there is no universal set of techniques which yields indisputably the best results. While I like James’s arguments for why his team dropped acceptance testing and share his affection for exploratory testing, I also share Lisa’s point that even in a stable environment our “best” approach will evolve, and so will the techniques we employ.

    You may swap acceptance testing for exploratory testing and that’s perfectly fine. As long as some fellow human puts their hands on the code before your users touch it, I’m good with that. I don’t care much which approach you choose or what you call it. It is way more important who does the job. An experienced quality engineer with no plan at all will do a way better job than a poor or inexperienced tester with the best plan.

    Just remember: if you skip manual testing altogether, I’m afraid your app may be good only for machines, as only machines have tested it.

  • Testers Shouldn’t Stick to Specs

    Jennifer Bedell wrote recently at PMStudent about testers gold-plating projects. Definitely an interesting read. And there’s a pretty hot discussion under the post, too.

    I’d say the post is written in defense of scope. Jennifer argues that testers should test against the specification only, because when they step outside the specs it becomes gold-plating. In other words, quality assurance should verify only whether a product is consistent with the requirements.

    That’s utterly wrong.

    It might work if we worked with complete and definitive specifications. Unfortunately, we don’t. And I mean we don’t ever work with one of these. They just don’t exist. What we work with is incomplete, vague, and ambiguous.

    And yes, you can deliver software which sticks perfectly to the specs and at the same time is an unusable piece of crap.

    You should actually encourage testers to get out of the box (the specifications) and wander freely through the application, using every weird scenario they can think of. Why? Because your beloved users won’t follow the specs. They won’t read them. They won’t even care that something by that name exists. A spec is an artificial artifact created to make discussion between product managers and developers easier. Users couldn’t care less about specs.

    This means they won’t use the application in the way specified in the requirements, user stories or whatever you happen to create. They will use the app in all kinds of ways, even the weirdest ones. You don’t want to face that unprepared. So forget about the damn specs when testing and try everything you can think of (and some things you can’t).

    I’ll go even further: forget about test cases too, if you happen to have them. OK, forget about them during the first iteration of tests, because you run more than one, don’t you? The reason is exactly the same as above.

    And yes, I can imagine some extreme bugs being submitted because of this approach. Bugs which would take years to fix and are so unlikely to appear in real life that it would be plain stupid to fix them. This only means you should monitor the incoming stream of bugs, hand-pick the few you aren’t going to fix, and throw them out.

    This isn’t a reason to consciously decrease quality by sticking to specs during testing.

  • Shortcomings of Test Cases

    I’ve already told you that writing test cases is worth the effort. What I haven’t stressed are the shortcomings of test cases.

    If you take the time to prepare test cases, you most likely do it for a reason, and I guess not the “my manager will fire me if I don’t” one. You want to bring some organization to your testing and improve the overall quality of the product at the end of the day. I don’t want to go deeper into motivations, since they aren’t the point of this article.

    The thing you do is prepare some scenarios up front, definitely before you actually start testing. Generally the earlier the better, but either way test cases come before test execution, that’s for sure. Basically, when you prepare test cases you imagine how you’d like the application to be tested and you write it down: do this, verify that, expect something else, and proceed to the next step.
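
    To make that concrete, a typical test case looks something like this minimal, made-up sketch (the feature and expected results are invented for illustration):

        Test case: Log in with valid credentials
        1. Open the login page.                 Expect: login form is displayed.
        2. Enter a valid username and password. Expect: fields accept the input.
        3. Click “Log in”.                      Expect: user lands on the dashboard.

    Every step and every expected result is fixed in advance.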

    Now what’s wrong with that?

    1. You suck at predicting how your users will use the product. We all do. Applications are built by developers who don’t even try to imagine how users will interact with the software. Then they’re tested by quality engineers who are also far away from average users. Then they’re deployed by people who actually talk with users but can hardly influence application development in any way. The whole process is driven by product managers whose job is to think about general concepts, not detailed usage flows. Who cares about usage scenarios, then? Well, pretty much no one.

    2. Test cases are closed scenarios by definition. When designing test cases you don’t have a “flow with your gut feeling” block at hand to put in the diagram. You have to make the test results verifiable, so you’re limited to some specific path – one among hundreds or thousands. This means you won’t go through the vast majority of paths, yet you will decide whether a new feature works well or not.

    3. It’s not possible to have a complete set of test cases. A typical method which takes just a single 32-bit integer as an argument can be called in 2^32 different ways, which is practically infinity (see the sketch after this list). If you add that you may need to call another function as a prerequisite, you end up with a number which blows my head up. And we’re talking about a single function, not a whole application. A test case is always a dramatic simplification in terms of possible usage scenarios.

    4. It’s easy to complete the test cases and call it a day. Hey, after all, test cases are for quality engineers, aren’t they? QA people should base their activities on test cases. Except they shouldn’t limit their work to just the test cases. And whether you want it or not, there’s always a bit of incentive to turn off your thinking and simply follow the plan instead of going the extra mile. I’ve already said this a couple of times: the QA team is no place for follow-the-book people. Creativity is a key trait for QA engineers, yet we don’t live in an ideal world and you don’t have only the best and most creative people on your QA team, no matter what your company website says.
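
    To put some numbers behind point 3, here is a minimal sketch in Python (the grade function, its bug, and the chosen inputs are all made up for illustration). Even for a toy function with a single 32-bit integer argument, a hand-picked suite covers a vanishing fraction of the input space:

        def grade(score: int) -> str:
            """Hypothetical function under test: classify an exam score in 0-100."""
            if score < 50:
                return "fail"  # bug: -7 also lands here, though it isn't a valid score
            return "pass"      # bug: 1000 also lands here, though it isn't a valid score

        # Four hand-picked inputs out of the 2**32 = 4,294,967,296 values
        # a 32-bit integer can take – roughly 0.0000001% of the input space
        # of this one tiny function.
        assert grade(0) == "fail"
        assert grade(49) == "fail"
        assert grade(50) == "pass"
        assert grade(100) == "pass"
        # All green, yet grade(-7) == "fail" and grade(1000) == "pass" slip through.

    Chain a second function in front of it as a prerequisite and the number of possible call sequences explodes combinatorially, which is exactly why a test suite samples the space rather than covers it.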

    With all these shortcomings, I still strongly advise you to write test cases. You may try to make them as precise as possible, or you may just sketch general flows, leaving some autonomy for testers – either way it will pay off. Especially if you understand where test cases fall short and find a way to deal with that using other tools.