Category: software development

  • Testers Shouldn’t Stick to Specs

    Jennifer Bedell wrote recently at PMStudent about testers goldplating projects. Definitely an interesting read. And there’s a pretty hot discussion under the post too.

    I’d say the post is written in defense of scope. Jennifer argues that testers should test against the specification only, because when they step outside the specs it becomes goldplating. In other words, quality assurance should verify only whether a product is consistent with its requirements.

    That’s utterly wrong.

    It might work if we worked with complete and definite specifications. Unfortunately, we don’t. And I mean we don’t ever work with one of those. They just don’t exist. What we work with is incomplete, vague, and ambiguous.

    And yes, you can deliver software which sticks perfectly to the specs and at the same time is an unusable piece of crap.

    You should actually encourage testers to get out of the box (the specifications) and wander freely through the application, using the weirdest scenarios they can think of. Why? Because your beloved users won’t follow the specs. They won’t read them. They won’t even care whether something by that name exists. A spec is an artificial artifact created to make discussion between the product manager and developers easier. Users couldn’t care less about specs.

    This means they won’t use the application the way it was specified in requirements, user stories, or whatever you happen to create. They’ll use the app in all kinds of ways, even the weirdest ones. You don’t want to face that unprepared. So forget about the damn specs when testing and try everything you can think of (and some of what you can’t).
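
    To make that concrete, here’s a minimal sketch of the difference between spec-driven and exploratory checks, in Python. The validate_username function is a made-up stand-in for the application, not anything from Jennifer’s post:

        import random
        import string

        # Hypothetical signup validator standing in for "the app" -- illustration only.
        def validate_username(name: str) -> bool:
            return 3 <= len(name) <= 20 and name.isalnum()

        # Spec-driven check: the documented happy path, and nothing more.
        assert validate_username("alice42")

        # Exploratory checks: the weird stuff real users will actually type.
        weird_inputs = [
            "",                        # empty field
            " " * 50,                  # whitespace only
            "a" * 10_000,              # absurdly long
            "Robert'); DROP TABLE--",  # hostile input
            "żółć",                    # non-ASCII, perfectly normal for real users
        ]
        weird_inputs += [
            "".join(random.choices(string.printable, k=random.randint(0, 40)))
            for _ in range(100)        # plus random garbage for good measure
        ]

        for name in weird_inputs:
            # No spec says what the right answer is here; we only make sure
            # nothing blows up and we eyeball surprising results.
            try:
                validate_username(name)
            except Exception as exc:
                print(f"crashed on {name!r}: {exc}")

    None of these inputs appear in any spec, and that’s exactly the point: a crash found this way is the crash a real user would otherwise find for you.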

    I’ll go even further: forget about test cases too, if you happen to have them. OK, forget about them during the first iteration of tests, because you have more than one iteration, don’t you? The reason is exactly the same as above.

    And yes, I can imagine some extreme bugs being submitted because of this approach. Bugs which would take ages to fix and are so unlikely to appear in real life that it would be plain stupid to fix them. This only means you should monitor the incoming stream of bugs, hand-pick the few you aren’t going to fix, and throw them out.

    This isn’t a reason to consciously decrease quality by sticking to specs during testing.

  • Shortcomings of Test Cases

    I’ve already told you that writing test cases is worth the effort. What I haven’t stressed are the shortcomings of test cases.

    If you take the time to prepare test cases, you most likely do it for a reason, and I guess not the “my manager will fire me if I don’t” one. You want to bring some organization to your testing and improve the overall quality of the product at the end of the day. I won’t dig deeper into motivations, since that isn’t the point of this article.

    What you do is prepare some scenarios up front, definitely before you actually start testing. Generally the earlier the better, but either way test cases come before test execution, that’s for sure. Basically, when you prepare test cases you imagine how you’d like the application to be tested and you write it down: do this, verify that, expect something else, and proceed to the next step.
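
    Written down, a test case is essentially that scripted sequence. Here’s a minimal sketch in Python, with a toy ShoppingCart invented as a stand-in for the product:

        # Test case TC-17: "add items, apply discount, verify total" -- an
        # invented scenario, scripted step by step the way a test case reads.

        class ShoppingCart:
            """Toy stand-in for the application under test (prices in cents)."""
            def __init__(self):
                self.items = []
            def add(self, name, price):
                self.items.append((name, price))
            def total(self, discount_pct=0):
                return sum(p for _, p in self.items) * (100 - discount_pct) // 100

        def test_case_tc17():
            cart = ShoppingCart()
            cart.add("book", 4000)           # step 1: do this...
            assert cart.total() == 4000      # step 2: ...verify that
            cart.add("pen", 1000)            # step 3: proceed to the next step
            assert cart.total(10) == 4500    # step 4: expect something else

        test_case_tc17()
        print("TC-17 passed")

    One scripted path with a verifiable expectation at every step. Which is exactly where the trouble starts.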

    Now what’s wrong with that?

    1. You suck at predicting how your users will use the product. We all do. Applications are built by developers who don’t even try to imagine how users will interact with the software. Then they’re tested by quality engineers who are also far away from the average user. Then they’re deployed by people who actually talk with users but have almost no influence on application development. The whole process is driven by product managers whose job is to think about general concepts, not detailed usage flows. Who cares about usage scenarios, then? Well, pretty much no one.

    2. Test cases are closed scenarios by definition. When designing test cases you don’t have a “go with your gut feeling” block at hand to put in the diagram. You have to make it possible to verify test results, so you’re limited to one specific path – one among hundreds or thousands. This means you won’t go through the vast majority of cases, yet you will decide whether a new feature works well or not.

    3. It’s not possible to have a complete set of test cases. A typical method which takes only an integer argument can be called in some 2^32 different ways, which is basically infinity (see the sketch after this list). If you add that you may need to call another function as a prerequisite, you end up with a number which blows my head up. And we’re talking about a single function, not the whole application. A test case is always a dramatic simplification of the possible usage scenarios.

    4. It’s easy to complete the test cases and call it a day. Hey, after all, test cases are for quality engineers, aren’t they? QA people should base their activities on test cases. Except they shouldn’t limit their work to test cases alone. And whether you want it or not, there’s always a bit of incentive to turn off your thinking and just follow the plan instead of going the extra mile. I’ve said it a couple of times already, but a QA team is no place for follow-the-book people. Creativity is a key trait for QA engineers, yet we don’t live in an ideal world and you don’t have only the best and most creative people on your QA team, no matter what your company website says.
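
    To put point 3 in numbers, here’s a small sketch; the clamp function is invented for illustration, the arithmetic is the point:

        import random

        # One 32-bit argument alone has 2**32 possible values.
        print(2**32)           # 4294967296 cases for a single int parameter
        print(2**32 * 2**32)   # add a prerequisite call with its own int: 2**64

        # A hypothetical function under test -- illustration only.
        def clamp(x: int) -> int:
            return max(0, min(x, 255))

        # No list of test cases enumerates 2**32 inputs; a scripted case picks
        # a handful of paths, while random sampling at least wanders outside
        # the ones you thought of.
        for x in [0, -1, 255, 256, 2**31 - 1, -(2**31)]:   # hand-picked borders
            assert 0 <= clamp(x) <= 255
        for _ in range(10_000):
            x = random.randint(-(2**31), 2**31 - 1)
            assert 0 <= clamp(x) <= 255                    # a property, not a path

    Even those ten thousand random calls cover roughly 0.0002% of the input space of this one trivial function.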

    With all these shortcomings, I still strongly advise you to write test cases. You may try to make them as precise as possible, or you may just use general flows, leaving some autonomy for testers; either way it will pay off. Especially if you understand where test cases fall short and find a way to deal with that using other tools.

  • What Is the Main Benefit of Writing Test Cases?

    Test cases have been around for a long time, used equally willingly by those who work with heavyweight processes and those who choose the agile way. They can be stored in an appendix to the documentation of a BDUF (Big Design Up Front) project or written on yellow cards along with user stories. Where’s their main value, then?

    Think about this comparison: test cases are to application functionality what unit tests are to application code. Except for one thing: it is way harder to write a good and complete list of test cases. You may safely assume it’s impossible. This means the main value of test cases isn’t in covering the full application functionality, because you simply won’t achieve that, no matter how hard you try.

    Well, maybe they’re a valuable tool for QA engineers? You know, some kind of walkthrough for testing sessions. Hm… show me quality engineers who prefer to just follow test cases instead of exploring the application by themselves, and I’ll tell you who should be fired. Testing is such a creative task that there should be no place for follow-the-manual people on QA teams.

    OK, so test cases don’t ensure completeness of testing and don’t help QA people much. So? Show me the money, as one of my customers used to say. Where’s the value?

    The main benefit of test cases lies in the activity of writing them down. Or, to be more precise, in the process of thinking about them before they’re written down.

    A test case is not much more than a short scenario of application usage. Sometimes a pretty weird scenario, but still. Unfortunately, we often forget this perspective. We tend to think in terms of code or requirements and forget the very basic thing – no class in the code and no requirement alone makes any sense to the end user.

    Looking at the application from the end user’s shoes brings two benefits:

    – You find black holes in your design which aren’t covered by user stories or requirements

    – You find usability issues, since you see the whole usage flow, not just a bunch of atomic functions

    And just as with unit tests, the main value of writing test cases is actually in the writing part.

  • When Unit Testing Doesn’t Work

    Unit testing is such a nice idea. Developers create tests as they develop working software. Then they (the tests, not the developers) are executed during every build, verifying that nothing has been broken along the way.

    Unfortunately, unit testing very often doesn’t work well, even when a team formally boasts that it employs the practice. Why?

    1. Unit testing can’t be enforced. Yes, yes, I have heard about tools which measure code coverage. The problem is they tell you whether your code is covered, but not with what. It can be covered with quality tests, but odds are it’s covered with a lot of crap generated just for the sake of hitting the target code coverage (the sketch after this list shows how cheap that is). Remember: you get what you measure.

    2. Writing unit tests is easy – maintaining them is the hard part. Suppose your code and your tests are working perfectly. Now you change something and several tests break. Is that a flaw in the code? Or maybe the change was made on purpose, and now you have to rethink and rewrite all the broken tests. A heck of a lot of work to do.

    3. Tests should be added all the time. When you find a bug which wasn’t caught by the tests you already have, you should add one which will catch it in the future (see the regression test in the sketch below). And yes, that means more work on dull maintenance tasks, which is neither fun nor stimulating.

    4. It takes so long to run all the unit tests. Sometimes tests are switched off because the build process takes too long and they don’t have to be run every time, right? Well, wrong. They do.

    5. A good unit test checks border conditions, while an easy-to-write unit test exploits the optimistic scenario. It’s pretty easy to write unit tests without using your mind at all: 2 plus 2 should be 4, task completed, case closed. How about 0, negative numbers, or floats? Forget about them – too much work. (The two styles are contrasted in the sketch after this list.)

    6. Writing unit tests is boring. It’s not an amusing or challenging algorithmic problem. It’s not a cool hacking trick you can show off in front of your geeky friends. It’s not a new technology that gets a lot of buzz. It’s boring. People don’t like boring things. People tend to skip them.
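
    Here’s a sketch of how points 1, 3, and 5 look in code, using Python’s unittest. The safe_divide function, the tests, and the issue number are all invented for illustration:

        import unittest

        def safe_divide(a, b):
            """Hypothetical function under test -- not from any real project."""
            if b == 0:
                raise ValueError("division by zero")
            return a / b

        class CoverageGaming(unittest.TestCase):
            def test_divide(self):
                # Point 1: this executes every line of safe_divide, so coverage
                # tools report 100% -- yet it asserts nothing at all.
                safe_divide(4, 2)
                try:
                    safe_divide(1, 0)
                except ValueError:
                    pass

        class QualityTests(unittest.TestCase):
            def test_happy_path(self):
                self.assertEqual(safe_divide(4, 2), 2)    # the optimistic scenario

            def test_border_conditions(self):
                # Point 5: the borders are where the real work (and value) is.
                self.assertEqual(safe_divide(0, 5), 0)    # zero numerator
                self.assertEqual(safe_divide(-6, 3), -2)  # negative numbers
                self.assertAlmostEqual(safe_divide(1, 3), 0.3333333, places=6)
                with self.assertRaises(ValueError):
                    safe_divide(1, 0)                     # the failure path

            def test_regression_issue_123(self):
                # Point 3: a bug slipped past the existing tests (say, with
                # huge ints), so a test is added to catch it from now on.
                self.assertEqual(safe_divide(2**60, 2**60), 1)

        if __name__ == "__main__":
            unittest.main()

    Both test classes push the coverage number up; only one of them would ever catch a real bug.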

    The key to implementing unit testing well as a practice is the right attitude among developers. Unfortunately, that can’t be trained. These lads and gals either believe unit testing helps them create good software, and do their best to write high-quality unit tests, or they just cover the app with crap to get the code coverage numbers right and get the problem off their backs.

    If you’re going to use unit tests, you’d better be sure your people are the former kind. Otherwise it’s not worth the effort.

  • Good Enough

    A friend asked me for an opinion on the design of an application his company had developed. I took a glance at their technical presentation, which had a lot of screenshots. I must admit my impression was very positive; the whole thing looked professional. That’s what I told my friend. His answer was a bit surprising, as he said he would have worked more on the UI design. At the beta stage? It was really good enough. Or even better than enough.

    The discussion got me thinking about the good enough paradigm in the software business. One side believes that if software is good enough, the job is done. The other wants to polish every detail until they bring their software to perfection. Although I haven’t seen any statistics on this, I believe the former group is the more numerous one.

    Software constantly conquers new areas of our lives, including those where there’s hardly a place for good enough believers: hospitals, planes, cars, the military. I guess you wouldn’t be happy if the good enough software managing your car’s brakes crashed. Yet most software being developed is not mission-critical. It just sends e-mails, creates documents, passes information around, manages some content, stores some data, etc. There, you have a choice between good enough and close to perfection.

    Personally, I’m a good enough guy, and I have just one point here: costs. Imagine a theoretical scale of software perfection ranging from 0% to 100%. When you start development you’re at 0%; when you finish, you’re somewhere in the upper half of the scale. At the beginning, improvements are easy. You build the first test UI, which looks like it was designed by a blind monkey. Then you have the first iteration of the target UI. It’s a huge improvement, probably worth a couple dozen percentage points on the scale. However, once you’ve made several iterations on the UI and it looks decent, the next step would probably be extensive usability testing. It’ll take much more effort than the first steps, and the results will probably be far less significant – a few percent if you’re lucky. The next step? Maybe user feedback? You employ a number of people to deal with all the e-mails and phone calls from users, then try to find a pattern in all that mess, then verify whether some requests aren’t exactly opposite to others, etc. More effort. More money. Limited results. Maybe another percent earned on the scale.

    The example talks about UI, but the rule is general. The first performance optimizations will be fast and effective, unless you do them randomly. After some successes, they’ll become harder and more costly. At the beginning of stabilization you’ll deal with a large number of issues, vastly improving the stability of the software. At the end, the whole deployment will be waiting on a single bug fix. Of course, that’s not an iron rule – there are exceptions. Sometimes you can find easy improvements even when you’re high on the scale, but it doesn’t happen very often.

    That’s why I’m a “good enough” believer. I don’t refuse to make small improvements, even when users could live without them. They’re welcome as long as they’re cheap. A great example was described by the Basecamp team. Those changes are close to the border separating good enough from better (hard to say which side they’re on), but they were quick and cheap. And I guess the more important improvements had been done earlier. If the changes had been significantly more expensive, I think 37signals would have left things as they were.

    I believe that unless you have plenty of time and money to spend, you shouldn’t chase the ideal. There are more important things to do.

  • Almost Completed

    One of the projects I’m contributing to is a micro-ISV service. Maybe it’s not a perfect example of a micro-ISV, because at least several people are engaged; it’s not a single-person business, but the basis is the same. The whole idea was, at its origin, a world-changing service (or close to it). The main person working on the idea is a guy who loves it and strongly believes the service will be a great success. The rest of the team keeps adding newly brainstormed ideas to the current concept, keeping ourselves fired up.

    Just like most micro-ISVs. Just like most micro-ISVs which failed. I wrote some time ago that faith and the will to contribute over the long run are essential when trying to build a product or service as a micro-ISV. But that’s not all. Another factor, probably even more important, is consistency.

    I’ve seen different “micro-ideas” (as you might call micro-ISVs’ ideas for products or services) which had the potential to at least pay for themselves. I played my role in those projects as a co-author, a supporter, a user. The range of ideas was wide – from a very fresh take on Internet entertainment services to yet another messenger. All of them I can call (more or less) failures. None of them fulfilled their authors’ expectations in terms of revenue, number of customers or users, etc. Many were never even shown to the world – they died during pregnancy.

    When your application is 90% complete, you’re usually much less than halfway to a successful release. And when you’re 90% complete, all the fun is in the past. You’ve developed the most enjoyable and most challenging parts of the code. You’ve spent all your creative ideas during design. Now you have to focus on boring little improvements to the website. Now you have to force your creative soul to think about how to make your customers happy instead of how to enjoy the work. Now you have to fix all those unreproducible tiny little bugs which let your frustration grow. Now you have to deal with The Boring Time. And if you allow yourself to become bored – you lose.

    90% completeness is just the beginning of the road. In this case, 90% finished often means 100% failed.