The Bride of the First House (bride) wrote,

QA: Pairwise Testing

weather: mostly sunny
outside: 23.0°C
mood: ...
With any software, you'll have a set of parameters you need to cover as input data or environment settings. If each different combination of inputs constitutes one test case or test scenario, having a lot of parameters quickly balloons into an unrealistic number of test cases that need to be run.

It's completely infeasible to exhaustively test every single possible combination. I think we've done quick calculations where even simpler parts of our system would take on the order of YEARS to test if you really did every single combination of everything you can change on the screen.

Given that we can't do every single combination, we cut the test suite down to the things that really matter. Yes, we can get it wrong and problems can get past us because we didn't look at something, but it's a risk analysis exercise: what is the likelihood of a defect occurring, and what is the impact if it does?

One of the strategies we can use is called Pairwise Testing.

Pairwise testing (or "all-pairs") means you choose a set of test cases that covers every combination of values for every *pair* of parameters at least once, rather than every combination of all parameters at once. Defects that involve interactions between three or more parameters have been shown to be progressively less common.
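To make the idea concrete, here's a minimal sketch in Python. The parameters and values are made up for illustration (they are not the Payments module's real inputs); the point is that a hand-built 6-case suite covers every value pair that the 12-case exhaustive suite does.

```python
from itertools import combinations, product

# A made-up three-parameter example (not the Payments module's real inputs).
params = {
    "browser": ["IE", "Firefox"],
    "os": ["XP", "2000", "98"],
    "locale": ["en", "fr"],
}

# Every value pair that some pair of parameters can take on.
required = set()
for (p1, v1s), (p2, v2s) in combinations(sorted(params.items()), 2):
    for v1, v2 in product(v1s, v2s):
        required.add(((p1, v1), (p2, v2)))

def covers(suite):
    """True if every required value pair appears in at least one test case."""
    seen = set()
    for case in suite:
        seen.update(combinations(sorted(case.items()), 2))
    return required <= seen

# Exhaustive suite: 2 * 3 * 2 = 12 cases.
names = sorted(params)
exhaustive = [dict(zip(names, vs)) for vs in product(*(params[n] for n in names))]

# A pairwise suite needs only 6 (the product of the two largest parameters).
pairwise = [
    {"browser": "IE",      "os": "XP",   "locale": "en"},
    {"browser": "IE",      "os": "2000", "locale": "fr"},
    {"browser": "IE",      "os": "98",   "locale": "en"},
    {"browser": "Firefox", "os": "XP",   "locale": "fr"},
    {"browser": "Firefox", "os": "2000", "locale": "en"},
    {"browser": "Firefox", "os": "98",   "locale": "fr"},
]
print(len(exhaustive), covers(exhaustive))  # 12 True
print(len(pairwise), covers(pairwise))      # 6 True
```

Dropping any one of the six pairwise cases leaves some value pair uncovered, which is what makes it a minimal all-pairs suite for this example.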

I don't think it's a new technique, but it's a relatively new topic of discussion in the software community. There aren't a lot of statistics on it yet, but what is available seems to support this.

I've been doing it since I started my first job in Quality Assurance. But I do it manually and based on intuition. I don't use third party software tools to generate my test cases. I thought this was a cool idea and I was curious to see how the tools compared to my intuition-based choices for test cases.

Because we're a Windoze shop, I chose to use a small free tool called jenny (thanks, Bob Jenkins =).

jenny takes a "tuple" parameter to indicate whether it should do pairs, triples, quads or whatever other dimension of combinations you want. Then you list the number of values for each parameter. Your parameters are labelled with integers and your values are labelled with letters. The upper limit is every single combination across every dimension.

I have a Payments module to test. There are six main parameters I'm dealing with:

  • 3 incoming payment methods
  • 5 outgoing payment methods
  • 4 types of customers
  • 3 contract types
  • 25 of the most commonly used currencies, for each of incoming and outgoing payments, which makes this two parameters (there are actually just over 100 possible currencies that we support)

All possible combinations of those 6 input parameters means 3 × 5 × 4 × 3 × 25 × 25 = 112500 test cases. If I estimate that each one takes on average 5 minutes to execute, that's 562500 minutes, which is 9375 hours, or about 1172 eight-hour workdays.

We usually have two QA staff per project team. There are about 260 business days in a year. Split between the two of us, that's roughly 586 business days each: well over two years. If all goes well.
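The back-of-the-envelope arithmetic is easy to check, keeping everything in the same units (8-hour workdays) throughout:

```python
# All combinations of the six parameters, at ~5 minutes per test case.
cases = 3 * 5 * 4 * 3 * 25 * 25      # 112500
minutes = cases * 5                  # 562500
hours = minutes / 60                 # 9375.0
workdays = hours / 8                 # eight-hour workdays
per_person = workdays / 2            # split between two QA staff
print(cases, minutes, hours)
print(round(per_person))             # business days per person
```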

Oh, HELL, no.

So, anyway, I set up the parameters and possible values in an Excel spreadsheet, numbered/lettered them off. I plugged numbers into jenny with all my exclusion rules, made the output comma delimited and sed'ed the output to get something that could be pulled into Excel.
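For illustration, here's the shape of that last massaging step in Python rather than sed. The sample line is made up, but it follows jenny's documented output format: one test case per line, each field a parameter number followed by a value letter.

```python
# One line of jenny output: each field is a parameter number plus a value letter.
line = " 1a 2c 3b 4a 5y 6j"          # made-up example in jenny's output format
csv = ",".join(line.split())          # strip whitespace, comma-join for Excel
print(csv)                            # 1a,2c,3b,4a,5y,6j
```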

In the 625 test cases that jenny gave me, every pair of parameter values appears together at least once. If all goes well, two QA staff testing 8 hours per work day will get through all 625 test cases in a bit over 3 business days.
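The 625 is no accident: a pairwise suite has to contain at least as many cases as the product of the two largest parameters, since every incoming-currency/outgoing-currency pair must appear in some test case. A quick check of that floor and the timing:

```python
dims = [3, 5, 4, 3, 25, 25]           # values per parameter, as above
a, b = sorted(dims)[-2:]
lower_bound = a * b                   # 25 * 25 = 625: every currency pair must appear
minutes = lower_bound * 5             # ~5 minutes per case, as before
days = minutes / 60 / (2 * 8)         # two QA staff on 8-hour days
print(lower_bound, round(days, 1))    # 625 3.3
```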

Keep in mind that I haven't included some of the other parameters. I haven't included other modules that also have to be tested. I haven't included a few other types of customers for reasons that I didn't want to get into for this public example. I haven't included troubleshooting time when something does go wrong. I haven't included the time it takes to log bugs, argue about them with developers or verify fixed bugs.

We usually schedule 2-3 weeks (10-15 business days) for Regression, depending on the complexity of the features and we're still cutting corners where we can.

    Side Note: Automation is not always the answer. This is another post altogether, but the short story is: you're not making a hard problem easier by automating testing. You're making a hard problem equally hard, but in a different way.

So, 625 is much more do-able than eleventy-frazillion. And I can be reasonably confident that most of the bugs that are there will be found.

However, knowing the system as I do, I can get it down to 32 test cases and still be fairly confident that I've covered the major functionality with that one module.

So, this is fairly consistent with most things in life. A software tool can save you a lot when you don't have any specialized knowledge to draw on.

But it won't replace human knowledge altogether.

Tags: professional development
