
The Pesticide Paradox – How to Keep Your Tests Relevant

Nov 22, 2022
10 min read
Test Automation · Test Strategy


More than 30 years ago, Boris Beizer stated what became known as the Pesticide Paradox:
“Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.”

In plain English, this means that as you run the same tests over and over, they stop being effective at catching bugs. Moreover, some of the new defects introduced into the system will not be caught by your existing tests and will escape into the field.

This principle (or paradox) has come up in my conversations a couple of times lately. Once when evaluating a company’s approach to automation, where they had created a large suite of tests and assumed it would keep catching all new bugs for eternity. Another time, while working with a different team that was trying to understand why their existing manual test suite was not detecting all the bugs before their product was released to the field.

The truth is that test suites require constant maintenance and updating, regardless of whether they are automated or manual.

There are a number of reasons a perfectly good suite of tests will stop being effective over time:

    1. The Practical Impossibility of Testing All Possible Scenarios.
      Even simple applications require an impractically large number of tests to verify all possible scenarios and data combinations. This is why we use methodological tools such as equivalence partitioning and model-based testing (see the sketch after this list), but even that is not enough. At the end of the day, most teams use a risk-based testing approach to create a subset of scenarios and data sets, and then use the escaping defects found in the field after the initial release to calibrate the suite and patch any holes that were left in it. In other words, we don’t test everything.
    2. The functionality of the application changes over time.
      If we introduce new features to the product, it seems obvious that we need to write tests for them. It is less obvious that we also need to update the tests for the existing features, even if they are only slightly modified by the new additions.
    3. We (humans) tend to be especially careful only in places where we feel imminent danger.
      What does this mean? Simply that developers will be extra careful in areas where testers have found bugs before, but they might not be as careful in areas they “feel comfortable” with.
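To make the idea of equivalence partitioning a little more concrete, here is a minimal sketch in Python using pytest. The shipping_fee function and its price thresholds are invented purely for illustration; the point is that each parametrized case represents a whole class of inputs (plus its boundaries) rather than yet another arbitrary value.

```python
# A minimal, hypothetical example of equivalence partitioning with pytest.
# shipping_fee() and its thresholds are invented for illustration only.
import pytest


def shipping_fee(order_total: float) -> float:
    """Return the shipping fee for a given order total."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    if order_total >= 100:
        return 0.0   # free shipping partition
    if order_total >= 50:
        return 5.0   # discounted shipping partition
    return 10.0      # standard shipping partition


# One representative value per partition, plus the boundary values between them.
@pytest.mark.parametrize(
    "order_total, expected_fee",
    [
        (10.0, 10.0),    # representative of the standard partition
        (49.99, 10.0),   # boundary just below the discounted partition
        (50.0, 5.0),     # lower boundary of the discounted partition
        (75.0, 5.0),     # representative of the discounted partition
        (99.99, 5.0),    # boundary just below free shipping
        (100.0, 0.0),    # lower boundary of the free shipping partition
        (250.0, 0.0),    # representative of the free shipping partition
    ],
)
def test_shipping_fee_partitions(order_total, expected_fee):
    assert shipping_fee(order_total) == expected_fee


def test_negative_total_is_rejected():
    # The invalid-input partition deserves its own case as well.
    with pytest.raises(ValueError):
        shipping_fee(-1.0)
```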

So what do we do with this? How do we ensure we are working with an effective and efficient suite of tests?

The key is to stay objective and to keep reviewing the state of things. In practice, I recommend the following:

  1. Keep track of product changes and their indirect effects on your application
    The direct changes are trivial to spot, but make the effort to map all the structural and functional connections as well, and then think of the new scenarios you need to write to cover these changes.
  2. Discontinue tests that are not effective
    Too many useless tests add more overhead than value to your process.
    For example, if you have 10 tests that cover the same area and none of them has detected a bug in a significant number of cycles (the exact number is up to you!), review them and cut their number down.
    My rule of thumb is that if a test has not reported a bug in its last 5 runs, I add it to my review list and start verifying its importance, weighing whether I should keep it or move it to my test archive (see the first sketch after this list).
  3. Modify your test data
    This one is trivial, but we tend to forget it. Many bugs in our products are data-specific, which means that (especially in our automated suites, but not only in them!) we should continually expand and/or modify our test data. This may mean getting more samples of demo databases, creating new batches of social security numbers, or whatever that means for your system, but make sure to add randomness to your tests (see the second sketch after this list).
  4. Last but not least, don’t put all your trust and weight in formal approaches only
    Use at least one type of informal testing per cycle (or at least per release). Run Bug Hunts, Exploratory Testing sessions, or whatever other approach you prefer. Adding the human approach to any testing cycle will automatically increase its effectiveness.
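Here is a rough sketch of how the review-list rule of thumb from point 2 could be automated. The results.csv format (test_name, run_id, outcome columns) is a made-up example; adapt it to whatever your test management or CI tool actually exports.

```python
# Flag tests that have not failed (i.e. not caught anything) in their last N runs.
# The results.csv layout (test_name, run_id, outcome) is hypothetical.
import csv
from collections import defaultdict

REVIEW_THRESHOLD = 5  # "has not reported a bug in the last 5 runs"


def build_review_list(results_path, threshold=REVIEW_THRESHOLD):
    history = defaultdict(list)  # test_name -> outcomes, ordered by run_id

    with open(results_path, newline="") as f:
        rows = sorted(csv.DictReader(f), key=lambda row: int(row["run_id"]))
        for row in rows:
            history[row["test_name"]].append(row["outcome"])

    review_list = []
    for test_name, outcomes in history.items():
        recent = outcomes[-threshold:]
        # Only flag tests that have enough history and no recent failures.
        if len(recent) >= threshold and "failed" not in recent:
            review_list.append(test_name)
    return review_list


if __name__ == "__main__":
    for name in build_review_list("results.csv"):
        print(f"Review candidate: {name}")
```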

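And here is a small sketch of the test data randomization idea from point 3. The customer record fields are invented; the useful part is the pattern of generating fresh random inputs on every cycle while logging the seed, so that any failure can be reproduced exactly.

```python
# Hypothetical example of randomized test data with a reproducible seed.
# The "customer record" fields are invented for illustration only.
import random
import string
import time


def make_customer_record(rng):
    """Generate one randomized customer record."""
    return {
        "customer_id": rng.randint(1, 10_000_000),
        "name": "".join(rng.choices(string.ascii_letters, k=rng.randint(3, 20))),
        "balance": round(rng.uniform(-500.0, 10_000.0), 2),
        "country": rng.choice(["IL", "US", "DE", "JP", "BR"]),
    }


def build_test_data(batch_size=50, seed=None):
    """Build a batch of randomized records, logging the seed for replay."""
    seed = seed if seed is not None else int(time.time())
    print(f"Test data seed: {seed}")  # keep this in your test report or log
    rng = random.Random(seed)
    return [make_customer_record(rng) for _ in range(batch_size)]


if __name__ == "__main__":
    # Fresh data every cycle; rerun with the printed seed to reproduce a failure.
    for record in build_test_data(batch_size=5):
        print(record)
```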

To summarize:
It is both naive and dangerous to assume you can create the “ultimate test suite”, one that will discover all bugs once and for all.

Even if you create a suite with a very high coverage percentage and bug-discovery success rate (calculated based on the level of escaping defects), you should not let your guard down. You need to keep reviewing and verifying your test suites and scenarios regularly, and keep adding testing factors that ensure you continue testing your system in an effective and efficient way.

ABOUT THE AUTHOR

Joel Montvelisky

Joel Montvelisky is a Co-Founder and Chief Solution Architect at PractiTest. He has been in testing and QA since 1997, working as a tester, QA Manager and Director, and Consultant for companies in Israel, the US, and the EU. Joel is a Forbes council member, and a blogger. In addition, he’s the founder and Chair of the OnlineTestConf, the co-founder of the State of Testing survey and report, and a Director at the Association of Software Testing. Joel is a conference speaker, presenting in various conferences and forums worldwide, among them the STAR Conferences, STPCon, JaSST, TestLeadership Conf, CAST, QA&Test, and more.
