The Pesticide Paradox in Test Automation, and How to Avoid It


How can the Pesticide Paradox impact Test Automation?

One of the attendees at a recent meetup I hosted (on behalf of MoT KL) asked this interesting question.

Even though I answered it, the time constraint didn’t let me elaborate enough. So let me start with an apology for that, and offer this article in response to the question, based on my understanding of the concept. I believe it will help you understand the concept and, moreover, give you some ideas on “how to keep our tests relevant“.

Disclaimer: This article focuses on the Pesticide Paradox in Test Automation, while accepting the fact that there are more paradoxes around software testing.


For starters, let’s revisit what the Pesticide Paradox is.

The pesticide paradox principle of testing goes something like this:

Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual – Boris Beizer on Pesticide Paradox

As a corollary, he also stated that “test suites wear out” in his book “Software Testing Techniques”.

These definitions make us think, and I do believe that some tests lose their power to find particular bugs over time. Tests which were effective, efficient, and revealing before will wear out because of the modifications developers make to the product. And in an ecosystem with a better feedback loop, these test suites may wear out faster. Different conditions and situations demand changes in the way we approach them. If we use the same techniques over and over, the tests will no longer have a meaningful impact.

Pesticide Paradox is a metaphor! It’s true & false at the same time. The role of metaphor is to make us think – Michael Bolton

When does Pesticide Paradox in Software Testing happen?

  • When the same tests are repeated over and over
  • When the same test data is reused
  • When further iterations of a test reveal nothing new
  • When the product code becomes resistant to the test code
  • When new defects slip past the existing tests and reach production

Highly repeatable testing can actually minimize the chance of discovering all the important problems, for the same reason that stepping in someone else’s footprints minimizes the chance of being blown up by a land mine – James Bach

Pesticide paradox naturally has an effect on Test Automation as well. Let’s understand it better.

Pesticide Paradox in Test Automation

When we automate checks, it is ideal to understand the application first: digest what the scripts are testing, why a check is failing or why it is not, and, if it is failing, what we should do to fix it. We also tend to have a cognitive bias called the IKEA effect, which makes us more attached to the tests we create. As we add more automated checks, we start relying on those tests more. We keep running them frequently in regression cycles, but only sparingly review the tests being executed. And we tend to believe that our tests/checks are good enough to identify all the issues… which is kind of wrong, right?

Let me explain with an example. Imagine there is a regression test suite with 100 test cases which is executed for every release. In one of the releases, there were a few enhancements and we were asked to test them. Since we have a “Regression Test Suite”, what we do is execute it, to ensure the change is not impacting the existing areas.

If all the tests pass, does it mean

  • there are no underlying bugs?
  • the enhancements introduced no new defects?
  • we can confidently release the product?
  • no area has been left out by the regression test suite?

If you ask me, this can have two interpretations:

  1. Yes, provided we proactively ensured that the tests (logic/data/assertions) were modified to accommodate the new enhancements.
  2. No, when we blindly trust our tests without analyzing the enhancement and its impact on the regression test suite.

In my opinion, the automated checks we create are very narrow compared to the broader testing a human can do. Automation will always check only the things we intended. It helps us do our work in a smarter way, but it’s not smart enough to improvise on its own.

The effectiveness of our regression suite also decreases over time: after several iterations, developers become more careful about those hotspot areas and fix bugs there as early as possible. As a result, the count of bugs identified (in those areas, with the available tests) gets reduced. This may give us a (false?) feeling that the release being shipped is of good quality.

Automation is a double-edged sword if not maintained properly.

Throughout my career, I have met people who believe that automated checks, once scripted, are done forever and don’t require any modification or maintenance. Really?

This is one of the misconceptions about automation, which a considerable part of the software industry has given “Silver Bullet” status.

Tests do not write, modify, or maintain themselves. To make them serve their purpose, maintenance is inevitable. Test automation is a software development activity, with the same problems and challenges as any other development activity. The more tests we intend to automate, the higher the cost and time of creating and maintaining them.

Well, what can possibly bring us to this point?

  • It is practically impossible to test all application scenarios, which goes back to the Catch-22 of software testing: testing is potentially endless.
  • The functionality of the application changes over time.
  • We tend to cover what is known, what is changed, and what is new, leaving the unknowns behind.
  • We get comfortable because we haven’t sensed any danger from some part of the application.

How can we prevent or overcome Pesticide Paradox in Test Automation?

  • By writing new test scripts to evaluate different parts of the application
  • By changing/modifying test scripts and test data from time to time
  • By updating the techniques & methods used, with a sense of continuous improvement, to make tests more effective
  • By constantly aligning yourself with the product changes
  • By constantly analyzing the bugs identified

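The “change your test data from time to time” idea can be sketched in code. Below is a minimal, hypothetical example: instead of hard-coding the same inputs forever, each run generates fresh randomized inputs from a logged seed, so any failure can be replayed exactly. `validate_username` and its 3–12 character rule are stand-ins I made up for illustration, not anything from a real product.

```python
import random
import string

def validate_username(name: str) -> bool:
    # Stand-in product code (assumed rule): 3-12 chars,
    # alphanumeric only, must start with a letter.
    return 3 <= len(name) <= 12 and name[0].isalpha() and name.isalnum()

def random_username(rng: random.Random) -> str:
    # Deliberately generate lengths outside the valid range too,
    # so invalid inputs get exercised as well.
    length = rng.randint(1, 15)
    return "".join(rng.choices(string.ascii_letters + string.digits, k=length))

def run_randomized_checks(seed: int, iterations: int = 200) -> int:
    # Seed the generator from a known value; log/print the seed in a real
    # suite so failing inputs are reproducible.
    rng = random.Random(seed)
    failures = 0
    for _ in range(iterations):
        name = random_username(rng)
        expected = 3 <= len(name) <= 12 and name[0].isalpha()
        if validate_username(name) != expected:
            failures += 1
    return failures
```

The key design choice is the explicit seed: randomized data defeats the pesticide effect of frozen inputs, while the seed keeps every run reproducible when something does fail.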
Is this enough?

We talked about the IKEA effect earlier, right? Let me tell you why I mentioned it. In my initial days, I liked to see the tests I created pass, because I was attached to them and believed my baby could never go wrong (which itself was wrong!).

Now, if I see a test that keeps passing over a few consecutive execution cycles, instead of being happy, an intuition kicks in:

Am I doing the right thing? Is there anything I missed? Or is there anything to be modified?

I think this intuition helped me a lot to keep my tests relevant. But there are indeed more ideas.

How to Keep our Automated Tests relevant?

  • Don’t assume we have full coverage with our automated tests; there is always more.
  • Don’t be too formal; sometimes more bugs can be found through informal approaches.
  • Hold peer reviews & sessions to understand more about the features and get diverse opinions from a fresh perspective.
  • Believe that automated tests can also have bugs.
  • Delete irrelevant tests and keep the code clean.
  • Think outside the box and add new scripts by changing the usual scripted flows & data.
  • Add fuzzing and randomization to tests.
  • Create tests focused on possible user interactions instead of only following traditional test-case-based automation.
  • Think at several levels and from different angles to get different perspectives.
  • Test our tests, quite often.
  • Review & verify test suites and scenarios regularly.
  • Last but not least: Step back, think like a tester, frame the test, then develop the test.
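“Test our tests” can be made concrete with a lightweight mutation-style check: deliberately introduce a defect into a copy of the logic and confirm the suite actually fails on it. If the checks also pass on the broken copy, they are too weak to catch that class of bug. Everything below (`apply_discount`, the mutant, the cases) is a made-up sketch for illustration, not code from any real project.

```python
def apply_discount(price: float, percent: float) -> float:
    """Stand-in product code: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def buggy_apply_discount(price: float, percent: float) -> float:
    """Hand-made 'mutant': the discount direction is flipped."""
    return round(price * (1 + percent / 100), 2)

def discount_checks(fn) -> bool:
    """Our automated checks, parameterized over the implementation under test."""
    cases = [
        ((100.0, 10.0), 90.0),
        ((50.0, 0.0), 50.0),
        ((80.0, 25.0), 60.0),
    ]
    return all(fn(*args) == expected for args, expected in cases)

# The suite should pass on the real code and fail on the mutant;
# if it also passes on the mutant, the checks are too weak.
```

Full-blown mutation testing tools automate this idea at scale, but even this hand-rolled version exposes assertion-free or too-permissive checks.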

Learn how to Defocus. Defocusing is the solution to the paradox of automation. It means to continuously change your tests, your techniques, and even your test strategy – James Bach

To overcome the Pesticide Paradox in Test Automation, we should first overcome the biases and misconceptions, and start treating automation as software development. As the application evolves, we must be continuously willing to design new test cases and modify existing ones. And maybe, over time, we can adopt a model-based testing technique that allows us to reuse test code to automatically test new scenarios and code changes.
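To make the model-based testing idea tangible, here is a toy sketch under invented assumptions: a tiny state-machine model of a login session, from which every valid action sequence up to a given length is generated and replayed against the implementation. The `Session` class here trivially mirrors the model for the sake of a self-contained example; in practice the implementation under test would be the real application, and new model transitions automatically yield new test paths.

```python
# A toy model of a session: state -> {action: next_state}.
MODEL = {
    "logged_out": {"login": "logged_in"},
    "logged_in": {"logout": "logged_out", "browse": "logged_in"},
}

class Session:
    """Stand-in implementation under test (hypothetical)."""
    def __init__(self):
        self.state = "logged_out"

    def apply(self, action: str):
        transitions = MODEL[self.state]
        if action not in transitions:
            raise ValueError(f"{action!r} not allowed in {self.state!r}")
        self.state = transitions[action]

def generate_paths(max_len: int):
    """Enumerate every model-valid action sequence up to max_len."""
    def walk(state, path):
        yield path
        if len(path) == max_len:
            return
        for action, nxt in MODEL[state].items():
            yield from walk(nxt, path + [action])
    yield from walk("logged_out", [])

def check_model_conformance(max_len: int = 3) -> int:
    """Replay each generated path and assert the implementation
    tracks the model's expected state; return how many paths ran."""
    checked = 0
    for path in generate_paths(max_len):
        session, state = Session(), "logged_out"
        for action in path:
            session.apply(action)
            state = MODEL[state][action]
            assert session.state == state
        checked += 1
    return checked
```

The payoff is exactly what the paragraph above describes: when the model gains a new state or transition, fresh test scenarios are generated for free, instead of the suite ossifying around yesterday's flows.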

All of these can help control the impact and avoid the pesticide paradox in test automation, but of course there can be more, and this list alone may not help in all contexts. So feel free to think for your own context and add more on top of these ideas.

Revisit, Review, Revise & Reframe.

Ideate, Innovate & Iterate.



About the Author: Nithin SS

Nithin is a passionate and enthusiastic QA professional with 6+ years of experience in IT, focused on Quality Assurance (automation & manual) of web and mobile applications. He is currently with Fave as a Senior QA Automation Engineer, driving their test automation journey. He is mainly involved in building robust test automation solutions from the ground up, as well as in test and release strategies to improve software delivery.

He enjoys sharing his learnings and experiences through creative writing, and loves to be known as a storyteller. He is an active member of various software testing communities and the founder of Synapse-QA, a community-driven co-writing space for testers.


P.S. If you also wish to write for The Test Tribe, you can learn more here.
