There is a persistent myth that testing works like this: you get the spec, then you design test cases with expected results. You run the tests, and then you see whether the results match the expectations. And then you automate. But good testing rarely works that way. Even people who tell me that's how they test don't actually do it, as I invariably discover when I watch them in action (unless they are really bad at testing).
In this talk I will present a report on a product I tested. I will demonstrate how I must learn about the product before I formalize any testing, and how that learning involves interacting with the product (if it is available). I will show how, even when I use automation, which is often, the tooling is developed simultaneously with the test designs, and the test designs are simultaneously put into practice. There is no strict sequence of actions, but rather a tangle, or perhaps a weave, of different investigational threads.
James is a founder of the Context-Driven school of testing. He created and teaches the Rapid Software Testing methodology, and has written two books: Lessons Learned in Software Testing (with Cem Kaner and Bret Pettichord) and Secrets of a Buccaneer-Scholar, a book about succeeding without going to school.