Interviewing our FailQonf Speaker Dorothy Graham | Failure Stories, Test Automation, Metrics


Each of our FailQonf Speakers has years of experience behind them and a wealth of knowledge acquired over those years. It would be a shame on our part to restrict their stories to only their FailQonf sessions. We are as eager as you all are to know them and their journeys better, and hence this Interview Series.

We had a few questions in mind that we wished to ask all of them, and there were other questions we designed based on the little research we did into their work and life. We enjoyed the process, and now that we have the answers with us, we are enjoying it even more. We are sure you will enjoy this interview too.


In this interview, I (Aakruti Shukla) took the opportunity to ask our FailQonf Speaker Dorothy Graham a few questions about Failures, Lessons learned, and a part of her amazing work in the Industry. We thank Dorothy Graham for taking the time to answer these and share a part of her life with us.

About Dorothy Graham: Dorothy Graham has worked as an independent consultant and trainer in software testing and test automation for 50 years, and is co-author of 5 books: Software Inspection, Software Test Automation, Foundations of Software Testing, Experiences of Test Automation and A Journey Through Test Automation Patterns (see TestAutomationPatterns.org). She has been a popular and entertaining speaker at hundreds of conferences and events over the years. Dot has been on the boards of conferences and publications in software testing, including programme chair for EuroStar (twice). She was a founder member of the ISEB Software Testing Board and helped develop the first ISTQB Foundation Syllabus. She was awarded the European Excellence Award in Software Testing in 1999 and the first ISTQB Excellence Award in 2012. Now retired, she enjoys singing in choirs and small groups.

LinkedIn | Twitter

 

Aakruti: In your professional experience, out of the different test automation approaches, which one have you found to be the worst?
Dorothy: In our book Software Test Automation, we talk about five different levels of scripting, and the first and worst is capture-replay, which gives you a linear script with hard-coded values. This approach ties your automation tightly to the tool you are using and gives automation that is brittle, expensive, and hard or impossible to maintain.

If you are wondering, the other scripting techniques are structured scripts, shared scripts, data-driven scripts and keyword-driven scripts. Levels of abstraction are essential for good automation: the tool-specific aspects should be confined to only the minimum essential interface modules, and the testware should be designed according to good software design practices to be modular and easy to maintain.

It is also important to pay attention to the user interface of the automation. If you make an easy-to-use interface so that testers can easily write and run automated tests, even without knowing the technical details of the tool, this opens the automation to a much wider set of automation users and will give more benefit.

Some of these issues are summarised in the Design Issue DESIGN DEPENDENCY, in the wiki TestAutomationPatterns.org.

So the worst approach is capture-replay and automation closely tied to a particular tool; use good testware architecture, with levels of abstraction, to avoid this.
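To make the contrast concrete, here is a minimal, hypothetical sketch of the two extremes (the locators, test data and the ui driver object are invented for illustration; they are not taken from the book):

```python
# Hypothetical sketch only -- names, locators and the `ui` driver object are
# invented for illustration; they are not from Software Test Automation.

# Worst case: a linear, capture-replay style script with hard-coded values,
# calling the tool directly, so every test breaks when the UI or tool changes.
def test_login_linear(ui):
    ui.click("#login")
    ui.type("#user", "alice")          # hard-coded test data
    ui.type("#pass", "secret123")
    ui.click("#submit")
    assert ui.text("#banner") == "Welcome alice"

# Better: tool-specific calls confined to one small interface layer, and the
# tests themselves written as data-driven calls on a higher-level keyword.
def login(ui, user, password):
    """High-level keyword: only this layer knows about locators and the tool."""
    ui.click("#login")
    ui.type("#user", user)
    ui.type("#pass", password)
    ui.click("#submit")
    return ui.text("#banner")

LOGIN_CASES = [
    ("alice", "secret123", "Welcome alice"),
    ("bob", "letmein", "Welcome bob"),
]

def test_login_data_driven(ui):
    for user, password, expected in LOGIN_CASES:
        assert login(ui, user, password) == expected
```

When the login flow changes, only the login keyword (and the locators inside it) needs updating, not every test that uses it.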

 

Aakruti: In your experience, and if shareable, what is the biggest blunder that you have faced in test automation?
Dorothy: In my presentation for the conference, I talk about several blunders in test automation, and the most significant one is that tools are perceived to do testing. Although there are aspects of some parts of testing that tools can do well, no tool ever does all of the software testing – this is the biggest blunder, and it lies at the root of many others. Testing tools only do what they are programmed to do; they do not assess what the risks are and what should be tested, and they never think of significant new things while running a test, but they are very fast. But speed is definitely not the same as quality – for software testing as for anything else.

 

Aakruti: What metrics indicate whether automation is going in the right or the wrong direction?
Dorothy: Metrics for either testing or automation should be related to the objectives of the activities. If the goal of automation is to run tests more quickly, then time to run could be measured. If the goal is to run them more often, then the number of times they are run would show that. If the goal is to test more of the software, then coverage could be used.

But it is also important to measure aspects of the automation itself, such as maintainability, flexibility, robustness, reliability etc. For example, maintainability could be measured by the average effort to update the tests when the software changes, and this should be going down over time. These and other measures are discussed in Chapter 8 of the Software Test Automation book, and examples of measuring automation are also given in the Experiences of Test Automation book. Mark Fewster of Grove Software Testing has an excellent presentation about automation health that includes useful measures for automation.
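As a small illustration of the maintainability measure Dorothy mentions, here is a hypothetical sketch (the data and names are invented) of tracking the average effort to update automated tests per release, a figure that should trend downwards over time:

```python
# Hypothetical sketch: tracking one automation-health measure per release --
# the average effort to update tests when the software changes.
# The data below is invented purely for illustration.

maintenance_log = {
    # release: hours spent updating individual automated tests
    "R1": [4.0, 3.5, 5.0],
    "R2": [3.0, 2.5, 3.5, 2.0],
    "R3": [1.5, 2.0, 1.0],
}

def average_update_effort(log):
    """Average hours per test update for each release; should be going down."""
    return {release: sum(hours) / len(hours) for release, hours in log.items()}

if __name__ == "__main__":
    for release, avg in average_update_effort(maintenance_log).items():
        print(f"{release}: {avg:.1f} hours per test update")
```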

 

Aakruti: If you recall your first professional failure, what was it and how did you respond to it?
Dorothy: My first job was to write a test execution and a test comparison programme (they weren’t called “tools” back in the 1970s!). I spent around two years at Bell Labs working mainly on these two programs, written in Fortran. I thought I had done a reasonable job, but when I returned for a visit to the company a few months after I had left, I was shocked to find out that no one was using my programs! In fact, I think what I spent two years writing was probably some of the world’s first shelfware.

I had not had any education or training in software design techniques at this point, and my code was pure spaghetti. I even had a huge diagram of all the links between subroutines, which was very spaghetti-like. In my next job, I learned about “structured programming”, which is about encapsulation and abstraction – the sort of thing that was later called “object-oriented” – basically just good design for software. Because of my earlier failure, I realised the importance of good design – and that this also applies to automation code. In recent years, I have seen the importance of well-structured automation systems, for long-lasting benefit.
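To illustrate that design lesson in modern terms (a hypothetical example, not Dorothy’s original Fortran programs), here is the same file-comparison idea written first as one tangled routine and then with the work encapsulated in small, named pieces:

```python
from itertools import zip_longest

# Spaghetti style: reading, normalising and comparing are all tangled together,
# so any change (a new file format, a different comparison rule) touches everything.
def compare_files_spaghetti(actual_path, expected_path):
    a = open(actual_path).read().splitlines()
    e = open(expected_path).read().splitlines()
    diffs = []
    for i in range(max(len(a), len(e))):
        left = a[i].strip() if i < len(a) else "<missing>"
        right = e[i].strip() if i < len(e) else "<missing>"
        if left != right:
            diffs.append((i + 1, left, right))
    return diffs

# Structured style: each concern lives in one small function and is easy to change.
def read_lines(path):
    with open(path) as f:
        return [line.strip() for line in f]

def compare_files(actual_path, expected_path):
    pairs = zip_longest(read_lines(actual_path), read_lines(expected_path),
                        fillvalue="<missing>")
    return [(n, a, e) for n, (a, e) in enumerate(pairs, 1) if a != e]
```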

 

Aakruti: Is there any experience you would like to share where you learned from someone else’s failure and, based on that lesson, avoided a similar failure at work?
Dorothy: There is one failure story that I was allowed to share, provided the company was kept anonymous. I had previously heard about how successful the company’s inspection programme was – they were finding lots of bugs very early and saving the company lots of money. I was very impressed when I heard the story, as they seemed to be doing things really well. There was a strong champion who was involved with all of the inspections; they protected the authors but also made it an enjoyable process, and they measured the benefits, which were significant (such as halved system test time, and positive feedback from customers about quality).

Imagine my surprise to find out only a few years later that inspections had been completely abandoned! I really wanted to know how this had happened, and I got permission to investigate, where I discovered some significant factors: a reorganisation meant that the champion was no longer involved, moderators’ time was now chargeable, the training was skimped or abandoned, and managers had not known the value of what they had killed (or had allowed to die). However, I also found out that inspections had gone “underground” – those who had realised their value were secretly organising them with trusted colleagues as moderators.

Many of the lessons learned from this failure were already things that the book Software Inspection recommends, but this company didn’t follow them – this gave me confidence about the advice in our book. But it also highlighted how fragile good practices can be, especially when the value is not communicated well enough to high-level managers.


We hope you enjoyed reading this amazing interview. Let us know your thoughts in the Comments section.

We can guarantee that you are going to enjoy FailQonf even more. Enrol here if you have not done so already. Please note that there is a Free Pass option for those who cannot afford the Paid one in these difficult times. See you there.

 

About the Host:

Aakruti works with Xoriant, Pune, as a Senior Test Engineer. She loves testing; many other paths came her way over her career, but she chose to be a Tester, happily and proudly.

She likes to explore and experiment with new concepts, ideas, and thoughts that can help her perform tasks more efficiently and bring more quality to the product.

LinkedIn | Twitter
