Testflix 25 Speaker Page - Johanna Rothman - The Test Tribe

Atomic Talk Title: Effective Public Speaking - How to Show Your Human Value in an Age of AI

About Topic

Too often, managers think they can replace any knowledge worker with an AI of some sort. However, people, learning with and from others, create the innovative products that companies need to sell. Instead of worrying about replacement, all knowledge workers can choose to show their value through public speaking. In this presentation, I’ll offer three secrets of effective public speaking, from the perspective of testers, to help you show your human value.

About Speaker

Johanna Rothman

Johanna Rothman

President, Rothman Consulting Group, Inc.

Johanna Rothman, the “Pragmatic Manager,” offers frank, practical advice that you can immediately apply to your product development challenges.

She helps leaders and teams see their current reality. Because one size never fits all, she helps them explore options for what and how to change. The results? Leaders and teams learn to collaborate and focus on outcomes that matter.

Her clients and readers appreciate both her trademark practicality and humor. Explore her 20+ books and writing at jrothman.com and createadaptablelife.com.

Other Speakers at Testflix 2025

All

Testing the AI

Career Paths within and beyond Testing

Test Leadership

AI in Testing

Automation

Testing Skills & Mindset

Agentic AI

Testing Mindset

LIVE Fireside chat

Included Artifact

How to Measure Impacts of AI

In this fireside chat with Jaydeep, we’ll dive into how AI is changing the way we measure success in both QA processes and live generative AI bots. On the QA side, we’ll look at cycle time reduction—the “time goalie” metric that shows how quickly we move from discovering a bug to fixing it. We’ll also talk about predictive quality accuracy, which shifts QA from being reactive to proactive by predicting which code changes are most likely to introduce bugs. And of course, we’ll touch on test creation velocity—how much faster teams are able to create meaningful automation with AI’s support.

On the generative AI side, we’ll explore what success really looks like when these bots go live. That means checking task completion rate—did the bot actually help users achieve what they wanted? We’ll also cover user trust score, which combines user satisfaction signals like thumbs up/down with the bot’s error or hallucination rate. Finally, we’ll discuss adoption and retention, the ultimate indicators of long-term value: are people not only trying the bot but also returning to use it again?

jaydeep

Jaydeep Chakrabarty

Testing AI

LIVE SESSION

Included Artifact

RAG to RIGOR – A Tester’s Playbook for Evidence-Backed, Policy-Safe AI

Retrieval-Augmented Generation (RAG) may sound convincing while still being wrong. Testers need sharper lenses than “hit@k” and intuition. This talk turns stochastic answers into evidence-linked verdicts by testing the full chain – ingestion, indexing, retrieval, grounding, and answerability – so every claim ties back to a cited span, the latest allowed document, and the right policy for the user’s role. Expect practical oracles (rationale-overlap, abstention SLOs), robustness drills (typo/paraphrase invariants, counterfactual twins), and drift guards (embedding/index overlap). We’ll stress-test AI with chaos (timeouts, stale caches), detect poisoning, and enforce “smart silence” when answers lack support. The outcome: a CI-ready harness that upgrades RAG from demo magic to auditable reliability – giving QA teams crisp signals, provable artifacts, and confidence to ship.

Aparana Gupta

Aparana Gupta

Testing AI

LIVE AMA

Included Artifact

Leadership in Post AI Era

A.I. in many organisations is bringing everyone to the same level. Everyone is a fresher today. By that I mean everyone is looking at it with a fresh pair of eyes. What can this do? What can this not do? How does it impact the business and customers? How does it impact people and their roles in this org and others? These are the questions in everyone’s mind.

Leaders of all orgs, their reports, their report’s reports – all of them have the same question and some speculation of what the answer could be. How then should someone lead others in this situation?

On the other side – leadership is not always about leading people. It is leading an org or a team from problem to solution. From chaos to peace. From danger to safety. From uncertainity to certainity. A.I. has created all things required for the LHS of the equation. It is the onus of the leaders to figure out the RHS. Finding the RHS requires a certain thought and leadership that the world wasn’t prepared for.

Pradeep Soundararajan 1

Pradeep Soundararajan

Test Leadership

LIVE AMA

Included Artifact

Harnessing AI to Elevate SDLC Quality

As software systems grow more complex and business cycles demand faster releases, traditional approaches to quality engineering are no longer enough. AI brings a new dimension to the Software Development Life Cycle—augmenting decision-making, predicting risks, automating quality checks, and continuously assuring value delivery. Armed with GenAI and Agents we can reshape how we think about software quality. This discussion explores practical ways to embed AI across the SDLC to accelerate delivery, reduce risk, and achieve a step-change in quality outcomes.

Mallika Fernandes

Mallika Fernandes

AI in Testing

LIVE SESSION

Included Artifact

What AI Can Do, What Testers Must Do: Partnering with AI in Automation

AI is transforming the way we think about software testing, but hype alone won’t deliver quality. In this talk, Ronak Ray—Vice President of QA & AI Strategy at Forbes—shares a pragmatic view of where AI truly adds value in automation, and where human testers remain indispensable.

This talk addresses these questions by separating what AI can realistically do today from what testers must continue to own. It will outline a practical partnership model where AI augments testers—generating test scenarios, accelerating automation, and enabling large-scale parallelization—while testers provide judgment, context, and oversight.

Ronak Ray

Ronak Ray

AI in Testing

LIVE AMA

Included Artifact

Vibe Coding – Emergence, Present & Future

In this session, Andrew will trace Vibe Coding’s journey—from emergence to current impact—exploring how it has got us into re-thinking development and testing. He’ll examine today’s tools, real-world use cases, and the cultural shifts teams need to embrace this AI-driven approach.

Key focus areas include:

Using Vibe Coding in Creating Testing Artifacts or Solutions: Exploring how vibe coding can assist testers in generating test cases, scripts, and other quality solutions more effectively.

Testing AI-Generated Code: How testers test (verify, validate and falsify) vibe-coded applications when we all don’t fully understand the underlying code. We’ll explore practical strategies for testing “black box” AI-generated systems.

Testing Frameworks for SDETs: Adapting tools – as applicable – for AI-generated code, maintaining test suites for constantly evolving codebases, and automation strategies for vibe-coded applications.

Future of Testing: Evolution of tester roles in AI-first development, new skills needed, testing personalized AI tools, and version control challenges.

Quality Challenges: Performance and security vulnerability detection, debugging when SDETs may not understand code structure, and establishing new quality standards.

Andrew will share hot takes on myths versus reality and deliver practical advice for getting started.

Z8A3511 removebg preview 2

Andrew Knight

Agentic AI

ATOMIC TALK

Included Artifact

QA and Software Testing Careers in the USA: An X-Ray of Today’s Job Requirements

In this Atomic Talk, the speaker shares findings from research on more than 500 testing and QA job openings in the U.S. The session covers the research process, the data collected, the graphs that bring it to life, and the insights drawn from the analysis. By the end, the audience will gain clarity on the most in-demand testing tools, programming languages, test automation tools, and other key requirements shaping today’s job market.

julio

Júlio de Lima

Career Paths within and beyond Testing

ATOMIC TALK

Included Artifact

Thinking Ahead: SDET Career Progression

The role of the Software Development Engineer in Test (SDET) has expanded far beyond “just writing tests.” Today, SDETs sit at the intersection of quality, development, product and automation—opening doors to a wide range of future opportunities. In this session, we’ll explore the common roles and responsibilities of an SDET, the diverse career paths that stem from this foundation, and how these skills can translate into leadership, architecture, DevOps, or even product strategy. Whether you’re currently an SDET or simply curious about the future of the role, this talk will give you a clear roadmap of what’s possible and how to prepare for it.

David Ingraham

David Ingraham

Automation

ATOMIC TALK

Included Artifact

Wrestling with Business Logic – A Simple Approach for Clarity and Fast Feedback

Business logic is at the heart of every system. It should be easy to understand and evolve—yet in many teams, even small changes are slow, risky, and painful.

If a first-semester computer science student can implement moderately complex rules in hours, why do seasoned Scrum teams with more resources take days or weeks to do the same?

Too often, rules are hidden in tangled code, tested only during slow, costly integration runs, and clarified with stakeholders far too late. The result? Long feedback loops, high costs, rework, and fragile delivery.

In this talk, I’ll show a practical, lightweight way to regain control of business logic: isolate it in clean, zero-dependency functions, capture rules collaboratively with BDD (using Cucumber), and get automated feedback in minutes, not days.

This approach bridges the gap between developers and stakeholders, improves clarity, reduces risk, and builds confidence in every change.

 

(The talk combines Software Architecture, Test Architecture, Agile, DeveloperExperience and Prototyping)

alex schwartz profile pic removebg preview 1

Alex Schwartz

Automation

ATOMIC TALK

Included Artifact

Effective Public Speaking: How to Show Your Human Value in an Age of AI

Too often, managers think they can replace any knowledge worker with an AI of some sort. However, people, learning with and from others, create the innovative products that companies need to sell. Instead of worrying about replacement, all knowledge workers can choose to show their value through public speaking. In this presentation, I’ll offer three secrets of effective public speaking, from the perspective of testers, to help you show your human value.

Johanna Rothman

Johanna Rothman

Career Paths within and beyond Testing

ATOMIC TALK

Included Artifact

Private AI: Gains, Gaps and Gotchas

In this talk, the speaker explores how local LLMs can serve as powerful and secure code assistants for software development and test automation, particularly in environments where data privacy and security are critical. When cloud-based AI tools like ChatGPT or Copilot are not feasible due to client restrictions or compliance requirements, local solutions such as Ollama running Qwen 2.5, integrated with tools like the Continue plugin, provide a safe and effective alternative.

The session walks through the setup, model selection, and use of chat-based interfaces to accelerate BDD creation, automation code generation, and performance scripting—all while maintaining full data control. Real-world observations, including challenges with complex frameworks like Serenity, handling imports, and addressing domain-specific gaps, are also discussed. The talk blends technical insights with practical outcomes, highlighting where local AI delivers the most value and where human oversight remains essential.

Samar Ranjan

Samar Ranjan

AI in Testing

ATOMIC TALK

Included Artifact

Leading and Managing in Dysfunctional Organisations

We have a leadership crisis. People do not know how to lead. They do not know how to manage. They often don’t seem to care.

Most of the basic strategies for leadership and management are simple but since leadership is bad, we need to go over the basics. I’ve worked as a consultant, a manager, a worker, a leader. I will draw on what worked for me and draw on some lessons learned from what I’ve seen fail to work.

There isn’t much time in 10 – 15 minutes so I’ll go right back to fundamentals and cover the core of leadership and management. The Art of War, a Strategy book from 500 BC refers to this as “The Moral Law”.

We’ll also cover how to build trust and respect by Knowledge Sharing and the attitudes and responsibilities of a Leader and Manager.

Alan Richardson removebg preview Picsart AiImageEnhancer

Alan Richardson

Test Leadership

ATOMIC TALK

Included Artifact

Where AI Goes Wrong - The Blind Spots Testers See

AI promises speed. Testers see the cracks.

Behind the buzz, AI tools stumble in hidden ways such as hallucinations, false confidence, blind spots. These are easy to miss, but costly if ignored.

This atomic talk reveals:

  • The subtle failures AI hides from plain sight
  • Why speed without reliability is a trap
  • Strategies to synergize and supervise AI outputs

The role of testers isn’t just about keep up with AI. It’s also in making them trustworthy and useful.

Rahul Parwal

Rahul Parwal

AI in Testing

ATOMIC TALK

Included Artifact

QA Without Firefighting: Build Autonomous, Not Automated Teams

In this Atomic Talk, the speaker explores how QA teams can move beyond reactive testing and constant firefighting by embracing autonomy instead of relying solely on automation. While automation delivers speed and repeatability, it often fails to address upstream chaos, unclear ownership, and late-stage quality challenges. The real transformation happens when QA is embedded early, owns its signals, and drives decisions proactively.

The session covers practical approaches to building autonomous QA systems—intelligent, self-service frameworks that surface quality insights early, reduce coordination drag, and remove dependencies. It highlights how to foster collaboration between engineering and QA, redefine ownership boundaries, and create conditions where quality becomes a shared, systemic outcome rather than a downstream checkpoint. For teams burdened by fire drills and quality debt, this talk offers a fresh perspective and actionable strategies to help QA lead the charge instead of cleaning up the aftermath.

Gaurav Mahajan 2

Gaurav Mahajan

AI in Testing

ATOMIC TALK

Included Artifact

Defining ‘Enough’: Testing in the GenAI Era

In Machine Learning, a model delivering 85% accuracy is often celebrated as a success. Chasing 100% is understood to be unrealistic—the data is messy, the real world is unpredictable, and the final few percentage points usually cost far more than they’re worth.

Yet in software testing—especially in the era of AI and GenAI—the question “Can we test 100%?” still lingers. The reality is that AI outputs are probabilistic: the same prompt can produce different answers, and confidence scores reveal how certain or uncertain the system is. In this context, 100% testing coverage is an appealing idea, but it doesn’t reflect how things truly work.

This talk introduces a different way of thinking. Techniques like Principal Component Analysis (PCA) can reduce the testing space to the dimensions that matter most. Confidence scores can highlight higher-risk areas, helping teams prioritize where to focus. And continuous evaluation, rather than a single pass/fail check, becomes the foundation for building trust.

The session presents a practical approach to answering the question, “Have we tested enough?” in AI projects. It’s not about testing everything—it’s about testing the right things with the right depth, so teams can ship with confidence and rest easy at night.

SatParkash Maurya

SatParkash Maurya

Testing Skills & Mindset

ATOMIC TALK

Included Artifact

Bias in, Bias Out : Knowing various Biases in Testing AI

Everyone says a human’s character depends on how they are brought up — the same holds true for Artificial Intelligence models, especially Large Language Models (LLMs). Building an LLM involves three major steps: collecting training data, training the model with that data, and finally productizing the model for real-world use. At every stage, there are subtle and not-so-subtle opportunities for bias to creep in — be it through the data we choose, the way we train, or the assumptions we bake into the final product. And just like in humans, this “upbringing” has a lasting impact on how the model thinks, responds, and interacts.

In my session, I’ll walk you through the various stages where bias can be introduced — intentionally or unintentionally and how these biases can affect the behavior and fairness of the LLMs we build. While bias is something we can’t completely eliminate, it is something we can actively manage. You’ll gain insights into practical methods to identify, reduce, and balance bias while sampling data, during training, and throughout the development cycle of LLMs. The goal is not perfection, but responsibility building models that are more transparent, inclusive, and trustworthy.

Maheshwaran

Maheshwaran VK

Testing the AI

ATOMIC TALK

Included Artifact

Testing Agentic AI

This talk explores the challenges of testing agentic AI systems—AI that autonomously reacts to events and initiates processes. Drawing on decades of experience, Robert Sabourin emphasizes that testing begins and ends with risk. A three-dimensional model (business impact, technical risk, autonomy) guides evaluation. Testers generate ideas using a broad taxonomy, from capabilities and failure modes to creative and adversarial approaches. Continuous testing and monitoring ensure findings inform business decisions, emphasizing learning over correctness.

rob toronto removebg preview

Robert Sabourin

Agentic AI

ATOMIC TALK

Included Artifact

Agentic QA Workflow

Agentic code generation has made development sprint from days to hours, but most QA delays aren’t compute—they’re coordination: manual planning, slow handoffs, and batched reviews. Meanwhile, capable agents are emerging in silos (e.g., Xylos), without orchestration or governance, so outputs arrive late, go stale, and lack an audit trail. This talk shows how we compose those siloed agents into an AI‑powered STLC that matches modern dev velocity: a one‑time JIRA connect turns an epic into a governed, dependency‑aware workflow (13–19 steps spanning design, scripts, data, execution, analysis, plus security/performance/accessibility). Steps run as soon as they’re unblocked; reviewers get real‑time SSE updates and approve in‑context with artifacts and logs; failures surface explicit diagnostics with scoped re‑runs and retries. The measurable gains come from the workflow, not agent speed: planning time drops from hours to minutes, handoff waits collapse via auto‑triggers, review latency compresses through a single approval surface, and persistence removes rework—typically reclaiming 1.5–2.5 days of non‑compute time per epic.
I will walk through the architecture (Node.js/TypeScript, Express, MongoDB), template and dependency graph design, the human‑in‑the‑loop review UX, and our failure‑handling guardrails (clear JIRA errors, retries, auditable trail). The session ends with an adoption playbook to incrementally transform your STLC—composing existing Xylos (or similar) agents without disrupting developer workflows. This talk is for testers, SDETs, QA leaders, and engineering managers who need a practical, repeatable way to orchestrate agents into a governed, reviewable QA pipeline that keeps pace with AI‑accelerated development

Krishnamoorthy Gurramkonda

Krishnamoorthy Gurramkonda

Agentic AI

ATOMIC TALK

Included Artifact

Breaking Your Own Bots

As AI agents take on critical software testing and automation tasks, their vulnerabilities can become silent ticking bombs. In this talk, we’ll explore how applying “red teaming” ; a concept borrowed from cybersecurity; can expose weaknesses in AI agents before they cause failures in production.
I’ll share practical techniques for stress-testing agents, from prompt injection attacks to adversarial workflows, and how these methods can be used not only for finding flaws but also for hardening systems against future threats. We’ll also look at real-world examples of red teaming in AI testing agents, the patterns that emerge, and the tools you can use to simulate hostile environments.
By the end, you’ll have a blueprint for turning your AI testing agents into resilient, self-correcting systems capable of surviving in unpredictable real-world scenarios.

Robin Gupta

Robin Gupta

Agentic AI

ATOMIC TALK

Included Artifact

Beyond Numbers, Metrics that matter in AI Age

AI has changed software and the way software is tested, yet many teams still rely on outdated metrics like coverage, pass rates and defect counts. These numbers do look great on dashboards but they fall short in answering the most critical question: can we really trust what AI is doing?

This talk, “Beyond Numbers, Metrics That Matter In the AI Age, explores how testers can redefine measurement for AI driven systems. I will introduce 3 essential metrics designed for AI Age: – explainability metrics, robustness metrics and trust metrics.

The attendees will walk away with a practical modern lens on metrics that moves beyond counting tests and proving trust.

Brijesh Deb removebg preview

Brijesh Deb

Testing the AI

ATOMIC TALK

Included Artifact

Before Building AI we should First Understand Natural Intelligence

Imagine you were building the world’s first heavier-than-air powered flying machine. Would you do so without first either trying to understand the principles of flight, or studying those animal species that had already succeeded in flying?

Yet, this is pretty much what many groups are currently trying to do in artificial intelligence. If you’re working in AI, or indeed pushing the frontiers of knowledge in any subject, you should do your best to gain a thorough understanding of the existing knowledge available in your own and related subjects.

Historically, many AI researchers have been highly disparaging of research into human intelligence and its results. This is puzzling, as human and other natural intelligences have already resolved many problems that AI cannot yet tackle.

In this session, Andrew explores human memory and seven of its apparent shortcomings, which many leading AI researchers have been highly disparaging of. He shows that, rather than being shortcomings, these features are effective solutions to adaptive problems humans faced in their evolutionary past, problems that AI has not yet successfully tackled.

He shows how studying these and other apparent shortcomings may lead to solutions of major challenges within AI.

The race to powered flight was won, not by the best funded group, which was the US government and the arms millionaire Hiram Maxim. Instead, it was won by two bicycle engineers, Orville and Wilbur Wright, who did what the US government didn’t bother to do. They studied how birds fly.

The race to true AI may well be won in a similar manner.

Andrew brown 2 2

Andrew Brown

AI in Testing

ATOMIC TALK

Included Artifact

Resilience Testing of a Tester

As testers, we love finding bugs. We enjoy making systems fail. But what makes a tester fail? In these trying times — when health issues are on the rise, job losses are making headlines, and tolerance levels are stretched thin — how do we help testers pass their own resilience test? Testers have to confront, justify their findings, often left out of key discussions, work on skewed timelines. It generates stress and emotional toll.

In this talk, we’ll walk through the 5W1H of emotional resilience — what it is, why it matters more than ever in the age of AI, and how to build it. We’ll visit the ‘test cases’ — those real-life moments that break a tester — and we’ll also debug the fixes that help us bounce back, stronger than before.

So let’s test, debug, and upgrade the most important system of all — ourselves!

Ashwini Lalit 1

Ashwini Lalit

Test Leadership

ATOMIC TALK

Included Artifact

From Copilot to Co-Tester: Guardrails for AI-Written Tests

Are you generating tests using Generative AI? Dimpy does, and she admits that AI can now generate tests of different types instantly. But speed doesn’t always guarantee safety. Without the right checks, there’s a risk of brittle, redundant, or misleading tests that create a false sense of coverage.

In this talk, Dimpy explores a structured “guardrails” framework for validating AI-generated tests so they can be trusted in production. She will walk through both semantic checks (AI-on-AI validation against acceptance criteria) and deterministic checks (code coverage, mutation score, flakiness detection, performance smoke tests, and security scans). Dimpy will also demonstrate a practical framework on how to automate these guardrails in CI/CD, turning raw test outputs into a measurable Test Quality Score that ensures functional, non-functional, and cross-layer coverage.

Dimpy Adhikary

Dimpy Adhikary

Testing the AI

ATOMIC TALK

Included Artifact

Breaking Boundaries: A Tester’s Guide to Freelance and Remote Success

Freelancing is more than a side hustle—it’s the launchpad to global careers. With global pay comes higher earning potential, and with global talent comes exposure to diverse practices and cutting-edge teams.

Manish begins with freelancing as the starting point, where small gigs like automation fixes, bug bashes, and manual cycles on platforms such as Upwork or Fiverr help testers build experience, portfolios, and reviews. From there, he shows how to scale into bigger freelance engagements, taking on long-term contracts in automation, performance, or QA consulting through platforms like Toptal, Braintrust, and Testlio—focusing on specialization, repeat clients, and steady income.

The session then explores the transition into full-time remote roles, where freelance credibility and client references can help land jobs via LinkedIn, AngelList, RemoteOK, or We Work Remotely. These roles provide not just stability but also global perks.

Along the way, Manish highlights the technical skills in highest demand—automation, API, performance, cloud, mobile, and TestOps—as well as soft skills like communication across time zones, self-management, and client handling. He also touches on parallel opportunities in content creation, mentorship, documentation, and community work, while sharing practical disclaimers around taxation, employer policies, payment methods, and compliance.

The session closes with a clear roadmap: start small, scale, transition, and thrive. Freelancing isn’t the end goal—it’s the launchpad to global opportunities.

manish Saini 2

Manish Saini

Testing Skills & Mindset

LIVE SESSION

Included Artifact

Write Once, Test Many: Leveraging AI for Sustainable UI Automation

It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout. The point of using Lorem Ipsum is that it has

It is a long established fact that a reader will be distracted by the readable conte
will be distracted by the readable content.

Amit Rawat

Amit Rawat

Exploratory Testing

Functional Testing

LIVE SESSION

Included Artifact

Write Once, Test Many: Leveraging AI for Sustainable UI Automation

It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout. The point of using Lorem Ipsum is that it has

It is a long established fact that a reader will be distracted by the readable conte
will be distracted by the readable content.

Amit Rawat

Amit Rawat

Exploratory Testing

Functional Testing

LIVE SESSION

Included Artifact

Write Once, Test Many: Leveraging AI for Sustainable UI Automation

It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout. The point of using Lorem Ipsum is that it has

It is a long established fact that a reader will be distracted by the readable conte
will be distracted by the readable content.

Amit Rawat

Amit Rawat

Exploratory Testing

Functional Testing

World’s Leading Software
Testing Conference

Virtual I Free

[sibwp_form id=2]
The Test Tribe Logo
Privacy Overview

This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.