Testflix 2025 - Leading Global Software Testing Conference | The Test Tribe

World’s Leading Virtual Software Testing Conference

World’s Leading Software Testing Conference

Learn Agentic AI, Gen AI in Testing, Testing the AI, Automation, Leadership. For Free.

The global testing binge returns! Insights across 10+ Themes
Now with Atomic + Live Talks & Expert Panels.

40+

SPEAKERS

5+

THEMES

20K+

SIGNUPS

120+

COUNTRIES

13000+ Testers have already registered for FREE to learn AI, Automation etc.

Testflix Legacy

5

Editions

57,000

Registrations

20,000

Attendees

130+

Countries

500+

Experts

Attendees from World’s Top Companies
amazon
general electric logo
cognizant
Salesforce Logo
microsoft logo 2
hcltech logo
ibm
amadeus logo
capgemini
Truecaller Logo
cred logo 1
showrunnr Logo Dark
phonepe 1
Zepto Logo
SP Global Logo
moolya logo png
hp 2012
groww Logo
HCL
Wipro
tcs
JP_morgan_logo
fidelity logo 1 1 1
moengage
hhaexchange Logo
LTIMindtree Logo
swiggy logo
oracle 6
pepsi
zoomcar Logo
Paytm Insider logo
Logo tiaa
nimblework nimble
Bigbasket logo
Cisco_logo
dish logo
Siemens logo
tech mahindra logo
persistent systems header logo 1
Red Hat Logo
coforge logo
CRED logo
Volvo logo
walmart logo 1
walmart logo 1
Volvo logo
CRED logo
coforge logo
Red Hat Logo
persistent systems header logo 1
tech mahindra logo
Siemens logo
dish logo
Cisco_logo
nimblework nimble
Bigbasket logo
Logo tiaa
Paytm Insider logo
zoomcar Logo
pepsi
oracle 6
swiggy logo
LTIMindtree Logo
hhaexchange Logo
moengage
fidelity logo 1 1 1
JP_morgan_logo
tcs
Wipro
HCL
groww Logo
hp 2012
moolya logo png
SP Global Logo
Zepto Logo
phonepe 1
showrunnr Logo Dark
cred logo 1
Truecaller Logo
capgemini
amadeus logo
ibm
hcltech logo
microsoft logo 2
Salesforce Logo
cognizant
general electric logo
amazon

Why You Can’t Miss Testflix 2025

artificial intelligence 5

Industry veterans and fresh voices on one stage.

machine learning 1

2025 NEW

40+ Sessions: Talks, Expert Panels, Fireside Chats, AMAs

electronic 1

Episodes on Gen AI, Leadership, Career Guidance, and more.

demo

Demonstrations of the latest Testing & AI tools.

live 1

2025 NEW

Live Q&A, polls, and discussions to keep the energy high.

learning 1

2025 NEW

Role & interest-based binge maps for maximum learning ROI.
gift

2025 NEW

Exclusive atomic resources on select talks for live attendees.

achievement

The Test Tribe vibe at its absolute best.

output onlinepngtools 1

100% free to attend. Network, Binge Learn, Grow.

output onlinepngtools

Contests, games, and networking with the community.

output onlinepngtools 3
Participation certificate for every attendee.
output onlinepngtools 4
Access to session recordings packed with how-tos.

Learning for Every Tester

Serving across the experience range. Episodes on:

AI in testing

AI in Testing

Testing the ai

Testing the AI

Agentic AI

Agentic AI

Testing Mindset

Testing Mindset

engineering 2

Automation

data Security

Data & Security

leadership 1

Leadership

Career

Career Progression

AI in testing

AI in Testing

Testing the ai

Testing the AI

Agentic AI

Agentic AI

Testing Mindset

Testing Mindset

engineering 2

Automation

data Security

Data & Security

leadership 1

Leadership

Career

Career Progression

Who’s Taking the Testflix Stage in 2025?

Meet 6th edition’s extraordinary speakers

Aparana Gupta

Director of Engineering

Microsoft

India

Aparana Gupta
Live Talk: RAG to RIGOR – A Tester’s Playbook for Evidence-Backed, Policy-Safe AI
Vanya Seth

Head of Technology

thoughtworks brand logo

India

Vanya Seth removebg preview 1
Atomic Talk: 7 Highly Valuable Approaches for REST API Testing
Ronak Ray

Vice President, QA & AI Strategy

Forbes Logo

United States of America

Ronak Ray
Live Talk : What AI Can Do, What Testers Must Do - Partnering with AI in Automation
Mallika Fernandes

Managing Director

Accenture logo.svg

India

Mallika Fernandes
Live Fireside Chat: Details Coming Soon!
Jaydeep Chakrabarty

Head of AI in Tech

Piramal Finance logo 1

India

jaydeep chakrabarty removebg preview 1
Live Fireside Chat: How to Measure Impacts of AI
Soumya Mukherjee

ML Architect

MAANG

India

soumya virtual enhanced removebg preview
Live Panel Discussion: Details Coming Soon!
Rahul Verma

Senior Consultant & AI Coach

trendig e1700143729757

Germany

Rahul Verma 2
Live Panel Discussion: Details Coming Soon!
Andrew Knight

Sr. Director - Product Management

cycle labs

United States of America

Z8A3511 removebg preview 1
Live AMA: Vibe Coding – Emergence, Present & Future
Pradeep Soundararajan

Founder & CEO

moolya logo png 1

India

Pradeep Soundararajan 1 1 1
Live AMA: Leadership in Post AI Era
David Ingraham

Senior SDET

image removebg preview

United States of America

David Ingraham
Atomic Talk: Thinking Ahead - SDET Career Progression
Karime Salomon Zarate

Senior Director, QE

idt horizontal logo enhanced

Bolivia

KarimeSalomon enhanced removebg preview e1740555063303
Live Panel Discussion: Details Coming Soon!
Robert Sabourin

President

AmiBug.Com, Inc.

Canada

rob toronto removebg preview
Atomic Talk: Testing Agentic AI
Gaurav Mahajan

Tech Director - QE

Globant

India

Gaurav Mahajan 2
Atomic Talk: QA Without Firefighting - Build Autonomous, Not Automated Teams
Rahul Parwal

Specialist

ifm removebg preview 1

India

Rahul Parwal
Atomic Talk: Where AI goes wrong - Lessons and Testers
Johanna Rothman

President

Rothman Consulting Group, Inc.

United States of America

JohannaRothman removebg preview
Atomic Talk: Effective Public Speaking-How to Show Your Human Value in an Age of AI
Alex Schwartz

Director of Engineering

Germany

alex schwartz profile pic removebg preview 1
Atomic Talk: Wrestling with Business Logic – A Simple Approach for Clarity and Fast Feedback
Ajay Balamurugadas

Customer Success & Strategy Head

PostQode logo

India

Ajay b 1
Atomic Talk Title:
Júlio de Lima

QA Manager

capco

United States of America

julio
Atomic Talk: QA and Software Testing Careers in the USA - An X-Ray of Today’s Job Requirements
Robin Gupta

CEO

image removebg preview 6

India

Robin Gupta
Atomic Talk: Breaking Your Own Bots
Anne-Marie Charrett

Engineering Futurist

Testing Times

Australia

Anne Marie Charrett 2023 removebg preview Picsart AiImageEnhancer
Live Panel Discussion: Details Coming Soon!
Craig Risi

Head of Engineering

Old Mutual

South Africa

Craig Risi
Atomic Talk: Building quality in LLM-powered applications
Andrew Brown

Independent Test Consultant

England

Andrew brown 2 2
Atomic Talk: Before Building AI we should First Understand Natural Intelligence
Ashwini Lalit

Sr. Manager - QE

Nimblework

India

Ashwini Lalit 1
Atomic Talk: Resilience Testing of a Tester
Maheshwaran

Sr. Lead Engineer - Automation

Elixr Labs

India

Maheshwaran
Atomic Talk: Bias in, Bias Out - Knowing various Biases in Testing AI
Samar Ranjan

Lead Quality Analyst

thoughtworks Without BG

India

Samar Ranjan
Atomic Talk: Private AI - Gains, Gaps and Gotchas
SatParkash Maurya

General Manager

image removebg preview 9 1

India

SatPrakash removebg preview
Atomic Talk: Defining ‘Enough - Testing in the GenAI Era
Alan Richardson

Author, Consultant, Researcher

Compendium Developments

England

Alan Richardson removebg preview Picsart AiImageEnhancer
Atomic Talk: Leading and Managing in Dysfunctional Organisations
Krishnamoorthy Gurramkonda

Senior QA Architect

Celigo

India

Krishnamoorthy Gurramkonda
Atomic Talk: Agentic QA Workflow
Gil Zilberfeld

CTO

TestinGil

Israel

Gil Zilberfeld Portrait1 300x300 removebg preview
Atomic Talk: Exploratory Testing with AI
Dimpy Adhikary

Staff Quality Architect

image removebg preview 7

India

Dimpy Adhikary
Atomic Talk: From Copilot to Co-Tester - Guardrails for AI Written Tests
Manish Saini

Sr. Lead Developer Advocate

browserstack logo light background 2

India

image removebg preview 2
Atomic Talk: Breaking Boundaries - A Tester’s Guide to Freelance and Remote Success
Brijesh Deb

Principal Consultant

infosys technologies logo 1

India

Brijesh Deb removebg preview
Atomic Talk: Beyond Numbers, Metrics that matter in AI Age

More speakers coming soon!

Who Should Attend This Event?

coding

Test Engineers

coding

QA & Tech Leads

coding

SDETs

coding

QA Managers

coding

Directors & CTOs

coding

Developers

coding

Automation QAs

coding

Product Managers

coding

Test Engineers

coding

QA & Tech Leads

coding

SDETs

coding

QA Managers

coding

Directors & CTOs

coding

Developers

coding

Automation QAs

coding

Product Managers

Your Custom Binge List

Get personalized session recommendations based on
your role and interests.

What are we covering in this edition?

All

Testing the AI

Career Paths within and beyond Testing

Test Leadership

AI in Testing

Automation

Testing Skills & Mindset

Agentic AI

Testing Mindset

LIVE Fireside chat

Included Artifact

How to Measure Impacts of AI

In this fireside chat with Jaydeep, we’ll dive into how AI is changing the way we measure success in both QA processes and live generative AI bots. On the QA side, we’ll look at cycle time reduction—the “time goalie” metric that shows how quickly we move from discovering a bug to fixing it. We’ll also talk about predictive quality accuracy, which shifts QA from being reactive to proactive by predicting which code changes are most likely to introduce bugs. And of course, we’ll touch on test creation velocity—how much faster teams are able to create meaningful automation with AI’s support.

On the generative AI side, we’ll explore what success really looks like when these bots go live. That means checking task completion rate—did the bot actually help users achieve what they wanted? We’ll also cover user trust score, which combines user satisfaction signals like thumbs up/down with the bot’s error or hallucination rate. Finally, we’ll discuss adoption and retention, the ultimate indicators of long-term value: are people not only trying the bot but also returning to use it again?

jaydeep

Jaydeep Chakrabarty

Testing AI

LIVE SESSION

Included Artifact

RAG to RIGOR – A Tester’s Playbook for Evidence-Backed, Policy-Safe AI

Retrieval-Augmented Generation (RAG) may sound convincing while still being wrong. Testers need sharper lenses than “hit@k” and intuition. This talk turns stochastic answers into evidence-linked verdicts by testing the full chain – ingestion, indexing, retrieval, grounding, and answerability – so every claim ties back to a cited span, the latest allowed document, and the right policy for the user’s role. Expect practical oracles (rationale-overlap, abstention SLOs), robustness drills (typo/paraphrase invariants, counterfactual twins), and drift guards (embedding/index overlap). We’ll stress-test AI with chaos (timeouts, stale caches), detect poisoning, and enforce “smart silence” when answers lack support. The outcome: a CI-ready harness that upgrades RAG from demo magic to auditable reliability – giving QA teams crisp signals, provable artifacts, and confidence to ship.

Aparana Gupta

Aparana Gupta

Testing AI

LIVE AMA

Included Artifact

Leadership in Post AI Era

A.I. in many organisations is bringing everyone to the same level. Everyone is a fresher today. By that I mean everyone is looking at it with a fresh pair of eyes. What can this do? What can this not do? How does it impact the business and customers? How does it impact people and their roles in this org and others? These are the questions in everyone’s mind.

Leaders of all orgs, their reports, their report’s reports – all of them have the same question and some speculation of what the answer could be. How then should someone lead others in this situation?

On the other side – leadership is not always about leading people. It is leading an org or a team from problem to solution. From chaos to peace. From danger to safety. From uncertainity to certainity. A.I. has created all things required for the LHS of the equation. It is the onus of the leaders to figure out the RHS. Finding the RHS requires a certain thought and leadership that the world wasn’t prepared for.

Pradeep Soundararajan 1

Pradeep Soundararajan

Test Leadership

LIVE AMA

Included Artifact

Harnessing AI to Elevate SDLC Quality

As software systems grow more complex and business cycles demand faster releases, traditional approaches to quality engineering are no longer enough. AI brings a new dimension to the Software Development Life Cycle—augmenting decision-making, predicting risks, automating quality checks, and continuously assuring value delivery. Armed with GenAI and Agents we can reshape how we think about software quality. This discussion explores practical ways to embed AI across the SDLC to accelerate delivery, reduce risk, and achieve a step-change in quality outcomes.

Mallika Fernandes

Mallika Fernandes

AI in Testing

LIVE SESSION

Included Artifact

What AI Can Do, What Testers Must Do: Partnering with AI in Automation

AI is transforming the way we think about software testing, but hype alone won’t deliver quality. In this talk, Ronak Ray—Vice President of QA & AI Strategy at Forbes—shares a pragmatic view of where AI truly adds value in automation, and where human testers remain indispensable.

This talk addresses these questions by separating what AI can realistically do today from what testers must continue to own. It will outline a practical partnership model where AI augments testers—generating test scenarios, accelerating automation, and enabling large-scale parallelization—while testers provide judgment, context, and oversight.

Ronak Ray

Ronak Ray

AI in Testing

LIVE AMA

Included Artifact

Vibe Coding – Emergence, Present & Future

In this session, Andrew will trace Vibe Coding’s journey—from emergence to current impact—exploring how it has got us into re-thinking development and testing. He’ll examine today’s tools, real-world use cases, and the cultural shifts teams need to embrace this AI-driven approach.

Key focus areas include:

Using Vibe Coding in Creating Testing Artifacts or Solutions: Exploring how vibe coding can assist testers in generating test cases, scripts, and other quality solutions more effectively.

Testing AI-Generated Code: How testers test (verify, validate and falsify) vibe-coded applications when we all don’t fully understand the underlying code. We’ll explore practical strategies for testing “black box” AI-generated systems.

Testing Frameworks for SDETs: Adapting tools – as applicable – for AI-generated code, maintaining test suites for constantly evolving codebases, and automation strategies for vibe-coded applications.

Future of Testing: Evolution of tester roles in AI-first development, new skills needed, testing personalized AI tools, and version control challenges.

Quality Challenges: Performance and security vulnerability detection, debugging when SDETs may not understand code structure, and establishing new quality standards.

Andrew will share hot takes on myths versus reality and deliver practical advice for getting started.

Z8A3511 removebg preview 2

Andrew Knight

Agentic AI

ATOMIC TALK

Included Artifact

QA and Software Testing Careers in the USA: An X-Ray of Today’s Job Requirements

In this Atomic Talk, the speaker shares findings from research on more than 500 testing and QA job openings in the U.S. The session covers the research process, the data collected, the graphs that bring it to life, and the insights drawn from the analysis. By the end, the audience will gain clarity on the most in-demand testing tools, programming languages, test automation tools, and other key requirements shaping today’s job market.

julio

Júlio de Lima

Career Paths within and beyond Testing

ATOMIC TALK

Included Artifact

Thinking Ahead: SDET Career Progression

The role of the Software Development Engineer in Test (SDET) has expanded far beyond “just writing tests.” Today, SDETs sit at the intersection of quality, development, product and automation—opening doors to a wide range of future opportunities. In this session, we’ll explore the common roles and responsibilities of an SDET, the diverse career paths that stem from this foundation, and how these skills can translate into leadership, architecture, DevOps, or even product strategy. Whether you’re currently an SDET or simply curious about the future of the role, this talk will give you a clear roadmap of what’s possible and how to prepare for it.

David Ingraham

David Ingraham

Automation

ATOMIC TALK

Included Artifact

Wrestling with Business Logic – A Simple Approach for Clarity and Fast Feedback

Business logic is at the heart of every system. It should be easy to understand and evolve—yet in many teams, even small changes are slow, risky, and painful.

If a first-semester computer science student can implement moderately complex rules in hours, why do seasoned Scrum teams with more resources take days or weeks to do the same?

Too often, rules are hidden in tangled code, tested only during slow, costly integration runs, and clarified with stakeholders far too late. The result? Long feedback loops, high costs, rework, and fragile delivery.

In this talk, I’ll show a practical, lightweight way to regain control of business logic: isolate it in clean, zero-dependency functions, capture rules collaboratively with BDD (using Cucumber), and get automated feedback in minutes, not days.

This approach bridges the gap between developers and stakeholders, improves clarity, reduces risk, and builds confidence in every change.

 

(The talk combines Software Architecture, Test Architecture, Agile, DeveloperExperience and Prototyping)

alex schwartz profile pic removebg preview 1

Alex Schwartz

Automation

ATOMIC TALK

Included Artifact

Effective Public Speaking: How to Show Your Human Value in an Age of AI

Too often, managers think they can replace any knowledge worker with an AI of some sort. However, people, learning with and from others, create the innovative products that companies need to sell. Instead of worrying about replacement, all knowledge workers can choose to show their value through public speaking. In this presentation, I’ll offer three secrets of effective public speaking, from the perspective of testers, to help you show your human value.

Johanna Rothman

Johanna Rothman

Career Paths within and beyond Testing

ATOMIC TALK

Included Artifact

Private AI: Gains, Gaps and Gotchas

In this talk, the speaker explores how local LLMs can serve as powerful and secure code assistants for software development and test automation, particularly in environments where data privacy and security are critical. When cloud-based AI tools like ChatGPT or Copilot are not feasible due to client restrictions or compliance requirements, local solutions such as Ollama running Qwen 2.5, integrated with tools like the Continue plugin, provide a safe and effective alternative.

The session walks through the setup, model selection, and use of chat-based interfaces to accelerate BDD creation, automation code generation, and performance scripting—all while maintaining full data control. Real-world observations, including challenges with complex frameworks like Serenity, handling imports, and addressing domain-specific gaps, are also discussed. The talk blends technical insights with practical outcomes, highlighting where local AI delivers the most value and where human oversight remains essential.

Samar Ranjan

Samar Ranjan

AI in Testing

ATOMIC TALK

Included Artifact

Leading and Managing in Dysfunctional Organisations

We have a leadership crisis. People do not know how to lead. They do not know how to manage. They often don’t seem to care.

Most of the basic strategies for leadership and management are simple but since leadership is bad, we need to go over the basics. I’ve worked as a consultant, a manager, a worker, a leader. I will draw on what worked for me and draw on some lessons learned from what I’ve seen fail to work.

There isn’t much time in 10 – 15 minutes so I’ll go right back to fundamentals and cover the core of leadership and management. The Art of War, a Strategy book from 500 BC refers to this as “The Moral Law”.

We’ll also cover how to build trust and respect by Knowledge Sharing and the attitudes and responsibilities of a Leader and Manager.

Alan Richardson removebg preview Picsart AiImageEnhancer

Alan Richardson

Test Leadership

ATOMIC TALK

Included Artifact

Where AI Goes Wrong - The Blind Spots Testers See

AI promises speed. Testers see the cracks.

Behind the buzz, AI tools stumble in hidden ways such as hallucinations, false confidence, blind spots. These are easy to miss, but costly if ignored.

This atomic talk reveals:

  • The subtle failures AI hides from plain sight
  • Why speed without reliability is a trap
  • Strategies to synergize and supervise AI outputs

The role of testers isn’t just about keep up with AI. It’s also in making them trustworthy and useful.

Rahul Parwal

Rahul Parwal

AI in Testing

ATOMIC TALK

Included Artifact

QA Without Firefighting: Build Autonomous, Not Automated Teams

In this Atomic Talk, the speaker explores how QA teams can move beyond reactive testing and constant firefighting by embracing autonomy instead of relying solely on automation. While automation delivers speed and repeatability, it often fails to address upstream chaos, unclear ownership, and late-stage quality challenges. The real transformation happens when QA is embedded early, owns its signals, and drives decisions proactively.

The session covers practical approaches to building autonomous QA systems—intelligent, self-service frameworks that surface quality insights early, reduce coordination drag, and remove dependencies. It highlights how to foster collaboration between engineering and QA, redefine ownership boundaries, and create conditions where quality becomes a shared, systemic outcome rather than a downstream checkpoint. For teams burdened by fire drills and quality debt, this talk offers a fresh perspective and actionable strategies to help QA lead the charge instead of cleaning up the aftermath.

Gaurav Mahajan 2

Gaurav Mahajan

AI in Testing

ATOMIC TALK

Included Artifact

Defining ‘Enough’: Testing in the GenAI Era

In Machine Learning, a model delivering 85% accuracy is often celebrated as a success. Chasing 100% is understood to be unrealistic—the data is messy, the real world is unpredictable, and the final few percentage points usually cost far more than they’re worth.

Yet in software testing—especially in the era of AI and GenAI—the question “Can we test 100%?” still lingers. The reality is that AI outputs are probabilistic: the same prompt can produce different answers, and confidence scores reveal how certain or uncertain the system is. In this context, 100% testing coverage is an appealing idea, but it doesn’t reflect how things truly work.

This talk introduces a different way of thinking. Techniques like Principal Component Analysis (PCA) can reduce the testing space to the dimensions that matter most. Confidence scores can highlight higher-risk areas, helping teams prioritize where to focus. And continuous evaluation, rather than a single pass/fail check, becomes the foundation for building trust.

The session presents a practical approach to answering the question, “Have we tested enough?” in AI projects. It’s not about testing everything—it’s about testing the right things with the right depth, so teams can ship with confidence and rest easy at night.

SatParkash Maurya

SatParkash Maurya

Testing Skills & Mindset

ATOMIC TALK

Included Artifact

Bias in, Bias Out : Knowing various Biases in Testing AI

Everyone says a human’s character depends on how they are brought up — the same holds true for Artificial Intelligence models, especially Large Language Models (LLMs). Building an LLM involves three major steps: collecting training data, training the model with that data, and finally productizing the model for real-world use. At every stage, there are subtle and not-so-subtle opportunities for bias to creep in — be it through the data we choose, the way we train, or the assumptions we bake into the final product. And just like in humans, this “upbringing” has a lasting impact on how the model thinks, responds, and interacts.

In my session, I’ll walk you through the various stages where bias can be introduced — intentionally or unintentionally and how these biases can affect the behavior and fairness of the LLMs we build. While bias is something we can’t completely eliminate, it is something we can actively manage. You’ll gain insights into practical methods to identify, reduce, and balance bias while sampling data, during training, and throughout the development cycle of LLMs. The goal is not perfection, but responsibility building models that are more transparent, inclusive, and trustworthy.

Maheshwaran

Maheshwaran VK

Testing the AI

ATOMIC TALK

Included Artifact

Testing Agentic AI

This talk explores the challenges of testing agentic AI systems—AI that autonomously reacts to events and initiates processes. Drawing on decades of experience, Robert Sabourin emphasizes that testing begins and ends with risk. A three-dimensional model (business impact, technical risk, autonomy) guides evaluation. Testers generate ideas using a broad taxonomy, from capabilities and failure modes to creative and adversarial approaches. Continuous testing and monitoring ensure findings inform business decisions, emphasizing learning over correctness.

rob toronto removebg preview

Robert Sabourin

Agentic AI

ATOMIC TALK

Included Artifact

Agentic QA Workflow

Agentic code generation has made development sprint from days to hours, but most QA delays aren’t compute—they’re coordination: manual planning, slow handoffs, and batched reviews. Meanwhile, capable agents are emerging in silos (e.g., Xylos), without orchestration or governance, so outputs arrive late, go stale, and lack an audit trail. This talk shows how we compose those siloed agents into an AI‑powered STLC that matches modern dev velocity: a one‑time JIRA connect turns an epic into a governed, dependency‑aware workflow (13–19 steps spanning design, scripts, data, execution, analysis, plus security/performance/accessibility). Steps run as soon as they’re unblocked; reviewers get real‑time SSE updates and approve in‑context with artifacts and logs; failures surface explicit diagnostics with scoped re‑runs and retries. The measurable gains come from the workflow, not agent speed: planning time drops from hours to minutes, handoff waits collapse via auto‑triggers, review latency compresses through a single approval surface, and persistence removes rework—typically reclaiming 1.5–2.5 days of non‑compute time per epic.
I will walk through the architecture (Node.js/TypeScript, Express, MongoDB), template and dependency graph design, the human‑in‑the‑loop review UX, and our failure‑handling guardrails (clear JIRA errors, retries, auditable trail). The session ends with an adoption playbook to incrementally transform your STLC—composing existing Xylos (or similar) agents without disrupting developer workflows. This talk is for testers, SDETs, QA leaders, and engineering managers who need a practical, repeatable way to orchestrate agents into a governed, reviewable QA pipeline that keeps pace with AI‑accelerated development

Krishnamoorthy Gurramkonda

Krishnamoorthy Gurramkonda

Agentic AI

ATOMIC TALK

Included Artifact

Breaking Your Own Bots

As AI agents take on critical software testing and automation tasks, their vulnerabilities can become silent ticking bombs. In this talk, we’ll explore how applying “red teaming” ; a concept borrowed from cybersecurity; can expose weaknesses in AI agents before they cause failures in production.
I’ll share practical techniques for stress-testing agents, from prompt injection attacks to adversarial workflows, and how these methods can be used not only for finding flaws but also for hardening systems against future threats. We’ll also look at real-world examples of red teaming in AI testing agents, the patterns that emerge, and the tools you can use to simulate hostile environments.
By the end, you’ll have a blueprint for turning your AI testing agents into resilient, self-correcting systems capable of surviving in unpredictable real-world scenarios.

Robin Gupta

Robin Gupta

Agentic AI

ATOMIC TALK

Included Artifact

Beyond Numbers, Metrics that matter in AI Age

AI has changed software and the way software is tested, yet many teams still rely on outdated metrics like coverage, pass rates and defect counts. These numbers do look great on dashboards but they fall short in answering the most critical question: can we really trust what AI is doing?

This talk, “Beyond Numbers, Metrics That Matter In the AI Age, explores how testers can redefine measurement for AI driven systems. I will introduce 3 essential metrics designed for AI Age: – explainability metrics, robustness metrics and trust metrics.

The attendees will walk away with a practical modern lens on metrics that moves beyond counting tests and proving trust.

Brijesh Deb removebg preview

Brijesh Deb

Testing the AI

ATOMIC TALK

Included Artifact

Before Building AI we should First Understand Natural Intelligence

Imagine you were building the world’s first heavier-than-air powered flying machine. Would you do so without first either trying to understand the principles of flight, or studying those animal species that had already succeeded in flying?

Yet, this is pretty much what many groups are currently trying to do in artificial intelligence. If you’re working in AI, or indeed pushing the frontiers of knowledge in any subject, you should do your best to gain a thorough understanding of the existing knowledge available in your own and related subjects.

Historically, many AI researchers have been highly disparaging of research into human intelligence and its results. This is puzzling, as human and other natural intelligences have already resolved many problems that AI cannot yet tackle.

In this session, Andrew explores human memory and seven of its apparent shortcomings, which many leading AI researchers have been highly disparaging of. He shows that, rather than being shortcomings, these features are effective solutions to adaptive problems humans faced in their evolutionary past, problems that AI has not yet successfully tackled.

He shows how studying these and other apparent shortcomings may lead to solutions of major challenges within AI.

The race to powered flight was won, not by the best funded group, which was the US government and the arms millionaire Hiram Maxim. Instead, it was won by two bicycle engineers, Orville and Wilbur Wright, who did what the US government didn’t bother to do. They studied how birds fly.

The race to true AI may well be won in a similar manner.

Andrew brown 2 2

Andrew Brown

AI in Testing

ATOMIC TALK

Included Artifact

Resilience Testing of a Tester

As testers, we love finding bugs. We enjoy making systems fail. But what makes a tester fail? In these trying times — when health issues are on the rise, job losses are making headlines, and tolerance levels are stretched thin — how do we help testers pass their own resilience test? Testers have to confront, justify their findings, often left out of key discussions, work on skewed timelines. It generates stress and emotional toll.

In this talk, we’ll walk through the 5W1H of emotional resilience — what it is, why it matters more than ever in the age of AI, and how to build it. We’ll visit the ‘test cases’ — those real-life moments that break a tester — and we’ll also debug the fixes that help us bounce back, stronger than before.

So let’s test, debug, and upgrade the most important system of all — ourselves!

Ashwini Lalit 1

Ashwini Lalit

Test Leadership

ATOMIC TALK

Included Artifact

From Copilot to Co-Tester: Guardrails for AI-Written Tests

Are you generating tests using Generative AI? Dimpy does, and she admits that AI can now generate tests of different types instantly. But speed doesn’t always guarantee safety. Without the right checks, there’s a risk of brittle, redundant, or misleading tests that create a false sense of coverage.

In this talk, Dimpy explores a structured “guardrails” framework for validating AI-generated tests so they can be trusted in production. She will walk through both semantic checks (AI-on-AI validation against acceptance criteria) and deterministic checks (code coverage, mutation score, flakiness detection, performance smoke tests, and security scans). Dimpy will also demonstrate a practical framework on how to automate these guardrails in CI/CD, turning raw test outputs into a measurable Test Quality Score that ensures functional, non-functional, and cross-layer coverage.

Dimpy Adhikary

Dimpy Adhikary

Testing the AI

ATOMIC TALK

Included Artifact

Breaking Boundaries: A Tester’s Guide to Freelance and Remote Success

Freelancing is more than a side hustle—it’s the launchpad to global careers. With global pay comes higher earning potential, and with global talent comes exposure to diverse practices and cutting-edge teams.

Manish begins with freelancing as the starting point, where small gigs like automation fixes, bug bashes, and manual cycles on platforms such as Upwork or Fiverr help testers build experience, portfolios, and reviews. From there, he shows how to scale into bigger freelance engagements, taking on long-term contracts in automation, performance, or QA consulting through platforms like Toptal, Braintrust, and Testlio—focusing on specialization, repeat clients, and steady income.

The session then explores the transition into full-time remote roles, where freelance credibility and client references can help land jobs via LinkedIn, AngelList, RemoteOK, or We Work Remotely. These roles provide not just stability but also global perks.

Along the way, Manish highlights the technical skills in highest demand—automation, API, performance, cloud, mobile, and TestOps—as well as soft skills like communication across time zones, self-management, and client handling. He also touches on parallel opportunities in content creation, mentorship, documentation, and community work, while sharing practical disclaimers around taxation, employer policies, payment methods, and compliance.

The session closes with a clear roadmap: start small, scale, transition, and thrive. Freelancing isn’t the end goal—it’s the launchpad to global opportunities.

manish Saini 2

Manish Saini

Testing Skills & Mindset

Event Sponsors

Premier Sponsor

Platinum Sponsors

Gold Sponsors

Silver Sponsor

Bronze Sponsor

Become a sponsor and reach a highly engaged audience of industry professionals. Support the development of skills and knowledge in the field while promoting your brand and products. Get your company a space and reach out to the best testers and developers around the world.

Our Past Event Sponsors

BrowserStack 1
mabl Logo
accelq logo
blinqIO Dark
sonar Logo
enreap Logo
lambdatest Logo
phonepe 1
qualitestgroup
saucelabs Logo
uiPath
avo Automation Logo
ase Logo
katalon logo
testrigor Logo
cigniti Logo
GSPANN Logo
Virtuoso Logo Vector.svg
keysight logo
Autify Logo
functionize Logo
algoshack Logo
co.meta Logo
Yubi logo
Yethi
Calsoft Logo RGB color
ShiftSync logo 1 1
catchpoint Logo
Jignect Technologies
Trigent_Logo_Color-with-tagline
magicpod logo
testguild Logo
stryker Logo
kobiton Logo
testvox Logo
element34 Logo
sahi logo NoBg Black
practitest Logo
testsigma Logo
devzery Logo
contextqa Logo
alignz Logo
reflect Logo
mozark Logo
QualityLogic Reg
Devassure
Betterbugs Logo
AIO TESTS 1
PostQode logo
diffblue Logo
conformiq Logo
logo.75f7205093b0520c4cd6d4d84864668c
QAble
Trailblu Logo
fasa Logo
qameta
Final AQA
Galaxy Weblinks Logo
opkey Logo
kiwiqa logo
Pcloudy Logo

THIS IS HOW WE CELEBRATED TESTING

Testflix 2024 Roundup

Testflix 2024 Roundup YT

About
The Test Tribe!

The Test Tribe Logo

The Test Tribe is World’s largest Testing Community with members in 150+ countries. We are on a mission to solve upskilling and top career growth for Testing Professionals globally.

150K+

Community
members

630+

Community Events

20+

Global Conferences

80K+

Event
Attendees

40+

Offline

Chapters

Global Offline Chapters

Porto
Calgary
Bangalore
Pune
New York
San Francisco
Germany
Berlin
Toronto
Mumbai
Hyderabad
Our Offerings
Corporate Trainings
On-Demand Courses
Finer Circle
Events
Community

Hear from
our past attendees!

I had an outstanding experience during the session. The content was engaging, insightful, and well-structured. The facilitator’s expertise and passion made all the difference, creating a welcoming atmosphere for everyone to participate

jayakrishnan

QA Manager

“Test Tribe are the Backend Peoples(DB Layer) who is helping the Peoples(UI Layer) , via Communication with API Layer(The Speakers).” This session has made a difference in the “Perception” with the atomic talks. As Atomic Talks increased and as more and more information was feed into my mind, i reliazed, my Mind Started to Scale , like AI Learning, with the Data been Feed. Thank you once again and …. Thats Sums Up for the team!

Participating in Testflix was an enlightening experience that expanded my understanding of testing methodologies and the role of AI in the testing process. The insights I gained have equipped me with valuable ideas that I can apply in my future projects. I highly recommend Testflix to anyone looking to deepen their knowledge in testing and explore the integration of AI in the field

I gained knowledge on a variety of new topics and walked away with a much deeper understanding of modern testing strategies. The sessions were insightful, offering fresh perspectives and practical takeaways that I’m excited to implement. Huge thanks to the speakers and organizers for creating such an enriching event! If you’re looking to stay ahead in the world of testing, attending this conference is a must.

TestFlix 2024 was an excellent learning experience. All testing experts came together and shared valuable insights into emerging trends, tools, and techniques. Sessions on functional automation and AI are of quite great interest. It inspires me to enhance my skills and pursue all new ways. So all in all, it is rewarding.

Joining testflix is really a great experience as it allows me to learn new perspective and connect to other qa and testers around the globe

Attending TestFlix in 2024 has been a transformative experience for me. The knowledge and insights I gained from industry experts have greatly enhanced my understanding of testing practices. Over the past few years, I’ve been able to apply these lessons in my career, and I continue to see their positive impact. I’m grateful for the ongoing journey of learning and growth that TestFlix has inspired

avatar man profile user 5
RS Parrthipan

TestFlix 2024 was truly inspiring! I gained valuable insights into innovative testing techniques and had the chance to network with industry leaders. The hands-on workshops were particularly helpful in reinforcing my skills. I left with a wealth of knowledge and a renewed passion for my work. Highly recommend it to all testing professionals!

I was amazed to attend this program where I was able to get insights on market trends, new technologies, sessions on career growth and Gen AI.

I had a great experience attending testflix for the first time. got to know what’s trending in market. lots of takeaways, many new connections. I am motivated and proud to be a quality engineer

I have been attending The Test Tribe – Chennai meetup for the past 9 months. To be honest it is the best thing I have ever decided to do. These meetups have not only expanded my understanding of testing but also helped me gauge my position within the testing community.

At the heart of it, when the purpose of the conference was rooted to GIVE to the testing fraternity it was indeed- Great, Impactful, Valuable, and Enriching.

Frequently asked questions

TestFlix will have 40+ pre-recorded Atomic +Live Talks weaved together to make one amazing binge.

Duration of the software testing event would be around 7-8 hours covering multiple time zones on both the days.

Yes, event recording will be shared but probably only after 3-4 Weeks after the event.

Event link and instructions will be shared with registered participants 2 days prior to the event.

You can bring in your testing peers to register for the event. You can spread the word on social media using the hashtag #TestFlix and tag us. You can also support Testflix by making donations using ‘Addons’ section at the bottom end of Attendee Registration form.

You can share the list of participants with First Name, Last name, Email address, Country, Organisation, Designation with us at  [email protected] and we will bulk register your teams.

TestFlix is an online international test conference, where we have 60+ global speakers from various top IT companies, who will be talking on various themes related to software testing.

Yes, the whole event will be in Virtual mode, so you can enjoy and learn at the same time from the comfort of your own place.

Test Engineers, Developers, Test Leads, Test Directors, Test Architects, CTOs, CEOs, CS IT Students, Founders, everyone can attend this international software testing conference, to grow their knowledge.

There are social contests happening where the winner will get amazing prizes from The Test Tribe Pvt, Ltd. Tune in LIVE to know more about them.

TestFlix is a global software testing conference where we aim to create a truly global virtual stage for Software Testers from the maximum possible countries can share their knowledge through pre-recorded Atomic Talks.

World’s Leading Software
Testing Conference

Virtual I Free

[sibwp_form id=2]

World’s Leading Software Testing Conference

Get the Testflix 2025 Schedule

Fill the Form Below to Download
The Test Tribe Logo
Privacy Overview

This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.