Open to our entire Porto Tribe, & beyond. Limited seats.
150+ Leading Minds from the Tech & Testing Community,
Driving the Future of Engineering, Quality, and AI in Business.
Leadership Sessions
Expert Panel
Fireside Chat
Leadership
Networking
Live Demos
QonfX is our flagship global conference that brings the QA and Tech community together to explore the Future of Software—Quality, Engineering, AI, and beyond.
Curated to inspire deep conversations, breakthrough insights, and meaningful connections, QonfX offers a high-value learning experience for professionals committed to excellence. Hosted across key cities in the US, Europe, and India, QonfX brings together the minds driving the next evolution of software quality and engineering.
Porto
Every major leap in software development has been a leap in abstraction. Early programmers wrote machine-level instructions. Then compiled languages arrived, and developers began reasoning at a higher level, in C, Java, Python, trusting the compiler to handle the translation faithfully without ever reviewing the output byte by byte. We are now living through the next leap: natural language specifications as the primary programming artefact, with AI coding agents acting as the new compiler.
This talk builds on the premise that AI coding agents will become as reliable as humans at translating clear, well-formed specifications into correct code. If that premise holds, the central act of software development will shift from writing implementations to co-creating specifications. Developers will define what a system should do, under what constraints, and to what standard of correctness, then delegate the implementation to agents running in parallel, much like a manager delegates work to a team today. Just as we write Python without reviewing the bytecode the interpreter produces, we will write specifications and trust that the agent did a correct job of turning them into working software. But trust does not mean blind faith. Quality controls remain just as important as they are today, and verifying that the output matches the intent becomes the critical step.
This shift has the potential to change where the bottleneck sits in the software delivery pipeline. Today, the ratios between product owners, developers, QA engineers, and DevOps reflect a world where writing code is the slow part. When AI agents remove that constraint, the slow parts become something else entirely: how precisely intent is articulated, and how rigorously the output is verified against it. That raises important questions about how teams will be structured. Will deep specialisation, frontend, backend, infrastructure, remain necessary? Or will role consolidation accelerate as the cost of crossing those boundaries drops?
We are already seeing early signs of this at Ocean Infinity, where product teams prototype working applications rapidly through AI-assisted development, engineers deliver across the full stack, and automation work that would have taken weeks now takes days. We are still early in this transition, and even at Ocean Infinity we are far from doing true specification-driven development. But the direction is clear: the skill that will matter most is not how well or how fast we write code. It is how clearly we can define what we want, and how well we can verify that we got it.
Pedro Costa
Exploratory Testing
Functional Testing
Porto
Software teams have never moved faster. With AI-powered tools, developers can now generate code, tests, and solutions in seconds — dramatically increasing productivity and accelerating delivery.
But while speed has evolved, quality hasn’t kept up.
This creates a dangerous illusion: the belief that more output means better outcomes.
In many organizations, what is perceived as a strong quality culture is often just a collection of tools, processes, and checklists — a “quality façade.” And with the rise of AI, this façade becomes even more convincing, as automatically generated code and tests give a false sense of confidence.
In this talk, Joana Silva explores what true quality culture really means and why it’s more critical than ever in an AI-driven world. She will challenge common assumptions about quality, highlight the risks of over-reliance on AI, and explain how speed can unintentionally increase technical debt, defects, and loss of trust.
Through practical insights and real-world experience, this session will cover how organizations can move beyond the illusion of quality and build a culture where quality is truly owned by everyone.
Because in the age of AI, one thing becomes clear:
AI doesn’t replace quality culture — it amplifies it.
Joana Silva
Exploratory Testing
Functional Testing
Porto
Deploying Generative AI in production is fundamentally different from building demos or experimenting with prompts. What works in controlled settings often breaks down when systems must operate reliably at scale, handle unpredictable inputs, and integrate with real users and business processes.
In this talk, we’ll share practical lessons from building production-grade GenAI systems, covering prompt design, tool calling, guardrails, human-in-the-loop and evaluation (evals) for non-deterministic models. We’ll also touch on monitoring and observability in real-world environments.
We’ll highlight common pitfalls, such as prompt fragility, lack of systematic evals, and underestimated token costs.
The goal is to cut through the GenAI hype and provide a pragmatic view of what it takes to build and operate reliable AI systems in production.
Ricardo Filipe
Exploratory Testing
Functional Testing
Porto
The software engineering world is currently trapped in the “Happy Path Fallacy” when it comes to AI. We build an agent, test a few isolated prompts in a playground, verify the “vibes,” and push to production. But traditional testing methodologies break down with AI. Agents are non-deterministic, they take multi-step cognitive trajectories, and they suffer from silent drift when the underlying models update. For engineering and QA teams, “vibe checks” do not scale.
To build reliable AI systems, we need to transition from the dark art of ad-hoc prompt crafting into a rigorous engineering discipline. This session is designed for technical leaders and engineers who need to bring predictability to unpredictable models. We will explore how to build automated evaluation pipelines that act as the ultimate CI/CD gatekeeper for GenAI applications, treating evaluations as the immutable product specification.
We will dive deep into a practical Eval Toolkit comprising four scalable patterns: Deterministic Assertions for strict formatting, LLM-as-a-Judge for open-ended quality (and how to overcome its inherent biases), Trajectory Evaluations for workflow efficiency, and Adversarial Evals for stress-testing edge cases.
By anchoring these metrics in comprehensive Tracing (using tools like MLflow), teams can finally embrace “Eval-Driven Development” (EDD). This talk provides a concrete, code-backed approach to stop guessing, catch regressions automatically, and turn real-world production failures into permanent test cases.
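The first pattern in the toolkit above, Deterministic Assertions, lends itself to a short sketch. The snippet below is a minimal, hypothetical illustration in Python; the function name, required keys, and test cases are invented for this example and are not taken from the talk.

```python
import json

def eval_strict_format(output: str, required_keys: set[str]) -> bool:
    """Deterministic assertion: pass only if the model output is valid JSON
    containing every required key. No LLM judge is needed for format checks."""
    try:
        parsed = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(parsed, dict) and required_keys <= parsed.keys()

# A tiny regression suite: each case is (model output, expected verdict).
cases = [
    ('{"summary": "ok", "score": 5}', True),   # well-formed
    ('{"summary": "ok"}', False),              # missing required key
    ('Sure! Here is the JSON: {...}', False),  # chatty preamble breaks parsing
]
results = [eval_strict_format(out, {"summary", "score"}) == want
           for out, want in cases]
```

Checks like this can run on every commit as a CI/CD gate; only the open-ended qualities that resist exact assertions need the costlier LLM-as-a-Judge pattern.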
Luís Manuel Maia
Exploratory Testing
Functional Testing
Porto
AI is no longer a side topic in software engineering. It is changing how software is built, how fast it moves, and how value is perceived across teams. As software becomes increasingly developed with AI and shaped by AI-driven capabilities, the testing profession is being pushed into a new reality, one where old assumptions about roles, career paths, and relevance are no longer safe.
This talk explores that shift in three moments.
First, it frames why AI matters now, not as hype, but as a force already altering the economics and speed of software delivery.
Second, it brings that pressure into the testing job market, where rising expectations, automation fatigue, and new AI-enabled practices are creating both anxiety and opportunity.
Third, it asks the question many testers, leaders, and teams are already feeling: what kind of testing professional will remain valuable in the next few years?
Rather than offering an apocalyptic message or an overly optimistic one, this session takes a pragmatic view of a profession in transition.
It looks at the growing tension between technology acceleration and professional adaptation, and challenges the audience to rethink what relevance, contribution, and differentiation mean in the age of AI.
Attendees will leave with a clearer understanding of the forces reshaping testing careers, a sharper perspective on the changes already underway, and a stronger sense of what it will take to stay relevant in an industry that is moving faster than ever.
Paulo José E. V. Matos
Exploratory Testing
Functional Testing
All
Berlin
Toronto
Virtual
San Francisco
Berlin
To create accessible products, you need to start as early as possible in the process. Accessibility cannot be automated or outsourced, and it needs everybody’s involvement to become a feature of your products rather than a release blocker or a compliance tickbox.
Christian Heilmann
Toronto
In today’s ever-evolving tech landscape, anticipating and adapting to change is not just a skill—it’s a necessity for test leaders, especially with the rise of Generative AI. As projects face immediate market pressures from investors and endure heightened scrutiny from an increasingly skeptical customer base, test leaders must navigate not only technological disruptions but also significant…
Robert Sabourin
Berlin
In a world brimming with innovations and solutions, the journey to using GenAI in testing is often shadowed by a paradox: the obvious truths become hidden behind layers of noise, and illusions arise in their place. Rahul peels back the layers that veil judgment. This talk explores the essence of problem-solving, questioning whether the challenges we face have already been resolved by existing tools or if they demand the unique touch of new technology. Why do we reach for GenAI, and what does it promise to solve that others cannot? And ultimately, can the old and new coexist, each serving its rightful purpose, without clouding our vision? Rahul unravels these questions and more, guiding you through the path where clarity unfolds itself. Through a real-world case study, he will illustrate how GenAI can be leveraged meaningfully, emphasizing how understanding and restraint are key to sifting through the noise and seeing the obvious with renewed eyes. This keynote is a call to embrace both simplicity and innovation, to cut through the illusions, and to navigate the path where GenAI is not just noise, but a genuine part of the testing story.
Rahul Verma
Berlin
Retrieval-Augmented Generation (RAG) is transforming software testing by enhancing efficiency, accuracy, and scalability. It combines information retrieval with generative AI, enabling automated test artifact generation and improving knowledge retrieval for QA teams. One of its key benefits is automating test artifact generation by analyzing historical data, user stories, and documentation to create structured test cases.
Maiia Sviatchenko
Toronto
Generative AI can be used to create content such as text, images, or even code, and it finds applications in many different areas. GenAI is also a tool that can assist us with test automation. Like any other tool, the value of GenAI comes not just from its intrinsic properties but from how we use it, where we use it, the context, and so on. In this talk, I will share some of my experiences with GenAI-assisted test automation. We will explore several effective ways to leverage GenAI in test automation, identify areas where it can accelerate our work, and examine…
Lavanya Mohan
Toronto
In today’s rapidly evolving development landscape, quality leadership faces unprecedented challenges as AI integration makes applications increasingly non-deterministic. Ben Hofferber draws from his extensive experience at The Link, Rangle.io, and Hint Services to illuminate the core problems quality leaders must address when “user acceptance doesn’t validate system resilience.”…
Ben Hofferber
San Francisco
Niranjani Manoharan
San Francisco
Engineers integrating AI agents into test automation solutions face significant challenges in communicating application state effectively, requiring translation of DOM structures into semantic representations while capturing dynamic state changes across multiple contexts.
In this interactive session using Webdriver.io, attendees will learn practical implementation techniques for enabling AI agents to simplify browser automation.
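The core challenge named above, translating DOM structures into semantic representations an agent can reason about, can be sketched outside any particular framework. The snippet below is a hypothetical Python illustration built on the standard library’s `html.parser`; the session itself uses Webdriver.io, whose actual APIs differ.

```python
# Concept sketch: flatten a DOM snippet into a compact semantic description
# for an AI agent. Illustrative only; element and attribute choices are
# assumptions, not Webdriver.io behaviour.
from html.parser import HTMLParser

class SemanticCollector(HTMLParser):
    INTERACTIVE = {"a", "button", "input", "select", "textarea"}

    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        # Keep only interactive elements and a human-readable label for each.
        if tag in self.INTERACTIVE:
            a = dict(attrs)
            label = a.get("aria-label") or a.get("placeholder") or a.get("value") or ""
            self.elements.append(f"{tag}[{a.get('id', '?')}]: {label}")

html = ('<form><input id="q" placeholder="Search"/>'
        '<button id="go" aria-label="Submit">Go</button></form>')
collector = SemanticCollector()
collector.feed(html)
# collector.elements now holds compact strings describing each interactive
# element, suitable for inclusion in an agent prompt.
```

The point of such a translation is token economy and stability: the agent sees a short list of actionable elements rather than raw, volatile markup.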
Christian Bromann
Hyderabad
The rise of intelligent systems is reshaping what it means to build, scale, and lead in technology. Engineering leaders today must deliver at unprecedented speed while ensuring every layer—architecture, data, and human decision-making—remains trustworthy. Charu unpacks how the craft of engineering is evolving as AI becomes an active participant in design, testing, and delivery. She discusses how to lead teams that balance experimentation with discipline, translate complexity into clarity, and build systems that improve with every release. The session offers a forward look at how excellence in 2026 will be defined not just by performance, but by responsibility, resilience, and the ability to lead through constant change.
Charu Srinivasan
Hyderabad
Jaydeep Chakrabarty
Hyderabad
As AI becomes embedded in products and processes, the meaning of “quality” is shifting. Vanya focuses on how leaders can redefine quality for their teams — not as a phase, but as a shared mindset that guides decisions, behaviours, and standards. She explores how to create clarity around expectations, build cross-functional ownership, and shape cultures where speed doesn’t override judgment. This talk gives leaders a forward-looking view of how quality evolves when AI influences design, delivery, and user experience — and how strong leadership can keep that quality visible, consistent, and intentional.
Vanya Seth
Hyderabad
Swetha Yalamanchili
Hyderabad
Pankaj Kumar
Hyderabad
Change brings opportunity, but it also brings hesitation, fear, and pushback. In this conversation, Nirmala & Swetha talk openly about the real challenges leaders face when driving new automation or AI initiatives — from resistance on the ground to moments when teams simply aren’t ready. They discuss how to handle tough conversations, address fears of job loss, deal with underperformance, and prevent leaders in the middle from slowing progress.
The session also explores how to upskill teams when capabilities do not match expectations, how to keep people motivated through uncertainty, and how leadership traits have evolved in the AI era. It’s a practical, honest look at what strong, resilient leadership really requires today — clarity, empathy, firmness, and the ability to help teams grow through change rather than fear it.
Nirmala Datla
Swetha Yalamanchili
Hyderabad
Jaydeep Chakrabarty
Prajakt Deshpande
Shiva Kumar RV
Hyderabad
AI systems behave differently from traditional software — they learn, adapt, and sometimes work in ways teams didn’t anticipate. This panel looks at how to bring clarity and control to systems that are always shifting, through three essential lenses: platform engineering built on scalable, open frameworks; data-science methods for observing model behaviour and drift; and product-scale rollout strategies that keep reliability front of mind. The conversation will cover how to test behaviour that isn’t deterministic, how to prepare for failures no one scripted, and how to build trust in AI-rich products using open foundations and real-world practice.
Saurabh Mitra
Shravan Koninti
Shashank Chaturvedi
San Francisco
In the rapidly evolving landscape of software testing, innovative AI-powered observability platforms are transforming quality engineering practices by automatically detecting patterns and predicting potential issues across complex microservices architectures.
Niranjani’s top tips to revolutionize your testing practices include leveraging custom ML models, building efficient data pipelines, and implementing continuous model training and validation for maximum effectiveness.
Niranjani Manoharan
San Francisco
Engineers integrating AI agents into test automation solutions face significant challenges in communicating application state effectively, requiring translation of DOM structures into semantic representations while capturing dynamic state changes across multiple contexts.
In this interactive session using Webdriver.io, attendees will learn practical implementation techniques for enabling AI agents to simplify browser automation.
Christian Bromann
San Francisco
As test automation and AI technologies transform engineering practices, the roles of quality engineers are evolving from manual testers to strategic architects of intelligent testing ecosystems. Engineering leadership must adapt by fostering environments where experimentation with AI is encouraged, creating space for leaders to champion innovative approaches to software quality. This panel of experts from industry-leading companies will share how they’re reimagining software quality through AI-augmented test automation, while establishing new paradigms for engineering leadership that embrace the collaborative potential of human expertise amplified by AI-fueled systems. Learn how upskilling transforms change into opportunity, in a panel discussion you won’t want to miss.
Aparijita Mathur
Priyanka Halder
Mala Punyani
Ashwini Purushotham
Become a sponsor and reach a highly engaged audience of industry professionals. Support the development of skills and knowledge in the field while promoting your brand and products. Get your company a space and reach out to the best testers and developers around the world.
The Test Tribe is the world’s largest testing community, with members in 150+ countries. We are on a mission to solve upskilling and career growth for testing professionals globally.
We are also in these cities
Yes, QonfX Porto is an in-person conference only.
QonfX is designed for testing and technology professionals, with a strong focus on the future of tech and testing. Anyone from the tech and testing community is welcome to attend.
To register, you can simply click here. Limited slots are available and are granted on a first-come, first-served basis.
No—in-person attendees must use their reserved invites. To ensure fairness, do not apply unless you can commit to attending.
QonfX Porto will be held on May 9, 2026, starting at 9:30 AM IST.
Thanks for your interest! Once you register, you’ll receive a confirmation email along with a calendar invite to block your day. As we get closer to the event, we’ll update the invite with the venue details and continue sharing all relevant information over email.
To cancel your pass and request a refund, please notify us at [email protected] at least 15 days prior to the event.
No, session recordings will not be shared, as QonfX is an in-person event only.
Yes, you will receive a Participation Certificate on your registered email.
We appreciate your interest in contributing to QonfX! Here’s how you can get involved:
Join our community of testers and start your journey