Jawahar Govindaraj - The Test Tribe
Session

The quiet architecture of trustworthy agents.

Outline

What we’ll cover in this session.


Most teams try to bolt AI agents onto QA the same way they bolted on Selenium a decade ago — and hit the same wall. This session is about the architecture that keeps agents honest: where they actually earn their keep, what breaks when you scale them, and the control surfaces that separate a demo from a shipped pipeline.

We’ll walk through
01

Where agentic workflows actually earn their keep in a real QA pipeline — and the two places they quietly fail.

02

The four control surfaces to set up before an agent touches production: scope, evaluation, failure cataloguing, human-in-the-loop.

03

Patterns for flaky-test triage, regression pruning, and visual-diff arbitration with receipts from three production systems.

04

A reference architecture you can take back to Monday’s sprint planning, plus the metrics that prove it’s working.

Jawahar Govindaraj
Speaker

Global AI Head

Tritusa

Two decades across fintech and platform engineering: the projects that worked, and the ones that taught him why they didn't.

Currently leads technology strategy at Thoughtworks, where he advises engineering orgs on rolling out agentic QA workflows to production. Previously led platform teams at two fintechs and has spoken at QCon, GOTO, and three prior editions of TribeQonf. Open-source contributor to evaluation and observability tooling.

Catch this session live.

One pass, every talk, no parallel tracks.
Super Early Bird ends when the next 200 seats are gone.
