As AI chatbots power critical user interactions, even a minor misstep in testing can lead to high-profile, costly failures. How do you ensure your LLM-powered chatbots don’t become the next headline? In this session, discover the cutting-edge techniques used to rigorously test LLM chatbots, ensuring robust performance and avoiding common pitfalls.
In just 15 minutes, you’ll walk away with actionable insights to elevate your chatbot testing, diagnose hidden issues, and build resilient AI systems that handle real-world demands flawlessly.
This talk is a must for testers, engineers, and AI professionals who want to future-proof their LLM chatbot systems and stay competitive in the evolving AI landscape.
Talk Takeaways
Stay ahead of AI developments with best practices for testing conversational systems
Ravindra Varshney is a hands-on Senior Technology Director with over 21 years of experience in application development, data engineering, and cloud platform buildout. Specializing in large-scale system integration, he has a proven track record of leading global teams to deliver innovative solutions that align technology with business strategy. Ravindra’s expertise spans financial services technology, cloud transformation, and big data engineering, with deep knowledge of agile software delivery and organizational change.
He is known for his ability to bridge technical and business leadership, driving strategic initiatives and fostering highly productive, cross-functional teams. Ravindra has successfully managed major projects at J.P. Morgan Chase, including the development of real-time position management systems and regulatory platforms, utilizing advanced technologies like AI, AWS, and Kubernetes. He holds certifications in Kubernetes, AWS Machine Learning, and Solutions Architecture, and has a Post Graduate Diploma in AI and Machine Learning from Purdue University.