IMPORTANT NOTE: Timing of sessions and room locations are subject to change.
The Sched app allows you to build your schedule but is not a substitute for your event registration. In order to attend OpenSearchCon Europe 2026, please visit our website to register.
This schedule is displayed in Central European Summer Time (UTC+02:00).
OpenSearch's ML Commons plugin enables deploying embedding models directly on your cluster—yet many developers still rely on external services. This session demonstrates building a complete RAG system using OpenSearch as both vector store and ML inference engine.
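As a flavor of what on-cluster inference looks like, here is a minimal sketch of the ML Commons REST payloads for registering, deploying, and invoking a pretrained embedding model. The endpoint paths, the pretrained model name, and the request fields follow the OpenSearch ML Commons documentation; the helper names and the choice of model are illustrative assumptions, not the session's exact setup.

```python
# Sketch: request bodies/paths for the ML Commons model APIs.
# Send each with an HTTP client of your choice against your cluster.
ML_BASE = "/_plugins/_ml"

def register_model_body():
    """Body for POST {ML_BASE}/models/_register — registers an
    OpenSearch-provided pretrained sentence-transformer (assumed model)."""
    return {
        "name": "huggingface/sentence-transformers/all-MiniLM-L6-v2",
        "version": "1.0.1",
        "model_format": "TORCH_SCRIPT",
    }

def deploy_model_path(model_id: str) -> str:
    """Once registration finishes, deploy with POST to this path."""
    return f"{ML_BASE}/models/{model_id}/_deploy"

def embed_request_body(texts):
    """Body for POST {ML_BASE}/_predict/text_embedding/<model_id> —
    returns sentence embeddings computed on the cluster itself."""
    return {
        "text_docs": texts,
        "return_number": True,
        "target_response": ["sentence_embedding"],
    }
```

Registration and deployment are asynchronous: both return a task ID you poll via the tasks API before moving on.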
We'll cover deploying sentence-transformer models via ML Commons, processing PDFs, generating embeddings within OpenSearch, configuring knn_vector indices, and implementing semantic search. The highlight: voice search powered by a locally hosted Whisper speech-to-text (STT) model, making the system fully self-contained.
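To make the indexing and search steps concrete, the bodies below sketch a knn_vector index, a text_embedding ingest pipeline, and a neural query, following the OpenSearch k-NN and neural-search documentation. Field names ("text", "embedding"), the dimension 384 (matching all-MiniLM-L6-v2), and the HNSW settings are assumptions chosen for illustration.

```python
# Sketch: index mapping, ingest pipeline, and query bodies for
# semantic search over embedded document chunks.
EMBED_DIM = 384  # output size of all-MiniLM-L6-v2 (assumed model)

def knn_index_body():
    """Body for PUT /<index> — enables k-NN and maps the vector field."""
    return {
        "settings": {"index.knn": True},
        "mappings": {
            "properties": {
                "text": {"type": "text"},
                "embedding": {
                    "type": "knn_vector",
                    "dimension": EMBED_DIM,
                    "method": {
                        "name": "hnsw",
                        "engine": "lucene",
                        "space_type": "cosinesimil",
                    },
                },
            }
        },
    }

def ingest_pipeline_body(model_id: str):
    """Body for PUT /_ingest/pipeline/<name> — the text_embedding
    processor embeds each document's "text" field at index time."""
    return {
        "processors": [
            {"text_embedding": {"model_id": model_id,
                                "field_map": {"text": "embedding"}}}
        ]
    }

def neural_query(question: str, model_id: str, k: int = 5):
    """Body for POST /<index>/_search — the neural query embeds the
    question with the same deployed model and runs approximate k-NN."""
    return {
        "query": {
            "neural": {
                "embedding": {
                    "query_text": question,
                    "model_id": model_id,
                    "k": k,
                }
            }
        }
    }
```

With the ingest pipeline set as the index default, plain document writes get embedded automatically, and the neural query keeps query-time embedding on the cluster too — no external inference service in the loop.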
What I hope to achieve: Share practical patterns for leveraging OpenSearch's ML capabilities and gather community feedback.
What attendees gain: Step-by-step knowledge to deploy ML models on OpenSearch, build RAG pipelines, and integrate local voice processing—reproducible techniques they can apply immediately.
How this helps the ecosystem: Showcases OpenSearch's native ML features, reducing dependency on external services. Demonstrates OpenSearch as a complete AI platform, not just a search engine.
Target audience: Developers building RAG applications, OpenSearch operators exploring ML capabilities.
I'm a Staff/Lead Software Engineer at Genesys, working on the Conversational AI team. I build intelligent search and knowledge systems using OpenSearch, focusing on vector search and RAG pipelines. I'm passionate about exploring OpenSearch's AI/ML capabilities—particularly ML Commons...