In an exclusive conversation with Bahaaldine Azarmi, Global Vice President, Customer Engineering at Elastic, we explored how the company is redefining search and AI solutions for enterprises worldwide. From integrating vector databases and LLMs to launching developer-centric agentic platforms, Elastic is shaping the next wave of intelligent applications. The discussion also highlighted the company’s tailored approach for the Indian market, balancing innovation with performance and scalability.
Excerpts:
Now that Elastic positions itself as a search AI company, how has this focus influenced your innovation strategy and long-term product roadmap?
The arrival of LLMs and the buzz around tools like ChatGPT have triggered a significant shift in the market, especially with the rise of vector databases. At Elastic, we had started investing in vector databases even before these developments.
Our goal is to build the right platform primitives that can work seamlessly with AI. This involves developing vector databases, establishing strong foundations for generative architectures, and integrating all of this into the Elastic platform.
For example, an SRE handling incident root cause analysis can do it in a traditional way pre-2025 or leverage agentic AI, LLMs, and vector databases today for faster, more efficient investigations. AI has fundamentally influenced how we design our solutions and platforms.
Elaborate on how Elastic leverages vector databases and LLMs, especially compared to traditional keyword-based search.
Our vector database component isn’t an add-on; it’s deeply integrated into our core: Lucene and Elasticsearch. For developers familiar with Elasticsearch, vectors are just a new data type alongside lexical search.
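To make that "just a new data type" point concrete, here is a minimal sketch using the official elasticsearch-py client: one index mapping where a dense_vector field sits next to an ordinary text field. The index name, dimensions, and similarity metric are illustrative assumptions, not details from the interview.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# One index, two views of the same document: a lexical text field
# and a dense_vector field for similarity search.
es.indices.create(
    index="products",  # hypothetical index name
    mappings={
        "properties": {
            "title": {"type": "text"},  # analysed for lexical (BM25) search
            "title_vector": {
                "type": "dense_vector",  # vectors as a first-class data type
                "dims": 384,             # must match your embedding model
                "index": True,
                "similarity": "cosine",
            },
        }
    },
)
```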
Vector databases are powerful for similarity searches, but exact matches and filtering by tags or attributes still require traditional search. Combining both gives more relevant results to our users.
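Combining the two in a single request looks roughly like this, continuing the sketch above against the Elasticsearch 8.x search API; the query text is invented, and `embed` is a placeholder for whichever embedding model produces the query vector.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

def embed(text: str) -> list[float]:
    """Placeholder: call your embedding model of choice here."""
    raise NotImplementedError

# One request, two retrieval strategies: BM25 scoring on the text field
# plus approximate kNN on the vector field, blended into one result set.
resp = es.search(
    index="products",  # the hypothetical index mapped above
    query={"match": {"title": "wireless noise-cancelling headphones"}},
    knn={
        "field": "title_vector",
        "query_vector": embed("wireless noise-cancelling headphones"),
        "k": 10,
        "num_candidates": 100,
    },
    size=10,
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```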
Regarding LLMs, we view them as becoming a commodity. We don’t build our own LLMs; instead, we provide our customers with the flexibility to use any model they prefer. Our inference service enables integration with any LLM, so whether a customer is building a RAG (retrieval-augmented generation) solution or an agent, they can use the model of their choice.
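That model-agnostic stance is easiest to picture as a retrieve-then-generate loop. Below is a minimal RAG sketch in which `embed` and `llm_complete` are caller-supplied callables standing in for whichever embedding model and LLM the customer prefers; the field names are assumptions, and this is not the exact shape of Elastic's inference service API.

```python
from elasticsearch import Elasticsearch

def answer_with_rag(
    es: Elasticsearch,
    index: str,
    question: str,
    embed,          # callable: text -> vector, any embedding model
    llm_complete,   # callable: prompt -> text, any LLM
) -> str:
    # 1. Retrieve: passages whose vectors sit closest to the question.
    hits = es.search(
        index=index,
        knn={
            "field": "body_vector",  # hypothetical vector field
            "query_vector": embed(question),
            "k": 5,
            "num_candidates": 50,
        },
        source=["body"],
    )["hits"]["hits"]
    context = "\n\n".join(h["_source"]["body"] for h in hits)

    # 2. Generate: the model is swappable; grounding comes from the context.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```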
What role do open source and cloud-native design play in building an AI-ready enterprise?
Elastic is fundamentally an open-source company. Supporting our developer community has always been a priority. Everything we build, including our inference service, is developer-centric. Our upcoming AgentBuilder platform, for instance, empowers developers to build complex workflows and tools through APIs.
From a cloud-native perspective, Elastic can be deployed in multiple ways: self-managed, serverless, on bare metal, or via Kubernetes. We ensure seamless integration with cloud services, enabling security and observability solutions that fetch and process data efficiently from diverse environments.
Can you provide more details on Elastic’s agentic solutions and how they empower developers to build intelligent applications?
The market initially rushed to integrate LLMs, but realised RAG alone isn’t sufficient. Our approach is to expose Elastic’s APIs (search, aggregation, and data interactions) so LLMs can leverage them as tools.
With our AgentBuilder platform, developers define “tools,” which are functions agents can invoke. For example, a developer can create a custom ESQL query tool for their specific business application. An agent can then execute tasks using these tools in real time, extending the application’s capabilities without needing constant feature updates.
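AgentBuilder itself is still upcoming, so its API is not shown here; but the pattern is easy to sketch. A tool is a described function the agent can choose to call, in this case wrapping an ES|QL query via elasticsearch-py. The index, schema, and registry shape are all illustrative assumptions.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

def top_products_in_region(region: str) -> list[dict]:
    """Tool: top products by order count in one region (hypothetical schema)."""
    # Sketch only: production code should pass values via ES|QL query
    # parameters rather than string formatting.
    resp = es.esql.query(query=f"""
        FROM orders
        | WHERE region == "{region}"
        | STATS orders = COUNT(*) BY product
        | SORT orders DESC
        | LIMIT 10
    """)
    # ES|QL responses carry column metadata plus rows of values.
    cols = [c["name"] for c in resp["columns"]]
    return [dict(zip(cols, row)) for row in resp["values"]]

# A registry the agent consults when its LLM decides to invoke a tool.
TOOLS = {"top_products_in_region": top_products_in_region}
```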
This allows chat-driven or agent-driven interactions to adapt to individual user expectations and extend business applications naturally.
ElasticOn is being hosted in India, a significant ICT market. How are you tailoring solutions for the Indian context?
India’s market is unique due to its large, hyper-connected population and diverse service businesses. Every digital service requires search, and Elastic provides a platform that delivers the most relevant results consistently.
As businesses scale, they need observability and, eventually, security compliance. Many of our Indian customers already start with search and naturally extend to observability and security as their needs grow. This positions Elastic uniquely as a comprehensive platform for Indian enterprises.
What strategy is Elastic adopting to collaborate with Indian enterprises, government, and developers to optimise data architecture for performance and scale?
We address multiple segments, from SMBs to large enterprises and public sector accounts. The key is meeting customers where they are: whether self-managed, bare metal, cloud, or Kubernetes.
We emphasise efficiency and cost optimisation. For instance, in observability use cases, we help customers manage important logs for fast access while moving less critical data to cost-effective storage, all while keeping it searchable. My team works closely with customers to optimise architectures, ensuring they can do more with less. This philosophy is built into our product itself, allowing users to scale efficiently without escalating costs.
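In Elasticsearch, that kind of tiering is typically expressed as an index lifecycle management (ILM) policy. Here is a minimal sketch; the policy name, phase ages, and snapshot repository are illustrative assumptions, not figures from the interview.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Recent, frequently queried logs stay on fast hot nodes; older data moves
# to cheaper tiers while remaining searchable the whole time.
es.ilm.put_lifecycle(
    name="logs-tiered",  # hypothetical policy name
    policy={
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {
                        "max_age": "7d",
                        "max_primary_shard_size": "50gb",
                    }
                }
            },
            "warm": {
                "min_age": "7d",
                "actions": {"shrink": {"number_of_shards": 1}},
            },
            "cold": {
                "min_age": "30d",
                "actions": {
                    # Backed by object storage, yet still queryable.
                    "searchable_snapshot": {"snapshot_repository": "logs-repo"}
                },
            },
            "delete": {"min_age": "365d", "actions": {"delete": {}}},
        }
    },
)
```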