Tim Berglund, Vice President of Developer Relations at Confluent
In a wide-ranging conversation, Tim Berglund, Vice President of Developer Relations at Confluent, spoke about the company’s developer-first initiatives, the global reach of its Data Streaming World Tour, the convergence of developer roles, and the critical role of event-driven architectures in today’s real-time digital world. He also addressed how AI integration fits into streaming pipelines and how Confluent is preparing partners and developers for the future.
To begin, could you give us a summary of the Data Streaming World Tour and its objectives?
The Data Streaming World Tour is a truly global series of 30 to 40 events. I have spoken at events in London, Australia, Singapore, Indonesia, and across the U.S. The core audience includes customers, potential customers, and anyone interested in streaming technologies.
Each event begins with a keynote for context setting, followed by customer and partner success stories and technical presentations. The audience is typically technical: developers, operators, and architects, the people building real systems. It has been a very successful program in helping organisations progress along their adoption journey.
Confluent has a strong developer ecosystem with demos, webinars, and other resources. How large is this community, and how do you support them?
That is what my team does: developer relations. The global community of developers who could use data streaming is massive, probably 3 to 5 million people. We estimate that about 150,000 companies use Apache Kafka today.
Our role is not to sell Confluent Cloud or Platform directly but to help practitioners, developers, operators, and architects understand streaming technologies. Adoption is a journey: first learning concepts like Kafka, Flink, and Iceberg; then experimenting with demos; and eventually building solutions.
We support this through keynotes at Data Streaming World Tour events, conferences such as Current (with editions in India, Europe, and the U.S.), and around 350 meetup groups globally with nearly 200,000 RSVPs annually. Alongside that sits Confluent Developer's free curriculum, example code, tutorials, and certifications, including the Certified Data Streaming Engineer.
Many developers struggle to understand the difference between traditional databases and streaming. How do you explain this shift?
At its core, the difference is between “what is” and “what happens.”
A database schema is about the state of things: current accounts, products, or relationships. It doesn’t naturally capture history.
Streaming platforms like Kafka are about events: things that happen. They capture a log of history, not just the present snapshot.
Both are necessary, and you can translate one into the other. But in today’s world of distributed, microservices-based applications, event logs are a more natural way to connect systems.
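To make that distinction concrete, here is a minimal sketch in plain Python (the account events and field names are invented for illustration): the event log records what happens, and the current state that a database table would hold can always be derived from it.

```python
from dataclasses import dataclass

@dataclass
class Event:
    account_id: str
    kind: str      # e.g. "opened", "deposited", "withdrew"
    amount: float

# "What happens": an append-only log of events, like a Kafka topic.
event_log = [
    Event("acct-1", "opened", 0.0),
    Event("acct-1", "deposited", 100.0),
    Event("acct-1", "withdrew", 30.0),
]

# "What is": the current state a database table would hold.
def derive_state(events):
    balances = {}
    for e in events:
        if e.kind == "opened":
            balances[e.account_id] = 0.0
        elif e.kind == "deposited":
            balances[e.account_id] += e.amount
        elif e.kind == "withdrew":
            balances[e.account_id] -= e.amount
    return balances

print(derive_state(event_log))  # {'acct-1': 70.0} -- the snapshot, with the history still intact
```

Going the other direction is harder: once you keep only the snapshot, the history of how you got there is gone.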
How should developers think about integrating AI models into streaming data pipelines?
AI is fundamentally a data problem. Large language models (LLMs) are powerful, but they need fresh, contextual data to deliver reliable results.
For example, if you ask, “Which accounts will churn next quarter?”, the model requires relevant organisational data in real time. That’s where streaming comes in.
Two approaches help:
Streaming RAG (Retrieval-Augmented Generation) pipelines, where real-time data feeds into the model.
MCP server resources, exposing enterprise data sources (databases, Kafka topics, etc.) for AI agents to consume.
If your data is stale, you risk hallucinations. Timely, well-engineered context reduces that risk—though, like humans, LLMs will sometimes still get things wrong.
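As a rough sketch of that streaming RAG pattern (not Confluent's reference implementation), the loop below keeps a vector index fresh as events arrive, so retrieval at question time reflects the current state of the business. It assumes the confluent-kafka Python client; the embed() helper, the VectorIndex class, the topic name, and the broker address are placeholders for whatever embedding model and vector store you actually use.

```python
from confluent_kafka import Consumer  # assumes the confluent-kafka client is installed

# --- Placeholders: swap in your real embedding model and vector store ---
def embed(text: str):
    """Return an embedding vector for `text` (stubbed here)."""
    return [float(len(text))]  # trivial stand-in so the sketch runs end to end

class VectorIndex:
    """Minimal in-memory stand-in for Pinecone, MongoDB, pgvector, etc."""
    def __init__(self):
        self.items = []
    def upsert(self, vector, payload):
        self.items.append((vector, payload))
    def search(self, vector, top_k=5):
        # Real stores rank by similarity; this stub just returns recent payloads.
        return [payload for _, payload in self.items[-top_k:]]

vector_index = VectorIndex()

# --- Indexing side: keep context fresh as events arrive ---
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed local cluster
    "group.id": "streaming-rag-indexer",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["account-activity"])      # hypothetical topic name

def index_forever():
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        text = msg.value().decode("utf-8")
        vector_index.upsert(embed(text), text)

# --- Query side: retrieve current context before calling the LLM ---
def build_prompt(question: str) -> str:
    context = "\n".join(vector_index.search(embed(question)))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

The query side then embeds the question, searches the index, and passes the retrieved context into the model's prompt, so the answer is grounded in what just happened rather than in last night's batch.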
How does Confluent guide partners in choosing the right Kafka tooling and integrations, especially for AI and analytics?
We think of ourselves as “data Switzerland.” Confluent doesn’t aim to own every component, but to integrate with everything.
For AI, we don’t run a vector database ourselves, but we provide connectors to Mongo, Pinecone, Postgres vector index, and others. Similarly, outside of AI, we integrate with Salesforce, Snowflake, S3, mainframes, and Oracle/Postgres databases through fully managed connectors.
Our philosophy: let customers choose the right tools while Confluent ensures smooth, reliable integration.
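As one hedged illustration of what that integration model looks like in practice, this sketch registers a MongoDB sink connector against a self-managed Kafka Connect worker through its REST API. Confluent Cloud's fully managed connectors are configured through its own UI, CLI, and APIs instead, and the topic, database, and connection details below are invented for the example.

```python
import json
import requests  # third-party HTTP client

# Hypothetical self-managed Connect worker and example names.
CONNECT_URL = "http://localhost:8083/connectors"

connector = {
    "name": "orders-to-mongo",
    "config": {
        # Connector class and keys as documented for the MongoDB sink connector.
        "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
        "topics": "orders",
        "connection.uri": "mongodb://localhost:27017",
        "database": "shop",
        "collection": "orders",
    },
}

# Submit the connector definition; Connect streams the topic into MongoDB from here on.
resp = requests.post(
    CONNECT_URL,
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
)
resp.raise_for_status()
print("Created connector:", resp.json()["name"])
```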
What is new in the Kafka ecosystem?
The most talked-about development is KIP-1150: diskless Kafka.
Confluent and WarpStream already offer diskless clusters, but open-source Kafka will soon support it too. Not everything will go diskless, but it’s an important evolution. I’m glad Kafka isn’t stuck in its 2012 mindset; it’s adapting to today’s lower-cost, flexible infrastructure options.
How do you see the roles of data engineers and software developers evolving?
They are converging. The line between operational and analytical data is blurring. With technologies like Kafka, Flink, Iceberg, and open table formats, these disciplines are merging.
A data engineer today may still focus on SQL, but increasingly also needs Python and broader programming skills. Similarly, application developers need to understand data pipelines.
That’s why we launched the Data Streaming Engineer Certification, validating core streaming and stream-processing skills that apply to both data engineers and application developers.
How do you articulate the business value and ROI of event-driven architectures?
Smartphones changed everything. They shifted consumer expectations from waiting days to being notified instantly when the world changes.
Batch processing can’t support this real-time world. To deliver the mobile-first, always-on experiences customers expect, organisations must embrace event-driven architectures.
Yes, ROI can be calculated with consultants and spreadsheets, but the higher-level answer is simpler: real-time infrastructure is no longer optional; it’s necessary to compete.