Nathan Thomas, Vice President of Multicloud & AI Strategic Initiatives, Oracle
Oracle is re-engineering its cloud foundation to meet the new demands of enterprise artificial intelligence (AI), and powering that shift is a combination of high-density infrastructure, open multicloud architecture, and unified data platforms. At the centre of these initiatives is Nathan Thomas, Vice President of Multicloud and AI Strategic Initiatives at Oracle.
With leadership experience at Epic Games, where he oversaw Unreal Engine, and at Google Cloud, where he led Product Management for Storage, Thomas brings a rare mix of hyperscale infrastructure expertise and practical platform engineering to Oracle’s cloud strategy.
Speaking to Dataquest at Oracle AI World in Las Vegas, Thomas breaks down Oracle’s next-generation AI and multicloud innovations—from Zettascale GPU clusters and Acceleron network fabrics to Helios rack architecture, hybrid vector search, and the Oracle AI Data Platform. He also discusses heterogeneous compute, sustainability, and why India is becoming central to Oracle’s global AI expansion.
How is Oracle Cloud Infrastructure (OCI) integrating new agentic artificial intelligence (AI) capabilities, and what does this mean for enterprise use cases?
Nathan Thomas: The foundation for agentic AI begins with infrastructure. OCI must deliver extremely strong base layers—for our own workloads and for the enterprise workloads customers deploy on top of us. Many of our recent announcements focus on precisely that foundational engineering.
Acceleron for network fabrics is a major example. We are reducing packet latency, strengthening node-to-node communication, and embedding Zero-Trust Packet Routing (ZPR) directly at the physical layer. When Network Interface Cards (NICs) are combined with Graphics Processing Units (GPUs), enterprises get high performance with end-to-end security boundaries. These innovations feed into capabilities such as OCI Zettascale 10, supporting the next generation of GPU cluster scale.
We also announced 131,000-GPU superclusters, which require advances in cabling efficiency and closed-loop cooling systems to support ultra-high-density deployments. These core engineering investments power our Generative AI (GenAI) services and support models like Gemini 2.5 within OCI’s multicloud ecosystem.
Oracle AI Database 26ai, Hybrid Vector Search, Model Context Protocol (MCP) support, and agentic development frameworks help enterprises unify and secure data for AI—an essential requirement in the enterprise world, where governance matters as much as performance.
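The idea behind hybrid vector search is to rank results by blending semantic similarity over embeddings with traditional keyword relevance. The following is a minimal, generic sketch of that blending pattern in plain Python; the function names, the toy embeddings, and the `alpha` weighting are illustrative assumptions, not Oracle’s actual database API.

```python
# Generic hybrid-search sketch: blend vector (semantic) similarity with
# keyword overlap. Purely illustrative; not Oracle's API.
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def keyword_score(query, text):
    """Fraction of query terms that appear in the document text."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0


def hybrid_search(query_text, query_vec, docs, alpha=0.7):
    """Rank documents by a weighted blend of semantic and keyword relevance.

    docs: list of (text, embedding) pairs.
    alpha: weight given to the vector score (assumed value, tune per workload).
    """
    scored = []
    for text, vec in docs:
        score = (alpha * cosine(query_vec, vec)
                 + (1 - alpha) * keyword_score(query_text, text))
        scored.append((score, text))
    return [text for _, text in sorted(scored, reverse=True)]


# Toy corpus with hand-made 2-D "embeddings" for illustration.
docs = [
    ("invoice processing workflow", [0.9, 0.1]),
    ("gpu cluster cooling design", [0.1, 0.9]),
]
results = hybrid_search("invoice workflow", [0.85, 0.2], docs)
```

In a production system the embeddings would come from an embedding model and the keyword component from a full-text index; the blending step itself stays this simple.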
All of this integrates into our Fusion Applications, creating a full-stack approach—from infrastructure to applications—that ensures AI delivers real value for enterprise customers.
On the infrastructure side, Oracle has announced large deployments with both NVIDIA and AMD GPUs. How do you balance these two architectures?
Nathan Thomas: Demand is strong for both, and we scale according to customer requirements. NVIDIA GB200 and GB300 clusters continue to be central to enterprise workloads. At the same time, we see significant momentum around the AMD MI355X and MI450 series. These bring strong performance characteristics, and customers are increasingly interested in a heterogeneous compute model.
The Oracle Insight Platform plays a major role in enabling this heterogeneity. The industry is large and the workloads are diverse enough for both NVIDIA and AMD to play a strong role. Our goal is to provide customers with choice and flexibility, allowing them to use the best architecture for their specific workload.
Do you see sector-specific leaning toward AMD or NVIDIA?
Nathan Thomas: We are seeing broad demand across all industries.
OCI’s new Helios rack architecture supports 72 GPUs per rack with liquid cooling and Ultra Accelerator Link (UALink) networking and memory sharing. How does this translate into meaningful real-world performance improvements?
Nathan Thomas: Closed-loop, non-evaporative cooling enables major density improvements. Higher density means racks can be placed physically closer together, and that immediately reduces latency between nodes. Something as operational as cooling can deliver significant performance benefits.
Helios is closely tied to Acceleron. By unifying the NIC and the Data Processing Unit (DPU), we create a single low-latency path for packet movement across nodes. When you combine low-latency networking, high-density placement, and unified communication paths, you obtain substantial improvements in workload performance—especially for tightly coupled compute tasks like distributed training, multi-node inference, and large language model processing.
Oracle talks a lot about open standards and heterogeneous compute. From an enterprise IT standpoint, how does this reduce lock-in?
Nathan Thomas: Enterprise customers want flexibility—there is no ambiguity on that point. Oracle has a long-standing commitment to open standards and open-source ecosystems. Customers want to run workloads wherever their AI pipelines, data strategies, or regulatory constraints lead them.
Our multicloud strategy reflects this. We enable Oracle Database and OCI services—including AI services—to run across Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). These are not superficial integrations. They include dedicated, high-bandwidth interconnects, identical hardware footprints, and consistent data-governance models. Our goal is to let enterprises choose freely without changing their core database architecture or compromising on compliance or security.
With the launch of the Oracle AI Data Platform, how does OCI support GPU-rich infrastructure, faster data unification, and agentic AI workloads?
Nathan Thomas: The Oracle AI Data Platform is designed to provide enterprises with a unified semantic view across all their data assets—transactional systems, document stores, logs, vector embeddings, and more. Unifying structured and unstructured data is essential because real-world AI workloads need context and continuity across data types.
We integrate Autonomous AI Database and Autonomous AI Lakehouse so that embeddings, vector search, metadata, and processing layers all operate together. Historically, enterprises have faced silos and compliance challenges that have limited their ability to leverage AI. The AI Data Platform changes this equation by letting them unify, secure, and operationalise data at scale.
The truth is that hardware matters—but without clean, governed, enriched data, enterprises will not extract meaningful business value from AI.
How is Oracle ensuring performance and governance parity across OCI and partner clouds?
Nathan Thomas: This is where our multicloud database architecture becomes critical. We deploy Oracle Exadata racks directly inside partner cloud facilities. These racks operate as Oracle-managed child sites running the exact Exadata hardware that customers use on-premises or in OCI.
This ensures consistent performance, the same management interfaces, and the same operational tooling across environments. When enterprises migrate workloads between these environments, the system's behaviour remains identical.
Once the data plane is consistent, enterprises gain the freedom to use any cloud’s AI pipeline—whether that is Amazon Bedrock, Microsoft Copilot, Google Vertex, or Google Gemini—while keeping their Oracle database architecture unchanged. This combination of consistency and choice is extremely important for enterprises scaling AI.
Sustainability is becoming critical for AI-scale workloads. What is Oracle doing on this front?
Nathan Thomas: Sustainability is deeply embedded in our systems design and operational planning. We expand capacity based on very specific demand forecasts to ensure efficiency across the entire infrastructure lifecycle.
We collaborate with local energy utilities and, in certain cases, develop on-site power generation facilities. Cooling innovations like closed-loop systems improve density and reduce water and energy usage. Across the board, our goal is to make sure high-density compute infrastructure grows responsibly, without compromising environmental considerations or operational reliability.
Is Oracle working with AMD or NVIDIA to reduce GPU power consumption and thermal overhead at the architectural level?
Nathan Thomas: Yes, very closely. There is complete alignment between our environmental goals and our financial goals. Reducing power consumption reduces operational cost and environmental load, so this alignment drives innovation.
Demand for GPUs is enormous—both inside and outside Oracle—so efficient utilisation, high performance per watt, and improved thermal characteristics are essential. We work continuously with our partners across architecture, firmware, and systems engineering to advance efficiency.
India is one of the fastest-growing markets for cloud and AI. How is Oracle adapting its AI-infrastructure and multicloud strategy for the country?
Nathan Thomas: India is a significant and strategic market. Oracle has been present for more than thirty years, with over 5,000 enterprise customers and more than 500 partners in the country.
We already operate two OCI regions in India, and we are expanding further, including for AI workloads. We are working with Google Cloud, Microsoft Azure, and AWS to bring multicloud and AI capabilities deeper into India. Over the next twelve months, we plan a series of India-specific expansions, including localised regions that will support AI training, inference and enterprise workloads at scale.
What role does India play in Oracle’s global product development roadmap?
Nathan Thomas: India is playing a major role. Oracle operates nine product development centres in India—Bengaluru, Hyderabad, Chennai, Gandhinagar, Noida, Mumbai, Pune, Kolkata, and Thiruvananthapuram. These centres contribute significantly to our global roadmap across Oracle Cloud Infrastructure, the Oracle Database, and AI-driven services. They are core engineering hubs for Oracle worldwide.