The AI security gap no one sees—until it’s too late

Rajnish Gupta, Managing Director & Country Manager at Tenable India, discusses the cybersecurity risks in cloud-based AI environments, from excessive privilege and hidden misconfigurations to data leakage and machine identity threats.

author-image
Aanchal Ghatak
New Update
AI security
Listen to this article
0.75x 1x 1.5x
00:00 / 00:00

While enterprises scramble to embed AI in every pigeon hole of their business, a less visible and escalating threat is lurking below the surface - excessive privilege. In cloud-based AI environments, where pipelines cross services, clouds, and identities, a single over-permissioned token can lead to data leaks, model theft, or large-scale compromise.

Advertisment

In an interview, Rajnish Gupta, Managing Director & Country Manager at Tenable India, pulls apart hidden cybersecurity dangers related to scaling AI in the cloud - and shares how CIOs can get ahead by adopting risk-based, identity aware security strategies. Excerpts:

Let’s start with the basics — what does “excessive privilege” really mean in the context of Cloud AI, and why is it becoming a growing cybersecurity concern?

In Cloud-AI environments, “excessive privilege” means any user, service account, or model pipeline has more rights than its specific task requires—read-write access to every data bucket instead of just one, the ability to spin up GPUs across all projects, or blanket admin on a training cluster. These over-broad defaults creep in when teams accept canned roles, let temporary debugging permissions linger, or fail to audit service accounts that stitch pipelines together.

Advertisment

It is because AI workloads sit on troves of sensitive training data and valuable, proprietary models, a single over-privileged identity is all an attacker or a disgruntled insider needs to steal IP, poison datasets, or ransom the entire pipeline. The risk is magnified by automation: one compromised token in a CI/CD job can traverse multiple clouds, storage layers, and inference endpoints before anyone notices. As organisations race to operationalise AI, the attack surface grows faster than manual reviews can keep up, making privilege sprawl one of the most urgent and preventable cloud AI security gaps.

What are some of the most critical security gaps you’re seeing in AI-enabled cloud environments today — especially those that might be invisible to CIOs?

The most serious—and least visible—gaps stem from the “Jenga-style” layering of managed AI services, where cloud providers stack one service on another and ship them with user-friendly but overly permissive defaults. Tenable’s 2025 Cloud AI Risk Report shows that 77 percent of organisations running Google Cloud’s Vertex AI Workbench leave the notebook’s default Compute Engine service account untouched; that account is an all-powerful identity which, if hijacked, lets an attacker reach every other dependent service.

Advertisment

On AWS, 91 percent of SageMaker users still allow root access in at least one notebook, and 14 percent of Bedrock deployments expose their training data buckets to the public internet. These inherited misconfigurations sit below the CIO’s normal dashboards, yet one compromised token or open bucket can cascade through the entire AI pipeline. The same study found that 30 percent of cloud-AI workloads also run with known critical vulnerabilities such as CVE-2023-38545 in curl, turning excessive privilege into an easy path to remote code execution.


With AI systems often integrated into sensitive workflows, how can organizations proactively manage cyber exposure across both infrastructure and data layers?

Proactive exposure management begins with a unified, real-time inventory of every asset that touches the AI pipeline—compute, storage, service accounts, datasets, models and third-party tools—across all clouds and on-prem. Classify those assets by sensitivity, then stream telemetry from workloads, IAM, and network layers into a Cloud-Native Application Protection Platform with Data- and AI-Security Posture Management.

Advertisment

Such a CNAPP correlates misconfigurations, over-privileged identities and active CVEs, ranks them by business impact, and feeds prescriptive fixes straight into DevSecOps workflows. By replacing provider defaults with least-privilege policies, scanning infrastructure and data layers continuously, and automating remediation within the CI/CD toolchain, organisations keep AI workloads resilient while they scale.

How should CIOs think about securing not just users, but also machine identities and service accounts in dynamic, AI-heavy cloud ecosystems?

Treat every credential such as human user, service account, container role, model-training pipeline, as a first-class identity. Start by federating them into a single cloud-agnostic inventory (via CIEM/CNAPP) so you can see who or what can reach sensitive data and models at any moment. Overlay context—asset criticality, network exposure, exploit telemetry—to score each identity’s blast radius, then strip rights to the minimum needed with automated, just-in-time elevation and short-lived tokens.

Advertisment

Back this up with continuous posture controls: rotate keys and service-account secrets automatically, enforce MFA or workload attestation for privileged actions, and monitor behavioural baselines so anomalous use of a machine credential triggers an immediate quarantine. By unifying visibility, applying risk-based least privilege, and baking automation into the CI/CD pipeline, CIOs can keep both human and non-human identities from becoming the weakest link in AI-driven cloud estates.

As AI pipelines grow in complexity, what’s your guidance on prioritizing vulnerabilities — and how can risk-based approaches improve resilience?

As AI pipelines sprawl across notebooks, feature stores, orchestration tools and multi-cloud infrastructure, patch queues can balloon into the thousands. Shift from “fix everything” to risk-based triage: rank each CVE by three signals—exploit activity in the wild, ease of lateral movement from the affected asset and the business impact of the data or model it touches.

Advertisment

Exposure-management platforms can pull those signals automatically, mapping a vulnerability to the workloads it runs on, the identities that can reach it and whether the asset is internet-facing. Feed that context back into DevSecOps so teams patch first where a live exploit could derail model training, siphon proprietary data or halt production inference. By coupling continuous discovery with impact-aware scoring and automated workflows, you turn vulnerability management from an ever-growing list into a rolling, data-driven sprint that hardens the most critical links in the AI supply chain, first keeping resilience aligned with business risk.

Training and testing data often carry proprietary IP or customer information. What best practices should CIOs adopt to protect this data from leakage or misuse?

CIOs should treat every dataset in the AI pipeline as a high-value asset. Begin with automated discovery and classification across all clouds so you know exactly where proprietary corpora or customer PII live, then encrypt them in transit and at rest in private, version-controlled buckets.

Advertisment

Enforce least-privilege access through short-lived service-account tokens and just-in-time elevation, and isolate training workloads on segmented networks that cannot reach production stores or the public internet.

Feed telemetry from storage, IAM and workload layers into a Cloud-Native Application Protection Platform that includes Data Security Posture Management; this continuously flags exposed buckets, over-privileged identities and vulnerable compute images, and pushes fixes into CI/CD pipelines before data can leak. 

Finally, build privacy into the data itself—mask sensitive fields, use synthetic or differentially private samples where possible, and watermark corpora so any exfiltration is traceable. Unified visibility, least privilege by default and automated, posture-aware remediation together provide the strongest defence against data leakage or misuse.

Do you see a growing need for AI-specific threat modeling and penetration testing in the cloud? If yes, how can organizations get started?

The rapid shift of LLMs, vector stores, and training pipelines into production creates attack surfaces prompt-injection, model-stealing, data-poisoning that traditional threat modelling and pen-testing overlook. Addressing them begins with full visibility: auto-discover every model, dataset, feature store, inference API, and service account across all clouds, then map trust boundaries and data flows. Apply an AI-aware framework such as MITRE ATLAS or OWASP’s LLM Top 10 on top of classic STRIDE/ATT&CK to identify abuse cases unique to AI.

With the model in hand, extend red-team exercises: abuse over-privileged service accounts, attempt prompt injections, tamper with training data, and try to exfiltrate fine-tuned models. Automate baseline checks through a CNAPP that includes AI-security-posture management; it surfaces misconfigurations, public exposures, and toxic privilege combinations and feeds fixes into CI/CD pipelines. Finally, feed test findings back into design reviews so insecure defaults are stripped out before the next model ships. Map, model, test, automate—then iterate as the pipeline evolves.

Tenable operates at the intersection of cloud security and exposure management. Can you share any real-world use cases or insights from India Inc.?

Indian enterprises are turning to Tenable’s exposure-management stack to plug the hard-to-see gaps that sit between classic cloud-security controls and fast-growing attack surfaces:

Across the board, Tenable’s 2024 study of 69 large Indian organisations found that only 58 percent of attempted cyber-attacks were stopped, underscoring the limits of point products and the need for unified visibility. A follow-up Cloud Risk Report showed that 38 percent of India-based cloud workloads combine three toxic conditions at once public exposure, a critical unpatched CVE and highly privileged access making them prime takeover targets.

If you had to give CIOs a 3-point checklist for strengthening their Cloud AI security posture in 2025, what would that include?

1. Deploy a CNAPP that bundles AI-SPM and DSPM to map every model, dataset, service account and CVE, then flag the “toxic triple” — public exposure, exploitable vulnerability and excessive privilege — in real time so the riskiest issues are fixed first.

2. Treat the shared-responsibility line as code: enforce encryption, logging, network segmentation and secure images through organisation-wide policies and IaC scanning, preventing Jenga-style misconfigurations from ever reaching production.

3. Converge human and machine IAM under just-in-time access, short-lived tokens and continuous behaviour monitoring so no account can quietly escalate rights to poison data or steal models.