Survey reveals 92% of Indian organizations perceive GenAI tools as a potential security threat

According to Zscaler's latest survey, 92% of organizations in India consider GenAI tools like ChatGPT to be a potential security risk.

DQINDIA Online
In India, GenAI tools are prevalent, with 95% of organizations leveraging them, yet 92% perceive them as a security risk. Interestingly, 75% cite skill gaps as a barrier to adopting tools like ChatGPT, and 71% emphasize the dominance of IT teams over general employees in driving GenAI tool usage.

New research from Zscaler, Inc., a cloud security company, suggests that organizations are feeling the pressure to rush into generative AI (GenAI) tool usage despite significant security concerns. According to its latest survey, "All eyes on securing GenAI", of more than 900 global IT decision makers, although 92% of organizations in India consider GenAI tools like ChatGPT to be a potential security risk, 95% are already using them in some guise within their businesses.

As Generative AI tools take centre stage in India's digital economy, organizations, in their pursuit of staying competitive, are strategically leveraging GenAI to drive innovation and productivity. However, the survey spotlights key challenges: 100% of respondents cite a lack of resources to monitor usage, and 75% admit a lack of skills or talent to implement or use GenAI tools like ChatGPT effectively.

Sudip Banerjee, CTO, APJ, Zscaler, said: “Generative AI has become a technological revolution with unlimited possibilities. Hence the spotlight is on organisations to navigate the intricate balance between innovation and security. Our survey underscores the dynamism of GenAI adoption, highlighting the need to sharpen focus on both zero trust principles and skill development to unlock the full potential of GenAI technology. Therefore, integrating a zero-trust solution can provide full control over technology’s usage per user and application, allowing organizations to maintain a secure and controlled environment.”

Even more worryingly, 22% of the organizations using GenAI tools aren’t monitoring that usage at all, and 36% have yet to implement any additional GenAI-related security measures, though many have it on their roadmap.

"GenAI tools, including ChatGPT and others, hold immense promise for businesses in terms of speed, innovation, and efficiency," emphasized Sanjay Kalra, VP Product Management at Zscaler. "However, with the current ambiguity surrounding their security measures, a mere 30% of organizations in India perceive their adoption as an opportunity rather than a threat. This not only jeopardizes their business and customer data integrity, but also squanders their tremendous potential.”

The rollout pressure isn’t coming from where people might think, however, and the results suggest that IT has the ability to regain control of the situation. Despite mainstream awareness, it is not employees who appear to be the driving force behind current interest and usage: only 3% of respondents in India said it stemmed from employees, while 71% said usage was being driven directly by IT teams.

"The fact that IT teams are at the helm should offer a sense of reassurance to business leaders,” Kalra continued. “It signifies that the leadership team has the authority to strategically temper the pace of GenAI adoption and establish a firm hold on its security measures, before its prevalence within their organization advances any further. However, it's essential to recognize that the window for achieving secure governance is rapidly diminishing."

With 75% of respondents in India anticipating a significant increase in interest in GenAI tools before the end of the year, organizations need to act quickly to close the gap between use and security.

Here are a few steps business leaders can take to ensure GenAI use in their organization is properly secured:

  • Implement a holistic zero trust architecture to authorize only approved AI applications and users.
  • Conduct thorough security risk assessments for new AI applications to clearly understand and respond to vulnerabilities.
  • Establish a comprehensive logging system for tracking all AI prompts and responses.
  • Enable zero trust-powered Data Loss Prevention (DLP) measures for all AI activities to safeguard against data exfiltration.
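The logging and DLP steps above can be sketched in code. The following is a minimal, illustrative Python example, not Zscaler's product or a production DLP engine: every function name, pattern, and file path here is a hypothetical stand-in. It records each AI prompt/response pair as an auditable JSON line and flags obviously sensitive strings before they leave the organization.

```python
import json
import re
import time

# Illustrative detectors only; a real DLP engine uses far richer,
# vendor-maintained rules than these two regular expressions.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def dlp_scan(text):
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def log_ai_exchange(user, prompt, response, log_file="ai_audit.jsonl"):
    """Append one JSON record per prompt/response pair for later audit."""
    record = {
        "ts": time.time(),
        "user": user,
        "prompt": prompt,
        "response": response,
        "dlp_flags": sorted(set(dlp_scan(prompt) + dlp_scan(response))),
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# A prompt containing an email address is flagged as it is logged.
rec = log_ai_exchange("alice", "Summarize the mail from bob@example.com", "Summary: ...")
```

In practice, a check like `dlp_scan` would run in a policy-enforcement point between employees and the GenAI tool, so flagged prompts can be blocked or redacted rather than merely recorded.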