
How Risky is GenAI?

Security is the moot point. The risks associated with Generative AI are significant and will constantly evolve. Read on for the details.

DQI Bureau


How risky can GenAI be?

That’s a tough cookie, you see.

Use it for PR or marketing,


Not hacking or jailbreaking,

Else, end up in jail or flee.

If that limerick made you blink, these statistics should make you think: The Asia-Pacific region (including China, India, and Japan) will invest US$78.4 billion on AI by 2027—compared with US$24.8 billion in 2022—growing at a 25.5% annual clip. That estimate from IDC (International Data Corporation) includes hardware, software, and services.


IDC says the increase in AI spending reflects a shift towards leveraging innovative technologies, including GenAI (Generative AI), to streamline operations, enhance customer experiences, and sharpen competitive edge.


“GenAI has gained a huge momentum,” says Vinayaka Venkatesh, IDC’s senior market analyst for the Asia-Pacific. “Most organizations in the region have already started investing in GenAI or are navigating its potential. Some key challenges while exploring GenAI include trustworthiness, privacy, security, copyrights, and finding a suitable business partner. However, these hurdles can be overcome as the technology matures.”


As for the Indo-Pacific region outside of Japan and China, AI adoption is led by India, Australia, and South Korea. This sub-region will account for 34% of Asia-Pacific’s AI spend and is set to grow at a 26.8% annual clip to cross US$28.2 billion by 2027. Japan ranks third in AI investment, with US$12 billion expected by 2027. Japanese companies use GenAI to boost productivity, quality control, risk management, and customer experiences.

Moot Point

The moot point is security. How risky can GenAI be? A Gartner survey notes that 34% of companies are either already using or implementing AI application security tools to mitigate the accompanying risks of GenAI. More than half (56%) of respondents are also exploring such security solutions.


“IT and risk management leaders must, in addition to implementing security tools, consider supporting an enterprise-wide strategy for AI TRiSM (trust, risk and security management),” advises Avivah Litan, a Gartner distinguished vice president. “AI TRiSM manages data and process flows between users and companies who host GenAI foundation models, and must be a continuous effort, not a one-off exercise to continuously protect an organization.”

The risks associated with GenAI are significant and will constantly evolve. Undesirable outputs and insecure code are top-of-mind risks when using GenAI. “Organizations that don’t manage AI risk will witness their models not performing as intended and, in the worst case, can cause human or property damage,” Litan adds. “This will result in security failures, financial and reputational loss, and harm to individuals from incorrect, manipulated, unethical or biased outcomes. AI malperformance can also cause organizations to make poor business decisions.”

Board Games


Corporate executives are scrambling to understand and respond to the power and potential of GenAI. The technology is still nascent, but few doubt its power to disrupt operating models in all industries. Should the responsibility for the appropriate use of GenAI rest with management or the board?

“Boards are responsible for how GenAI is used at the companies they oversee,” notes McKinsey & Company. “The technology still poses real risks, leaving companies caught between fear of getting left behind—which implies a need to rapidly integrate GenAI into their businesses—and an equal fear of getting things wrong. How to unlock the value of GenAI while also managing its risks? Members of the board can help their management teams move forward by asking the right questions.”



Here’s my alphabetical list of ten action points for companies to consider when leveraging GenAI:

• Acquire: Advanced analytical and creative skills to complement the capabilities of the technology. The talent model may also need to adapt. However, it is important to consider the potential impact of replacing junior-level talent with AI on the future pipeline of creators, leaders, and managers. This concern was recently highlighted at the World Economic Forum.

• Bring: Boards on board. Unless board members understand GenAI and its implications, they will be unable to judge the likely impact of a company’s AI strategy and related decisions regarding investments, risk, talent, and technology on their stakeholders. “Our conversations with board members reveal that many of them admit they lack this understanding,” McKinsey says.

• Create: Corporate culture shifts. A company’s culture plays a crucial role in determining its success with GenAI. Companies that struggle with innovation and change will find it tough to keep pace. Does your company have a learning culture? That could be the key to success. Does your company foster a shared sense of responsibility and accountability? Without this shared sense, it is more likely to run afoul of the ethical risks associated with AI. Both questions involve cultural issues that boards should consider prompting their management teams to examine.

• Design: A scalable data architecture that can handle increasing amounts of data without compromising integrity or performance, and that covers data governance and security procedures. Depending on the use case, the existing computing and tooling infrastructure might also need to be upgraded. Is top management clear about the data systems, resources, tools, and models required? How will the missing components be acquired?

• Enable: An effective data governance program. “A data governance program cannot exist on its own; it must solve business problems and deliver outcomes,” advises IBM. “Start by identifying business objectives, desired outcomes, key stakeholders, and the data needed to deliver them. Coordinate and standardize policies, roles, and processes to align with business objectives. This will ensure data is being used ethically and all stakeholders work towards the same goal.”

• Finance: Experimentation and new operational models for GenAI. A modern data and tech stack is crucial for success. While foundation models can support a wide range of use cases, many of the most impactful models will be ones fed with additional, often proprietary, data. Therefore, companies that have not yet found ways to harmonize and provide ready access to their data will be unable to unlock much of GenAI’s transformative potential.

• Galvanize: Tech teams and the relevant tools, crucial for effective implementation of GenAI. To ensure privacy and security, invest in PETs (privacy-enhancing technologies), which minimize personal data use and maximize data security. Deploy ModelOps and model monitoring tech. ModelOps is a set of practices to manage ML models throughout their lifecycle, including model development and deployment. Model monitoring is a process that tracks the performance of ML models over time and alerts teams when models are misfiring or ingesting bias.

• Highlight: Privacy concerns and ethical risks. GenAI, like traditional AI, can perpetuate bias hidden in training data or in the ML process, which could lead to unfair or biased outcomes. It also increases the risk of a security breach by opening more areas and new forms of attack. For example, deepfakes could simplify the impersonation of company leaders, raising reputation risks. There are also new risks, such as infringing copyrighted, trademarked, patented, or otherwise legally protected materials through data collected by a GenAI model.

• Include: Use cases and case studies. A software developer can use GenAI to create entire lines of code. Law firms can answer complex questions from reams of documentation. Scientists can create novel protein sequences to accelerate drug discovery. “However, the tech still poses real risks, leaving companies caught between fear of getting left behind—which implies a need to rapidly integrate GenAI into their businesses—and an equal fear of getting things wrong,” McKinsey notes. “The question: how to unlock the value of GenAI while managing its risks.”

• Juxtapose: Stringent security policies and protocols with the potential of GenAI. “Organizations that don’t manage AI risk will witness their models not performing as intended and, in the worst case, can cause human or property damage,” Gartner warns. “This will result in security failures, financial and reputational loss, and harm to individuals from incorrect, manipulated, unethical or biased outcomes. AI malperformance can cause organizations to make poor business decisions.”
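The model monitoring practice mentioned under “Galvanize” can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any specific ModelOps product: the class name, its parameters (window_size, accuracy_threshold), and the sample predictions are all assumptions, presuming a classifier whose live predictions can later be compared against ground-truth labels.

```python
# Minimal sketch of model monitoring: track rolling accuracy over a
# sliding window of recent predictions and raise a flag when the model
# appears to be misfiring. All names and thresholds are illustrative.

from collections import deque


class ModelMonitor:
    """Tracks rolling accuracy and flags possible model degradation."""

    def __init__(self, window_size=100, accuracy_threshold=0.8):
        # Keep only the most recent window_size outcomes.
        self.window = deque(maxlen=window_size)
        self.accuracy_threshold = accuracy_threshold

    def record(self, prediction, label):
        """Store whether the latest prediction matched its true label."""
        self.window.append(prediction == label)

    def rolling_accuracy(self):
        """Fraction of correct predictions in the window (None if empty)."""
        if not self.window:
            return None
        return sum(self.window) / len(self.window)

    def needs_attention(self):
        """True when accuracy has dropped below the alert threshold."""
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.accuracy_threshold


# Hypothetical usage: five live predictions vs. their true labels.
monitor = ModelMonitor(window_size=10, accuracy_threshold=0.8)
for pred, label in [(1, 1), (0, 0), (1, 0), (1, 1), (0, 1)]:
    monitor.record(pred, label)

print(monitor.rolling_accuracy())  # 3 correct out of 5 -> 0.6
print(monitor.needs_attention())   # True: below the 0.8 threshold
```

In practice, teams would feed such a monitor from production logs and route its alerts into an on-call channel; real deployments also watch input-data drift and fairness metrics, not just accuracy.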

To devise an effective GenAI strategy, weigh the potential implications of the tech in the short and long term. Ideal GenAI applications are in software engineering, marketing, customer service, and product development. Companies in media and entertainment, banking, consumer goods, life sciences, telecoms, and tech could see the most potential in GenAI deployment.

Since I started this column with a “risky” limerick, let me end with a “rewarding” one:

How rewarding can GenAI be?

Help you achieve your goals, you see.

Writing, coding, designing,

All that with amazing timing,

Give GenAI some credit, or a fee.

Raju Chellam

Raju Chellam is a former Editor of Dataquest and is currently based in Singapore, where he’s the Chief Editor of the AI Ethics & Governance Body of Knowledge, and Chair of Cloud & Data Standards.

maildqindia@cybermedia.co.in
