
Generative AI: The race where the ‘slowest’ wins

Why is Generative AI going to be a different race track in the enterprise Olympics, and will everyone play ball?


Responsibility, caution, transparency, checks and control could be the new set of criteria for winners in an otherwise pound-for-pound game.


At a packed hall of eager eyes and hungry ears at the TrailblazerDX conference, Salesforce President and CMO Sarah Franklin announced the company’s own AI baby in the form of Einstein GPT. But she did not hesitate to explain: “We want to build the future with AI in a way we are proud of. There are a lot of questions about control, about governance and who makes the algorithms. These are important to address along with the productivity gains expected.” The same day, a panel on AI ethics spoke at length about how Salesforce is cognizant of issues like explainability, the trade-off between speed and creativity with AI models, and customer data.

At the time of writing this article, Sam Altman was speaking with the same caution. He appeared patient in interviews commenting on an open letter (signed by Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and others) asking companies such as OpenAI to pause development of AI systems “more powerful than GPT-4.”


OpenAI acknowledges the risks and challenges of responsibly developing and deploying such powerful technology. - Indranil Bandyopadhyay, Principal Analyst, Forrester

Altman gave an assurance that his team wants to deliver the most capable, useful and safe models. He also cited how the team had spent more than six months after it finished training GPT-4 before releasing it. Speed was kept on the back-burner; priority was given to time to really study the model’s safety, get external audits, and so on.

It’s a Bobsleigh


Close on the heels of these news bytes came news from Google’s lair: its AI appeared to have been able to translate an unfamiliar language (Bangla) with very little prompting. In media reports, CEO Sundar Pichai acknowledged this “black box” aspect of AI.

In fact, in a recent report from Team8 (a global venture group that builds and invests in companies in enterprise technologies, FinTech and digital health) and its community of CISOs (the CISO Village), we see that CISOs feel pressure to broadly enable GenAI and, at the same time, understand that doing so indiscriminately could create wide-ranging risks.

And it’s not just safety that is getting in the driver’s seat. The way we judge the ‘greatness’ of an AI model may also change soon. Size, literally, won’t matter from now on.


Research scientists at OpenAI have been working on scaling Large Language Models (LLMs). In the GPT-4 Technical Report, OpenAI researchers, Adrien Ecoffet among them, write: “We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs... The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4’s performance based on models trained with no more than 1/1,000th the compute of GPT-4.”

Experts have also pointed out how jumps in parameters have been met with leaps in the capabilities of the GPT models, but this approach may now be yielding diminishing returns. In fact, Altman has also underlined that size is a false measure of model quality, especially when the world will see multiple models working together, each of which is smaller.
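To make the idea of “predictable scaling” a little more concrete, here is a minimal sketch in Python of how such an extrapolation can work in principle. It is not OpenAI’s actual method, and every number in it is made up for illustration: the idea is simply to fit a power-law curve to loss measurements from small training runs and extrapolate it to a compute budget roughly 1,000 times larger.

import numpy as np

# Hypothetical (compute, loss) measurements from small training runs; illustration only.
compute = np.array([1e18, 3e18, 1e19, 3e19, 1e20, 3e20])   # training FLOPs
loss = np.array([3.10, 2.85, 2.62, 2.44, 2.28, 2.16])      # evaluation loss

# Fit log10(loss) = slope * log10(compute) + intercept,
# i.e. a power law of the form loss ≈ k * compute**slope.
slope, intercept = np.polyfit(np.log10(compute), np.log10(loss), deg=1)

# Extrapolate to a run roughly 1,000x larger than the biggest small run.
big_compute = 3e23
predicted_loss = 10 ** (slope * np.log10(big_compute) + intercept)
print(f"Predicted loss at {big_compute:.0e} FLOPs: {predicted_loss:.2f}")

The real methodology is far more sophisticated, but the principle the report describes is the same: measure small, fit a curve, predict big.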

So with all this transpiring, would the big enterprise and AI players – Google, Salesforce, AWS, Microsoft – use a different playbook, especially in the boxing ring of enterprise AI?


Marathon Des Sables

Ask Indranil Bandyopadhyay, Principal Analyst, Financial Services, Insurance, Data Science and AI at Forrester, and he starts by spelling out the distinct approach of every big tech player around.

He picks out Google as one of the pioneers and leaders in generative AI research. “The company has published many influential papers and released open-source tools and frameworks for generative AI development. Google’s approach to generative AI is driven by its curiosity and ambition to push the boundaries of what is possible with machine learning. The company invests heavily in research and innovation to explore new ideas and methods for generative AI.”


This is to be expected in a well-funded global race. - Shahin Khan, Analyst, OrionX

Turn to Microsoft’s approach to generative AI and Bandyopadhyay points out how it is driven by its vision to empower every person and organization on the planet to achieve more with Artificial Intelligence. “The company integrates its generative AI capabilities with its existing products and services, such as Office 365 Copilot, Business Chat, Bing Search, etc., to provide more value and convenience to its customers and users.”


Salesforce’s approach to generative AI, he adds, is focused on enhancing its core business of CRM. “It aims to provide its customers a competitive edge by automating tedious tasks, increasing productivity, improving customer satisfaction, and driving revenue growth.”


Trust will be really important for broader use of generative AI moving forward. - Rebecca Wettemann, CEO and principal of Valoir

As to OpenAI’s approach to generative AI, it is driven by its vision of creating artificial general intelligence (AGI), which can perform any intellectual task humans can. OpenAI believes that AGI can be a positive force for humanity if it aligns with human values and goals. However, it also acknowledges the risks and challenges of responsibly developing and deploying such powerful technology.

In the eyes of Rebecca Wettemann, CEO and principal of Valoir, a technology research firm and advisor to CIOs and technology companies, while OpenAI and Google have taken a more tech-centric approach, both Microsoft (embedding ChatGPT in Viva Sales) and Salesforce (with Einstein GPT) have focused on specific uses for the technology and prebuilt those applications to accelerate ‘time to value’ and reduce the risk and learning curve for users.

Shahin Khan, veteran analyst and industry observer at OrionX, also separates the boys from the men in the enterprise AI race. “Competition will be intense in all areas of AI, so the phenomenon we are seeing is really interesting and paradoxical. AI is seriously advanced technology, presumably accessible to a select few, and yet, for many use cases, it is rapidly becoming table stakes, offered by several companies globally. This is to be expected in a well-funded global race, when many players can reach similar milestones at the same time.”

That said, organizations that want to use such technology should indeed have a wait-and-watch approach, avers Bandyopadhyay. “Generative AI is a high-speed developing area, so following an approach so early in the development life cycle may end up not being so clever.” He indicates how Google faces scalability, privacy and ethical issues related to generative AI, while Microsoft faces challenges around quality control, user feedback and social impact.

Wettemann seconds that a key area to think about is trust, which will be really important for broader use of generative AI moving forward. “Salesforce’s Office of Ethical and Humane Use is providing customers with a thoughtful approach to understanding how they can trust generative AI, the implications of model training and guidance, and the need for guardrails to ensure AI can be used in a safe and trusted manner.”

“Since generative AI is in the limelight, content consumption and creation are most visible. So issues of data bias, data currency, multimedia integration, and enabling a permissive-but-responsible creative process will be important and an opportunity for differentiation. Big players will continue to incorporate AI into their existing products, each highlighting a different aspect, but they will look largely similar to end users,” argues Khan.

Devroop Dhar, Co-Founder, Primus Partners, explains that from an enterprise angle there would be two parts to all the excitement and questions that follow GPT-4. “One would be what developers do on top of it, and the other would be those who integrate it. There would be many enterprises building solutions – it’s only a matter of time before they grow and get more of the spotlight. And that’s why many large tech companies would be investing heavily in this area.”


There are no precise norms or regulations yet, voluntary or mandatory. - Devroop Dhar, Co-Founder, Primus Partners

But what would determine real success is solutions and use cases across industries – fraud management, risk management, service delivery and conversations. And there is no denying the role of regulation – self-imposed or external – as Dhar adds. “There are no precise norms or regulations yet, voluntary or mandatory. Development and market use cases will emerge, but the ethical part of enterprise AI needs to be strengthened.”

Perhaps, it’s a race of tortoises and not hares, after all.

Slow and Steady – and Sticking Your Neck Out

Ask Wettemann if it would be better to be slow and cautious than to be the first one to win the race, and she says the answer is going to be different for different companies – and different divisions within companies. “Expect that the field will continue to move very rapidly, and we’ll see some great results – and some great failures. Key for companies will be ensuring users understand the potential and risks of generative AI and how to use it effectively. AI is going to need human review and guidance for now, and likely for a long time, so setting realistic expectations is important.”

The biggest dangers of AI are using it outside of the scope in which it was trained, and leaving to fate the impact it will have on jobs and livelihoods, Khan reminds us. “We need a balance of global competitiveness and local safeguards, so the ideal scenario is to develop a legal framework informed by a corresponding ethical framework that can govern responsible use of AI, while at the same time building a social transition plan that enables society’s move towards a desirable digital future.”

Generative AI is an emerging field with potential applications across a wide range of industries, and many companies in the BigTech brigade are investing hugely in this area, observes Dr. Santanu Paul, Founding CEO & MD of TalentSprint. “I feel the AI ecosystem is warming up, and has to undergo rounds of iterations and upgrades before we can make any solid assessments. It is certainly going to be quite interesting to witness how things unfold for us in the near future.”

Companies need to play very smartly and patiently on the enterprise side of Generative AI. Wettemann advises that business leaders need to keep telling themselves that this is just technology – its successful application will depend on the humans who make decisions around it. “If we think back to when e-mail came out, we got great benefits in productivity and efficiency, but we also got spam, and phishing, and people who always replied to all. AI’s productivity-increasing potential is exponentially greater than e-mail’s, but so are its risks.”

The enterprise market, as usual, will be a major target of Generative AI where language models would learn the vocabulary, product lines, and customers of the enterprise, notes Khan. “This will complement recommendation engines and AI systems and physics-based simulation applied to heavy-asset process automation/management.”

Hares may get to the finishing line faster, but tortoises know the roads better.

Like Altman, who has been heard saying that he is being open about the safety issues and the limitations of the current model – not for any other reason, but because it’s the right thing to do. “We are not here to jerk ourselves off about parameter count.”

It could actually be a triathlon where something more than speed or size will count. Patience, foresight and agility – would matter, together. And if we turn to Bob Sanders, maybe some emotions too.

“Size doesn’t matter. It’s all about the heart.”

It always is.

By Pratima H

pratimah@cybermedia.co.in
