10 Biggest Questions on Generative AI Answered

Big tech companies such as Google and Microsoft are integrating Generative AI technology into their products following the success of ChatGPT

Generative AI (GAI) is a type of artificial intelligence that can produce various types of content, including text, imagery, audio, and synthetic data. It relies heavily on Transfer Learning and Large Language Models (LLMs) to understand context and generate new data. The most popular generative model for language generation is the GPT (Generative Pre-trained Transformer) model.


In the news

Recent interest in Generative AI has been driven by the popularity of ChatGPT. Breakthroughs in Generative AI and Transfer Learning from top universities and Big Tech are driving the next wave of innovation around AI, and key players like Google and Microsoft are building significant capabilities while still considering risks and monetization. 

So, as financial services firms across the world try to assess the risks, opportunities, and challenges of embracing Generative AI for their businesses, we consider the 10 most pressing questions around the technology: 

1. What are Large Language Models (LLMs)?

Large language models (LLMs) are foundation models that utilize deep learning for natural language processing (NLP) and natural language generation (NLG) tasks. An LLM is essentially a transformer-based neural network, an architecture introduced in the 2017 paper by Google researchers entitled ‘Attention Is All You Need’. The goal of the model is to predict the text that’s likely to come next (LLMs don't understand that they are just statistical models predicting the next word in a sequence). Several other factors have enabled the development of LLMs since 2018:

  • Computational resources
  • Advancements in deep learning
  • Availability of data (digital platforms, social media, etc.)
  • Increased interest in natural language processing
  • Transfer Learning (In transfer learning, the knowledge of an already trained machine learning model is applied to a different but related problem)
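The idea of "predicting the text that's likely to come next" can be illustrated with a toy bigram model. This is a deliberately simplified sketch, not how real LLMs work (they use transformer networks with billions of parameters), and all names here are made up for illustration:

```python
from collections import Counter, defaultdict


def train_bigram(text: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts


def predict_next(model: dict, word: str) -> str:
    """Return the statistically most likely next word, with no
    notion of meaning -- pure frequency counting."""
    followers = model.get(word.lower())
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]


corpus = (
    "the model predicts the next word "
    "the model does not understand the next word"
)
model = train_bigram(corpus)
print(predict_next(model, "next"))  # -> "word"
```

A real LLM does something analogous but over sub-word tokens and with a learned neural network instead of raw counts; the key point from the article stands either way: the model outputs whatever is statistically likely, without understanding it.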

2. Who are the key players in the LLM space?

LLMs, driven by Big Tech firms, are at the heart of this wave of AI. These firms are helped by their access to large amounts of unstructured data, their research and development capabilities, and their computational power. Google and Microsoft already use language models to improve the accuracy of their search engines and the natural language processing (NLP) cloud services they sell.

Meanwhile, OpenAI, as a research organization, is interested in advancing NLP and AI more broadly, while social networks like Meta (Instagram and Facebook) and Twitter use LLMs to detect risky behavior, spam, and harassment. New start-ups like Cohere and AI21 Labs are also gaining traction with innovative NLP-based solutions.


3. Has there been a significant increase in investment?

Microsoft’s investment in OpenAI, rumoured to be around $10 billion, has accelerated efforts from all big tech firms to make significant investments in GAI. Consequently, the venture capital community is now pouring billions into GAI and its promotion. Meanwhile, start-ups are looking at use cases to monetize and leverage the capabilities of powerful models like GPT. We expect to see an explosion of innovative start-ups driving this technology.

4. Where are the Generative AI ecosystems right now?


GAI has exploded in recent months, with multiple new start-ups emerging in several areas, from generating text to generating code. Start-ups are building on these technologies to automate parts of patent-writing, power the next generation of search engines, and create better experiences in virtual worlds and gaming. Increased availability of open source tools and APIs will mean an explosion of new start-ups entering the market over the next 18 months.

5. What are researchers doing right now?

LLMs are a very active field of research and will continue to dominate tech news in the coming months. In February 2023, Meta released LLaMA, a collection of open source, foundational LLMs. LLaMA is creating a lot of excitement because its smaller models reportedly outperform the much larger GPT-3.


Meanwhile, ‘Colossal-AI’ is a framework for replicating the training process of OpenAI’s popular ChatGPT application, optimized for speed and efficiency. 

6. What are the key limitations of generative models?

The publicity around LLMs is closely tied to the Dunning-Kruger effect and the idea of "illusory superiority”. This may explain why some people believe LLMs are more capable than they actually are. Additionally, the hype around LLMs may be driven, in part, by a lack of understanding of their limitations: LLMs don't understand what they’re processing or producing; they’re just statistical models predicting words, much like the predictive text on our mobile phones.

7. Are there ethical problems around bias and discrimination?

Language models are trained on vast amounts of text data from the internet, which may contain biased or discriminatory language. As a result, they can perpetuate and amplify existing biases and discriminatory attitudes. This poses a particular risk when such models are exposed directly to end-users in enterprise systems.

That said, one of the reasons ChatGPT was publicly released is that it embraces user feedback: the community can report bias and discrimination, and the model can then improve over time (ChatGPT and the new Bing apply pre- and post-processing to try to avoid some of these issues with bias).

8. Can it drive misinformation and fake news?

LLMs can be used to generate misleading or false information that appears to be credible. This can lead to the spread of misinformation and ‘fake news’, which can have serious consequences, such as impacting public opinion or influencing national elections.

9. Are there issues around privacy and security?

The training data used to develop language models often contains personal information, raising privacy concerns. In addition, because of their size, these models can’t (for now) be deployed on-premises, so all data sent to them currently goes to the cloud.

10. How might Generative AI develop in the future?

Currently, GAI is in its relative infancy, and it’s far from rivalling human intelligence. There are, though, some likely future research areas, including:

  • Models that can generate their own training data to improve themselves
  • Models that can fact-check themselves
  • Massive sparse expert models, where a model calls upon only the most relevant subset of its parameters to respond to a given query
  • Legal frameworks for these models, looking at questions around data privacy, bias, and the potential for misuse

Meanwhile, OpenAI’s latest milestone, GPT-4, a large multimodal model that can handle multiple types of input, such as text and images, brings us a step closer to general artificial intelligence.

Many questions remain but the potential for innovation is real

The global financial services industry is clearly interested in the potential of Generative AI. However, many questions remain over regulatory constraints, how much the technology will benefit firms, and how much behind-the-scenes change might be required. Watch this space for future updates!

  • By Hachem Ohlale, Synechron’s Global Head of Analytics, Data Practice, The Netherlands