Here are the top five predictions on what we should expect to see in AI in 2022.
Ethical/responsible AI will move beyond ‘fluffy policy’, and become embedded in tangible tools and actual law and regulations.
It’s been a fashionable trope in recent years for both businesses and governments to talk about using AI ethically and responsibly. Organizations have become well versed, and even better practiced, at telling us how important this is, yet actual steps to ensure ethical use have been much rarer. This year, that should change, and AI will move into the realm of solid regulation and law being put into practice. The EU, for example, has published a new proposal that it will look to move forward next year as a first step beyond rhetoric and into regulation.
This legislation will prohibit uses of AI that carry an unacceptable risk of harm, such as (with some narrow exceptions) the use of AI for biometric identification (such as facial recognition) in public spaces for law enforcement, the use of subliminal techniques to distort behavior in ways that cause personal or physical harm, and the use of AI for social scoring by public authorities (a situation where, for example, social media commentary could significantly affect your ability to get a job, vote, or obtain credit).
High-risk applications of AI will become closely regulated. The UK has also just published its National AI Strategy, and in August the Chinese government announced proposed regulation that in certain areas goes further than the EU’s. Similar developments in the USA will take things a step further still: we could see organizations having to prove not only that they comply with regulations on the ethical and responsible use of AI, but also that they are using it to benefit customers, providing the transparency and explainability required to reassure consumers that it is being used as a force for good.
Better operationalization of AI will make ‘evil’ apps a thing of the past.
As responsible AI becomes a greater priority for the majority of businesses, attention will turn to eliminating ‘evil’ apps: applications in which AI has been deliberately manipulated to produce nefarious outcomes, such as ransomware, software deliberately designed to block access to a computer system until a sum of money is paid.
Alternatively, other ‘evil’ apps arise from problems in the algorithm that lead to unintended consequences, such as automated decisioning systems that exhibit some form of undesired algorithmic bias against protected groups. Both kinds can be overcome by applying greater rigor and structure to how, and why, these apps are developed. By answering and documenting simple questions, the prevalence of ‘evil’ apps can be reduced.
These questions include: What are you building, and why? What is the app’s purpose? What information does it contain, and how does it work? Can you prove that algorithmic bias is within acceptable limits, and can automated decisions explain themselves? We could see full transparency emerge in the app development space, where a complete register of what developers are doing with AI becomes a prerequisite to producing anything.
2022 will be the year people finally go ‘all in’ on AI.
It’s fair to say that AI has had a tricky infancy, childhood and adolescence. There’s no doubt its role has changed considerably, from its initial introduction at the edge of the business in innovation labs, to the present day, when people are beginning to understand that it has the capacity to transform organizations from the center out. In recent years there has been caution about extending its use beyond basic functionality, and about how much it can be trusted, which has kept its use from becoming pervasive within businesses.
However, now that more and more organizations have dipped a toe into the water and had their eyes opened to the benefits it can provide, the technology is finally ready to reach maturity. A key reason is that end users are also maturing in their understanding of both how to get the best results from AI and the rights and wrongs of using it. Now that AI has been largely demystified, users have a far better grasp of how to apply it effectively and correctly, which means they are finally ready to adopt it on a wider basis and send its use into the mainstream.
AI is leaving the labs and transforming the business from the core. Enterprises’ lofty top-level goals of becoming more evidence-centric and data-driven will be translated into pervasive and ubiquitous automated decisions that drive and optimize every customer interaction and business process.
Rise of ‘The Intelligence of Everything’ will help AI become more ‘human’.
This year, we’ll see AI begin to become more well-rounded and go beyond the intellectual functions we mainly associate it with today, evolving toward emotional, creative and relational intelligence and other more abstract ‘human’ qualities. We are not just intellectual beings; it’s these other qualities that make us truly human.
By replicating and/or exploring the human condition through the AI lens, businesses will be better able to understand their customers’ emotional states, interact and bond more naturally and empathetically through chatbots and other channels, and provide a better overall service.
AI will become established as a business tool.
Traditionally, AI has been the preserve of a select few, many of whom hold expert qualifications and work in fields like data science, where the technology allows them to recognize patterns in large sets of data. Next year, this could change: the use of AI will move beyond data scientists and statisticians and into the hands of data-savvy business people, who will be able to use it to become more agile, understand their customers better and drive better outcomes.
There’s no doubt that AI will be in the spotlight in the coming year. What’s clear is that the key tenets of AI that aren’t going away are that people need to find it both trustworthy and relatable. Regulations around the ethical and responsible use of the technology will help with the former, allowing people to finally go ‘all in’ on AI, while the continued evolution of AI toward the human experience will make it even more relatable to you and me, and even more valuable to organizations, as a result.
Suman Reddy, MD and Country Head, Pega India.