Poisoned AI: When artificial intelligence turns rogue

Threat actors are also turning to artificial intelligence and machine learning to launch their cyberattacks


Artificial Intelligence (AI) could be one of the most disruptive technologies the world has seen in decades. Virtually every industry can benefit from AI applications, and its adoption rate reflects widespread confidence in its potential. Preventing ransomware, for instance, has become a priority for many organizations, and many are turning to AI as a defence mechanism. However, like any other technology, AI is a double-edged sword. Threat actors are also turning to AI and machine learning (ML) to launch their attacks. There is a problem that threatens to undermine the application of AI and allow adversaries to bypass the digital fortress undetected: Poisoned AI, also known as Data Poisoning.

What is Poisoned Artificial Intelligence or Data Poisoning?

Machine learning is a subset of artificial intelligence, and data poisoning targets the machine learning component. It is a form of manipulation that involves corrupting the data used to train models. Simply put, data poisoning tampers with training data to deliberately mislead machine learning algorithms.

How Does It Work?

Given reams of correctly labelled data, computers can be trained to categorize new information. For instance, a system might be fed thousands of images of animals, each correctly labelled by species and breed; even if it has never seen a particular picture of a dog, enough labelled examples should allow it to recognize one. Cybersecurity uses the same approach: accurate prediction requires a huge number of correctly labelled samples. Since even the biggest cybersecurity companies can collate only limited data on their own, they crowdsource samples. This increases the diversity of the sample set and the chances of detecting malware, but it also carries a risk: attackers who contribute to these pools can deliberately mislabel the data.
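The sketch below illustrates the underlying idea of supervised training on labelled samples. It uses synthetic feature vectors generated with scikit-learn as a stand-in for a real, crowdsourced malware dataset; the data, model choice, and labels are purely illustrative assumptions, not any specific vendor's pipeline.

```python
# Minimal sketch: training a classifier on correctly labelled samples.
# Synthetic data stands in for crowdsourced feature vectors
# (label 0 = benign, 1 = malicious) -- an illustrative assumption.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)  # learns patterns from correctly labelled examples

# With enough correctly labelled samples, the model generalizes to unseen ones.
print("Accuracy on unseen samples:", accuracy_score(y_test, model.predict(X_test)))
```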

Threat actors carefully craft malicious samples, label them as benign, and then slip them into a larger batch of training data. This tricks the AI/ML model into surmising that software resembling the bad examples is harmless. Such tampering with the data used to train machines provides a virtually untraceable way to circumvent AI-powered defences.
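To make the effect concrete, here is a hedged sketch of a label-flipping attack on the same kind of synthetic data: a fraction of malicious training samples is relabelled as benign, and the detection rate (recall on the malicious class) is compared against a model trained on clean labels. The poisoning fraction and dataset are invented for illustration.

```python
# Sketch of a label-flipping poisoning attack (synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Attacker relabels a third of the malicious training samples as benign:
# "bad samples labelled as good ones".
rng = np.random.default_rng(0)
malicious_idx = np.where(y_train == 1)[0]
flipped = rng.choice(malicious_idx, size=len(malicious_idx) // 3, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

# Compare how well each model still detects malicious samples.
for name, labels in [("clean", y_train), ("poisoned", y_poisoned)]:
    clf = RandomForestClassifier(random_state=0).fit(X_train, labels)
    rate = recall_score(y_test, clf.predict(X_test))
    print(f"{name} training data -> malware detection rate: {rate:.3f}")
```

Typically, the model trained on poisoned labels misses noticeably more malicious samples, which is exactly the blind spot an attacker is trying to create.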

How to Combat the Threat of Poisoned AI?

To stay safe, organizations need to keep their training data as clean as possible, which means regularly verifying that the labels fed into their models are accurate. Some cybersecurity experts have also suggested adding a second layer of AI and ML algorithms to pinpoint errors in the training data.
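One way to approximate that "second layer" idea is to use a model's own out-of-fold predictions to flag training samples whose labels look inconsistent, so humans can review them. The sketch below is an illustrative heuristic using scikit-learn on synthetic data, not a named product technique; the 0.9 confidence threshold and the number of simulated poisoned labels are assumptions.

```python
# Sketch: flag suspicious labels by comparing them against
# out-of-fold model predictions (confidence-based label auditing).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

# Simulate 50 poisoned (flipped) labels in the training set.
rng = np.random.default_rng(1)
noisy = rng.choice(len(y), size=50, replace=False)
y[noisy] ^= 1

# Out-of-fold probability that each sample is malicious.
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]

# Flag samples labelled benign but confidently predicted malicious,
# and vice versa, for manual label review.
suspect = ((y == 0) & (proba > 0.9)) | ((y == 1) & (proba < 0.1))
flagged = np.where(suspect)[0]
print("Samples flagged for label review:", flagged.size)
print("Flagged samples that were actually poisoned:",
      np.intersect1d(flagged, noisy).size)
```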

Further, when dealing with AI, sample size matters: more data generally means better models. However, a larger dataset is also harder to verify, so organizations may have to train their systems on fewer, thoroughly vetted samples to make sure all the data is clean.

Wrapping up

AI is one of the most important tools cybersecurity companies have for building their solutions. The global market for AI in cybersecurity is expected to triple to $35 billion by 2028. But AI is not omnipotent, and hackers are always looking for their next exploit. Organizations should therefore stay proactive in detecting such cyber risks.

The article has been written by Neelesh Kripalani, Chief Technology Officer, Clover Infotech
