
AI and Security: Wait, on whose side does it stand?

AI is set to disrupt the near future and is a technology that promises to stand for decades to come. Security is a concern; AI is the answer.

DQINDIA Online

There was an inventor named Hiram Stevens Maxim who caused some stir in military warfare, if only for a time, by giving colonial armies a weapon of novelty – the Maxim gun, supposedly the first recoil-operated machine gun in production.


But then, he also had a son, Hiram Percy Maxim, who invented something else – the first commercially successful silencer!

And that’s how it has always been. You think a technology breakthrough will flip the status quo in a big, seismic shake-up. But before that shake-up gathers speed, along comes another breakthrough – the antidote that takes the fizz out of the first one – just like that.

After ages of fighting cyber-criminals in catch-up mode, the security folks finally had something glossy, nimble-footed and sturdy up their sleeves. Artificial Intelligence (AI) armed them with everything they had been craving: acuity, lightning-fast insights and an almost Superman-like ability to react to red flags. AI was finally equipping them to handle the complexity, volume and real-time sensitivity of data that matter so much in fighting sly and smooth attackers.


It was also promising that long-sought-after ‘proactive’ edge in security. The world of zero-day exploits and software patches, too, could now undergo a big shift.

A proactive stance is a big strength that defenders are gaining thanks to AI’s capabilities. “Modern-day AI algorithms have a plethora of options for unsupervised learning, where you need not feed any labeled data to the system to make predictions. There can be an infinite number of abnormal behaviors and it is impossible to model them all. It is sufficient if we model the normal behavior for the system to throw warnings on anything that deviates from it. This brings in the much-needed proactive edge,” opines Ramprakash Ramamoorthy, Product Manager, Zoho Labs.
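To make that concrete, here is a minimal sketch of the ‘model only the normal, flag whatever deviates’ idea, assuming a Python stack with scikit-learn. The telemetry features and numbers are invented for illustration; this is not Zoho’s implementation.

```python
# Minimal sketch: learn "normal" behaviour from unlabeled telemetry,
# then warn on anything that deviates. All values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Unlabeled history: [failed logins/min, MB sent out, distinct ports touched]
normal_telemetry = np.column_stack([
    rng.poisson(2, 1000),      # a few failed logins a minute is routine
    rng.normal(5, 1.5, 1000),  # typical outbound volume
    rng.poisson(3, 1000),      # a handful of destination ports
])

# No labels anywhere: the model only learns the shape of "normal".
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_telemetry)

# Two new events: one routine, one deviant on every axis.
events = np.array([[1, 4.8, 2], [40, 60.0, 25]])
for event, verdict in zip(events, detector.predict(events)):
    print(event, "WARNING" if verdict == -1 else "ok")
```

Because nothing abnormal had to be enumerated up front, even a never-before-seen attack pattern can trip the warning – which is exactly the proactive edge being described.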

Ask Sanjay Katkar, Joint MD & CTO, Quick Heal Technologies, and there is no doubt that new-age technologies such as AI and machine learning have, indeed, been playing a critical role in the cybersecurity domain for quite some time now. Their importance will only increase when it comes to both defensive and offensive security operations, he points out. “AI’s massive computational power will be integral to ensuring rapid threat response and deploying a robust first line of defence against an increasingly complex threat landscape for all nodes within the IT network, including devices, users, endpoints, and data. These technologies will also help the security ecosystem automate, scale, and generalise threat detection, response, and remediation more effectively.” That, he explains, is exactly why AI is a strong force in security.


There is more to it. The security space is all about high-dimensional data, spells out Ramamoorthy as he explains why traditional techniques perform poorly once the ‘curse of dimensionality’ kicks in. “AI can simply erode data boundaries and perform much better on high-dimensional data. Most security use-cases are about finding a needle in a haystack, where 90 per cent of alerts from a security platform are generally harmless. It is the 10 per cent that could potentially be troublesome, and AI can help pick out those troublesome alerts and suggest contextual remediation techniques.”
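A rough way to picture that needle-in-a-haystack triage in code: compress the high-dimensional alerts into a low-dimensional model of ‘normal’ and escalate only the slice that fits it worst. The data and the 10 per cent cut-off below are purely illustrative.

```python
# Sketch: rank high-dimensional alerts by how poorly a low-dimensional
# model of "normal" reconstructs them; review only the worst slice.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
alerts = rng.normal(0, 1, size=(5000, 30))   # 30 features per alert
alerts[:50] += 6                             # a few genuinely troublesome ones

pca = PCA(n_components=5).fit(alerts)        # the "normal" subspace
reconstructed = pca.inverse_transform(pca.transform(alerts))
error = np.linalg.norm(alerts - reconstructed, axis=1)

cutoff = np.percentile(error, 90)            # surface the suspicious ~10 per cent
print(f"Escalating {(error >= cutoff).sum()} of {len(alerts)} alerts for review")
```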

So it sounds like a hammer, an axe and a gun – all rolled into one? No wonder, as per Zion Market Research, the global ‘AI in cyber security’ market was estimated at about $7.1 billion in 2018 and is set to reach $30.9 billion by 2025 – a compound annual growth rate (CAGR) of slightly above 23.4 per cent between 2019 and 2025. The report shows that AI penetration is accelerating rapidly across cyber security applications, and that AI’s ability to adapt to changes in detecting potential threats is proving the most beneficial for organisations. Security providers are increasingly using AI solutions to fight cyber-attacks because of the new strengths they find in early threat detection, faster response times, and prioritised segregation of alerts. Incidentally, the Asia Pacific AI-in-cyber-security market is slated to grow at the fastest rate.

To combat growing threat risks, cybersecurity companies are processing a massive number of security events to build context around security attacks, incidents, and breaches, says Katkar. “For instance, at Quick Heal Security Labs, we detected and blocked close to 300 million malware infections targeting consumers and businesses in Q2 2019. This huge data trove represents a unique opportunity to gain deep threat insight and derive actionable intelligence for SecOps teams. At the same time, it is also a classic ‘finding the needle in a haystack’ challenge. Processing such huge volumes of data to draw relevant security insights in real time is a challenge for human-only teams. Thankfully, tech-led interventions such as AI, machine learning, and security analytics are stepping in to help address this challenge.”


Katkar points out the biggest advantage that AI delivers – the automation of security processes. “The number of threats, both known and unknown, is increasing at a rapid pace. Security automation helps SecOps teams combat the scale and volume of potential threats and attacks by streamlining threat detection and response.”

Security systems need to identify incidents as fast as possible, Ramamoorthy reasons. “It would be pointless to detect a security breach only after it has happened. AI algorithms can be proactive, detect abnormality in the system, and also predict whether the abnormality would end up in a security breach over time, through probabilistic evaluation over past incidents.”

That sounds unprecedented – something cybersecurity teams had always looked for and, finally, have at the snap of a finger.


What is easy to forget here is that the same tool is also being fiddled with by people and minds on the other side of the fort. Cyberattackers are not exactly snoozing when it comes to leveraging AI to be swift, precise and ruthless. And that spells a new headache.

Not Atheist but Agnostic

If AI and automation are cutting down variability and cost, improving scale and limiting errors for protectors, the same fruit is ripe for the picking for attackers too.


AI’s use by cyber-attackers has proliferated to a level where even the World Economic Forum (WEF) is beginning to be concerned about it. In a Capgemini Research Institute report and its own analysis, the WEF has warned about the increasing sophistication of attackers. It appears AI-enabled technology can strongly enhance attackers’ ability to preserve both their anonymity and their distance from victims. That does not bode well for cyber-security folks.

Also, consider the unique edge that attackers have with AI – an asymmetry in goals. Defenders must have a 100 per cent success rate; attackers need to be successful only once.

The ever-expanding size and complexity of defenders’ technology and data estates is muddying the waters further. Attackers will continue to have more surfaces to explore and exploit.


The rise of ‘Adversarial AI’ is another pattern worth noting. Attackers are probing machine learning models and creating ‘adversarial examples’ that often resemble normal inputs but are meticulously optimised to break the model’s performance. A 2019 Accenture report on the subject shows how these changes eventually stack up, causing the model to become unstable and produce inaccurate predictions on what appear to be normal inputs.

It gets easy if an adversary can determine a particular behavior in a model that is unknown to its developers. That alone is enough to exploit the system or mount ‘poisoning attacks’ (where the machine learning model itself is manipulated). And with higher levels of complexity, machine learning models are losing their interpretability.

All these loose ends allow attackers to dictate a model’s response to a given input and nudge the system towards the outcomes they want. Yes, this can be done even to state-of-the-art modeling techniques like random forests and support vector machines. The high complexity of deep learning models makes them particularly susceptible to adversarial attack, contends the Accenture report.
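To see the mechanics in miniature, consider a toy, purely illustrative fast-gradient-style attack on a simple logistic classifier: the attacker nudges a flagged input along the model’s own weight vector until the verdict flips. Nothing here reflects any vendor’s real model.

```python
# Toy adversarial example: nudge a flagged input along the model's own
# gradient direction until the classifier waves it through. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (2000, 10))
y = (X.sum(axis=1) > 0).astype(int)            # synthetic "malicious" label

clf = LogisticRegression().fit(X, y)

x = np.full(10, 0.1)                           # an event just on the malicious side
print("before:", clf.predict(x.reshape(1, -1))[0])      # -> 1 (flagged)

# For logistic regression, the input gradient is proportional to the weight
# vector, so stepping against sign(w) is the classic fast-gradient move.
epsilon = 0.2                                  # small, hard-to-spot change
x_adv = x - epsilon * np.sign(clf.coef_[0])
print("after: ", clf.predict(x_adv.reshape(1, -1))[0])  # -> 0 (slips through)
```

The perturbed input still looks like a normal event feature by feature; only its position relative to the learned boundary has changed.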

Katkar points to another major industry-shaping trend: connected devices gaining a footprint in more homes and industries. “This, combined with the increase in the number of hard-to-detect threats, will shape the future of AI in cybersecurity for the next couple of years.” He also notes false positives as among the biggest challenges for AI adoption in cybersecurity. “Every day, more systems and processes critical to the day-to-day functioning of individuals and enterprises are digitised. In such a situation, even a single high-priority false positive is unacceptable. This is why it is important to augment AI-led security capabilities with other measures such as cloud technologies and reputation services to minimise the impact of false positives.”

Then there is the aspect of ‘rules’. As Ramamoorthy contends, the security industry is very rule-dominant. “By that I mean, we capture the essence of threats in the form of static rules. When your traditional security system gets an update from the vendor, it is likely that the vendor is pushing an updated set of rules. Now it can be effortless for an attacker to fly under the radar by slightly deviating from the rules.”

Example: you could have a rule that alerts you when there are more than 10 failed logins per minute. An attacker can slip through by making nine attempts a minute.
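In code, that rule and its blind spot are almost embarrassingly small (the threshold and window come straight from the example):

```python
# The static rule from the example: alert on more than 10 failed logins
# in a minute. A patient attacker pacing at 9 per minute never trips it.
FAILED_LOGIN_THRESHOLD = 10  # per minute

def static_rule_alert(failed_logins_this_minute: int) -> bool:
    return failed_logins_this_minute > FAILED_LOGIN_THRESHOLD

print(static_rule_alert(11))  # True  - a noisy brute force gets caught
print(static_rule_alert(9))   # False - the same attack, slowed down, sails past
```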

In other words, AI is turning out to be an easy weapon in the hands of attackers. A butter-knife, so to speak.

Attackers fooling AI: What next?

There is a lot the defence side can start to shore up.

Even as machine learning models continue to be stubborn and elusive ‘black boxes’, all this complexity and unexplainable model behavior can create new advantages when tapped right.

Ramamoorthy avers: “Most AI is a black box, and unless the attacker has a clear picture of how the AI engine works internally, it would be challenging to perform an adversarial attack. This is where explainable AI can kick in, though. Even though AI is automating the decisions, the human in the loop takes into account the AI engine’s prediction and does the remediation.”

If the AI system could explain why it made a decision, the human could assess the chances of a possible adversarial attack on the model. Since this is an evolving technology, we should watch this space to see how adversarial attacks evolve, he suggests.
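A rudimentary taste of what such an explanation can look like, for a simple linear model: break a flagged verdict into per-feature contributions so an analyst can judge whether the dominant signal looks organic or engineered. Feature names and data here are invented for illustration.

```python
# Sketch: explain a linear model's verdict by showing which features
# pushed the score toward "malicious". Names and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
features = ["failed_logins", "bytes_out", "new_ports", "odd_hours", "geo_jump"]
X = rng.normal(0, 1, (1000, len(features)))
y = (X[:, 0] + X[:, 4] > 1).astype(int)     # logins and geo-jumps drive the label

clf = LogisticRegression().fit(X, y)

event = X[y == 1][0]                         # one flagged event
contribution = clf.coef_[0] * event          # per-feature pull on the logit
for name, c in sorted(zip(features, contribution), key=lambda t: -abs(t[1])):
    print(f"{name:14s} {c:+.2f}")
```

An analyst who sees, say, geo_jump dominating the verdict can sanity-check whether that signal is genuine or an attacker’s carefully crafted nudge.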

There are other ways too – like rate-limiting how individuals can submit inputs to a system, which, per the WEF’s suggestions, can deter adversarial attackers.
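A bare-bones sketch of that idea, assuming a simple per-client token bucket in front of whatever endpoint accepts the inputs (all limits are illustrative):

```python
# Sketch: per-client token bucket limiting how fast anyone can submit
# inputs, e.g. queries used to probe a model. Limits are illustrative.
import time
from collections import defaultdict

RATE = 5    # tokens refilled per second
BURST = 10  # bucket capacity

_buckets = defaultdict(lambda: {"tokens": float(BURST), "last": time.monotonic()})

def allow_request(client_id: str) -> bool:
    bucket = _buckets[client_id]
    now = time.monotonic()
    # Refill in proportion to elapsed time, capped at the burst size.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1:
        bucket["tokens"] -= 1
        return True
    return False

# A client hammering the endpoint is cut off once the burst is spent.
print(sum(allow_request("prober") for _ in range(100)), "of 100 requests allowed")
```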

A laser-sharp focus on targeted modifications could help break an adversary’s ability to fool a model. A lot could lie latent in the way we structure our machine learning models; that could be a source of some potent natural resistance. Inserting enough adversarial examples into the data during the training phase can also help a machine learning algorithm learn how to interpret them, as the sketch below illustrates.
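Continuing the earlier toy classifier, that training-phase idea can be as simple as generating gradient-sign perturbations of the training set, keeping the true labels, and refitting on the augmented data. Again, a hedged sketch, not a production recipe.

```python
# Sketch of adversarial training: augment the training set with
# gradient-sign perturbations that keep their original labels, then refit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (2000, 10))
y = (X.sum(axis=1) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# Push every sample toward the opposite class, the way an attacker would.
epsilon = 0.2
toward_other_class = np.where(y[:, None] == 1, 1.0, -1.0)
X_adv = X - epsilon * np.sign(clf.coef_[0]) * toward_other_class

X_aug = np.vstack([X, X_adv])            # originals plus adversarial copies
y_aug = np.concatenate([y, y])           # labels stay truthful
hardened = LogisticRegression().fit(X_aug, y_aug)
print("hardened model fit on", len(X_aug), "samples")
```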

A lot can change on the broader strategy side as well. It is time to create defensible ‘choke points’ instead of spreading efforts equally across the entire environment, the WEF insists. Do not just worry about the infrastructure that underpins AI models; strengthen the models too.

Recall the earlier alert-after-ten-attempts example. “Your AI-based security system understands the trend and seasonal shapes of your data and dynamically sets thresholds. For example, 10 failed logins at 9AM on a Monday morning could be normal, but the same at 3AM on a Saturday is a security issue,” illustrates Ramamoorthy.
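A stripped-down version of that dynamic-threshold idea: learn a per-weekday-per-hour baseline from history and alert on large deviations from it, rather than on one global number. The traffic patterns below are synthetic.

```python
# Sketch: per-(weekday, hour) baselines instead of one static threshold.
# Ten failed logins at 9AM Monday may be routine; at 3AM Saturday, not.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic history of failed logins per minute, keyed by (weekday, hour).
history = {}
for weekday in range(7):
    for hour in range(24):
        busy = weekday < 5 and 8 <= hour <= 18   # weekday office hours
        history[(weekday, hour)] = rng.poisson(8 if busy else 0.2, 500)

def dynamic_threshold(weekday: int, hour: int) -> float:
    sample = history[(weekday, hour)]
    return sample.mean() + 3 * sample.std()       # mean + 3 sigma baseline

def is_alert(failed_logins: int, weekday: int, hour: int) -> bool:
    return failed_logins > dynamic_threshold(weekday, hour)

print(is_alert(10, weekday=0, hour=9))  # Monday 9AM   -> False (normal load)
print(is_alert(10, weekday=5, hour=3))  # Saturday 3AM -> True (way off baseline)
```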


By Ramprakash Ramamoorthy, Member Leadership Staff - Machine Learning & Analytics, Zoho Labs, and Sanjay Katkar, Co-Founder & CTO, Quick Heal Technologies