Right from Yudhishthir claiming Ashwathama’s death in the Mahabharata to Edward Snowden becoming a whistleblower, insider threats are not new to the world. For instance, unbeknownst to his employer, ex-Cisco employee Sudhish Ramesh used his own Google Cloud account to run malicious code that deleted 456 virtual machines supporting Cisco’s WebEx application. WebEx provided video conferencing and collaboration tools to customers across the globe, and as a result of the attack approximately 16,000 WebEx users couldn’t access their accounts for two weeks – costing Cisco $2.4 million in restitution and employee downtime.
Needless to say, the cybersecurity products and solutions available have grown exponentially in sophistication since the times of the ancient war of the Mahabharata. Artificial intelligence and machine learning have made it possible to streamline information and evaluate it in milliseconds. Today, with the evolution of AI/ML and quantum computing, a brute-force attack can test 350 billion password guesses in a second!
How can AI complete the puzzle?
Even with technology this advanced, according to the 2020 Ponemon Institute Cost of Insider Threats report, it takes an average of approximately 77 days to contain an insider threat, at an average annual cost of ~$11.45 million! Currently, ~90% of all data breaches have a human element. Despite such far-reaching consequences, the human aspect of cybersecurity has been looked at only in silos and assessed in a piecemeal fashion rather than as pieces of one big jigsaw puzzle. Input signals from identified variables can be fed into a machine learning model that predicts insider threats. The variables can be broadly categorized as:
- Profile of the employee: previous employment with a competitor, criminal record, employment status (new hire or serving a notice period)
- Their cybersecurity awareness: courses, seminars or lectures attended to update and refresh cyber-awareness
- Personal and official devices they use: mobile phones, laptops, tablets and smart gear used to access business-critical information
- Their normal (and abnormal) behavioral patterns: usual login and logout times, pattern of checking work emails, access location (home/public cafe)
- Previously breached personal information: what data of the employee is already exposed on the deep and dark web through publicly known data breaches
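To make the idea concrete, the five signal categories above could be encoded as a feature vector and combined into a single risk estimate. The sketch below is a minimal illustration, not the platform's actual model: the feature names and weights are hypothetical and hand-picked; a real system would learn them from labelled incident data.

```python
# Minimal sketch: encoding the five people-security signal categories
# into a feature vector and producing an illustrative risk score.
# All feature names and weights are hypothetical, for demonstration only.

def risk_score(signals: dict) -> float:
    """Combine normalized signals (each 0.0-1.0) into a 0-100 score."""
    # Hypothetical weights per signal category; a real model would
    # learn these from labelled insider-threat incidents.
    weights = {
        "profile_risk": 0.25,        # e.g. ex-competitor, serving notice
        "awareness_gap": 0.15,       # stale or missing security training
        "device_exposure": 0.20,     # unmanaged personal devices in use
        "behavior_anomaly": 0.30,    # deviation from login/location baseline
        "breached_data": 0.10,       # credentials found in known breaches
    }
    score = sum(weights[k] * signals.get(k, 0.0) for k in weights)
    return round(100 * score, 1)

employee = {
    "profile_risk": 0.8,      # serving a notice period
    "awareness_gap": 0.3,
    "device_exposure": 0.5,
    "behavior_anomaly": 0.9,  # e.g. 3 a.m. logins from a new location
    "breached_data": 0.6,
}
print(risk_score(employee))  # higher score = higher estimated risk
```

The key point is not the arithmetic but the fusion: each category contributes one slice of context, so a single anomalous login matters far more when the profile and breach-exposure signals are also elevated.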
The hurdle with using different services such as UEBA, IAM and PAM, each targeting a different issue, is that security teams still have to collate and correlate data across dashboards and convert the jargon into addressable actions. For instance, User & Entity Behavior Analytics (UEBA) monitors an employee's login patterns but fails to correlate them with the employee's profile to explain suspicious behavior. With almost 28% of SOC alerts going unaddressed due to the sheer daily volume, the solution is a channelized approach where human intervention is minimized and notifications are automated. This could take the form of a voice call initiated as soon as unlikely (superhuman) behaviour is noticed. Such a cybersecurity strategy demands 24x7x365 monitoring of employees.

This is where measuring the risk stemming from people across departments within an organization comes in handy. The measured risk can be expressed as a numerical score per employee, aggregated into a score per department, per outlet and ultimately for the entire organization. It is a direct reflection of the cumulative analyses from each service the business already leverages, bringing all the data to a single screen as one simplified score.
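The roll-up just described – per-employee scores aggregated to department and organization level – is straightforward to sketch. The names, departments and scores below are made up for illustration:

```python
# Sketch: aggregating per-employee risk scores up to department and
# organization level. All names, departments and scores are illustrative.
from collections import defaultdict
from statistics import mean

employee_scores = {
    "alice": ("engineering", 72.0),
    "bob":   ("engineering", 35.0),
    "carol": ("finance",     58.0),
    "dave":  ("finance",     41.0),
}

dept_scores = defaultdict(list)
for name, (dept, score) in employee_scores.items():
    dept_scores[dept].append(score)

# One number per department, and one for the whole organization.
per_department = {d: round(mean(s), 1) for d, s in dept_scores.items()}
org_score = round(mean(s for _, (_, s) in employee_scores.items()), 1)

print(per_department)  # department-level view for the security team
print(org_score)       # single-screen score for the organization
```

A simple mean is used here for clarity; in practice the aggregation might weight departments by their access to critical assets.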
Often, security teams mistake AI- and ML-based analytics for yet more of the IT jargon that surrounds cybersecurity when, in fact, the opposite is true. Leveraging an AI-enabled supervised machine learning platform, signals across the aforementioned five pillars of people security can be fed in real time to estimate the probability of event A (a breach) given event B (an insider threat). Such platforms build a Bayesian network of risk: as the number of input signals increases, the precision of the output improves. To make it simpler, consider a tree with a single branch pitted against one with a hundred branches – your chances of getting drenched in the rain are lower if you stand under the one with more branches.
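The Bayesian update behind this is just Bayes' theorem: the probability of a breach given an observed insider-threat signal. The prior and likelihood values below are invented purely for illustration, but they show why even a reliable signal needs context when breaches are rare:

```python
# Bayes' theorem applied to an insider-threat signal:
#   P(breach | signal) = P(signal | breach) * P(breach) / P(signal)
# All probabilities below are hypothetical, chosen for illustration.

p_breach = 0.02              # prior: base rate of a breach
p_signal_given_breach = 0.9  # likelihood: signal fires when a breach occurs
p_signal_given_safe = 0.05   # false-positive rate of the signal

# Total probability of observing the signal (law of total probability).
p_signal = (p_signal_given_breach * p_breach
            + p_signal_given_safe * (1 - p_breach))

p_breach_given_signal = p_signal_given_breach * p_breach / p_signal
print(round(p_breach_given_signal, 3))
```

With these numbers a single firing signal raises the breach probability from 2% to roughly 27% – suspicious, but far from certain. Feeding in additional independent signals (the "hundred branches" of the analogy) repeats this update and sharpens the estimate, which is exactly why more input signals improve the precision of the output.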
Deep tech is here, and is being used unabashedly by those on the dark side. Humans, with the countless variables surrounding behaviour and motive, are the perfect concoction to test the limits (and limitlessness) of AI. Ultimately, to keep your business cyber-safe from your own employees, what will you choose – the red pill or the blue?
The author is Saket Bajoria, VP, Product Management and Customer Success, Safe Security.