Technology is front and center of business today. When COVID-19 accelerated technology adoption, organizations prioritized speed, convenience, and efficiency over a security-first culture. Despite cybersecurity strategies becoming more advanced, business leaders and security teams are unable to demonstrate tangible success against cybercriminals, who continue to exploit both basic and sophisticated vulnerabilities. A Deloitte report suggests that before the pandemic, about 20% of cyberattacks used previously unseen malware or methods, but the proportion rose to 35% during the pandemic.
Every organization, regardless of its cybersecurity maturity, should be assessing its cyber risk posture across the enterprise. At any given point, security and risk management leaders should be able to answer:
- Is the organization secure today?
- If yes, how secure?
The need for technology risk assessment
Businesses base critical cybersecurity decisions on abstractions drawn from subjective sources. A security and risk management leader's answer to these questions is shaped by factors such as the team's capabilities, the cybersecurity budget, and the number of security products deployed. Moreover, the traditional approach to cybersecurity has been a point-in-time, cross-sectional evaluation of sample sets or feedback to gauge an organization's cyber risk posture. That is no longer sufficient.
Businesses lack dynamic, real-time, consistent visibility into their constantly evolving, organization-wide hybrid technology stack. They need a platform that quantifies risk consistently across their technology verticals to ensure an "apples to apples" comparison, identifying the places that need immediate attention.
How is your tech-stack risk measured?
Risk assessment in cybersecurity draws on Bayesian statistics: the probability of a breach within a year, given the security gaps in an asset, is estimated and then continuously updated as new information is fed into the scoring model. The accuracy (confidence) of the score grows with the number of input signals. The method is not new; other industries have long used it to lower risk. For instance, an insurance agency covering a logistics and transportation organization's vehicle fleet used AI/ML-enabled predictive models to estimate the likelihood of claims, drawing on data such as each driver's health and vision, road conditions, the prevalence of accidents on the routes used, and previous claims attributed to a particular driver.
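To make the Bayesian idea concrete, here is a minimal sketch of how a breach probability could be updated as signals arrive. The Beta prior, the control counts, and the function name are all hypothetical illustrations of the general technique, not the vendor's actual model.

```python
# Illustrative Bayesian update of a breach probability using a Beta prior.
# The prior encodes an industry baseline; each observed signal (a failed
# or passed control check) shifts the posterior estimate.

def update_breach_probability(prior_alpha: float, prior_beta: float,
                              failed_controls: int, passed_controls: int) -> float:
    """Return the posterior mean breach probability after observing signals."""
    alpha = prior_alpha + failed_controls   # evidence pointing toward a breach
    beta = prior_beta + passed_controls     # evidence pointing away from a breach
    return alpha / (alpha + beta)           # mean of Beta(alpha, beta)

# Hypothetical numbers: a weak ~25% baseline encoded as Beta(1, 3),
# then 2 failed and 14 passed control checks observed for an asset.
p = update_breach_probability(1.0, 3.0, failed_controls=2, passed_controls=14)
```

Note how the estimate sharpens as signals accumulate: with few inputs the prior dominates, while many inputs make the data dominate, which matches the claim that confidence grows with the number of signals.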
For an organization's technology stack, risk quantification platforms integrate with the existing enterprise stack to provide a real-time cyber risk assessment in the form of a breach-likelihood score. At a macro level, the score represents risk across the organization's entire technology stack. Every IP address in the organization's IT network is scanned daily, and the resulting data is processed through the scoring engine to generate a score per IP address, which also enables micro-level scoring for individual assets. Macro, group-wise scores cover business units, operational units, technology verticals, locations, and outside-in organization-wide analyses; micro, asset-wise scores cover each IP address on the network, each ARN on the cloud, and each application.
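The roll-up from per-asset scores to group-wise scores can be sketched as follows. The IP addresses, group names, and the use of a plain mean are assumptions for illustration; the platform's actual weighting scheme is not public.

```python
from statistics import mean

# Hypothetical per-asset breach-likelihood scores, keyed by IP address.
asset_scores = {
    "10.0.0.1": 4.2,
    "10.0.0.2": 3.1,
    "10.0.0.3": 4.8,
}

# Hypothetical grouping of assets into business units.
groups = {
    "finance": ["10.0.0.1", "10.0.0.2"],
    "engineering": ["10.0.0.3"],
}

# Roll each group's asset scores up into a group-level score.
group_scores = {name: round(mean(asset_scores[ip] for ip in ips), 2)
                for name, ips in groups.items()}
```

The same aggregation could run over any other dimension the text mentions (location, technology vertical, ARN on the cloud), since only the grouping key changes.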
Metadata captures the factors that provide initial information about breach probability before a deep dive into control configurations. Metadata inputs include geography, industry type and size, vertical, and the BCIA requirement (business criticality plus confidentiality, integrity, and availability). The relevant inputs and controls feed the model to estimate the likelihood of a breach for a particular asset or group of assets.
The initial breach probability is the first input into the model. Based on industry research and specific to the asset scoring model, it is the market-prevalent probability that an organization will face at least one data breach in a year due to a weakness in its technology, an existing misconfiguration, or a vulnerability.
The final score is inversely related to the likelihood of a breach: the lower the breach likelihood, the higher the score, and vice versa.
Understanding the output breach-likelihood score
The score never reaches the maximum of 5, because that would translate to a 0% breach probability, meaning no chance of a breach at all. A 0% or 100% chance is not a probability but a certainty, which a statistical model cannot claim. An asset score between 4 and 5 corresponds to a breach probability below 20%; most customers with sound cybersecurity investments should fall in this range. A score between 2 and 4 indicates several misconfigured Configuration Assessment (CA) controls that need to be remediated and open vulnerabilities that need to be patched. Finally, a score below 2 means the overall inputs need to be re-evaluated to improve the score.
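One simple mapping consistent with the bands described above is a linear one, where a 0% probability maps to the unattainable score of 5 and a probability below 20% maps to a score above 4. The vendor's actual scoring function is proprietary; this is only a sketch of the stated inverse relationship.

```python
def breach_probability_to_score(p: float) -> float:
    """Map an annual breach probability (0..1) onto a 0-5 score.

    Linear for illustration: score 5 <=> 0% probability (never attained
    in practice), score 4 <=> 20%, score 2 <=> 60%, score 0 <=> 100%.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return 5.0 * (1.0 - p)

# A probability just under 20% lands just above the 4-5 band's floor.
assert breach_probability_to_score(0.19) > 4.0
```

Any monotonically decreasing curve would preserve the "lower likelihood, higher score" property; the linear form is chosen only because it reproduces the published band boundaries exactly.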
How does it help?
Most cybersecurity challenges arise from not knowing where to begin. A breach-likelihood score provides that knowledge in a real-time, simple-to-understand, and enterprise-wide manner, putting employees, the security team, and the board on the same page about cybersecurity. Ultimately, measuring cyber risk helps with prioritization, mitigation and management, cybersecurity budgeting, investments in cyber insurance, and defining the enterprise's overall cyber risk acceptance and tolerance.
The author is Saket Bajoria, Chief Product Officer, Safe Security