Unless an unforeseen breakthrough occurs, we are likely decades away from artificial general intelligence. Now, this is not necessarily a bad thing, as AI in its current state effectively serves to assist—rather than replace—our work lives. It is true that there has been some job displacement, especially in the retail sector; however, for the most part, this fear has been overblown. Moreover, it’s not as if we always want our AI models making decisions autonomously.
It is vital that humans are kept in the loop
For algorithmic decision-making to work well, human oversight is essential. After all, no AI model is correct 100 per cent of the time. Humans must periodically audit the model, assess the integrity of its data, and be able to explain why the model made a given decision. Moreover, as the past year revealed, humans must also intervene when an unforeseen event strikes. Due to the pandemic, many AI models had to be adjusted to account for the sudden shift from office work to remote work.
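A human-in-the-loop setup like the one described above can be sketched in a few lines. This is a hypothetical illustration: the threshold, function names, and drift check are all assumptions, not a production design.

```python
import statistics

# Illustrative sketch: route low-confidence predictions to a human reviewer,
# and flag input drift against a reference window (e.g., pre-pandemic data).
# All names and thresholds here are assumptions for demonstration only.

CONFIDENCE_THRESHOLD = 0.8

def route_prediction(label, confidence):
    """Accept the model's answer automatically only when it is confident."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", label)
    return ("human_review", label)

def drift_detected(reference, recent, tolerance=0.5):
    """Crude drift check: compare a feature's mean across two windows."""
    return abs(statistics.mean(recent) - statistics.mean(reference)) > tolerance

# Example: hours-in-office feature shifts sharply when work goes remote.
print(route_prediction("approve", 0.92))            # ('auto', 'approve')
print(route_prediction("approve", 0.55))            # ('human_review', 'approve')
print(drift_detected([8, 9, 8, 9], [2, 3, 2, 3]))   # True
```

A real deployment would use a proper statistical test for drift, but the routing idea is the same: the model decides only when it is confident, and a human audits the rest.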
AI models are bolstered by causation and counterfactuals
To date, most AI models make decisions based solely on correlation. For example, a bank’s AI model may reject a loan application due to a low credit score. A better model, however, would incorporate causal techniques and also enable counterfactual inference over the loan applicant’s data points.
Causation helps the end user understand exactly why the loan was rejected, while counterfactual inference shows which changes would get the application approved. By bringing in these variables, the model better explains why it is making its decisions. Equally important, causation and counterfactuals help to counter biases baked into the datasets.
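The counterfactual idea can be made concrete with a toy example. The scoring rule, feature names, and thresholds below are invented for illustration; a real bank model would be far more complex, but the search for the smallest change that flips the decision works the same way.

```python
# Hypothetical sketch of counterfactual inference over a loan decision.
# The decision rule and all numbers are illustrative assumptions.

def approve(applicant):
    """Toy decision rule: approve when a weighted score clears a threshold."""
    score = (0.5 * applicant["credit_score"] / 850
             + 0.5 * min(applicant["income"] / 100_000, 1.0))
    return score >= 0.6

def counterfactual_credit_score(applicant, step=10, max_score=850):
    """Find the smallest credit-score increase that flips the decision."""
    candidate = dict(applicant)  # never mutate the applicant's real record
    while candidate["credit_score"] <= max_score:
        if approve(candidate):
            return candidate["credit_score"]
        candidate["credit_score"] += step
    return None  # this feature alone cannot flip the decision

applicant = {"credit_score": 580, "income": 40_000}
print(approve(applicant))                      # False: rejected
print(counterfactual_credit_score(applicant))  # 680: score needed for approval
```

The counterfactual answer ("your application would have been approved with a credit score of 680") is exactly the actionable explanation the article argues correlation-only models cannot give.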
Data privacy regulators are beginning to explicitly incorporate artificial intelligence
When the GDPR was written, there was not a single mention of artificial intelligence. By contrast, the California Privacy Rights Act (CPRA), a recent expansion of the CCPA, does make reference to artificial intelligence. Under the CPRA, consumers can opt out of profiling by automated decision-making technologies. Additionally, consumers can request information about exactly how and why these ML models are using their data. This is an important step forward, as AI models’ decision-making behaviour should be transparent, and users’ data should not be misused. To this end, it will be increasingly important that legislators continue to regulate AI models.
Data poisoning and AI-fueled adversarial attacks are a concern
It’s important to remember that it is not only IT security personnel who are using AI to bolster their processes. Bad actors also have access to such technologies, and they weaponise AI wherever possible. One particularly important concern is the increase in AI-based adversarial attacks and data poisoning.
If bad actors can manipulate data within an artificial intelligence model without a human’s knowledge, they can cause a great deal of trouble. To remedy this, one must frequently audit models, check the veracity of the data, and consider the use of blockchain technologies. Such technologies can help to confirm the authenticity and source of the data. This way, if the data is being used inappropriately, companies and regulators can track down the culprits.
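The blockchain-style integrity check mentioned above rests on a simple primitive: chaining record hashes so that tampering with any one record changes every hash after it. The sketch below is a minimal illustration of that idea, not a real ledger; the record fields are assumptions.

```python
import hashlib
import json

# Illustrative sketch: chain record hashes, ledger-style, so that any
# tampering with a training record is detectable. Record fields are
# hypothetical; a real audit trail would also record provenance and time.

def chain_hashes(records):
    """Hash each record together with the previous hash."""
    prev = "0" * 64
    chain = []
    for record in records:
        payload = prev + json.dumps(record, sort_keys=True)
        prev = hashlib.sha256(payload.encode()).hexdigest()
        chain.append(prev)
    return chain

records = [{"id": 1, "label": "approve"}, {"id": 2, "label": "reject"}]
original = chain_hashes(records)

records[0]["label"] = "reject"             # a poisoned training record
tampered = chain_hashes(records)
print(tampered != original)                # True: the chain exposes the edit
```

Because each hash depends on everything before it, an auditor who keeps only the final hash can detect that *some* record was altered, which is the property that makes such audit trails useful against data poisoning.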
Despite the threat of AI-based attacks, and the frequently overblown fear of technological job displacement, artificial intelligence will continue to make our lives easier in 2021. Artificial intelligence technologies serve to augment, not replace, our work lives. As we continue to fine-tune our AI models, we’ll automate mundane tasks and save time; however, we also need to keep humans in the loop, ensuring we don’t stray into unwanted territory.
- Ramprakash Ramamoorthy, Product Manager, ManageEngine Labs