Eternal Sunshine of the Spotless Model – Hello, Machine Unlearning!

There’s so much doubt surrounding the fascinating creature called ‘Machine Learning’ – the black box problem, unchecked tumors of bias, and the dangers of rogue AI lurking around. What if we could just switch it all off and reboot? Can we try selective amnesia here? With Machine Unlearning?

Just because we ‘can’ does not mean we ‘should’. That is often handy advice in the worlds of politics and technology. But the opposite can be just as useful: because we ‘should’, we should strive until we ‘can’.

In the classic story of Frankenstein and his man-made monster, one finds many pages where Henry could have helped, where Victor could have stopped his blunder, where William’s murder could have been avoided, and where the creature could have been spared its loneliness and confusion.


As we move deeper, and further, into the still-unfolding story of Machine Learning, we have to remember these if-only moments. It may be too early, or too cynical, or both, to think of carelessly-made AI creatures as Frankenstein’s monsters; but we cannot close our eyes to the horrors that would transpire if the story repeats itself.

Perhaps Machine Unlearning can be the Henry here who can help a lot of Victors. If that’s even a possibility, the story can be wrapped up in three simple questions.

Is Machine Unlearning plausible?

Do we need it?

How soon can we get there?

Let’s begin.

Is Machine Unlearning plausible?

As generative AI, LLMs, and data infrastructure mature, more and more companies are automating their decision-making pipelines with AI, observes Rahul Mahajan, CTO, Nagarro. “However, along with the benefits of AI, there are also risks to consider, such as model safety, data privacy, bias monitoring, and ethics. Additionally, the cost of training foundational models is high, and making incremental modifications to these models can be difficult.”

(Image: Dattaraj Rao, Chief Data Scientist, Persistent Systems)

That’s where Machine Unlearning jumps in. As Mahajan explains, machine unlearning is a process that removes the influence of a subset of training examples from a trained machine-learning model.
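
At its simplest, that definition admits an ‘exact’ but expensive realisation: drop the offending records and retrain from scratch. Here is a minimal Python sketch of that baseline – the dataset, model, and forget set below are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                      # toy training data
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)              # original model

forget_idx = np.arange(50)                          # records to be forgotten
keep = np.setdiff1d(np.arange(len(X)), forget_idx)

# Retraining on the remaining data provably removes the forgotten
# records' influence - but at full training cost, every single time.
unlearned_model = LogisticRegression().fit(X[keep], y[keep])
```

Everything that follows in this piece is, in one way or another, about doing better than this brute-force baseline.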

Dattaraj Rao, Chief Data Scientist, Persistent Systems, also feels that way. “Machine unlearning, which is selectively forgetting certain data points from a trained ML model, is viable and an area of interest.”

But how practical is it to execute – especially across unsupervised vs. supervised learning models, models with labelled data, and neural networks with hidden nodes? To add to that, there is the stubborn ‘black box’ nature of AI. How do we reverse something when we cannot have a good look at its circuitry?


The ‘unlearnability’ of the model will grow as the model gets more complex - very much like ‘explainability’.

Dattaraj Rao, Chief Data Scientist, Persistent Systems

Rao opines that Machine unlearning is possible to implement. “Its effectiveness will depend on the kind of model you opt for. In my view, the ‘unlearnability’ of the model will grow as the model gets more complex - very much like ‘explainability’. With neural networks with hidden nodes, as systems become less explainable, it becomes challenging to make them unlearn facts. The hidden nodes store complex patterns in data, and once trained, may be very difficult to unlearn simply due to network dependencies.”

The effort to execute machine unlearning may vary and depends on the deep-learning architecture, explains Mahajan. But, as he assures, machine unlearning is surely possible. “Supervised learning makes unlearning tough, but it is not the biggest challenge. Re-training, while an option, is in some ways a brute-force and ineffective solution. Recent research – like recurrent neural networks with continuous-time hidden states determined by Ordinary Differential Equations (ODEs) – has shown that incremental correction is possible.”
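
To give a flavour of what ‘incremental correction’ can look like, one common recipe in the approximate-unlearning literature is a few gradient-ascent steps on the forget set, nudging the model away from what it memorised. The PyTorch sketch below is illustrative only – it is not the ODE-based work Mahajan cites, and the model and forget set are hypothetical:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

X_forget = torch.randn(50, 5)            # hypothetical forget set
y_forget = torch.randint(0, 2, (50,))

for _ in range(10):
    opt.zero_grad()
    # Gradient *ascent* on the forget set: minimising the negative
    # loss erodes what the network memorised about these examples.
    loss = -loss_fn(model(X_forget), y_forget)
    loss.backward()
    opt.step()
```

In practice such steps are interleaved with fine-tuning on retained data, so that overall accuracy does not collapse along with the forgotten facts.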


Unlearning can get even more tricky in some specific pockets of this technology.

(Image: Rahul Mahajan, CTO, Nagarro)

Machine Unlearning may play out differently in CNN (Convolutional Neural Network) vs. RNN (Recurrent Neural Network) models. CNNs usually build on multilayer perceptrons and entail one or more convolutional layers. They are heavy on convolution and pooling, and are mostly used for image and vision applications; they also generally work in one direction, in feed-forward set-ups. RNNs tend to be more complex. They save the output of their processing nodes and feed the result back into the model, so information can flow in both directions. They are heavy on time-series work, keep a memory of what has been calculated ‘so far’, and are mostly used for text applications.

Ask Mahajan, and he points out: “CNN models are typically more computationally efficient than RNN models, so machine unlearning in CNN models may be less computationally expensive. RNN models are typically more expressive than CNN models, so machine unlearning in RNN models may be more effective at removing the influence of harmful or outdated data.”

Then, there are scenarios like NLP (Natural Language Processing) and Image classification. NLP tasks typically involve large amounts of data, so machine unlearning in NLP tasks may be more computationally expensive, weighs in Mahajan. “Image classification tasks typically involve smaller amounts of data, so machine unlearning in image classification tasks may be less compute-intensive.”

It’s not easy though. Rao adds that implementing this is technically challenging, and even more so, verifying that the information has been forgotten. “I recommend enterprises with mature ML and MLOps processes venture into this space.”

Machine Unlearning is not a trivial task.

Rahul Mahajan, CTO, Nagarro

Machine unlearning is not a trivial task, echoes Mahajan. “It requires certain provisions in the ML architecture, including concepts of transfer learning and neural networks that can be dynamically reconfigured. However, the potential benefits of machine unlearning make it a worthwhile investment for companies that are serious about using AI to make better decisions.”
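
One concrete example of such an architectural provision – drawn from the research literature (SISA training, Bourtoule et al.), not from Mahajan’s own description – is sharding: train one sub-model per slice of the data and aggregate their votes, so that forgetting a record only requires retraining the shard that held it. A hedged sketch on an illustrative dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(900, 4))                 # illustrative dataset
y = (X.sum(axis=1) > 0).astype(int)

# One sub-model per data shard; predictions would be aggregated by vote.
shards = [np.arange(i, i + 300) for i in (0, 300, 600)]
models = [RandomForestClassifier(random_state=0).fit(X[s], y[s])
          for s in shards]

def forget(record_idx: int) -> None:
    """Retrain only the shard that held the record to be forgotten."""
    for i, s in enumerate(shards):
        if record_idx in s:
            shards[i] = s[s != record_idx]
            models[i] = RandomForestClassifier(random_state=0).fit(
                X[shards[i]], y[shards[i]])
            return

forget(7)   # record 7 is gone; only one of three sub-models retrained
```

The trade-off is the usual one: faster, provable forgetting in exchange for the extra bookkeeping of maintaining an ensemble.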

One way of unlearning is by utilizing a black-box approach, suggests Rao. “This involves implementing a technique such as differential privacy, which adds random noise to data to provide a mathematical guarantee of privacy. However, with this, you cannot guarantee that certain data points will be forgotten without compromising the utility of the data. Inherent machine unlearning, where certain data will be forgotten, depends on the model type. For simpler models like regression and random forests, we could understand the logic behind a prediction and use that to forget some data points deterministically. For complex models like neural networks (CNNs, RNNs), these methods get more complex, and we have to resort to statistical methods of unlearning.”
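
To make the differential-privacy idea concrete, the classic building block is the Laplace mechanism: add noise calibrated to how much any single record can move a query’s answer. A minimal sketch – the epsilon, bounds, and data here are all illustrative:

```python
import numpy as np

def dp_mean(values, epsilon, lo, hi):
    """Differentially private mean via the Laplace mechanism."""
    clipped = np.clip(values, lo, hi)        # bound each record's influence
    sensitivity = (hi - lo) / len(values)    # max effect of one record
    noise = np.random.laplace(scale=sensitivity / epsilon)
    return clipped.mean() + noise

salaries = np.random.uniform(30_000, 120_000, size=500)
print(dp_mean(salaries, epsilon=1.0, lo=30_000, hi=120_000))
```

As Rao notes, this limits what a model or query can reveal about any individual, but it does not surgically delete a specific record’s influence after the fact.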

Do we need it?

All we need is a quick glance at the accidents that autonomous cars have run into, at the racist comments that bots have shockingly spewed with complete recklessness, and at the in-built bias against some segments of society that many enterprise AI tools carry – especially in HR areas like recruitment. And we know the answer. It would be a godsend to have the ability to rewire harmful models. We need this power not just from an ethical perspective, but also from the angle of the many tangible and intangible costs that awry models can pour into the business world.

Machine Unlearning can address the risks associated with AI-powered decision-making, such as protecting user privacy or complying with regulations, points out Mahajan. “By removing the influence of harmful or outdated data from trained models, machine unlearning can help to ensure that AI systems are safe, fair, and accurate. As AI continues to evolve, machine unlearning will become an increasingly important tool for ensuring the responsible use of AI.”

Rao reasons that it is particularly important where data privacy is a major concern, and enterprises want to ensure that the model has not memorized details on a particular individual or group. “It can also be used to remove stale or biased data and improve the overall generalizability and fairness of the model.”

With the recent Digital Data Protection Bill in India changing the contours of privacy rights – and many similar emerging efforts around the ‘right to be forgotten’ in Europe and the UK – Machine Unlearning may even become regulatory homework. Whether companies want to or not, morally, they will have to put guardrails in place to avoid penalties, shutdowns, or user break-ups.

How soon can we get there?

Machine unlearning needs to be done in conjunction with model explainability, in the context of responsible AI, augurs Mahajan. “Some of these efforts are critical for enterprises to gain trust in AI. Transparency around the chain of logic makes scaling AI efficient. Machine unlearning – in some ways – is also an important part of AI evolving to a more generic form of intelligence.”

What if Machine Unlearning falls into the wrong hands?

Indranil Bandyopadhyay, Principal Analyst, Data Science and AI, Forrester

All said, this is still an evolving field, and a lot of research is happening in the space, sums up Rao.

Indranil Bandyopadhyay, Principal Analyst, Financial Services, Insurance, Data Science and AI at Forrester, rightly reminds us that a lot of regulations are emerging to confront AI with a tough stance. “Self-regulation may not work so strongly. Regulation is the right and logical outcome for an emerging technology like AI, which has a high-impact potential (both positive and negative) on society. We need precautions in place. Whether intentional or unintentional, negative impact here can be quite serious. It has to be controlled. And if a model has gone rogue but we can find a way to make it forget its bias – how great that would be! However, if that process is time-consuming or energy-intensive, the better alternative, then, is to build it from scratch – again.”

(Image: Indranil Bandyopadhyay, Principal Analyst, Data Science and AI, Forrester)

He also leaves us with another haunting thought: what if machine unlearning, too, falls into the wrong hands? Won’t it do more harm than good?

But he prefers to see the glass half-full anyway. “As a concept, it is a nice approach. Let’s see where it goes.”

With all these ifs and buts and wows about machine unlearning, we realise that the three questions are not that difficult to answer. But we did forget a fourth question that is even more important – and even more scary.

We know we ‘should’. But ‘Will’ we do it?

-By Pratima H
