
Driving AI ethics in a digital-risk economy


Let’s rewind to around 2015-2016, when the biggest thing on the minds of company CIOs was GDPR. CIOs were worried because they had to make a number of changes to their IT systems to comply with the regulation, and the penalties for non-compliance were stringent: a fine of up to €20 million or up to 4% of the company’s annual worldwide turnover, whichever was greater.


Cut to the present day, and enterprises are finding themselves in a similar boat, this time due to the European Union’s Ethics Guidelines for Trustworthy AI <1>. The guidelines (currently at the proposal stage) lay out a set of rules to address the risks created by AI applications, and call for conformity assessments both before an AI system is put into service and after it has been placed on the market. As in the past, CIOs need to think about how their IT systems must evolve to adhere to this regulation. Moreover, the penalties for non-compliance are even more stringent: up to 6% of the enterprise’s global annual turnover or €30 million, whichever is higher.

The above example is not an isolated one: the US has regulations such as SR 11-7 <2>, and Canada has recently proposed AI regulation <3> that outlines clear criminal prohibitions and penalties if an AI model causes harm. The bottom line is that AI ethics is going to become part of the DNA of almost every digital enterprise, and enterprises will scramble to adhere to regulations in different parts of the world. So the important question on everyone’s mind is: what do enterprises need to do to comply with all of these regulations? The answer is AI Governance.

What exactly is AI Governance?


In simple terms, AI Governance is the ability to direct, manage and monitor the AI activities of an organization. With AI Governance in place, organizations are able to answer questions such as:

  • Which AI models are running in production right now?
  • Why was a specific model built?  What use case is it meant to solve?
  • How can we prove that the AI model running in production has been fair?
  • What kind of fairness tests were run on the model when it was validated? Who validated it?
  • Was the data used to build a model free of bias with respect to gender?
  • Was the right process adhered to when moving a model to production?

So, what does it take to achieve AI Governance?   At a basic level, AI Governance is about three things:

  • Trust in Data:  Organizations need to ensure that the data used for building models is trusted and free from bias (a minimal sketch of one such check follows this list).
  • Trust in Model:  The AI models built by the enterprise need to be tested for trust (bias, drift, quality, etc.) at each step of the model lifecycle. 
  • Trust in Process:  Enterprises need to ensure that they have a standardized, governed and repeatable process for the AI model lifecycle.
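
To make the first of these pillars concrete, here is a minimal sketch of one common data-level check: the disparate impact ratio, which compares favourable-outcome rates across two groups in the training data. The column names, group labels, file name and the 0.8 threshold (the informal "four-fifths rule") are illustrative assumptions, not requirements of any specific regulation or product.

```python
# Minimal sketch: disparate impact check on training data.
# Assumes a CSV with a binary "label" column (1 = favourable outcome)
# and a "gender" column; both names are illustrative.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, label_col: str,
                     privileged: str, unprivileged: str) -> float:
    """Ratio of favourable-outcome rates: unprivileged / privileged."""
    priv_rate = df.loc[df[group_col] == privileged, label_col].mean()
    unpriv_rate = df.loc[df[group_col] == unprivileged, label_col].mean()
    return unpriv_rate / priv_rate

if __name__ == "__main__":
    data = pd.read_csv("training_data.csv")  # hypothetical training set
    ratio = disparate_impact(data, group_col="gender", label_col="label",
                             privileged="male", unprivileged="female")
    # The "four-fifths rule" (ratio >= 0.8) is one common, informal threshold.
    print(f"Disparate impact ratio: {ratio:.2f}",
          "(potential bias)" if ratio < 0.8 else "(within threshold)")
```

A check like this would typically run before training and again at validation, with the result recorded as evidence rather than discarded.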

Building trust in data, models and process requires technology that collects facts about each step of the model lifecycle: facts that prove the data was free from bias, that the model was trustworthy, and that the process was standardized and not bypassed for any model. As far as possible, these facts should be collected automatically and stored in a central location. Further, it is critical that any technology used to collect these facts works across the diverse set of tools that can be used to build and deploy AI models.
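
As a rough illustration of what such fact collection could look like, the sketch below records a small, tool-agnostic "fact" for each lifecycle step into an append-only central store. The schema, field names and the JSON-lines file used as the store are assumptions made for illustration; they are not the design of any particular offering.

```python
# Minimal sketch of automated "fact" collection across the model lifecycle.
# The schema and the JSON-lines store are illustrative assumptions.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class ModelFact:
    model_name: str
    lifecycle_stage: str   # e.g. "data-validation", "training", "deployment"
    metric: str            # e.g. "disparate_impact", "accuracy"
    value: float
    recorded_by: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_fact(fact: ModelFact, store_path: str = "model_facts.jsonl") -> None:
    """Append a fact to a central, append-only store (here, a JSON-lines file)."""
    with open(store_path, "a") as f:
        f.write(json.dumps(asdict(fact)) + "\n")

# Example: the validation step records the fairness result it computed,
# regardless of which tool was used to build the model.
record_fact(ModelFact(model_name="loan-approval-v3",
                      lifecycle_stage="validation",
                      metric="disparate_impact",
                      value=0.86,
                      recorded_by="model-validation-team"))
```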

IBM has been at the forefront of the Trusted AI space and has one of the most comprehensive offerings for AI Governance, called AI Factsheets <4>. It helps enterprises collect these facts from each step of the AI lifecycle and answer all of the questions listed above, irrespective of how and where the model was built and deployed.


In summary, AI ethics is not a nice-to-have but a must-have in today’s digital-risk economy. Any enterprise that makes use of AI needs to start planning the changes it must make to adhere to the various AI regulations. Thankfully, comprehensive technology already exists that can help enterprises on this journey and ensure that their AI models are trustworthy and compliant with the AI regulations that apply to them.

References:

<1> Ethics Guidelines for Trustworthy AI: https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf

<2> Supervisory Guidance on Model Risk Management (SR 11-7): https://www.federalreserve.gov/supervisionreg/srletters/sr1107a1.pdf

<3> Canada’s proposed AI legislation: https://www.itworldcanada.com/article/breaking-news-government-files-latest-attempt-at-privacy-legislation-reform/488771

<4> IBM AI Factsheets (AI Governance): https://www.ibm.com/analytics/common/smartpapers/ai-governance-smartpaper/

The author is Manish Bhide, Distinguished Engineer & CTO - AI Governance & OpenScale, IBM India Software Labs
