
EU Parliament approves landmark AI Act! Other countries likely to follow with their own Acts!!

This is a game-changer of an Act.

Pradeep Chakraborty

The European Union Artificial Intelligence Act was recently approved by the European Parliament, the world's first-ever step of its kind. The landmark agreement saw the European Commission put forward the proposed regulatory framework on AI with the following specific objectives:

  • ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
  • ensure legal certainty to facilitate investment and innovation in AI;
  • enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; and
  • facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

Promotion of AI-driven innovation is closely linked to the Data Governance Act, the Open Data Directive, and other initiatives under the EU Strategy for Data. It will establish trusted mechanisms and services for re-use, sharing and pooling of data that are essential for the development of data-driven AI models of high quality.

The AI proposal strengthens significantly the EU's role to help shape global norms and standards, and promote trustworthy AI consistent with the Union values and interests. It provides the Union with a powerful basis to engage further with its external partners, including third countries, and at international fora on issues relating to AI.


The AI Act covers prohibited AI practices, high-risk AI systems, transparency obligations for certain AI systems, measures in support of innovation, and governance and implementation. It lays down around 89 harmonized rules on AI and amends certain Union legislative Acts. Codes of conduct, confidentiality, and penalties have also been set out, including administrative fines on Union institutions, agencies, and bodies.

Artificial Intelligence

The AI Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere in the world. It assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Third, applications that are not explicitly banned or listed as high-risk are largely left unregulated.
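As a minimal illustration of this tiered approach (not part of the Act's text), the mapping of use cases to risk categories can be sketched in Python; the use cases and tier assignments below are hypothetical examples chosen for clarity:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"         # e.g., government-run social scoring
    HIGH = "high-risk"                  # e.g., CV-scanning tools that rank applicants
    MINIMAL = "largely unregulated"     # everything not banned or listed as high-risk

# Hypothetical mapping for illustration only; the Act itself defines the actual lists.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a use case, defaulting to the minimal tier."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

for case in ("social_scoring", "cv_screening", "chatbot"):
    print(case, "->", classify(case).value)
```

The point of the sketch is simply that anything not banned or listed as high-risk falls through to the lightly regulated default.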


Similar to the EU’s General Data Protection Regulation (GDPR), which took effect in 2018, the EU AI Act can become a global standard, determining to what extent AI has a positive, rather than negative, effect on your life, wherever you may be. The EU’s AI regulation is making waves internationally. In late September 2021, Brazil’s Congress passed a bill that creates a legal framework for AI; it still needs to pass the country’s Senate.

The EU AI Act would require generative AI systems, such as ChatGPT, to disclose that the content was AI-generated, which will help distinguish deepfakes, and ensure safeguards against generating illegal content. Systems would also be required to share detailed summaries of the copyrighted data used for their training with the public.
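A hedged sketch of what the disclosure obligation might look like in practice is given below; the GeneratedContent class, the model name, and the labelling format are assumptions for illustration, not requirements spelled out in the Act:

```python
from dataclasses import dataclass

@dataclass
class GeneratedContent:
    text: str
    ai_generated: bool = True            # the disclosure flag the Act would require
    model_name: str = "example-model"    # hypothetical identifier

def with_disclosure(content: GeneratedContent) -> str:
    """Prepend a human-readable disclosure label to AI-generated text (illustrative only)."""
    label = f"[AI-generated by {content.model_name}] " if content.ai_generated else ""
    return label + content.text

print(with_disclosure(GeneratedContent(text="Here is a summary of the AI Act...")))
```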

Loopholes exist

There are several loopholes and exceptions in the proposed AI law. These shortcomings limit the Act’s ability to ensure that AI remains a force for good in your life. For example, facial recognition by police is banned, unless images are captured with a delay, or the technology is being used to find missing children. The law is also inflexible: if, in two years’ time, a dangerous AI application is used in an unforeseen sector, the law provides no mechanism to label it as “high-risk”.


The EU AI Act also comes at a cost. The Center for Data Innovation, a non-profit focused on data-driven innovation, published a report claiming that the EU AI Act will cost €31 billion over the next five years and reduce AI investments by almost 20%.

Ms. Nathalie Smuha, researcher in the law and ethics of AI at the KU Leuven Faculty of Law and the Leuven.AI Institute, distinguished societal harm from individual harm in the context of the AI Act. Societal harm is not concerned with the interests of a particular individual; it considers harms to society at large, over and above the sum of individual interests. The proposal remains imbued with concerns relating almost exclusively to individual harm, and seems to overlook the need for protection against AI’s societal harms.

Researchers at Oxford Information Labs discussed the role the EU AI Act gives to standards for AI. Conformance with harmonized standards will create a presumption of conformity for high-risk AI applications and services. This, in turn, can increase confidence that such systems comply with the complex requirements of the proposed regulation, and create strong incentives for industry to comply with European standards.


Leverhulme Centre for the Future of Intelligence and Centre for the Study of Existential Risk, two leading institutions at the University of Cambridge, provided their feedback on the EU AI law to the European Commission. One of their recommendations is to allow changes to the list of restricted and high-risk systems to be proposed, increasing the flexibility of the regulation.

The European DIGITAL SME Alliance contends that wherever conformity assessments are based on standards, SMEs should actively participate in the development of those standards. Otherwise, standards may be written in a way that is impractical for SMEs.

European strategy

Ms. Lucilla Sioli, Director for Artificial Intelligence and Digital Industry, DG CNECT, European Commission, had earlier presented the European Strategy for AI.


She said the definition of AI should be as neutral as possible in order to cover techniques that are not yet known or developed. The overall aim is to cover all AI, including traditional symbolic AI and machine learning (ML), as well as hybrid systems.

CE marking is an indication that a product complies with the requirements of the relevant Union legislation regulating the product in question. To affix a CE marking to a high-risk AI system, a provider shall undertake the following steps (a code sketch after the list illustrates the workflow):

  • Determine whether its AI system is classified as high-risk under the new AI regulation.
  • Ensure the design and development process and the quality management system comply with the AI regulation.
  • Undergo the conformity assessment procedure, aimed at assessing and documenting compliance.
  • Affix the CE marking to the system, and sign a declaration of conformity.
  • Place the system on the market or put it into service.
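The following sketch expresses those provider steps as a simple ordered checklist; the step wording and the next_step helper are illustrative assumptions, not an official procedure:

```python
from typing import Optional, Set

# A minimal sketch (not an official procedure) of the provider steps for CE-marking
# a high-risk AI system, expressed as an ordered checklist.
CE_MARKING_STEPS = [
    "Determine whether the AI system is classified as high-risk under the AI Act",
    "Ensure design, development and the quality management system comply with the Act",
    "Carry out the conformity assessment procedure and document compliance",
    "Affix the CE marking and sign the declaration of conformity",
    "Place the system on the market or put it into service",
]

def next_step(completed: Set[int]) -> Optional[str]:
    """Return the first step not yet completed, or None once all steps are done."""
    for index, step in enumerate(CE_MARKING_STEPS):
        if index not in completed:
            return step
    return None

done = {0, 1}  # assume the first two steps have been completed
print("Next step:", next_step(done))
```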

There are requirements for high-risk AI. Providers need to establish and implement risk management processes in light of the intended purpose of the AI system. In doing so, they must:

  • Use high-quality training, validation, and testing data (relevant, representative, etc.),
  • Establish documentation and design logging features for traceability and auditability (sketched in code after this list),
  • Ensure an appropriate degree of transparency and provide users with information on how to use the system,
  • Ensure human oversight, through measures built into the system and/or to be implemented by users, and
  • Ensure robustness, accuracy, and cybersecurity.
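To make the logging and traceability requirement concrete, here is a minimal sketch of a structured audit logger for a hypothetical high-risk system (a CV-screening tool); the field names and log format are assumptions, not something mandated by the Act:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative only: a structured audit logger that records each automated decision of
# a hypothetical high-risk AI system so its operation can be traced and audited later.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("high_risk_ai_audit")

def log_decision(input_summary: str, output: str, model_version: str) -> None:
    """Emit one audit record per automated decision (traceability and auditability)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,  # avoid logging raw personal data (GDPR still applies)
        "output": output,
    }
    logger.info(json.dumps(record))

log_decision("CV of applicant #123 (anonymized)", "rank=7/50", "v1.2.0")
```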

There are obligations to be fulfilled by both providers and users. For providers:

  • Establish and implement a quality management system in its organization.
  • Draw up and keep up-to-date technical documentation.
  • Meet logging obligations to enable users to monitor the operation of the high-risk AI system.
  • Undergo conformity assessment, and potentially re-assessment of the system (in case of significant modifications).
  • Register the AI system in the EU database.
  • Affix the CE marking and sign the declaration of conformity.
  • Conduct post-market monitoring.
  • Collaborate with market surveillance authorities.

For users, the obligations are:

  • Operate the AI system in accordance with the instructions of use.
  • Ensure human oversight when using the AI system.
  • Monitor operation for possible risks.
  • Inform the provider or distributor about any serious incident or any malfunctioning (a sketch of such a report follows this list).
  • Existing legal obligations continue to apply (e.g. under GDPR).
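A minimal sketch of the user-side incident-reporting duty is shown below; the IncidentReport structure and its fields are assumptions for illustration, since the Act does not prescribe a specific format:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch of the user-side duty to report serious incidents or
# malfunctioning to the provider; the structure and fields are assumptions.
@dataclass
class IncidentReport:
    system_id: str
    description: str
    serious: bool
    reported_at: str

def report_incident(system_id: str, description: str, serious: bool) -> IncidentReport:
    """Build an incident report that a user organization would send to the provider."""
    report = IncidentReport(
        system_id=system_id,
        description=description,
        serious=serious,
        reported_at=datetime.now(timezone.utc).isoformat(),
    )
    # In practice this record would be transmitted to the provider or distributor.
    print(f"Reporting incident for {report.system_id}: {report.description}")
    return report

report_incident("cv-screener-01", "Systematically down-ranked one group of applicants", serious=True)
```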

AI that contradicts EU values is prohibited. Examples are: subliminal manipulation resulting in physical or psychological harm, exploitation of children or mentally disabled persons resulting in physical or psychological harm, general-purpose social scoring, and remote biometric identification for law enforcement purposes in publicly accessible spaces (with exceptions).

There are four key policy objectives for AI in Europe: first, set enabling conditions for AI development and uptake in the EU; second, make the EU the right place for excellence, from lab to market; third, ensure AI technologies work for people; and fourth, build strategic leadership in sectors such as climate and environment, health, robotics, the public sector, agriculture, mobility, and law enforcement, immigration and asylum.

Conclusion

The EU AI Act is a great beginning on all fronts, despite several loopholes and weaknesses. It is expected that over time, as AI develops, those weaknesses will be fixed and the Act further refined. The USA and countries in Asia are closely following the EU AI Act, watching its ramifications and rectifications. These, and other countries, are likely to follow with their own Acts later!

INDIAai is a collaboration among MeitY, NeGD (the National e-Governance Division), and NASSCOM. It is currently a knowledge portal, research organization, and ecosystem-building initiative. It aims to unite and promote collaboration among various entities in India’s AI ecosystem. One hopes there will soon be an Indian AI Act.
