How can we have responsible Generative AI?

Pradeep Chakraborty

Governments, social scientists, technologists, and heads of AI organizations are expressing concerns about potential misuse of the power unleashed by Foundation Models and Generative AI. Many nations and organizations have already started taking steps to regulate this powerful technology to prevent possible misuse and disruption of the societal fabric. These fears and concerns are reminiscent of the worldwide approach toward nuclear technology.

The Object Management Group (OMG) Responsible Computing initiative has launched a Tiger Team to study and evaluate the potential concerns regarding Generative AI Techniques (GAIT), and to recommend ways to channel the capabilities of GAIT responsibly to benefit society.

Specific objectives include investigating how to:

  • Instill trust in, and reduce bias in, the data used to build GAI models.
  • Recommend proper and ethical use of AI models while protecting the intellectual property rights of authors, artists, and other professionals.
  • Regulate the proliferation of AI models to ensure no disruptive effects on employment, society, and the environment.
  • Educate consumers, administrators, and governmental bodies on the dangers and limitations of excessive use of AI models.

The different tracks of this initiative are:

  • Ensuring trustworthy Generative AI models, which must be based on accurate and unbiased data.
  • Impact on the labor market, especially on white-collar jobs such as copywriters, interpreters, and even professionals like investment advisors, editors, lawyers, customer-support staff, creative writers, and doctors.
  • Issues related to IP, copyright violation, ownership, royalty, etc.
  • Impact on the environment from the power-hungry deep learning process used to build (train) GAI models.

Michael Linehan (OMG) and Cheranellore Vasudevan (IBM) presented their thoughts on this initiative.

Michael Linehan, Program Director, Responsible Computing, said Responsible Computing establishes a cohesive, interconnected framework across six critical domains, allowing every organization to educate itself on its responsibilities, define its goals, and measure its progress against its aspirations. A participatory movement of like-minded people and organizations, realizing their ambitions and acting upon their responsibilities, will have a positive impact on society and the future of technology by sharing engagement methodologies and sustainability maturity models.

Responsible Computing is part of the Object Management Group (OMG). The OMG comprises the Industry IoT Consortium (IIC), AREA, BPM+ Health, the Consortium for Information and Software Quality, the OMG Standards Development Organization, the Digital Twin Consortium, Responsible Computing, and the DDS Foundation.

There are several ongoing AI projects. Under Responsible Computing, there are the Generative AI Tiger Team, a Responsible Generative AI white paper, and published generative AI member articles. Under the IIC, there are the Industrial AI Task Group, an Industrial AI Reference Framework, work on the IoT implications of Gen AI/LLMs, and work on the sustainability of AI. Under the OMG, there are the AI Task Force, an AI and OMG Standards white paper, modeling of IP/copyright issues, contributions to the EU Observatory of Standards, and a liaison with IEEE P3123 on AI taxonomy.

Transformation in AI
Cheranellore Vasudevan, Technical Lead and Strategist of AI and Data Science, IBM, noted that an impactful transformation in AI is ongoing, driven by the powerful generative capabilities of foundation models. Concerns about the disruptive impact of generative AI center on education, jobs, cognitive skills, IP and copyright, errors and hallucination, and society, human life, and values.

There are also resource consumption and environmental impacts, and the Responsible Generative AI Tiger Team is in place to address these concerns. Foundation models, or large language models (LLMs), refer to Transformer language models that contain hundreds of billions (or more) of parameters and are trained on massive text, image, and other forms of data. These models exhibit capabilities to understand natural language, answer complex queries, and, more importantly, create/compose text, image, audio, and video outputs.
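For a concrete sense of what "generate" means here, the following minimal sketch produces text with the small, openly available GPT-2 checkpoint. The Hugging Face transformers library and the prompt are assumptions of this illustration, not tools named in the presentation, and GPT-2 is far smaller than the foundation models under discussion:

```python
# A minimal sketch of generative capability, assuming the Hugging Face
# "transformers" library and the small, public GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt token by token, composing new text
# from statistical patterns learned during training.
result = generator("Responsible use of generative AI means",
                   max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

The same pattern scales up: the larger models described above differ mainly in parameter count and training data, not in the basic mechanism of next-token generation.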

Foundation models have impressive capabilities in summarizing, translating, creative writing, synthesizing audio and images, and generating software code. They are capable of unsupervised or self-directed learning, and hence of developing unintended capabilities. They also demonstrate knowledge on par with, or even exceeding, the skills of any single human expert.

As for limitations, generative AI doesn't have intentions, goals, or the ability to understand cause and effect. Generative AI doesn't explain how it produces its outputs. It acts like a black box, with no understanding of the domain principles.

Concern and impact
There are concerns about accuracy, bias, and non-transparency. On accuracy: most foundation models (FMs) are trained on Internet data, which may not be accurate. Compelling misinformation and erroneous results make them risky to depend on. Their lack of self-awareness – of the limits of their own knowledge – stems from the black-box approach of FMs. Finally, they have the potential to "hallucinate," making up false statements.

Regarding bias, FMs are susceptible to bias for the same reason: they are trained on Internet data. They can amplify biases in the training data, further reinforcing and propagating stereotypes and racist views. Regarding non-transparency: if lack of accuracy and bias are the symptoms, lack of transparency is the root cause. Answers are composed or recreated from patterns rather than from basic knowledge of the domain. The associative patterns may be accurate in 70–80%, or even a higher percentage, of tests.
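As a toy illustration of how such bias can be surfaced, the sketch below runs a template sentence through an off-the-shelf sentiment classifier while swapping the group term. The Hugging Face transformers library, the template, and the group list are all assumptions of this illustration, not a method from the presentation; template-based probing like this is a common, if crude, way to reveal stereotyped associations:

```python
# A toy bias probe, assuming the Hugging Face "transformers" sentiment
# pipeline (defaults to a small pretrained English classifier).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

template = "The {} was very good at the job."
groups = ["man", "woman", "young person", "old person"]

# If scores diverge systematically across groups, the model may be
# reflecting biases absorbed from its Internet-scale training data.
for group in groups:
    result = classifier(template.format(group))[0]
    print(f"{group:12s} -> {result['label']} ({result['score']:.3f})")
```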

Next, we have concerns regarding IP and copyright issues. Foundation models use several sources of information assets spanning internet pages, news services, repositories of art, literature, music, audio, and video collections. There are concerns about originality, copyright violation, and royalty payments.

Questions raised are: Who gets the royalty from GAI creations – the source owners, the model owner, or the model user? How can we subject GAI outputs to plagiarism checks? How do we differentiate between the original and the artificial – or even define what an original creation is to begin with? How is it different from someone learning from many artistic works and creating something based on what they have learnt? Where is the boundary between a learned compilation and reproduction, versus a collage of multiple sources?
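As a sketch of what a plagiarism check on GAI output might look like, the snippet below compares a generated text against a small set of known sources using TF-IDF cosine similarity. The scikit-learn library, the example texts, and the naive similarity approach are assumptions of this illustration; real plagiarism detection is far more involved:

```python
# A naive similarity check, assuming scikit-learn. TF-IDF cosine
# similarity illustrates the basic idea of comparing a generated
# text against known sources; it is not a production detector.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sources = [
    "The quick brown fox jumps over the lazy dog.",
    "Generative models compose outputs from patterns in training data.",
]
generated = "Generative models create outputs using patterns found in training data."

# Vectorize the sources and the generated text together, then compare
# the generated text (last row) against every source.
matrix = TfidfVectorizer().fit_transform(sources + [generated])
scores = cosine_similarity(matrix[-1], matrix[:-1])[0]

for src, score in zip(sources, scores):
    print(f"similarity={score:.2f}  against: {src[:50]}")
```

A high score flags near-verbatim reuse; what threshold counts as "plagiarism," and who bears responsibility for it, are exactly the open questions raised above.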

Another concern is the impact on labor. Automation leads to disruption in the job market. There have been historical transformative periods: the agricultural revolution, the industrial revolution, the IT revolution, and so on. What about physical labor vs. white-collar jobs? How is this different from the disruptions when the printing press, the camera, or computers were invented? Perhaps the difference is the quickness of its onslaught, or that its impact is too vast to measure?

Next, we look at the impact on education. Individualized intelligent tutoring and training may become a reality, bringing equity in educational resources. The ideal state of one-on-one training may become affordable and accessible to many. Does that diminish the need for an expert teacher? Does over-dependence on "machine educators" eliminate the opportunity to have role models and motivational teachers/gurus? How far does the human touch of teaching/tutoring impact students psychologically and emotionally? What is the impact of any potential absence of group learning? What are the impacts on evaluating original work vs. AI-created work?

There are concerns around power consumption, too. Model training is very power intensive. It is reported that GPT-3 was trained using 1,000+ GPUs for over 30 days, and it is projected that GPT-4 may have used 10,000 GPUs or more. GPT-3 was said to have 175 billion parameters, to have required 355 years of single-processor computing time, and to have consumed 284,000 kWh of energy to train. Training these FMs for specific applications also needs additional power. The environmental impact of the carbon footprint caused by LLMs is a major concern.

And what about run-time (inferencing)? Running the LLMs is also power and resource (memory) intensive. It is reported that the compute resources for large AI models account for the same as, and in some cases more than, the power requirements of training the model. Imagine multiple instances and use cases running at the same time!
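To put the quoted figures in perspective, here is a back-of-envelope calculation based on the 284,000 kWh number above. The grid carbon intensity of 0.4 kg CO2/kWh and the instance count are assumed values for illustration, not figures from the presentation:

```python
# Back-of-envelope arithmetic using the training figure quoted above.
TRAINING_KWH = 284_000     # reported energy to train GPT-3
CARBON_INTENSITY = 0.4     # ASSUMED grid average, kg CO2 per kWh

training_co2_tonnes = TRAINING_KWH * CARBON_INTENSITY / 1000
print(f"Training footprint: ~{training_co2_tonnes:.0f} tonnes CO2")

# If inference fleets draw power comparable to training (as stated
# above), each deployed instance adds a training-scale load over time.
INSTANCES = 10             # hypothetical concurrent deployments
print(f"{INSTANCES} instances at training-scale draw: "
      f"~{INSTANCES * training_co2_tonnes:.0f} tonnes CO2 per period")
```

Under these assumptions, a single training run corresponds to roughly 114 tonnes of CO2, and the inference side compounds that with every additional deployment.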

Next, we have the impact on basic cognitive and artistic skills. The performance of Generative AI is very impressive. Can it impair creative writing, and the ability to organize thoughts and to generate and evaluate alternatives? What about desirable behavioral skills and disciplines – the ability to remember things, being methodical, creating an individual style, the ability to analyze situations? Will this lead to over-dependence on machines (just as many cannot drive without GPS navigation applications)?

Is it a concern or a boon? Do we have to do the same mundane things – naming all 50 states of the USA, or naming the widest river in the world? Can we instead enhance our cognitive skills to handle higher things, and increase our productivity by multiple orders of magnitude?

We also have a concern regarding the impact on society and human skills. Foundation models could diminish the need for humans to acquire knowledge or get trained professionally. This leads to more questions than clear answers. Will it help humans operate at much higher planes of cognition sooner in age? Will it reduce the need for, and societal recognition of, formal education and specializations? How is it going to affect creativity and artistic skills? Will humans ignore the need and the ability to learn? How is it going to impact the basic human skills of comprehending a passage of text, or recognizing and interpreting an image (like an X-ray or EEG)?

Another concern is about monopoly and power players. Building and maintaining Foundation Models requires huge amounts of resources, including money, computing power, technical skills, and data. GPT-3 has hundreds of layers, billions of weights, and was trained on hundreds of billions of words. The rate of growth of the power and range of Generative AI models is too quick and often alarming. Generative AI technology is dominated by a small number of tech giants, who could exhibit unhealthy leverage and monopoly. There is also the danger of models making similar decisions (for example, in stock trading) – a "herd mentality" that could lead to a monoculture (Gary Gensler, SEC Chair).

Regulating Generative AI is necessary
We need to regulate Generative AI. There are efforts by various governments and several professional organizations to regulate and establish best practices in the AI industry. We also have the European Union's AI regulation and the US Federal Government's efforts.

The AI Safety Institute initiated an open letter for regulating AI, analogous to the International Atomic Energy Agency (IAEA) that was set up after World War II. Some of the prominent Foundation Model creators are resisting any governmental regulations, while volunteering to self-regulate!

We can have a (desirable!) world with responsible AI. We can use Generative AI to unleash human productivity. There can be faster discovery of drugs, quicker and more bug-free software code generation, quicker generation of semi-creative images/videos/texts, and shorter wait times as fewer humans are needed in the loop (appointments/reservations/tech support).

We can enhance our capabilities with better interpretation of images in medical and fault diagnosis. We can also have better prognosis prediction. We can reduce costs with automated services, and have better education and communication. We can facilitate better equity in information access and knowledge acquisition. We can have better communication and accessibility for people with special needs.

Let us not fear technology; we need to embrace, master, and manage it! Historically, technology has done more good to humanity than damage, when properly regulated and managed. Leaving the hype and fear-mongering aside, as "responsible technologists", let us work on how we can "tame" this powerful new tool for the benefit of humanity in the coming years!
