
“Pause Giant AI Experiments More Powerful than GPT-4, Humans at Risk of Becoming Obsolete”

OpenAI, the creator of ChatGPT and GPT-4, has itself acknowledged that independent audits should precede the release of new systems

DQINDIA Online

The Future of Life Institute has released an open letter urging a pause on all major artificial intelligence experiments more powerful than GPT-4. The institute, while seeking signatures from those who wish to support this move, states that advanced AI must be planned for and managed with commensurate care and resources, which is lacking at present. It adds that, in recent months, AI labs have been locked in an “out-of-control race” to develop and deploy ever more powerful digital minds that their own creators cannot understand, predict, or reliably control.


Why Should Experiments on Models More Powerful than GPT-4 Be Stopped?

The institute has raised pertinent questions about the ways in which AI systems built by these labs could harm humans:

  • Should we let machines flood our information channels with propaganda and untruth?
  • Should we automate away all the jobs, including the fulfilling ones?
  • Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?
  • Should we risk loss of control of our civilization?

OpenAI, the creator of ChatGPT and GPT-4, has itself acknowledged that independent audits should precede the release of new systems. “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models. We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important,” said OpenAI.

Along the same lines, Sridhar Vembu, CEO of Zoho Corp, also says there needs to be a debate on the risks posed by AI. “Given this open letter on the serious risks posed by AI (I have not signed it but thinking about it), it is time for a serious debate in India on this topic as well. I am worried enough that I spend most of my time figuring out Zoho's way ahead,” he said while sharing the letter by the Future of Life Institute.

Way Forward for Models More Powerful than GPT-4

The institute adds that powerful AI systems should be developed only once all stakeholders are convinced that the risks of such models will be manageable and their effects positive. “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” adds the institute in its letter.
