
Failed AI Projects that Went Terribly Wrong: The Dark Side of Artificial Intelligence

Failed AI projects of the past demonstrate that artificial intelligence will always need human oversight to ensure success

Supriya Rai

Ever since ChatGPT came into the limelight, the world has gone into overdrive as far as experiments with artificial intelligence are concerned. Companies have been coming up with new AI products, as demand for ChatGPT-like offerings continues to rise. However, along with the world of benefits that AI has to offer, the world needs to be reminded that there is also a darker side to artificial intelligence. Several AI projects in the past had to be shut down due to the undesirable results they produced.


In fact, the Future of Life Institute has published an open letter urging organisations to pause all major artificial intelligence experiments larger than GPT-4. The institute has stated that AI has the potential to flood our information channels with propaganda and untruth, to outnumber, outsmart and replace humans, and to put us at risk of losing control of our civilization. Failed AI projects of the past also seem to suggest the same.

Failed AI Projects of the Past

There have been several failed AI projects in the past with disastrous consequences. Here are some notable examples:


Tay, the chatbot: Tay, reportedly short for “thinking about you”, was a chatbot developed by Microsoft in 2016. It was designed to learn from the conversations it had with users on social media platforms such as Twitter. However, within 24 hours of its launch, Tay began to spout racist, sexist, and otherwise offensive remarks, apparently having been influenced by the hateful messages it received from some Twitter users. Microsoft was forced to shut the experiment down within a day.

The Therac-25: The Therac-25 was a radiation therapy machine developed in the 1980s. It used software to control the amount of radiation delivered to cancer patients. However, due to programming errors, the machine delivered massive overdoses of radiation to some patients, causing serious injuries and even deaths. Reports suggest that the Therac-25 massively overdosed patients at least six times between June 1985 and January 1987.

Deep Patient: Deep Patient was an AI system developed by Mount Sinai Hospital in New York City. It was trained on patient data to predict medical conditions and suggest treatments. However, a study found that the system made incorrect predictions in 28% of cases, raising concerns about its reliability and safety.


Google's image recognition software: In 2015, Google's image recognition software misidentified Black people as gorillas. This was due to bias in the training data used to develop the algorithm, which was heavily skewed towards images of white people. Google ultimately had to apologise after it received massive backlash over the incident.

Facebook's emotional manipulation study: In 2014, Facebook conducted a study in which it manipulated the news feeds of nearly 700,000 users to see whether it could affect their emotions. The experiment sparked outrage and raised ethical concerns about the use of personal data for research purposes, as it breached ethical guidelines for "informed consent".

These examples of failed AI projects show that while AI has the potential to revolutionize many areas of our lives, it also poses significant risks if not developed and used responsibly.
