ChatGPT has been creating ripples and has been widely discussed ever since OpenAI made it public in November 2022.
ChatGPT, a chatbot created by the artificial intelligence research laboratory OpenAI, has been hogging the limelight since late last year. The generative pre-trained transformer has, on the one hand, been grabbing attention for its quick and accurate answers; on the other, it is being criticised because it has the potential to be misused by bad actors. “ChatGPT is scary good. We are not far from dangerously strong AI (artificial intelligence),” tweeted Elon Musk.
ChatGPT is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned with both supervised and reinforcement learning techniques. The company made it public in November 2022, asking people to use it in the hope that their feedback would make it better. “We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response,” said the creators of the chatbot.
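The dialogue format described above works because the client sends the entire conversation back to the model on every turn; the API itself is stateless. As a minimal sketch of how a developer might structure such a follow-up-aware exchange against OpenAI’s public Chat Completions endpoint (the helper name `build_chat_request` is hypothetical, and no network call is made here):

```python
import json

def build_chat_request(history, user_message, model="gpt-3.5-turbo"):
    """Append the user's new turn to the running dialogue and build a
    request payload. Re-sending the full history each time is what lets
    the model answer follow-up questions in context."""
    history = history + [{"role": "user", "content": user_message}]
    payload = {"model": model, "messages": history}
    return history, payload

# Start a conversation with a system instruction, then add a user turn.
history = [{"role": "system", "content": "You are a helpful assistant."}]
history, payload = build_chat_request(history, "Write a haiku about autumn.")

# This payload would be POSTed to https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <OPENAI_API_KEY>" header; the assistant's
# reply would then be appended to `history` before the next turn.
print(json.dumps(payload, indent=2))
```

To continue the dialogue, the caller appends the assistant’s reply to `history` and calls the helper again with the next question, so each request carries the whole exchange so far.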
However, soon after its release, users had the chatbot helping them with everything from writing code to suggesting recipes. ChatGPT has been reported to be extremely versatile and even shows improvisation skills. Although it suffers from multiple limitations at present, there is scope for improvement, as it is capable of reinforcement learning.
What ChatGPT Can Do
ChatGPT can attempt almost anything users request of the artificial intelligence chatbot, such as:
- Write code.
- Debug computer programs.
- Come up with recipes.
- Compose music.
- Write articles.
- Play games.
- Write essays for students.
- Imitate Linux systems, and so on.
Long-Term Implications of Using ChatGPT
While it does seem exciting at present, ChatGPT can certainly be misused by bad actors as well. For instance, there is a fear of the chatbot being used to execute cyberattacks. Leading cyber threat intelligence firm Check Point Research and others have warned that ChatGPT is capable of writing phishing emails and malware, and is all the more capable of doing so when combined with OpenAI Codex.
In the same vein, Chester Wisniewski, principal research scientist at Sophos, says: “ChatGPT is an interesting experiment at the moment, but its wider availability certainly appears to present new challenges. I have been playing with it since its public availability in November of 2022 and it is quite easy to convince it to assist with creating very convincing phishing lures and responding in a conversational way that could advance romance scams and business email compromise attacks. OpenAI seems to be trying to limit the high-risk activities from abusing its use, but the cat is now out of the bag. Today the biggest risk is to English-speaking populations, but it is likely only a matter of time before it is available to generate believable text in most commonly spoken languages of the world. We have reached a stage where humans are unlikely to be able to discern machine-generated prose from human-written prose in casual conversations with those we are not intimately familiar with, which will require security filters to aid in preventing humans from being victimised.”
Should Chatbots Like ChatGPT Be Created?
Sam Altman, CEO of ChatGPT creator OpenAI, while acknowledging the cybersecurity concerns, believes this is a risk the world will have to take. “I agree on being close to dangerously strong artificial intelligence in the sense of an AI that poses, for instance, a huge cybersecurity risk. And I think we could get to real artificial general intelligence in the next decade, so we have to take the risk of that extremely seriously too.”