ChatGPT has occupied headlines ever since it was made public in November 2022, with the platform crossing one million registered users within a matter of days. While the conversational artificial intelligence chatbot offers a great number of benefits, experts have also warned of the possible negative consequences of such AI assistants. One particular concern raised by several industry experts is that ChatGPT could be used by cyber criminals to launch sophisticated attacks.
“ChatGPT is a term that has seemingly graduated to dinner table conversation status over the last month. While its predecessors garnered interest in the data science industry, very few had realized practical uses for the average consumer. That can be put to rest now, as the ‘smartest text bot ever made’ has inspired thousands of innovative use cases, applicable across nearly every industry. In the cyber realm, examples range from email generation to code creation and auditing, vulnerability discovery and much more,” says Steve Povolny, principal engineer and director, Trellix.
Although OpenAI, the creator of ChatGPT, is trying to mitigate misuse by limiting malicious content, cyber criminals could easily find workarounds. “However, with breakthrough advances in technology, the inevitable security concerns are never far behind. While ChatGPT attempts to limit malicious input and output, the reality is that cyber criminals are already looking at unique ways to leverage the tool for nefarious purposes. It isn’t hard to create hyper realistic phishing emails or exploit code, for example, simply by changing the user input or slightly adapting the output generated,” notes Povolny.
He goes on to add: “While text-based attacks such as phishing continue to dominate social engineering, the evolution of data science-based tools will inevitably lead to other mediums, including audio, video and other forms of media that could be equally effective. Furthermore, threat actors may look to refine data processing engines to emulate ChatGPT, while removing restrictions and even enhancing these tools’ abilities to create malicious output.”
Although these concerns are worth noting, Povolny feels that such AI tools should still be encouraged, as they have the potential to foster innovation and collaboration in the industry. “While cyber security concerns have manifested, it’s important to remember that this tool has even greater potential to be used for good. It can be effective at spotting critical coding errors, describing complex technical concepts in simplistic language, and even developing script and resilient code, among other examples. Researchers, practitioners, academia, and businesses in the cybersecurity industry can harness the power of ChatGPT for innovation and collaboration. It will be interesting to follow this emerging battleground for computer-generated content as it enhances capabilities for both benign and malicious intent,” he adds.