ChatGPT is not capable of creating malware automatically: David Fairman, Netskope

ChatGPT was able to create, review, improve and explain code, but it will not create novel, functional malware automatically

Supriya Rai

While ChatGPT is being discussed for all the benefits it has to offer, cybersecurity experts have also been cautioning users about the possible negative implications of the AI chatbot. The OpenAI-owned artificial intelligence conversationalist can apparently be used by malicious actors to write code that executes cyber attacks. David Fairman, chief information officer and chief security officer, Netskope, however, says that while such concerns are real, they should not be overstated.


DQ: What are the implications of new age tools like ChatGPT on the security industry?

David Fairman: There has been a lot of discussion about the ways in which cyber criminals may be able to abuse AI systems like ChatGPT to strengthen their nefarious attacks. However, what has been lacking in many of these conversations is consideration of the ways in which security professionals can use the very same tools to strengthen their defence.

Attackers can certainly use AI systems to help identify targets, mimic human behaviours to better evade security systems, and craft both malware and the content through which it is distributed. However, in the coming months and years we will also see security teams effectively embracing AI to improve threat identification and automate much of the defence process. In fact, AI is already commonly used in many of the latest cyber security products deployed by security teams today, and we will see this continue to evolve.

DQ: Are the concerns about ChatGPT being used by bad elements real?

David Fairman: The concerns are certainly real, but they shouldn't be overstated. The Netskope Threat Labs team conducted a full investigation into the ways threat actors might make use of ChatGPT. ChatGPT was able to create, review, improve and explain code, but it will not create novel, functional malware automatically, at least not yet; the cyber industry is watching its continued evolution. Probably the most immediately useful contribution ChatGPT could make to an attack campaign is the creation of novel, fluent bait messages for the social engineering element of an attack, such as phishing emails.

DQ: How should CISOs and CIOs overcome these concerns?

David Fairman: There are two points security and IT leaders should note about the emergence of ChatGPT. Firstly, and tactically, it is important to bear in mind that this tool can help their teams in defence too. For instance, ChatGPT has been trained on examples of common code vulnerabilities and can pick them out when source code is shared with it.
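As an illustration, here is a hypothetical sketch (the function names and schema are invented for this example) of the kind of common vulnerability a code-review assistant such as ChatGPT is typically able to flag: an SQL query built by string concatenation, shown alongside its parameterised fix.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # VULNERABLE: user input is concatenated directly into the SQL string,
    # allowing SQL injection (e.g. username = "x' OR '1'='1").
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFE: a parameterised query keeps the input as data, not as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

    payload = "x' OR '1'='1"
    # The vulnerable version matches every row; the safe version matches none.
    print(len(find_user_vulnerable(conn, payload)))  # 2
    print(len(find_user_safe(conn, payload)))        # 0
```

Sharing the first function with such a tool and asking it to review the code is the sort of defensive use the interview describes.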

Perhaps the more important point to note, however, is that ChatGPT is not creating new forms of attack. It may affect scale and effectiveness, but it doesn't yet fundamentally change the nature of the attacks that organisations are subject to. If this new AI development changes any priorities for a CIO or CISO, I would say it should reiterate the importance of maintaining a strong security posture and acting with urgency to mitigate known vulnerabilities. In addition, it highlights the importance of the human employee within the security stack. Just-in-time user coaching is something every organisation should include in its security processes: coaching and guiding employees to make good security decisions as they go about their work.


DQ: How can multinational organisations navigate diverse cyber security and data protection regulations around the world?

David Fairman: The global cyber security and privacy regulatory landscape is complex and can be hard to manage. One way to manage this complex task is for organisations to develop their own security policies and standards, within the organisation's risk appetite, and map them to the specific jurisdictional cyber security and privacy regulations they need to comply with. Organisations need to assess whether the policies and standards meet the requirements defined in the various regulations, and modify them if they don't.

Furthermore, organisations need to be able to demonstrate that they are compliant with these policies and standards through an assurance process with supporting data. This is not a one-off exercise: as regulations change or new ones are created, the policies and standards need to be updated to accommodate them.

DQ: How can CISOs and CIOs partner with the CEO and other board members to improve agility and responsiveness, and enhance an organisation’s security posture?

David Fairman: In the last year, awareness of cyber risk has grown significantly among CEOs and boards, but there is still room for improvement in translating this awareness into tangible advances in security coverage. For cyber security discussions to be productive, security and business leaders must be able to collaborate and communicate effectively. There are three things I recommend.

Firstly, building a partnership between the CEO and CISO so that they can present recommendations to the board as a team. Secondly, making sure to present the right information to the board, enabling them to fulfil their objective of managing risk without confounding them with technobabble. And finally, upskilling the board so that they can operate with a basic understanding of cyber threats. Some of these are tasks for the CIO/CISO and some are tasks for the board, but when the two meet in the middle it has a transformational impact on an organisation's security posture.