Samsung's Confidential Data Leaked Through Employees on ChatGPT? Here's What We Know

Samsung employees have been using the AI platform, and one employee used ChatGPT to create a presentation from internal meeting notes

Preeti Anand

Several technology businesses, including Samsung, have permitted employees to use ChatGPT for work tasks. The company was caught off guard when employees mistakenly disclosed critical information to the AI chatbot.


Has Data from Samsung been leaked on ChatGPT?

Samsung had authorised engineers in its semiconductor division to use ChatGPT to help resolve source code issues. Employees inadvertently entered top-secret information, including the source code of a new programme and internal meeting notes about their hardware. Three such incidents were recorded in less than a month.

Notably, ChatGPT may retain the data it receives and use it for further training. As a result, Samsung's trade secrets are now in the hands of OpenAI, the company behind the AI chatbot.


How can such errors be prevented?

To prevent such errors in the future, Samsung Semiconductor is developing its own AI for internal staff use. It will, however, be limited to processing prompts with a maximum size of 1024 bytes.
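The article does not describe how such a limit would be enforced; as a rough illustration only, a guard like the sketch below could reject oversized prompts before they reach the model. The function name and the use of UTF-8 byte length are assumptions; only the 1024-byte figure comes from the report.

```python
# Hypothetical sketch: enforce a prompt size cap like the 1024-byte limit
# reported for Samsung's internal tool. Names here are illustrative, not
# taken from any real Samsung or OpenAI API.
MAX_PROMPT_BYTES = 1024

def prompt_within_limit(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the UTF-8 encoded prompt fits within the byte limit."""
    return len(prompt.encode("utf-8")) <= limit

# A short question passes; pasting a long source file would be rejected.
print(prompt_within_limit("Summarise these meeting notes."))  # True
print(prompt_within_limit("x" * 2000))                        # False
```

A byte cap like this would not stop secrets from leaking in short prompts, but it would make it harder to paste entire source files or meeting transcripts in one go.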

In one of the incidents, an employee used the AI chatbot to optimise test sequences for detecting flaws in chips, sequences that are exclusive to the company. In another, an employee loaded the source code of a semiconductor database download programme into ChatGPT and asked it to find faults. In a third, an employee used ChatGPT to create a presentation from internal meeting notes. Those notes included sensitive information that should not have been shared with third parties.


Samsung's CEO has urged staff not to repeat such blunders

According to The Register, Samsung's CEO has urged staff not to make similar blunders. "If a similar accident occurs even after emergency information protection measures are implemented, access to ChatGPT on the company network may be blocked," he stated.

Notably, according to OpenAI, "we remove any personally identifiable information from data that we intend to use to improve model performance. In our attempts to enhance model performance, we only use a tiny sample of data per client."