Samsung leaks data to ChatGPT

Samsung's security woes continue as the company grapples with reports of data leaks by its own employees. According to a recent report by The Economist Korea, three Samsung employees leaked confidential information to the popular chatbot ChatGPT. One employee asked ChatGPT to generate notes from a recorded meeting, another asked it to check sensitive database source code for errors, and a third reportedly submitted source code and asked for optimizations.

The incident highlights the risks of using third-party AI tools for sensitive tasks. While ChatGPT may seem like a convenient way to summarize memos or check code for errors, anything shared with the chatbot could be used to train the underlying model and even surface in its responses to other users. Samsung employees should have been aware of these risks before sharing confidential information with the chatbot.

In response to the incident, Samsung reportedly restricted the length of employees' ChatGPT prompts to a kilobyte (1024 bytes) of text. The company is also said to be investigating the three employees in question and developing its own chatbot to prevent similar mishaps.
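To illustrate how such a cap might be enforced, here is a minimal sketch of a pre-send check. The names `MAX_PROMPT_BYTES` and `check_prompt` are hypothetical, and nothing here reflects Samsung's actual implementation; it simply mirrors the reported 1 KB limit.

```python
# Hypothetical illustration: a minimal pre-send guard enforcing a
# 1 KB (1024-byte) prompt cap, mirroring the limit Samsung reportedly
# imposed. MAX_PROMPT_BYTES and check_prompt are invented names for
# this sketch, not part of any Samsung or OpenAI tooling.

MAX_PROMPT_BYTES = 1024

class PromptTooLongError(ValueError):
    pass

def check_prompt(prompt: str) -> str:
    """Reject prompts whose UTF-8 encoding exceeds the 1 KB cap."""
    size = len(prompt.encode("utf-8"))
    if size > MAX_PROMPT_BYTES:
        raise PromptTooLongError(
            f"Prompt is {size} bytes; limit is {MAX_PROMPT_BYTES} bytes."
        )
    return prompt

if __name__ == "__main__":
    check_prompt("Summarize today's meeting notes.")  # passes
    try:
        check_prompt("x" * 2000)  # exceeds the cap
    except PromptTooLongError as err:
        print(err)
```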

It's important to note that ChatGPT's data policy states that user prompts may be used to train its models unless users explicitly opt out. OpenAI, the company behind the chatbot, also urges users not to share sensitive information with ChatGPT, as it is "not able to delete specific prompts from your history." Deleting your account is the only way to remove personally identifying information, a process that can take up to four weeks.

This incident is a reminder that businesses and individuals alike must protect sensitive data. While ChatGPT and other AI tools can be helpful for a variety of tasks, users should understand the risks and take precautions. Businesses should set clear policies and procedures for the use of third-party AI services and train staff on how to handle sensitive data. By prioritizing data security, companies can reduce the chance of breaches and safeguard their most valuable assets: confidential information, trade secrets, and customer data.
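As one concrete precaution of the kind such a policy might mandate, here is a sketch of a simple client-side screen that flags obviously sensitive content before a prompt leaves the company. The `screen_for_secrets` helper and the regex patterns are hypothetical examples, not a complete safeguard; real deployments would rely on proper data-loss-prevention tooling.

```python
import re

# Hypothetical illustration: naive regex screens for a few common secret
# formats. These patterns are examples only and far from exhaustive.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "password assignment": re.compile(r"(?i)password\s*[:=]\s*\S+"),
}

def screen_for_secrets(text: str) -> list[str]:
    """Return the names of any secret patterns found in the text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    draft = "Please optimize this: password = hunter2"
    hits = screen_for_secrets(draft)
    if hits:
        print(f"Blocked: prompt appears to contain {', '.join(hits)}")
    else:
        print("Prompt passed screening.")
```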

by Soham Sharma
