Kris Carlon / Android Authority
TL;DR
- Samsung lifted a ban that prevented employees from using ChatGPT for work.
- Three weeks later, Samsung executives found that employees were leaking company secrets to the chatbot.
- Samsung has now implemented an emergency measure to limit prompts to 1024 bytes.
What’s the biggest mistake you’ve ever made at your workplace? Whatever it is, maybe you can take solace in knowing it probably doesn’t compare to the mistake Samsung‘s employees recently made.
According to local Korean media, Samsung is currently doing damage control after executives learned employees were feeding company secrets to ChatGPT. Specifically, three separate incidents were reportedly discovered.
The first incident involved an employee who copied and pasted source code from a faulty semiconductor database into ChatGPT, reportedly to help find a fix for the code. In the second case, another employee submitted code while trying to troubleshoot defective equipment. In the third, an employee pasted the transcript of an entire confidential meeting and asked the chatbot to generate meeting minutes.
The problem here is that ChatGPT doesn’t delete the queries that are submitted to it. OpenAI warns users not to input sensitive data because those prompts are stored and may be used to improve its AI models.
To add insult to injury, Samsung had previously banned its employees from using ChatGPT for work, and it lifted that ban just three weeks before these incidents. Now the manufacturer is attempting to contain the problem by capping ChatGPT prompts at 1,024 bytes.
While this is embarrassing for Samsung, it isn’t the only company to run into this problem. As Axios reports, corporations like Walmart and Amazon have dealt with similar incidents.