OpenAI's chatbot, ChatGPT, gained considerable popularity in late 2022, but its rise has been accompanied by apprehension from major companies. With concerns around privacy risks and potential data leaks, companies such as Apple, Bank of America, and Samsung have chosen to restrict or ban the use of ChatGPT for work.
One of the primary reasons for the crackdown on ChatGPT is its use of conversation data to improve the model's accuracy. While users can opt out of having their conversations saved via their ChatGPT account settings or by submitting a request through a Google form, some companies remain concerned about the potential privacy risks of employees using the AI tool for work. The fear of confidential business information being exposed to outsiders — as in the case of Samsung, where internal source code and meeting recordings were accidentally leaked — has driven companies to take precautionary measures.
Most recently, Apple joined the list of companies banning ChatGPT for work over fears that confidential business data could be leaked. Other companies restricting or banning the use of ChatGPT for work include Bank of America, Calix, Citigroup, Deutsche Bank, Goldman Sachs, JPMorgan Chase, Northrop Grumman, Verizon, and Samsung.
The decisions by major companies to ban or restrict ChatGPT reflect ongoing concerns about privacy risks and business data security. While some organizations are willing to explore safe usage, others are developing their own internal AI tools to mitigate potential risks. As regulatory frameworks and guidelines continue to evolve, businesses will need to weigh productivity gains against the protection of sensitive information.
The information provided in this article is for general informational purposes only. Nothing stated in this article should be taken as legal advice or legal opinion for any individual matter. As legal developments occur, the information contained in this article may not be the most up-to-date legal or other information.