Venus Caruso

Safeguarding Company Data: Mitigating Risks Associated with ChatGPT for Work

As AI-powered tools like ChatGPT gain traction in the workplace, it is vital for employers to understand the risks associated with employees pasting sensitive company data into such platforms. Recent statistics reported by Cyberhaven, a cybersecurity company that specializes in protecting sensitive data within organizations, shed light on the pervasiveness of this issue and underscore the importance of taking proactive measures to protect confidential business information. This article highlights key statistics, challenges for employers, and risk mitigation measures employers can consider to help manage the risk of employees pasting sensitive company data into ChatGPT.


Key Findings

ChatGPT Use at Work

Cyberhaven reports that 9.3% of employees have tried using ChatGPT in the workplace, and 7.5% of employees have pasted company data into ChatGPT. Despite some companies blocking access, employee use of ChatGPT for work continues to grow exponentially.


Pasting Sensitive Business Data

The report reveals that 4.0% of employees have pasted sensitive data into ChatGPT at least once, and that sensitive data made up 11% of the company content pasted into ChatGPT. The most common types of pasted company data include sensitive/internal-only information, source code, and client data. At the average company, Cyberhaven found that only 0.9% of employees are responsible for 80% of the incidents of copying and pasting company data into ChatGPT. While less than one percent may not seem cause for concern, bear in mind that each paste event creates a risk of exposing an organization’s critical company data.


Challenges for Employers

Detecting and preventing data leakage to ChatGPT can be challenging for employers that lack robust procedures to mitigate this risk. Traditional security products are largely designed to protect files from being uploaded, not to track data once it has been copied and pasted into AI platforms like ChatGPT. Without that capability, an organization's existing security measures offer little protection against this form of leakage. In addition, confidential business data pasted into ChatGPT may lack the recognizable patterns that traditional security tools typically look for, such as credit card numbers or Social Security numbers. Without the contextual understanding needed to differentiate between harmless information and sensitive business data, an organization’s current security tools may fall short in this regard.
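
To make this limitation concrete, the short Python sketch below mimics the kind of pattern-based scanning traditional tools rely on. The regular expressions, function names, and sample strings are illustrative assumptions, not any vendor's actual implementation: structured identifiers like card numbers are flagged, while plainly sensitive but unstructured content passes through undetected.

```python
import re

# Patterns for structured identifiers that pattern-based tools typically match.
# Simplified for illustration; real DLP products use far more robust rules.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to cut false positives on card-like digit runs."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def flag_structured_data(text: str) -> list[str]:
    """Return labels for any matches of the structured patterns above."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            if label == "credit_card" and not luhn_valid(match.group()):
                continue
            hits.append(f"{label}: {match.group()}")
    return hits

# A card number is flagged, but plainly sensitive strategy notes are not:
print(flag_structured_data("Card 4111 1111 1111 1111, exp 12/26"))
print(flag_structured_data("Q3 plan: cut Acme Corp pricing 15% to win the renewal"))
```

The second call returns an empty list even though the text reveals confidential pricing strategy, which is exactly the contextual gap described above.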


Mitigating Risks

Pasting sensitive, confidential, and personal data stored by an organization into ChatGPT may lead to inadvertent exposure of information ranging from financial data and customer records to employee data, intellectual property, and trade secrets. While some instances of data pasting may be accidental, this behavior also raises insider threat concerns. Employees who deliberately share proprietary company information with ChatGPT may be acting with malicious intent, such as data theft or compromising a competitive advantage. Moreover, organizations operating in regulated industries (e.g., healthcare, finance, education) have a legal obligation to comply with strict data protection regulations (e.g., GDPR, HIPAA). Employees of such organizations who paste information governed by these regulations can expose the company to significant legal consequences and hefty financial penalties, and can damage the organization’s reputation through unauthorized use, disclosure, or access. To help mitigate these risks, employers must take preemptive measures. These include, but are not limited to, establishing policies and procedures, implementing technical safeguards, conducting regular security awareness programs, and fostering a culture and mindset of data security.


Policies and Procedures

Employers should develop comprehensive data classification policies that clearly outline what information can and cannot be shared with AI models like ChatGPT. These policies should be presented in a digestible manner, clearly communicated to all employees, and evaluated to ensure employees understand their contents and the importance of adhering to them. As with all policies, employers must regularly review and update them to reflect evolving data security requirements and industry regulations as part of their efforts to safeguard company data and remain in compliance with applicable laws.


Technical Controls

Employers should implement technical controls to prevent or minimize the risk of employees pasting sensitive information into ChatGPT, such as data loss prevention (DLP) tools that can detect and block sensitive company data from being copied and pasted into AI platforms. DLP tools are designed to identify patterns, keywords, or specific data formats to prevent inadvertent data leakage. In addition, employers should implement role-based access controls that restrict access to sensitive information and AI platforms, so that only authorized individuals have permission to interact with AI models and copy or paste data into them. Employers should also consider using automated redaction and anonymization techniques to remove or obscure sensitive company data before it is shared with AI models; this helps ensure that even if company data is pasted, the organization’s sensitive information remains protected.
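
As a rough sketch of what automated redaction might look like in practice, the Python example below masks common structured identifiers with placeholder tokens before a prompt leaves the organization. The patterns, placeholder tokens, and sample prompt are assumptions for illustration; a production DLP or redaction tool would use far more sophisticated detection.

```python
import re

# Illustrative redaction rules: each pattern is replaced with a placeholder
# token before the prompt is sent to an external AI service.
REDACTION_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def redact(text: str) -> str:
    """Mask matches of each rule with its placeholder token, in order."""
    for pattern, token in REDACTION_RULES:
        text = pattern.sub(token, text)
    return text

prompt = "Email jane.doe@example.com about the invoice; card 4111 1111 1111 1111."
print(redact(prompt))
# Email [REDACTED-EMAIL] about the invoice; card [REDACTED-CARD].
```

In a real deployment, a filter of this kind would typically run in a browser extension, proxy, or API gateway positioned between employees and the AI service, rather than relying on employees to apply it themselves.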


Security Awareness Training

Employers must educate employees about the risks and consequences of pasting sensitive company information into ChatGPT. This can be achieved through regular training and awareness programs that cover data protection best practices and emphasize the importance of handling sensitive company information securely and responsibly. In addition, employers should provide specific guidelines on the appropriate use of AI tools and platforms, including the prohibition on pasting sensitive business information, and clearly communicate the potential risks and disciplinary actions associated with non-compliance. Lastly, employers should establish reporting mechanisms for employees to confidentially report any concerns or incidents related to data leakage. By encouraging a culture of reporting and feedback, an organization is better positioned to identify and promptly address potential issues.


Data Security Culture

Promoting a culture of data security is an important component of any cybersecurity program because it helps reinforce responsible behavior, lets employees know what is expected of them, and mitigates the risk of employees pasting sensitive data into ChatGPT. This can be achieved by establishing comprehensive policies and procedures regarding data handling, including explicit instructions on what should not be shared with AI models; holding employees accountable for their actions regarding data protection; including data security as part of performance evaluations; regularly monitoring and evaluating the effectiveness of data protection measures; and conducting periodic audits to identify vulnerabilities or areas for improvement and implementing any necessary changes.


By implementing these proactive steps, employers are better positioned to reduce the risk of employees pasting sensitive business data into ChatGPT and to strengthen the safeguards that protect confidential, personal, and sensitive information within the organization.


Final Remarks

The statistics provided by Cyberhaven underscore the need for employers to address the risks associated with pasting sensitive company data into ChatGPT. By implementing clear company policies, training employees, deploying technical controls, and fostering a culture and mindset of data security, employers can better manage the risks of employee data leakage and protect their valuable company assets.


 

The information provided in this article is for general informational purposes only. Nothing stated in this article should be taken as legal advice or legal opinion for any individual matter. As legal developments occur, the information contained in this article may not be the most up-to-date legal or other information.

