
Artificial Intelligence (AI) platforms like ChatGPT (Generative Pre-trained Transformer), Gemini, Claude, and Copilot are increasingly popular tools for workplace productivity and efficiency. However, these platforms have also become a leading cause of workplace data leaks, often without the contributing employees realizing it. Simple prompts like, “Organize this report,” or “Summarize this document,” can result in confidential company documents being used by AI platforms to train their models and generate responses to other users’ prompts.
How does AI use information?
Unless users opt out, popular platforms like Gemini and ChatGPT use any information uploaded to the platform to improve their responses. This means documents and information copied and pasted into the platform could be used to answer another user’s prompt. Once information is provided to an unsecured AI platform, it is no longer in the hands of the person who uploaded it; it can be used, stored, or exposed in ways the uploader may never know about.
Solutions for Employers:
Educate employees on AI algorithms.
Most employees who use AI for efficiency are unaware of the risks that come with sharing data with these platforms. By hosting regular information sessions, employers can help employees understand both the dangers and the benefits of using AI for work.
How Each Platform Uses Algorithms:
- ChatGPT: A generative AI platform built on large language models (LLMs) that responds to user questions and requests. ChatGPT can be used like a search engine to find information quickly, and it can also generate graphs, summarize content, organize information, and create images. A major concern with ChatGPT is that, unless users opt out, the platform uses uploaded content to further train its models, which means any information uploaded might appear in a future response for a different user.
- Gemini: Similar to ChatGPT, Gemini is Google’s AI platform, trained on LLMs and large data sets. Made to work like a super-powered brain, Gemini can analyze large amounts of data or entire books, generate deep research, connect to Google accounts to learn more about its users, and review pieces of writing. The major concern with Gemini is that it collects large amounts of information through users’ Google accounts: employees might be connected to shared drives or data stores they are unaware of, which the platform can draw from.
- Claude: An AI model trained on LLMs and known for its “Constitutional” approach to AI safety. User inputs and outputs are deleted after 30 days. The platform can process very large amounts of text, more than many comparable generative pre-trained transformers, which makes it useful for writing business plans, editing text, and translating languages. Risks include misinformation, since the platform does not have the same internet access as some other GPTs, as well as data breaches and malicious code appearing in either inputs or outputs.
- Copilot: Microsoft 365 Copilot is powered by natural language processing and machine learning algorithms. Because of its integration with Microsoft 365, Copilot can use stored organizational data to tailor responses, but it does not use that data to influence outputs outside the organization. Copilot is meant to be an assistant, automating busy work, organizing tasks, generating plans, and analyzing data; its integration into the Microsoft platform also makes it useful for document creation, communication, and project management. The biggest concern with Copilot is oversharing of information within an organization: business-specific data may be exposed organization-wide if users have broader access to sensitive files than intended.
Information Session Examples:
Below are a few examples of topics to consider for employee information sessions. When choosing topics, consider asking your team what questions they have about using AI at work, and then turn those questions into additional session topics.
- Prompt Writing for Efficiency: How to craft prompts to maximize efficiency when using AI platforms, focusing on how to properly structure a prompt and how to be specific when making requests.
- AI Algorithms and Information Use: Explain how popular platforms work and how they utilize information provided by the user. Touch on the dangers of providing AI with sensitive information and give tips for how to keep confidential information private.
- Choosing the Right Platform: Describe what to look for when choosing an AI platform. Explain the risks of smaller, less reputable platforms, and highlight big-name platforms and how they can be used.
Write an AI policy for employees.
To ensure safety and confidentiality, employers must have a written AI policy that is easy to understand and accessible to all employees. Emphasize how AI can be used for efficiency, but address the do’s and don’ts of uploading information. Click the links below to view example policy guidelines:
- AI Security Policy: Exploring Threats to Workforce AI Tools and Building Policy
- Generative AI Security Policy Templates and Best Practices
- Why You Need an AI Policy in 2025 & How to Write One [+ Template]
Provide monitored AI platforms.
To prevent the use of personal AI accounts, consider providing business-tailored versions of popular platforms, like Microsoft 365 Copilot, ChatGPT Business, Gemini Enterprise, or Claude for Enterprise, that can be easily monitored and used company-wide. These plans are designed to keep sensitive information confidential and protected, and each platform has security measures that make it safer for companies to use. See a few examples below:
- ChatGPT Business: This plan tailors responses to your business by using business information. However, data is encrypted, protected with secure sign-on, and never used to train other models.
- Gemini Enterprise: Gemini Enterprise allows teams to share one AI platform with built-in security measures. In Gemini Enterprise, data is not used to train other models and is kept within the company.
- Claude for Enterprise: Claude for Enterprise is designed with large organizations in mind, using organizational data for context to connect employees across the board. With security measures in place, Claude for Enterprise ensures that company data will not be used to train other models.
- Microsoft 365 Copilot: Copilot allows for collaboration across teams because it is integrated into Microsoft applications like Word, Excel, and PowerPoint. Prompts and responses are used to improve responses within the company, but they are not used to train the underlying large language models.
The Golden Rule for Workplace AI Use:
As a rule of thumb, never upload sensitive material to generative AI platforms: client documents, company data, or any information that is not public. Treat every AI interaction as public, even if the platform is classified as private. As an employer, take steps to make AI use productive rather than risky: educate employees on proper use, have a policy in place to protect data, and consider business-tailored platforms like those listed above, where sensitive information is better protected.