LIVONIA, Mich. – Artificial intelligence tools like ChatGPT and Claude are everywhere -- from writing emails and creating marketing copy to editing videos and even helping build websites.
But cybersecurity experts warn that when employees use AI without their employer’s knowledge -- often called “Shadow AI” -- it can create serious data privacy and business risks.
At Stack Cybersecurity, marketing manager Amelia Smith says AI can be incredibly helpful when it’s used the right way.
“There’s certainly a lot of benefits to using AI when it’s approved and being done the right way,” Smith said. “Anything from data analysis or, you know, writing copy, doing graphics, video creation, video editing is especially great.”
What is “Shadow AI”?
Shadow AI is when workers use AI tools on the job without telling their employer -- or without following company policies. That can include pasting internal documents into a chatbot, uploading customer information, or using AI browser extensions that quietly connect to work accounts.
“The issues come in when your company doesn’t know about it,” Smith said. “That’s where we have issues with data loss and people using the tools in ways that they shouldn’t be using them and ways that are risky to the company and even risky to themselves.”
One survey found that 48% of employees have uploaded sensitive company or customer information into AI chats, and that 44% admitted to using AI at work in ways that violate company policy.
Why it matters: your data may not stay private
Even when companies pay for “enterprise” versions of AI tools, Stack Cybersecurity says, organizations should still treat chatbots cautiously.
Smith recommends avoiding anything sensitive -- period.
“Definitely anything even slightly sensitive: financials, client information -- anything that you wouldn’t want public -- I would be very, very hesitant putting into an AI,” she said.
Cybersecurity engineer Christian Hanchett adds that policy matters as much as technology.
“If you say nothing at all, then it’s the wild west,” Hanchett said. “You don’t know what people are doing, they don’t know what they’re doing. They’re just trying whatever works and seeing what happens.”
AI can be wrong -- and confident about it
Experts also warn that chatbots can “hallucinate,” meaning they may generate information that sounds believable but is incorrect.
“The main point of AI is it wants to make you happy -- if you tell it that was wrong, it will agree with you,” Smith said.
That’s why experts recommend always verifying facts, especially if an AI tool isn’t actively pulling information from reliable sources.
Can employers block AI tools?
Some companies are choosing to restrict AI tools on workplace devices. Stack Cybersecurity says it’s possible to block access to certain platforms and limit what software employees can install.
Hanchett demonstrated how tools can prevent access to common AI sites and reduce the risk of employees installing AI extensions.
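That kind of filtering usually comes down to a domain blocklist enforced at the network or device level. As a purely illustrative sketch -- not Stack Cybersecurity’s actual tooling, and with a made-up domain list -- the core check in such a filter looks something like this:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of popular AI chatbot domains (illustrative only).
BLOCKED_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a blocked domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

# A web filter would run a check like this on every outbound request.
print(is_blocked("https://claude.ai/new"))       # True
print(is_blocked("https://example.com/report"))  # False
```

A real filter also has to handle encrypted traffic, category-based lists, and AI features embedded inside otherwise-allowed sites, which is why companies typically rely on managed security tools rather than a script.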
But even with blocks in place, Hanchett says enforcement has limits.
“As long as it’s on a company device, you can control a user’s interactions with AI,” he said.
Personal devices, he added, are where it becomes tougher.
The best solution: talk about it
Both experts say employers should create clear guidelines -- what’s allowed, what’s not, and what data should never be entered into a chatbot.
“Having a list of what is allowed does more good than just saying you can’t use anything,” Hanchett said.
What to know before using AI at work
- Don’t paste in client details, financial data, proprietary documents, or anything you wouldn’t want public (a simple automated check is sketched after this list)
- Assume AI can be wrong -- verify facts with trusted sources
- Use only approved tools and follow your employer’s policies
- Be cautious of AI browser extensions that request access to accounts
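On the first point, one lightweight safeguard is a pre-paste check that flags obviously sensitive patterns before text ever reaches a chatbot. The following is a minimal, hypothetical sketch -- the patterns and the `flag_sensitive` name are inventions for illustration, not a real data-loss-prevention product:

```python
import re

# Illustrative patterns only; real data-loss-prevention tools are far more thorough.
PATTERNS = {
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-looking patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

draft = "Client SSN 123-45-6789, card 4111 1111 1111 1111"
hits = flag_sensitive(draft)
if hits:
    print("Hold on -- this draft appears to contain:", ", ".join(hits))
```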
Free resources to help companies get started
Stack Cybersecurity recently launched a free AI Hub that includes: