What's Up With Shadow A.I.?
Unauthorized use of A.I. is a real problem in today’s office environment. Many employees don’t realize it’s an issue, but it can be a serious one. Read on to find out how to address A.I. use with your employees.
By Andrew Gale, Vice President of Technical Services
What is Shadow A.I., exactly?
Shadow A.I. is basically that coworker who microwaves fish in the breakroom, but in digital form. They get the job done, but at what cost? Formally speaking, Shadow A.I. refers to the unauthorized use of artificial intelligence by employees without their organization’s approval, knowledge, or oversight. In practice, that means someone is quietly using generative A.I. tools like ChatGPT to analyze data, write emails, edit reports, or “polish” a presentation.
Employees love A.I. because it saves time and makes them look competent.
Employers fear A.I. because it raises concerns about data security, compliance, and the terrifying question: What else are people doing without telling us?
Why should you care as a business owner, compliance officer, information technology professional, or other relevant manager? Here’s how the chaos unfolds:
- Data breaches – Employees may unknowingly paste sensitive, confidential, or regulated information into generative A.I. tools. Congratulations, your private data may now be enjoying a digital vacation it was never approved for.
- Reputational damage – Poor-quality, incorrect, or biased A.I.-generated content can make your company look careless, tone-deaf, or wildly misinformed. Nothing says “trust us” like confidently publishing something that’s completely wrong.
- Regulatory non-compliance – Industries with strict data and privacy rules (looking at you, healthcare, finance, and legal) can land in hot water if generative A.I. is used improperly. Fines, audits, and uncomfortable meetings ensue.
- Bad decision-making – A.I. can hallucinate. Employees who treat its output like gospel may end up making strategic decisions based on vibes, guesses, or statistical nonsense.
A.I. is like giving everyone in the office a sports car without driver’s ed, seatbelts, or a speed limit. Sure, they’ll get where they’re going faster, but someone is absolutely crashing and burning.
So how can an organization mitigate these risks? First, each business must weigh the advantages and disadvantages of allowing employees to use these types of tools. A.I. is a two-sided coin, and the relative good it can do must be evaluated alongside the relative bad. Then decide which approach is best for the organization.
There are options.
Option 1: The Full Nope
If the business decides employees should not use these tools at all, that’s perfectly valid. But saying “don’t do it” and hoping for the best is not a viable strategy. You’ll need to:
- Clearly document the restriction in employee policies (yes, in writing, the kind lawyers like).
- Implement prevention tools such as content web filtering to block access to generative A.I. sites (a quick sketch of the idea appears below).
- Enforce it consistently, because nothing undermines a policy faster than “Well…it applies to everyone except for marketing.”

This approach works best if you’re comfortable trading speed and convenience for control and peace of mind.
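To make the filtering idea concrete, here is a minimal sketch of the kind of domain blocklist check a web content filter performs behind the scenes. The domain list and the is_blocked helper are illustrative assumptions, not an exhaustive or vendor-specific rule set; in practice this logic lives in your firewall, proxy, or DNS filter, not in a script you maintain by hand.

```python
from urllib.parse import urlparse

# Illustrative blocklist of well-known generative A.I. domains. A real
# deployment would rely on the filter vendor's maintained "Generative AI"
# category rather than a hand-rolled list like this one.
BLOCKED_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is, or is a subdomain of, a blocked domain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

if __name__ == "__main__":
    for url in ("https://chatgpt.com/some-chat", "https://example.com/report"):
        print(url, "->", "BLOCK" if is_blocked(url) else "allow")
```

Note the subdomain check: blocking only the bare domain is an easy way to leave gaps.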
Option 2: Yeah…Maybe
First and foremost, educate your employees! This is not optional, and no, “I watched a TikTok about A.I.” does not count as training. Employees need to understand both the risks and the proper use of generative tools. Thankfully, there’s a host of free training courses available that can be deployed quickly, because learning after a data breach is generally frowned upon.

Second, when possible, prioritize A.I. solutions that are built directly into business applications over public, cloud-based LLMs (Large Language Models).
Next, blocking access alone is not enough. Yes, content filtering can prevent employees from accessing certain generative A.I. tools, but people are creative.
A monitoring platform should also be implemented to:
- Enable audits of A.I. tool usage.
- Validate that approved tools are being used correctly.
- Identify potential exposure of sensitive data (see the sketch below).
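To make the audit idea concrete, here is a rough sketch of what a log review might look like. The proxy log format, the proxy.log path, and the two “sensitive data” regexes are assumptions for illustration only; a commercial monitoring or DLP platform does this job far more thoroughly.

```python
import re

# Assumed log format: one "TIMESTAMP USER URL" entry per line (illustrative).
AI_DOMAINS = ("chatgpt.com", "claude.ai", "gemini.google.com")

# Illustrative patterns that merely resemble sensitive data; real DLP rules
# are far more sophisticated than these two regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # looks like a U.S. SSN
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # looks like a payment card number
]

def audit_log(log_path: str) -> None:
    """Print A.I.-related log lines, flagging ones with sensitive-looking data."""
    with open(log_path) as log:
        for line in log:
            if not any(domain in line for domain in AI_DOMAINS):
                continue  # not a generative A.I. visit; ignore
            flagged = any(p.search(line) for p in SENSITIVE_PATTERNS)
            print("REVIEW" if flagged else "usage ", line.rstrip())

if __name__ == "__main__":
    audit_log("proxy.log")  # hypothetical proxy log file
```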
Lastly, incorporate a flexible, clearly defined A.I. governance policy into your employee handbook. This policy should include specific, real-world examples of what is allowed and what is absolutely not. Clear expectations eliminate confusion, reduce risk, and prevent the inevitable “Well, I didn’t know that counted” conversations.
In summary, train your people. Use controlled tools. Monitor intelligently. Write it down. Because managing A.I. without governance is like handing out power tools with no instructions and being shocked when someone shoots a nail through their hand.
Option 3: Do Nothing
A.I. has certainly flipped the business landscape over the last few years, and I truly expect it to get worse before it gets better. One could argue that a companywide A.I. usage policy with clear expectations carries far less risk than letting employees go hog wild without our knowledge.
Is doing nothing really an option? No, it’s not. We need to control how we work with A.I., and how our employees work with it, or we risk letting A.I. take charge. Help your employees understand the potential pitfalls of using A.I. without clear intention. And if you decide to allow some A.I. usage, be aware of those pitfalls as well.
Do you need help taking charge of A.I. usage in your business? Let us help you define your best practices. Call us today at 1-800-800-6197 for more information.
For more on A.I. hallucinations, see the article “What Are AI Hallucinations? Why Chatbots Make Things Up, and What You Need to Know” by Barbara Pazur for CNET.