Shadow AI Discovery: What Your Organisation Doesn't Know It's Using

QL Security
shadow-ai-discovery shadow-ai-detection unsanctioned-ai-tools

We regularly speak with security leaders who express the same concern: “We know our people are using AI tools, but we don’t know which ones or how many.” This uncertainty reflects the reality of how AI has proliferated across organisations. Unlike traditional software rollouts, AI adoption has often been bottom-up, employee-driven and largely invisible to IT teams.

Research consistently shows that most knowledge workers now use generative AI tools, yet many organisations struggle to account for even half of the AI tools their employees rely on daily. This gap between actual usage and visibility creates genuine security risks that require systematic discovery, not guesswork.

Shadow AI discovery is not about catching people breaking rules. It’s about understanding your actual AI environment so you can manage it effectively. A systematic approach begins with understanding where to look and what signals to track.

Start with network traffic analysis

Your network infrastructure holds the first clues about AI tool usage. Most shadow AI detection methods begin with examining outbound traffic patterns to identify connections to known AI services.

Reviewing network and host-device internet traffic to major AI platforms gives you an initial assessment of the scale of shadow AI usage.
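
As a minimal sketch of that first pass, the snippet below matches proxy-log destinations against a watchlist of AI service domains and counts hits per platform. The watchlist, the `user url` log format and the `ai_hits` helper are illustrative assumptions; adapt them to your gateway's actual export format.

```python
# Sketch: flag proxy-log entries whose destination matches known AI
# service domains. Domain list and log format are illustrative.
from collections import Counter
from urllib.parse import urlparse

AI_DOMAINS = {
    "chatgpt.com", "claude.ai", "gemini.google.com",
    "copilot.microsoft.com", "perplexity.ai",
}

def ai_hits(proxy_lines):
    """Count requests per AI domain from 'user url' proxy log lines."""
    counts = Counter()
    for line in proxy_lines:
        _user, url = line.split(maxsplit=1)
        host = urlparse(url).hostname or ""
        # Match the domain itself or any subdomain of it.
        for domain in AI_DOMAINS:
            if host == domain or host.endswith("." + domain):
                counts[domain] += 1
    return counts

sample = [
    "alice https://chatgpt.com/c/123",
    "bob https://claude.ai/chat/abc",
    "alice https://intranet.example.com/home",
    "carol https://chatgpt.com/c/456",
]
print(ai_hits(sample))  # chatgpt.com: 2, claude.ai: 1
```

Even this crude count is usually enough to show leadership the scale of the problem before deeper analysis begins.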

Examine browser and application data

Modern browsers store extensive data about user activity that can reveal AI tool usage. Browser history analysis across corporate devices will show which AI platforms employees access most frequently. This works particularly well for web-based AI tools, which represent the majority of Shadow AI usage.
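
For web-based tools, a sketch like the one below queries a Chromium-style History database (a SQLite file) for visits to AI platforms. The two-column `urls` schema shown here is a simplified assumption based on Chromium's layout; copy the History file before querying it, as the running browser keeps it locked.

```python
# Sketch: pull AI-platform visits from a Chromium-style History file.
# Keyword list and schema are simplified, illustrative assumptions.
import os
import sqlite3
import tempfile

AI_KEYWORDS = ("chatgpt", "claude.ai", "gemini.google", "perplexity")

def ai_history(db_path):
    """Return (url, visit_count) rows that look like AI platform visits."""
    con = sqlite3.connect(db_path)
    rows = con.execute("SELECT url, visit_count FROM urls").fetchall()
    con.close()
    return [(u, n) for u, n in rows if any(k in u for k in AI_KEYWORDS)]

# Demo against a throwaway database with the same two columns.
demo = os.path.join(tempfile.mkdtemp(), "History")
con = sqlite3.connect(demo)
con.execute("CREATE TABLE urls (url TEXT, visit_count INTEGER)")
con.executemany("INSERT INTO urls VALUES (?, ?)", [
    ("https://chatgpt.com/c/1", 14),
    ("https://news.example.com/", 3),
])
con.commit()
con.close()
print(ai_history(demo))  # [('https://chatgpt.com/c/1', 14)]
```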

Application logs from collaboration platforms also contain valuable signals. Microsoft Teams, Slack and similar tools often integrate with AI services. For example, meeting notetakers are often overlooked, but can lead to serious and inadvertent data egress to unknown actors and jurisdictions. Review integration logs and third-party application permissions to identify AI tools that employees have connected to your collaboration stack.

If your policies allow, email gateway logs can reveal AI-related service notifications, password reset requests and subscription confirmations, all indicators of active AI tool accounts within your organisation.
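
Where policy permits, those gateway indicators can be filtered with something as simple as a sender-domain match. The vendor domains and `(sender, subject)` record shape below are illustrative assumptions, not a definitive list.

```python
# Sketch: surface AI account signals (resets, receipts, notifications)
# from email gateway logs. Sender domains are illustrative examples.
import re

AI_SENDER = re.compile(r"@(openai\.com|anthropic\.com|perplexity\.ai)\b")

def ai_account_signals(messages):
    """messages: (sender, subject) pairs exported from the gateway."""
    return [(s, subj) for s, subj in messages if AI_SENDER.search(s)]

logs = [
    ("noreply@openai.com", "Your ChatGPT password was reset"),
    ("billing@anthropic.com", "Your Claude subscription receipt"),
    ("it@example.com", "Patch window this weekend"),
]
print(ai_account_signals(logs))  # the two AI vendor messages
```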

Survey your technical teams first

Technical teams typically adopt AI tools earlier and more extensively than other departments. Survey your development, IT support and data analysis teams about their AI tool usage before expanding organisation-wide.

This approach serves two purposes: technical teams can provide detailed information about the tools they use, and they often influence AI adoption patterns across other departments. Understanding their usage helps predict where Shadow AI is likely to emerge elsewhere.

Ask specific questions about coding assistants (GitHub Copilot, Claude Code, OpenAI Codex, Amazon CodeWhisperer), data analysis tools and automated testing platforms. These tools are particularly common in technical environments and often connect to external AI services.

Deploy endpoint detection strategically

Endpoint detection and response (EDR) tools can identify AI-related applications installed on corporate devices. Configure your EDR system to monitor for AI application installations, browser extensions related to AI services and unusual network communication patterns from endpoints.

Focus particularly on browser extensions. Many AI tools operate through browser extensions that employees install without IT oversight. These extensions often have broad permissions to access web content and can represent significant data exposure risks.
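
A simple sweep of extension manifests already highlights the broad-permission cases. The sketch below walks a profile directory, reads each extension's manifest.json and flags risky grants; the risky-permission list and directory layout are illustrative assumptions modelled on Chromium-family browsers.

```python
# Sketch: inventory browser extensions and flag broad permissions.
# RISKY set and profile layout are illustrative assumptions.
import json
import os
import tempfile

RISKY = {"<all_urls>", "tabs", "webRequest", "clipboardRead"}

def scan_extensions(ext_root):
    """Return (extension name, risky permissions) for each manifest found."""
    findings = []
    for root, _dirs, files in os.walk(ext_root):
        if "manifest.json" in files:
            with open(os.path.join(root, "manifest.json")) as f:
                m = json.load(f)
            perms = set(m.get("permissions", []))
            perms |= set(m.get("host_permissions", []))
            findings.append((m.get("name", "?"), sorted(perms & RISKY)))
    return sorted(findings)

# Demo: fake profile directory containing one extension manifest.
profile = tempfile.mkdtemp()
ext_dir = os.path.join(profile, "abcdef", "1.0")
os.makedirs(ext_dir)
with open(os.path.join(ext_dir, "manifest.json"), "w") as f:
    json.dump({"name": "AI Summariser", "permissions": ["tabs"],
               "host_permissions": ["<all_urls>"]}, f)
print(scan_extensions(profile))  # [('AI Summariser', ['<all_urls>', 'tabs'])]
```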

Cloud access security brokers (CASB) provide another detection layer if your organisation uses them. CASBs can identify shadow AI usage by monitoring API calls and data transfers to unmanaged cloud services.

A four-week discovery framework

Use this framework to structure your shadow AI discovery efforts.

Week 1: Data collection

  • Export 30 days of firewall and proxy logs
  • Generate DNS query reports for AI-related domains
  • Collect browser history data from sample devices
  • Review email gateway logs for AI service notifications

Week 2: Technical team survey

  • Survey development teams about coding assistants
  • Survey data teams about analysis and visualisation tools
  • Survey IT support about automation and chatbot tools
  • Document integration points with existing systems

Week 3: Pattern analysis

  • Identify most frequently accessed AI platforms
  • Map usage patterns by department and role
  • Correlate network traffic with user accounts
  • Flag high-risk or unknown AI services
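
The week-3 analysis steps above can be sketched as a single pass over correlated records: group platforms by department and flag anything not on your known-service list. The record layout and `KNOWN` set are illustrative assumptions.

```python
# Sketch: map AI usage by department and flag unknown services.
# Record shape (user, department, domain) is an illustrative assumption.
from collections import defaultdict

KNOWN = {"chatgpt.com", "claude.ai", "gemini.google.com"}

def analyse(records):
    by_dept = defaultdict(set)
    unknown = set()
    for _user, dept, domain in records:
        by_dept[dept].add(domain)
        if domain not in KNOWN:
            unknown.add(domain)  # flag for risk review
    return dict(by_dept), unknown

records = [
    ("alice", "Engineering", "chatgpt.com"),
    ("bob", "Finance", "claude.ai"),
    ("carol", "Engineering", "ai-notes.example"),  # not on the known list
]
depts, unknown = analyse(records)
print(depts)    # platforms per department
print(unknown)  # {'ai-notes.example'}
```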

Week 4: Shadow AI inventory

  • Compile a comprehensive list of identified AI tools
  • Categorise tools by risk level and business function
  • Document data flows and integration points
  • Create a baseline for ongoing monitoring
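
A week-4 inventory can start as nothing more than a structured record per tool, ordered so high-risk entries surface first for review. The field names and three-level risk scale below are illustrative assumptions, not a prescribed taxonomy.

```python
# Sketch: a minimal shadow-AI inventory record with risk triage.
# Fields and risk levels are illustrative assumptions.
from dataclasses import dataclass

RISK_ORDER = {"high": 0, "medium": 1, "low": 2}

@dataclass
class AITool:
    name: str
    business_function: str
    risk: str       # "high" | "medium" | "low"
    data_flow: str  # e.g. "uploads meeting audio to vendor cloud"

def triage(inventory):
    """Order the inventory so high-risk tools come first."""
    return sorted(inventory, key=lambda t: RISK_ORDER[t.risk])

tools = [
    AITool("ChatGPT", "drafting", "medium", "prompts sent to vendor"),
    AITool("Meeting notetaker", "minutes", "high", "audio leaves jurisdiction"),
    AITool("Grammar checker", "editing", "low", "text snippets to vendor"),
]
print([t.name for t in triage(tools)])
# ['Meeting notetaker', 'ChatGPT', 'Grammar checker']
```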

Address the compliance dimension

Shadow AI discovery must account for regulatory requirements specific to your sector. Financial services organisations need to identify AI tools that might process customer financial data. Healthcare organisations must flag any AI tools that could access patient information.

Understanding which AI tools your organisation uses may help determine which EU AI Act obligations could apply to your operations. Organisations should consult legal counsel for specific compliance requirements under the regulation.

Consider reviewing your data protection impact assessments (DPIAs) against your discovered AI tools, as some shadow AI tools may process personal data in ways that could require DPIA updates. Consult your data protection officer or legal counsel for specific requirements.

Build ongoing detection capabilities

Shadow AI discovery is not a one-time exercise. New AI tools emerge constantly, and employee adoption patterns evolve rapidly. Build detection capabilities into your AI security monitoring programme.

Configure automated alerts for connections to new AI services. Update your acceptable use policies to require disclosure of AI tool usage. Establish regular review cycles to reassess your AI tool inventory.
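
The alerting logic reduces to comparing observed traffic against your approved baseline. A minimal sketch, assuming an illustrative AI-domain watchlist and a baseline set from the week-4 inventory:

```python
# Sketch: alert on AI services seen on the wire but absent from the
# approved inventory. Watchlist and baseline are illustrative.
AI_WATCHLIST = {"chatgpt.com", "claude.ai", "perplexity.ai", "deepseek.com"}

def new_ai_services(approved, observed_domains):
    """Return AI domains in traffic that are not yet approved."""
    return sorted(d for d in set(observed_domains)
                  if d in AI_WATCHLIST and d not in approved)

approved = {"chatgpt.com"}  # from the baseline inventory
observed = ["chatgpt.com", "deepseek.com", "intranet.example.com"]
print(new_ai_services(approved, observed))  # ['deepseek.com']
```

In practice the watchlist needs regular updates as new AI services launch, which is why this belongs in an ongoing monitoring programme rather than a one-off script.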

Consider implementing an AI tool approval process that makes it easier for employees to request access to AI tools through official channels rather than adopting them independently.

Run an AI amnesty to surface hidden usage, and take a just-culture approach to user engagement. Identify the tools and use cases, replacing them with approved alternatives where possible, but also examine whether the productivity gains can be replicated elsewhere.

Move beyond discovery to risk management

Discovery reveals what AI tools your organisation uses, but it doesn’t assess the risks they represent or help you manage them effectively. Different AI tools present different risk profiles depending on their data handling practices, security controls and integration points with your systems.

Once you’ve completed your Shadow AI discovery, the logical next step is comprehensive risk assessment. Our AI Security Gap Analysis provides structured evaluation of your discovered AI tools against security frameworks and regulatory requirements. This assessment helps prioritise which shadow AI tools require immediate attention and which can remain in use with appropriate controls.

Understanding what AI tools your organisation uses is the foundation of effective AI governance. After all, you can’t manage what you can’t measure. The discovery process requires systematic effort, but it’s essential preparation for managing AI risks across your organisation.

Move Beyond Discovery

Once you know what AI tools your organisation is using, our AI Security Gap Analysis assesses the risks they represent and helps you manage them with confidence.