Understanding Your AI Security Scope: A Simple Framework


One of the most common mistakes organisations make with AI Security is treating all AI usage the same. They either over-engineer controls for simple use cases or dangerously under-protect complex implementations.

The reality is that your security requirements depend entirely on how you’re using AI. An employee using ChatGPT to brainstorm ideas has fundamentally different security needs than a team building a customer-facing chatbot powered by your proprietary data.

AWS has recognised this challenge and created the Generative AI Security Scoping Matrix: a framework that classifies AI usage into five distinct scopes based on ownership and control. It's becoming one of the most widely referenced models in the industry, cited by OWASP, NIST practitioners and security teams globally.

Understanding where your organisation sits on this spectrum is helpful for implementing proportionate, effective AI Security.

The Five Scopes of AI Adoption

The AWS framework classifies AI usage from least ownership (Scope 1) to greatest ownership (Scope 5). Most organisations operate across different scopes simultaneously — and that’s ok.

How Do You Use AI?

Your security requirements depend on your level of ownership and control. Identify your scope to understand what protection you need.

  • Scope 1 (Consumer Apps): Using public AI services like ChatGPT or Gemini directly, with no contractual relationship. Example: staff using free AI chatbots to draft emails or brainstorm ideas.

  • Scope 2 (Enterprise Apps): Using third-party business software with AI features embedded, under enterprise agreements. Example: Microsoft 365 Copilot, Salesforce Einstein, or HubSpot AI features.

  • Scope 3 (Pre-Trained Models): Building applications using foundation models via APIs, integrating AI into your own systems. Example: a customer service chatbot using the Claude API with your knowledge base.

  • Scope 4 (Fine-Tuned Models): Customising existing models with your organisation's data to create specialised AI systems. Example: a medical summarisation model trained on your clinical terminology.

  • Scope 5 (Self-Trained Models): Building and training AI models from scratch using your own data and infrastructure. Example: a proprietary fraud detection or recommendation engine built in-house.

Key Security Considerations Across All Scopes

Whichever scope you operate in, five disciplines always need attention: governance, legal and privacy, risk management, controls, and resilience. Each is covered in detail later in this article.

Scope 1: Consumer Applications

At this level, staff use public AI services like ChatGPT, Claude or Gemini directly. There’s no enterprise agreement — only individual accounts operating under standard consumer terms of service.

What you don’t control: the model, the training data and, perhaps most importantly, the data retention policies and how your inputs might be used.

Key risks:

  • Sensitive data inadvertently leaking through staff prompts, with no guarantee it won’t surface elsewhere
  • No contractual protection for your information
  • Staff using AI without organisational awareness (shadow AI)
  • Inconsistent quality and potential for misinformation

Security focus: Usage policies, acceptable use training, data classification guidance. The priority here is governance and awareness rather than technical controls.

Scope 2: Enterprise Applications

Here, you’re using third-party SaaS products with embedded AI features under enterprise agreements. Think Microsoft 365 Copilot, Salesforce Einstein, HubSpot AI features or Notion AI.

What you gain: Contractual relationships, enterprise data protection terms, configurable settings and vendor security commitments.

What you still don’t control: The underlying model architecture, training data or core AI behaviour.

Key risks:

  • Data sharing with AI features you may not have evaluated
  • Vendor access to your data for model improvement
  • Feature changes that alter security posture
  • Supply chain risk through AI-enabled vendors

Security focus: Vendor due diligence, contract review, configuration management, and ongoing supplier risk assessment.

Scope 3: Pre-Trained Models via API

At Scope 3, you’re building your own applications using foundation models through APIs — services like Amazon Bedrock, Azure OpenAI Service, Anthropic API or Google Vertex AI.

What you control: Your application logic, prompts, data flows and how the model integrates with your systems.

What you don’t control: The underlying model weights and training data.

Key risks:

  • Prompt injection attacks
  • Data leakage through model responses
  • Retrieval-Augmented Generation (RAG) vulnerabilities
  • API security and access control
  • Output quality and hallucination risks

Security focus: Application security, prompt engineering best practices, input/output validation, access controls and technical guardrails.
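
To make the input/output validation and guardrail point concrete, here is a minimal, illustrative Python sketch of the kind of validation layer that might sit between your application and a foundation model API. The patterns, redaction rules and the call_model stand-in are assumptions for illustration only (call_model is not a real provider SDK call), and production guardrails would use far richer detection than a handful of regexes.

    import re

    # Simplified deny-list of instruction-override phrases commonly seen in
    # prompt-injection attempts (illustrative only).
    INJECTION_PATTERNS = [
        r"ignore (all|any|previous) instructions",
        r"reveal (your|the) system prompt",
    ]

    # Crude examples of sensitive-looking strings to redact from model output
    # before it reaches the user (UK National Insurance number, email address).
    OUTPUT_REDACTIONS = [
        (re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I), "[REDACTED NI NUMBER]"),
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED EMAIL]"),
    ]

    def validate_input(user_prompt: str) -> str:
        """Reject prompts that look like injection attempts before they reach the model."""
        for pattern in INJECTION_PATTERNS:
            if re.search(pattern, user_prompt, re.IGNORECASE):
                raise ValueError("Prompt rejected by input guardrail")
        return user_prompt

    def sanitise_output(model_response: str) -> str:
        """Redact sensitive-looking strings from the model's response."""
        for pattern, replacement in OUTPUT_REDACTIONS:
            model_response = pattern.sub(replacement, model_response)
        return model_response

    def call_model(prompt: str) -> str:
        # Stand-in for your provider SDK call (Anthropic, Bedrock, Azure OpenAI, etc.).
        return f"Echoing for illustration: {prompt}"

    def guarded_completion(user_prompt: str) -> str:
        return sanitise_output(call_model(validate_input(user_prompt)))

    if __name__ == "__main__":
        print(guarded_completion("Summarise our returns policy for a customer."))

The specific patterns matter less than the principle: every prompt and every response passes through a control point you own, which is exactly what Scope 3 gives you that Scopes 1 and 2 don’t.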

Scope 4: Fine-Tuned Models

Scope 4 involves taking existing foundation models and customising them with your organisation’s data. This creates specialised models tailored to your domain — a medical summarisation model trained on British clinical terminology, for example.

What you control: The fine-tuning data, the resulting customised model and deployment configuration.

What you inherit: The base model’s characteristics, potential biases and architectural limitations.

Key risks:

  • Training data poisoning
  • Model theft or extraction
  • Inherited biases amplified by fine-tuning
  • Intellectual property concerns with training data
  • Compliance requirements for the data used

Security focus: Training data governance, model versioning and access control, bias testing and secure model storage.
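
To illustrate the training data governance and versioning point, the sketch below (a hedged Python example, not a prescribed tool) hashes a fine-tuning dataset and records a small manifest before any training run, so each model version can later be traced back to the exact data behind it. The file names, manifest fields and the single email-based PII check are illustrative assumptions.

    import hashlib
    import json
    import re
    from datetime import datetime, timezone
    from pathlib import Path

    # Illustrative path to a JSONL fine-tuning dataset; adjust to your pipeline.
    DATASET_PATH = Path("finetune_dataset.jsonl")

    # Very crude example check: flag records that contain an email address.
    EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

    def dataset_manifest(path: Path) -> dict:
        """Hash the dataset and count records (and possible PII hits) so the
        resulting model version can be tied to the exact data it was trained on."""
        digest = hashlib.sha256()
        records = 0
        flagged = 0
        with path.open("rb") as f:
            for line in f:
                digest.update(line)
                records += 1
                if EMAIL_PATTERN.search(line.decode("utf-8", errors="ignore")):
                    flagged += 1
        return {
            "dataset": str(path),
            "sha256": digest.hexdigest(),
            "records": records,
            "records_with_possible_pii": flagged,
            "captured_at": datetime.now(timezone.utc).isoformat(),
        }

    if __name__ == "__main__":
        manifest = dataset_manifest(DATASET_PATH)
        Path("dataset_manifest.json").write_text(json.dumps(manifest, indent=2))
        print(json.dumps(manifest, indent=2))

Storing that manifest alongside the fine-tuned model gives you a simple audit trail when a customer or regulator asks what the model was trained on, and a baseline for spotting unexpected changes to the training data.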

Scope 5: Self-Trained Models

The most extensive scope involves building and training AI models from scratch using your own data and infrastructure. You own every aspect of the model.

What you control: Everything — architecture, training data, training process, deployment and ongoing maintenance.

What you’re responsible for: Everything — including all the security considerations that cloud providers would otherwise manage.

Key risks:

  • Full spectrum of AI Security threats
  • Training infrastructure security
  • Model integrity and availability
  • Adversarial attacks on model behaviour
  • Significant compliance obligations

Security focus: Full AI security programme covering governance, technical controls, monitoring, incident response and resilience.
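
One small, concrete slice of that programme is model integrity: making sure the artefact you serve is the artefact you trained. The hedged Python sketch below compares a model file's checksum against the value recorded (and access-controlled) when the training run completed; the paths and file names are placeholders, and real deployments would typically combine this with signing and pipeline controls.

    import hashlib
    from pathlib import Path

    # Placeholder paths: the trained model artefact and the hash recorded
    # when the training run completed.
    MODEL_PATH = Path("models/fraud_detector_v3.bin")
    EXPECTED_SHA256_PATH = Path("models/fraud_detector_v3.sha256")

    def file_sha256(path: Path) -> str:
        """Stream the file through SHA-256 so large model artefacts are handled safely."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_model_integrity() -> None:
        expected = EXPECTED_SHA256_PATH.read_text().strip()
        actual = file_sha256(MODEL_PATH)
        if actual != expected:
            # Refuse to serve a model that doesn't match the recorded training output.
            raise RuntimeError(f"Model integrity check failed: {actual} != {expected}")

    if __name__ == "__main__":
        verify_model_integrity()
        print("Model artefact matches the recorded hash; safe to load.")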

Five Security Disciplines Across All Scopes

Regardless of which scope you operate in, AWS identifies five security disciplines that must be addressed — though the specific requirements vary by scope:

Governance and compliance — The policies, procedures and reporting needed to enable the business while minimising risk. At Scope 1, this might be an acceptable use policy. At Scope 5, it’s a comprehensive AI governance framework.

Legal and privacy — Regulatory, legal and privacy requirements for using or creating AI solutions. GDPR implications, intellectual property considerations and sector-specific requirements all apply differently across scopes.

Risk management — Identification of potential threats and recommended mitigations. Lower scopes focus on third-party risk; higher scopes require threat modelling specific to your AI architecture.

Controls — Implementation of security controls to mitigate identified risks. From usage policies at Scope 1 to technical guardrails, model access controls and adversarial testing at Scope 5.

Resilience — How to architect AI solutions to maintain availability and meet business requirements. Critical for any production AI system, but the complexity scales significantly with scope.

Why This Matters

Understanding your AI Security scope isn’t just an academic exercise. It directly determines:

What you need to document — A Scope 1 environment needs usage policies and training. A Scope 3+ environment needs threat models, risk assessments and technical architecture documentation.

What stakeholders will ask — When customers send AI Security questionnaires (increasingly common), your answers depend entirely on your scope. “We use Microsoft Copilot under enterprise agreement” is a different answer to “We’ve built a customer-facing chatbot using Claude API with RAG.”

What controls are proportionate — Over-engineering security for Scope 1 wastes resources. Under-protecting Scope 3+ creates genuine risk. The framework helps you invest appropriately.

What expertise you need — Scope 1-2 requires governance and policy expertise. Scope 3+ adds technical AI Security skills. Knowing your scope helps you identify capability gaps.

How Our Services Map To Your AI Scope

Different scopes require different services. Here’s how our offerings align to your AI Security needs:

Service                      Scope 1   Scope 2   Scope 3   Scope 4   Scope 5
AI Security Gap Analysis        ●         ●         ●         ●         ●
AI Security Programmes          ●         ●         ●         ●         ●
AI Security Projects            ○         ●         ●         ●         ●
AI Advisory                     ○         ○         ●         ●         ●
AI Act Preparedness             ○         ○         ●         ●         ●
ISO 42001 Implementation        ○         ○         ●         ●         ●

● Recommended for this scope | ○ Available but not typically required

AI Security Gap Analysis applies across all scopes. Whether you’re managing Scope 1 shadow AI or building Scope 5 custom models, you need to understand your current posture before you can improve it.

AI Security Programmes provide ongoing governance that scales with your AI adoption. Even Scope 1 organisations will benefit from continuous policy management and supplier oversight as AI-enabled tools proliferate.

AI Security Projects become relevant from Scope 2 onwards, where technical discovery (shadow AI), testing (AI penetration testing) and control implementation (LLM guardrails) add value.

AI Advisory serves organisations at Scope 3 and above, where strategic decisions about AI architecture, governance frameworks and board-level risk management require specialist guidance.

AI Act Preparedness is most relevant for Scope 3+ organisations building or deploying AI systems that may fall under regulatory requirements — especially those with customer-facing AI applications.

ISO 42001 Implementation provides a comprehensive management system framework for organisations at Scope 3 and above who need demonstrable, certified AI governance.

Getting Started

The first step is understanding where you sit on the scoping spectrum. Most organisations discover they’re operating across multiple scopes — often with Scope 1 shadow AI they weren’t fully aware of.

An AI Security Gap Analysis will map your current AI usage, identify which scopes apply and provide a prioritised roadmap for implementing appropriate controls.

If you’re not sure where to start, get in touch. We’ll help you understand your AI Security scope and what it means for your organisation.

Understand Your AI Security Scope

Not sure which security controls your organisation needs? Our AI Security Gap Analysis will map your current AI usage, identify your scopes, and provide a prioritised roadmap for implementing appropriate controls.