Glossary

Model Context Protocol (MCP) Server

A runtime component that exposes tools, data sources and system capabilities to AI agents through a standardised interface, allowing those agents to read files, execute commands and interact with internal systems on a user's behalf.

Term: Model Context Protocol (MCP) Server

A Model Context Protocol server is a runtime component that exposes tools, data sources and system capabilities to AI agents through a standardised JSON-RPC 2.0 interface, allowing those agents to read files, execute commands and interact with internal systems on a user’s behalf.

Why it matters

MCP servers have become the connective tissue between AI coding assistants, agentic development tools and the underlying systems they act upon. When a developer connects an AI agent to a local MCP server, that server typically inherits the developer’s permissions across source repositories, cloud consoles, ticketing systems, internal APIs and sometimes production infrastructure. The agent then acts through this server, often without an audit trail that security teams can inspect.

This creates a category of risk that does not map cleanly onto existing controls. Endpoint detection sees a legitimate process, network monitoring sees authenticated API calls and identity tooling sees the developer’s own credentials in use. The lateral movement and privilege escalation paths opened up by a compromised or misconfigured MCP server look like normal developer activity, which in our experience is why many security teams have not yet added them to their threat model.

For CISOs in regulated sectors, the governance implications are immediate. If an AI agent can read patient records, council case files or client matter data through an MCP connection, that access is likely to fall within the scope of existing data protection and access control obligations, and should be inventoried, justified and controlled on a similar basis to human access. In most organisations we engage with we find early on that this inventory hasn’t been completed.

How it works in practice

A typical MCP deployment in a development environment looks something like this. A developer installs an AI coding assistant in their IDE. The assistant connects to one or more MCP servers, some running locally, some hosted by third parties. Each server advertises a set of tools, for example: read_file, run_shell_command, query_database, fetch_url, post_to_slack. The agent decides when to invoke these tools based on instructions from the developer and context retrieved from the workspace.

The risks typically surface in several recurring patterns.

Prompt injection through context. An MCP server that reads files or web content can pull attacker-controlled instructions into the agent’s context window. The agent then executes those instructions using the developer’s credentials. A poisoned README, a malicious issue comment or a crafted error message can become an execution vector.

Over-scoped tool permissions. MCP servers are often installed with default configurations that grant broad filesystem or shell access. An agent that needs to read one project directory ends up with access to SSH keys, cloud credentials and unrelated client data sitting elsewhere on the machine.

Third-party server trust. Public MCP server registries are growing quickly and operate with limited centralised vetting, signing or provenance controls. A compromised or malicious server published under a familiar name can exfiltrate secrets, modify code or pivot into connected systems.

Shadow deployment. Developers add MCP servers to their personal setups faster than security teams can inventory them. The result is a parallel access layer to sensitive systems that sits outside SSO, outside privileged access management and outside change control.

Mapping exposure starts with four questions: Which AI agents and IDE assistants are in use across the organisation? Which MCP servers are connected to those agents? What credentials and data sources do those servers reach? What logging exists to reconstruct agent actions after the fact?

In our engagements, most organisations cannot answer the first question with confidence, let alone the fourth.

A structured AI Security Gap Analysis works through these questions methodically, producing an inventory of agent and MCP exposure, a mapping to existing controls and a prioritised set of governance actions. For organisations in NHS, local government and professional services contexts, this is the practical bridge between knowing the risk exists and being able to demonstrate it is managed.

If your organisation has AI coding assistants or agentic development tools in use, schedule a 30-minute AI Security Gap Analysis to map your MCP server exposure before it becomes an incident.

Want this in context?

See how this term fits into the broader programme of work.