ISO 42001 Audit Evidence: What Certifiers Actually Ask UK Organisations to Show

FAQ on the seven evidence categories UK certifiers request in ISO 42001 audits, stage 1 vs stage 2 differences and realistic preparation timelines.

This FAQ answers the practical questions UK organisations ask when preparing for ISO 42001 certification, specifically around the evidence auditors request during stage 1 and stage 2.

It is written for compliance leads, AI governance owners and security managers who have read the standard and now need to know what artefacts the certifier will open on day one.

We cover scope, timelines, the seven evidence categories, common gaps and how to close them before your auditor arrives.

This page offers general guidance and does not constitute legal or regulatory advice; organisations should seek independent professional advice for their specific certification circumstances.

What evidence do auditors require for ISO 42001 certification in the UK?

UK certifiers request seven evidence categories for ISO 42001: the AI management system policy, AI risk register, AI system impact assessments, supplier and third-party controls, operational monitoring records, incident response logs and management review minutes.

Each artefact must show dated entries, named owners and version control demonstrating the system operates continuously, not just at audit time. Auditors sample records across the period since your system went live, so a policy approved last week with no operational trail behind it fails stage 2.

The expectation is a living management system with evidence accumulating month on month. Based on QL Security’s client experience, most first-time applicants underestimate the documentary depth required, particularly around supplier controls and monitoring records, which are where we have observed stage 2 findings cluster. For a longer narrative walkthrough of each evidence category, see our companion blog post on the ISO 42001 audit evidence pack.

What is the difference between stage 1 and stage 2 ISO 42001 audits?

Stage 1 is a documentation review where the certifier confirms your AI management system policies, scope and risk approach exist on paper. Stage 2 is an operational audit testing whether the system runs in practice through interviews, sampled records and evidence walkthroughs.

Stage 1 finds gaps; stage 2 confirms the system genuinely operates.

In our experience, the gap between the two stages typically runs four to twelve weeks, the window organisations use to close stage 1 findings and accumulate enough operational evidence for stage 2 to sample meaningfully. Treating stage 1 as a dress rehearsal rather than a real audit is among the most common preparation errors we see. If your policies are still being drafted at stage 1, stage 2 will not save you.

How long does it take to prepare an ISO 42001 audit evidence pack?

Based on our client experience, most UK organisations need eight to sixteen weeks to assemble a complete ISO 42001 evidence pack from scratch. Timelines depend on AI system inventory size, existing ISO 27001 maturity and whether impact assessments have been completed.

Organisations with active AI governance can compress this to six weeks; those starting fresh typically need a full quarter. The gating factor is rarely policy drafting, which can be done quickly, but operational evidence: monitoring records, incident logs and management review minutes all need real elapsed time to accumulate. A policy dated three days before stage 2 with no supporting trail will be flagged.

We recommend organisations begin evidence collection at least one quarter before their target certification window.

Can we reuse our ISO 27001 evidence for ISO 42001?

Partially, and this is one of the more useful efficiencies available. Management review processes, supplier control frameworks, incident response procedures and document control practices can be extended from your ISO 27001 management system into ISO 42001 with adaptation.

What cannot transfer is anything AI-specific: AI system impact assessments, AI risk register entries, model monitoring records and AI-specific supplier clauses around training data and model behaviour.

Auditors familiar with both standards will probe exactly where the AI management system adds substance beyond the information security management system. See our comparison of ISO 42001 and ISO 27001 for a structured walkthrough of which controls transfer and which require fresh AI-specific work. In our experience, a recurring stage 2 finding is an ISO 42001 policy that simply repeats ISO 27001 language without addressing AI-specific risks such as bias, model drift or training data provenance.

What does the AI risk register need to contain?

The AI risk register must list each AI system in scope, the risks identified for that system, the controls applied, the risk owner and the review cadence. Auditors look for evidence the register is regularly updated, not left static. Risks should include both information security risks and AI-specific risks: bias, fairness, robustness, explainability, data quality and unintended use.

Each entry needs a clear treatment decision, a residual risk position and a named owner.

The register must reconcile with your AI system inventory and your impact assessments, so an AI system flagged as high-impact should appear with corresponding high-priority entries in the register. Inconsistencies between these three documents are a recurring stage 1 finding.
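As a rough illustration of the reconciliation described above, the cross-check between inventory, impact assessments and risk register can be sketched in a few lines. The system names, field names and impact levels here are hypothetical, not drawn from the standard:

```python
# Hypothetical sketch: cross-check the three artefacts auditors reconcile
# at stage 1. All identifiers and field names are illustrative.

inventory = {"chatbot-v2", "credit-scoring-v1", "doc-summariser-v1"}

impact_assessments = {
    "chatbot-v2": {"impact": "medium"},
    "credit-scoring-v1": {"impact": "high"},
}

risk_register = [
    {"system_id": "chatbot-v2", "risk": "unintended use",
     "owner": "Head of AI", "priority": "medium"},
    {"system_id": "credit-scoring-v1", "risk": "bias in training data",
     "owner": "CRO", "priority": "high"},
]

def reconcile(inventory, assessments, register):
    """Return the inconsistencies an auditor would raise as findings."""
    findings = []
    registered = {entry["system_id"] for entry in register}
    for system in inventory:
        if system not in assessments:
            findings.append(f"{system}: no impact assessment on file")
        if system not in registered:
            findings.append(f"{system}: no risk register entry")
    # High-impact systems should carry high-priority register entries.
    for system, assessment in assessments.items():
        if assessment["impact"] == "high":
            priorities = {e["priority"] for e in register
                          if e["system_id"] == system}
            if "high" not in priorities:
                findings.append(
                    f"{system}: high-impact but no high-priority risk entry")
    return findings

print(reconcile(inventory, impact_assessments, risk_register))
```

In this sketch, a system present in the inventory but missing from the other two documents surfaces immediately, which is exactly the inconsistency pattern described above.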

What are AI system impact assessments and when are they required?

Impact assessments evaluate the potential consequences of an AI system on individuals, groups, the organisation and third parties before deployment.

ISO 42001 addresses impact assessments for AI systems in scope of your management system, with depth proportionate to the system’s risk profile; organisations should consult the standard text and their certifier to confirm the precise applicability to their scope. Assessments should cover intended use, foreseeable misuse, affected populations, fairness considerations, data sources and dependencies on third-party models or services.

Auditors expect impact assessments to be dated, version-controlled and revisited when systems materially change. A frequent shortcoming we see is treating impact assessments as a one-off compliance exercise rather than a living artefact updated through the AI system lifecycle. If you cannot show an impact assessment was reviewed after a model retraining or scope change, expect a finding.
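The staleness check described above can be sketched as a simple date comparison: flag any system whose impact assessment has not been revisited since its last material change. The system names and dates are hypothetical:

```python
from datetime import date

# Hypothetical sketch: flag impact assessments not revisited after a
# material model change. Systems, dates and field names are illustrative.

systems = [
    {"name": "chatbot-v2",
     "last_material_change": date(2025, 5, 1),
     "assessment_reviewed": date(2025, 5, 10)},
    {"name": "credit-scoring-v1",
     "last_material_change": date(2025, 6, 3),   # retrained after review
     "assessment_reviewed": date(2025, 4, 12)},
]

# Any system changed after its assessment was last reviewed is stale.
stale = [s["name"] for s in systems
         if s["assessment_reviewed"] < s["last_material_change"]]
print(stale)
```

Here the second system was retrained after its assessment was last reviewed, which is the pattern that draws a stage 2 finding.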

How do auditors test supplier and third-party controls?

Auditors examine your inventory of AI suppliers, the contractual clauses governing AI-specific risks, your due diligence records and your ongoing monitoring of supplier performance. Expect questions about foundation model providers, AI tooling vendors, data labelling services and any third party whose outputs influence your AI systems.

Auditors expect the contractual position to address training data provenance, model updates, performance commitments, incident notification and the right to audit or review.
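A simple coverage check against those clause areas can be sketched as set arithmetic. The supplier names and clause labels below are hypothetical, not a contractual template:

```python
# Hypothetical sketch: check each AI supplier contract covers the clause
# areas named above. Supplier names and clause labels are illustrative.

required_clauses = {"training_data_provenance", "model_updates",
                    "performance_commitments", "incident_notification",
                    "right_to_audit"}

suppliers = {
    "FoundationCo": {"training_data_provenance", "model_updates",
                     "performance_commitments", "incident_notification",
                     "right_to_audit"},
    "LabelWorks": {"incident_notification"},
}

# Map each supplier to the clause areas its contract does not cover.
gaps = {name: sorted(required_clauses - clauses)
        for name, clauses in suppliers.items()
        if required_clauses - clauses}
print(gaps)
```

A supplier with full clause coverage drops out of the gap map; one with a thin contract surfaces with a list of missing clause areas to raise at the next contract review.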

Due diligence cannot be a one-time exercise at procurement; auditors look for periodic reassessment, particularly when suppliers change their terms or capabilities. Supplier evidence is the category where we most frequently see stage 2 findings, because organisations underestimate how much of their AI risk now sits with external providers.

What management review evidence do certifiers expect?

Management review minutes must show senior leadership has reviewed the AI management system at planned intervals, considered performance data, risk changes, incidents, audit findings and improvement opportunities, and documented relevant decisions.

A single annual review is rarely sufficient: a quarterly cadence is more defensible. Minutes should record attendees, decisions, action owners and review of previous actions. Auditors will trace decisions from management review through to operational change, so a decision recorded in minutes that never reaches the risk register, a policy update or a control change is a finding waiting to happen. This is where the management system either demonstrates leadership commitment or exposes it as theatre.

Can we prepare for ISO 42001 in-house or do we need external help?

It depends on internal capacity and existing maturity.

Organisations with mature ISO 27001 programmes, dedicated compliance resource and clear AI governance ownership can prepare in-house with the standard text, sector guidance and disciplined project management.

Organisations new to management system certification, or those with fragmented AI ownership across data science, IT and legal, usually benefit from external support to compress timelines and avoid misjudging the documentary depth required.

The decision often comes down to whether you can afford to learn what auditors expect during your own audit. Where the certification window is fixed, external readiness review materially reduces the risk of stage 2 findings that delay the certificate.

ISO 42001 Audit-Readiness Review

Identify documentation gaps before your certifier does, working alongside your existing governance team.