ISO 42001 certification: 7 lessons from the field

Compliance · 7 min read

Kees van der Vlies

Partner | IT Auditor


More organizations are approaching us with the question: how do we get ISO 42001 certified? The motivation is clear: clients are asking for it, the EU AI Act is coming, and there is a growing awareness that AI usage without governance is a risk. But between the intention and a successful certification process lie persistent pitfalls. These are the seven lessons we encounter in practice.

1. You have more AI than you think

The first step in any ISO 42001 process is an AI inventory. And that is where the trouble starts. Most organizations think of AI as their own models or a ChatGPT license, but reality is broader: marketing tools with built-in segmentation, sales platforms with lead scoring, HR software with automated CV screening, and features from existing vendors that have quietly added AI functionality. All of this counts as AI in the sense of the standard.

What we see is that the actual AI footprint is already much larger than what organizations have mapped. Without a complete inventory, the rest of the process has no foundation. We recommend interviewing not only IT, but also the business, marketing, finance and HR. That is where the surprises are.
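To make the inventory concrete, it helps to treat it as a structured register rather than a spreadsheet of tool names. The sketch below is a minimal illustration, not a prescribed format from the standard; the field names and example systems are hypothetical, but they reflect the point above: the owning department matters as much as the technology, and the interesting entries are the ones nobody has documented yet.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str          # what the system is called internally
    department: str    # owning business unit, not just IT
    vendor: str        # supplier, relevant for quietly-added AI features
    capability: str    # what the AI actually does
    documented: bool   # already on the official AI register?

# Hypothetical result of interviewing IT, sales and HR separately
inventory = [
    AISystem("Chat assistant", "IT", "LLM provider", "text generation", True),
    AISystem("Lead scoring", "Sales", "CRM vendor", "predictive ranking", False),
    AISystem("CV screening", "HR", "ATS vendor", "automated filtering", False),
]

# The surprises: systems that exist in practice but are not on the register
undocumented = [s.name for s in inventory if not s.documented]
print(undocumented)  # → ['Lead scoring', 'CV screening']
```

In practice the register would live in a GRC tool or shared document; the value is in the discipline of the fields, not the code.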

2. Risk classification requires concrete language

ISO 42001 requires a risk assessment for AI systems. In practice, we see that organizations fall back on classic categories: high, medium, low. The problem is that with AI risks, everything quickly becomes 'medium'. There is no anchor point, no shared language about what an AI risk concretely means.

The organizations that handle this best quantify their risks. Not necessarily in euros, but in concrete scenarios. What happens if this model makes a wrong decision? Who is affected? What is the impact on customers, reputation, compliance? By making risks tangible, clarity emerges in prioritization and the board gets a grip on where attention should go.
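One way to escape the "everything is medium" trap is to score each risk from its concrete scenario: who is affected and how plausible it is. The weights and scale below are purely illustrative assumptions, not part of ISO 42001; the point is that a named scenario with named impact categories forces sharper conversations than a three-level label.

```python
# Illustrative impact weights; an organization would calibrate its own,
# ideally anchored to real consequences (fines, churn, rework).
IMPACT_WEIGHTS = {"customers": 3, "reputation": 2, "compliance": 3}

def risk_score(scenario: dict) -> int:
    """Score = summed impact of affected categories x likelihood (1-5)."""
    impact = sum(IMPACT_WEIGHTS[a] for a in scenario["affects"])
    return impact * scenario["likelihood"]

# Hypothetical scenario: not "the HR tool is medium risk", but a
# concrete failure mode with named victims.
cv_screening = {
    "description": "CV screening model systematically rejects qualified candidates",
    "affects": ["customers", "compliance"],
    "likelihood": 3,
}
print(risk_score(cv_screening))  # → 18
```

Whether the output is a number or a ranked list matters less than the input: every risk must name a scenario, the affected parties, and the impact before it gets a score.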

3. Overly strict policies create shadow AI

A common pattern: an organization drafts a strict AI policy to comply with the standard, but practice immediately works around it. Employees use personal accounts for AI tools, upload data to external services, and build workarounds because the official policy is too rigid. This phenomenon is called shadow AI, and for auditors it is a serious concern.

The problem is not with the employees. It is with policy that is not workable. Organizations that successfully implement ISO 42001 involve the business in drafting policy. They create usable guidelines that enable adoption within the established risk tolerance, instead of a paper ban that nobody follows.

4. The standard tells you what, not how

ISO 42001 prescribes that you must conduct impact assessments, that you must have a governance structure, and that you must provide transparency about AI decision-making. But the standard does not tell you which questions to ask, how to scope an assessment, or what exactly an auditor expects.

This is where many organizations that try it themselves get stuck. They spend months interpreting requirements without concrete results. Our experience is that a pragmatic translation of the standard to the specific context of the organization makes the difference.

5. Security advises, business decides

One of the most common governance mistakes we encounter: the security department is made owner of AI risks. That sounds logical, but it is counterproductive. If security owns the risks, every piece of advice becomes a de facto veto: every new AI initiative has to pass through a single bottleneck, and the organization loses speed.

The model we see working for ISO 42001 is: security advises, business decides, and business owns the outcomes. This means line managers take responsibility for AI systems in their domain, supported by security and compliance expertise.

6. Certification is not the finish line

This is perhaps the most important lesson. Many organizations approach ISO 42001 as a project with an end date: the certificate. But the reality is that AI governance is an ongoing process. Models are updated, vendors change their services, regulation shifts, and new AI applications are introduced.

We see that most programs start drifting within six months after certification if no ongoing monitoring is set up. Documentation becomes outdated, new systems are not assessed, and the AIMS (AI Management System) becomes a paper tiger. Certification is the starting point of governance, not the endpoint.
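The drift described above is detectable early if the AIMS includes a simple recurring check: which systems on the register have gone too long without reassessment? The sketch below is a minimal illustration with hypothetical data and an assumed six-month review interval; real programs would drive this from their GRC tooling.

```python
from datetime import date, timedelta

# Assumed review cadence; the standard does not prescribe an interval,
# but the drift we observe typically starts within six months.
REVIEW_INTERVAL = timedelta(days=182)

def overdue(register: dict[str, date], today: date) -> list[str]:
    """Return systems whose last assessment is older than the interval."""
    return [name for name, last_assessed in register.items()
            if today - last_assessed > REVIEW_INTERVAL]

# Hypothetical register: system name -> date of last risk assessment
register = {
    "CV screening": date(2025, 1, 10),
    "Lead scoring": date(2024, 6, 1),
}

print(overdue(register, today=date(2025, 3, 1)))  # → ['Lead scoring']
```

Run as a monthly report to the governance owner, a check like this turns "ongoing monitoring" from an intention into a standing agenda item.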

7. The business case is stronger than you think

ISO 42001 certification is often framed as a compliance obligation. But the real value lies elsewhere. What we see in certified organizations: enterprise deals close faster, security reviews with prospects become shorter, and in procurement processes you immediately have answers about AI governance.

In a market where clients increasingly ask for demonstrable AI governance, certification is a competitive advantage.

Want to get a grip on AI governance? We help organizations set up a pragmatic ISO 42001 process, from inventory to certification and ongoing monitoring. Get in touch for a no-obligation conversation.

