Module 1: AI Governance Foundations

EU AI Act Alignment

The EU Artificial Intelligence Act (AI Act) is the world's first comprehensive legal framework for AI. Understanding how ISO 42001 aligns with the EU AI Act is crucial for organizations operating in or serving the European market.

Overview of the EU AI Act

The EU AI Act was adopted in 2024 and introduces a risk-based regulatory framework for AI systems. It aims to ensure AI systems are safe, transparent, traceable, non-discriminatory, and environmentally friendly.

Key Features

  • Risk-based approach: Different requirements based on risk levels
  • Broad scope: Applies to providers and deployers of AI systems in the EU market
  • Harmonized rules: Creates a unified regulatory framework across the EU
  • Enforcement: Significant penalties for non-compliance (up to €35M or 7% of global turnover)
  • Phased implementation: Obligations apply in stages between 2025 and 2027

EU AI Act Risk Categories

Unacceptable Risk (Prohibited)

AI systems that pose an unacceptable risk are banned:

Social Scoring: Systems that evaluate or classify people based on social behavior or personal characteristics, leading to detrimental or disproportionate treatment.

Cognitive-Behavioral Manipulation: AI that exploits vulnerabilities of specific groups (age, disability) to materially distort behavior in harmful ways.

Real-Time Remote Biometric Identification: In public spaces by law enforcement (with narrow exceptions for serious crimes).

Biometric Categorization: Using sensitive characteristics (race, political opinions, sexual orientation) unless narrow exceptions apply.

Emotion Recognition: In workplace and education settings (with some exceptions).

Subliminal Techniques: AI that operates beyond a person's consciousness to materially distort behavior.

High-Risk AI Systems

These systems are permitted but must meet strict requirements:

Safety Components: AI in products subject to EU safety legislation (toys, aviation, cars, medical devices, elevators).

Critical Infrastructure: AI managing critical infrastructure (water, gas, electricity, transport).

Education and Training:

  • Determining access to educational institutions
  • Assessing students
  • Detecting prohibited behavior during exams

Employment:

  • Recruitment and selection
  • Promotion decisions
  • Task allocation
  • Performance monitoring
  • Contract termination

Essential Services:

  • Creditworthiness assessment
  • Credit scoring
  • Emergency response dispatching
  • Risk assessment for insurance pricing

Law Enforcement:

  • Individual risk assessments
  • Polygraph and lie detection
  • Evidence evaluation
  • Crime prediction systems

Migration and Border Control:

  • Asylum and visa applications
  • Border control authentication
  • Irregular migration risk assessment

Justice and Democracy:

  • Assisting judicial research
  • Influencing elections and voting

Limited Risk (Transparency Obligations)

These systems carry transparency obligations; users must be made aware they are interacting with, or viewing output from, AI:

Chatbots and Virtual Assistants: Users must be informed they're interacting with AI (unless obvious from context).
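
As a minimal illustration of this obligation, the sketch below prepends a disclosure to the first message of a chat session. The wording and session logic are assumptions for this example, not prescribed text.

```python
AI_DISCLOSURE = "Please note: you are chatting with an AI assistant, not a human."

def with_disclosure(reply: str, is_first_turn: bool) -> str:
    """Prepend a clear AI disclosure to the first reply of a session."""
    return f"{AI_DISCLOSURE}\n\n{reply}" if is_first_turn else reply

print(with_disclosure("How can I help you today?", is_first_turn=True))
```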

Deepfakes: AI-generated or manipulated content must be clearly labeled as such.

Emotion Recognition Systems: Must inform people when their emotions are being analyzed.

Biometric Categorization: Must disclose use of biometric categorization systems.

Minimal Risk

No specific obligations under the AI Act, but general laws still apply. Examples:

  • Spam filters
  • Video game AI
  • Product recommendation systems
  • Inventory management systems

High-Risk AI Requirements

Providers of high-risk AI systems must comply with the following requirements:

1. Risk Management System

Continuous Process: Risk management throughout the AI lifecycle.

Risk Identification:

  • Known and foreseeable risks
  • Risks from reasonably foreseeable misuse
  • Risks emerging from interaction with other systems

Risk Evaluation:

  • Assess likelihood and severity (scored in the sketch below)
  • Consider impact on fundamental rights
  • Evaluate cumulative effects
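
To make the evaluation step concrete, here is a minimal sketch that scores a risk register entry by likelihood and severity and escalates fundamental-rights impacts. The scales, thresholds, and field names are illustrative assumptions, not values prescribed by the AI Act or ISO 42001.

```python
from dataclasses import dataclass

# Illustrative 1-5 scales; a real program defines its own criteria.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "critical": 5}

@dataclass
class RiskEntry:
    description: str                      # e.g. "biased outputs for some groups"
    likelihood: str                       # key into LIKELIHOOD
    severity: str                         # key into SEVERITY
    affects_fundamental_rights: bool = False

    def score(self) -> int:
        """Likelihood x severity score, 1-25."""
        return LIKELIHOOD[self.likelihood] * SEVERITY[self.severity]

    def priority(self) -> str:
        """Map the score to a treatment priority; thresholds are assumptions."""
        s = self.score()
        if self.affects_fundamental_rights or s >= 15:  # rights impacts escalate
            return "mitigate before deployment"
        if s >= 8:
            return "mitigate with protective measures"
        return "accept and monitor"

risk = RiskEntry("misuse by untrained operators", "possible", "major")
print(risk.score(), risk.priority())  # 12 -> mitigate with protective measures
```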

Risk Mitigation:

  • Eliminate or reduce risks at source
  • Implement protective measures
  • Provide information to users
  • Offer training when appropriate

2. Data Governance

Training, Validation, and Testing Data:

  • Relevant, representative, and as free of errors as possible
  • Complete datasets for intended purpose
  • Appropriate statistical properties
  • Consider privacy and data protection

Data Quality Measures:

  • Examination for biases
  • Identification of gaps or shortcomings
  • Assessment of completeness and representativeness
  • Documentation of data characteristics

Data Management Practices:

  • Data collection protocols
  • Data preparation and labeling
  • Dataset versioning and lineage
  • Bias detection and mitigation (see the sketch after this list)
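
As one concrete form of bias examination, this sketch compares group shares in a training set against assumed reference shares and flags large gaps. The records, reference values, and tolerance are hypothetical.

```python
from collections import Counter

# Hypothetical training records; real data comes from your dataset pipeline.
records = [{"group": "A"}, {"group": "A"}, {"group": "A"}, {"group": "B"}]

# Assumed reference shares (e.g., census or applicant-pool statistics).
reference_share = {"A": 0.5, "B": 0.5}
TOLERANCE = 0.10  # flag groups more than 10 points off their reference share

counts = Counter(r["group"] for r in records)
total = sum(counts.values())
for group, expected in reference_share.items():
    observed = counts.get(group, 0) / total
    if abs(observed - expected) > TOLERANCE:
        print(f"Gap for group {group}: observed {observed:.0%}, expected {expected:.0%}")
```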

3. Technical Documentation

Comprehensive documentation must include the following (a machine-readable sketch follows these lists):

General Information:

  • Identity and contact details of provider
  • Description of AI system and intended purpose
  • How AI system interacts with hardware/software
  • Versions and update methods

Development Information:

  • Design specifications
  • Architecture and development process
  • Data requirements and collection methods
  • Testing and validation procedures
  • Performance metrics and benchmarks

Risk Management:

  • Risk management system description
  • Identified risks and mitigation measures
  • Residual risks and limitations
  • Testing results demonstrating conformity

Monitoring and Updates:

  • Post-market monitoring plans
  • Version control and change management
  • Cybersecurity measures
  • Quality management system
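
Documentation is easier to keep current when it is captured in a structured, machine-readable form. Below is a minimal sketch of such a record; the field names are assumptions loosely modeled on the categories above, not the Act's official schema.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class TechnicalDocumentation:
    """Illustrative subset of technical documentation fields (assumed names)."""
    provider: str
    system_name: str
    version: str
    intended_purpose: str
    performance_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

doc = TechnicalDocumentation(
    provider="Example Corp",
    system_name="resume-screener",
    version="2.1.0",
    intended_purpose="Rank job applications for recruiter review",
    performance_metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
    known_limitations=["Not validated for non-English resumes"],
)
print(json.dumps(asdict(doc), indent=2))  # export for auditors or assessors
```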

4. Record-Keeping and Logging

Automatic Logging:

  • Events while AI system operates
  • Period appropriate to intended purpose
  • Sufficient level of traceability

Logging Capabilities:

  • Recording of the period of each use (start and end times; see the logging sketch after this list)
  • Reference database against which input data has been checked
  • Input data that led to a decision
  • Identification of persons involved in verification
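
A minimal sketch of the logging points above, written as append-only JSON lines. Field names, the hashing choice, and the file format are assumptions; real systems also need tamper protection and retention controls (covered next).

```python
import hashlib
import json
import time
from typing import Optional, TextIO

def log_decision(log_file: TextIO, session_id: str, input_data: dict,
                 output: dict, reviewer: Optional[str] = None) -> None:
    """Append one JSON line per AI decision for later audit."""
    record = {
        "timestamp": time.time(),     # when this use occurred
        "session_id": session_id,     # ties together the period of each use
        # Hash of the input that led to the decision (keeps logs small and
        # avoids copying personal data into the log itself).
        "input_sha256": hashlib.sha256(
            json.dumps(input_data, sort_keys=True).encode()).hexdigest(),
        "output": output,             # the decision produced
        "human_verifier": reviewer,   # person involved in verification, if any
    }
    log_file.write(json.dumps(record) + "\n")

with open("ai_audit.log", "a") as f:
    log_decision(f, "sess-001", {"applicant_id": 42}, {"decision": "shortlist"}, "j.doe")
```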

Data Protection:

  • Logs protected from tampering
  • Access controls and audit trails
  • Retention periods defined
  • GDPR compliance for personal data

5. Transparency and Information

User Information:

  • Capabilities and limitations
  • Intended purpose and conditions of use
  • Performance levels and accuracy
  • Known and foreseeable risks

Instructions for Use:

  • Installation and operation procedures
  • Human oversight requirements
  • Expected lifetime and maintenance
  • Incident reporting procedures

Deployment Information:

  • Computing resources needed
  • Technical specifications
  • Integration requirements
  • Compatibility information

6. Human Oversight

Oversight Measures:

  • Understanding of AI system capabilities and limitations
  • Awareness of automation bias tendency
  • Ability to interpret AI outputs correctly
  • Ability to override or disregard AI decisions

Design Features (illustrated in the sketch after this list):

  • "Stop" functionality to interrupt operation
  • Controls that allow disabling the AI system
  • Alerts and warnings about risks
  • Clear interfaces for human-AI interaction
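
The sketch below illustrates these design features as a single human gate: every AI output passes through a reviewer who can approve, override, or stop the system. The verdict vocabulary and control flow are assumptions for illustration.

```python
from typing import Callable

class StopSignal(Exception):
    """Raised when the operator uses the 'stop' control."""

def decide_with_oversight(ai_output: str, review: Callable[[str], str]) -> str:
    """Pass an AI output through a human gate before it takes effect.

    `review` returns one of: "approve", "override", "stop". The design keeps
    the human able to disregard the AI or halt the system entirely.
    """
    verdict = review(ai_output)
    if verdict == "stop":
        raise StopSignal("operation interrupted by operator")
    if verdict == "override":
        return "escalated: human decides without the AI recommendation"
    return ai_output

# Example: a reviewer who approves this output.
print(decide_with_oversight("shortlist candidate 42", lambda out: "approve"))
```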

Organizational Measures:

  • Trained personnel for oversight
  • Clear roles and responsibilities
  • Escalation procedures
  • Decision review processes

7. Accuracy, Robustness, and Cybersecurity

Accuracy:

  • Appropriate level for intended purpose
  • Consistent performance across use cases
  • Clear metrics and measurement methods
  • Regular accuracy monitoring (see the sketch below)
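
One simple way to implement regular accuracy monitoring is a rolling window over labeled outcomes with an alert threshold, as sketched below. The window size and threshold are assumed values a real program would derive from its intended-purpose analysis.

```python
from collections import deque

class AccuracyMonitor:
    """Rolling accuracy over the last N labeled outcomes; threshold assumed."""

    def __init__(self, window: int = 500, min_accuracy: float = 0.85):
        self.outcomes = deque(maxlen=window)
        self.min_accuracy = min_accuracy

    def record(self, prediction, ground_truth) -> None:
        self.outcomes.append(prediction == ground_truth)

    def check(self) -> bool:
        """Return True if rolling accuracy is acceptable; alert otherwise."""
        if not self.outcomes:
            return True
        accuracy = sum(self.outcomes) / len(self.outcomes)
        if accuracy < self.min_accuracy:
            print(f"ALERT: rolling accuracy {accuracy:.1%} below target")
            return False
        return True

monitor = AccuracyMonitor(window=4, min_accuracy=0.85)
for pred, truth in [("a", "a"), ("b", "a"), ("a", "a"), ("b", "b")]:
    monitor.record(pred, truth)
monitor.check()  # 75% -> prints an alert
```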

Robustness:

  • Resilience to errors, faults, or inconsistencies
  • Performance in case of anomalies
  • Handling of exceptional situations
  • Graceful degradation

Cybersecurity:

  • Protection against unauthorized access
  • Resistance to adversarial attacks
  • Security throughout AI lifecycle
  • Incident response capabilities

ISO 42001 and EU AI Act Alignment

How ISO 42001 Supports Compliance

EU AI Act Requirement → ISO 42001 Support:

  • Risk Management System → Clause 6: Planning (risk assessment and treatment)
  • Data Governance → Annex A: Data management controls
  • Technical Documentation → Clause 7.5: Documented information
  • Record-Keeping → Annex A: Monitoring and logging controls
  • Transparency → Annex A: Transparency and communication controls
  • Human Oversight → Annex A: Human oversight controls
  • Accuracy and Robustness → Clause 9: Performance evaluation
  • Cybersecurity → Integration with ISO 27001 controls

Benefits of ISO 42001 for AI Act Compliance

Structured Framework: ISO 42001 provides a systematic approach to meeting AI Act requirements.

Continuous Compliance: The management system approach supports ongoing compliance, not just a one-time assessment.

Evidence Generation: AIMS documentation serves as evidence for regulatory compliance.

Certification: Third-party certification demonstrates commitment to compliance.

Integration: Aligns with other management systems (ISO 27001, ISO 9001).

Best Practices: Goes beyond minimum compliance to implement AI best practices.

Conformity Assessment

High-risk AI systems must undergo conformity assessment:

Internal Control (Annex VI)

For most high-risk AI systems:

  • Internal checks by the provider
  • Technical documentation preparation
  • Quality management system
  • Declaration of conformity
  • CE marking

Third-Party Assessment (Annex VII)

Required for:

  • Biometric identification systems
  • Critical infrastructure AI
  • Some law enforcement applications

Involves:

  • Notified body examination
  • Testing and validation
  • Quality management system audit
  • Certificate issuance

ISO 42001 Role: Demonstrates a robust quality management system, supporting both internal and third-party assessment.

Provider and Deployer Obligations

Providers (Those Who Develop or Supply AI)

Must ensure:

  • Conformity with requirements
  • Technical documentation
  • Quality management system
  • Automatic logging
  • CE marking and declaration
  • Post-market monitoring
  • Incident reporting

Deployers (Those Who Use AI)

Must:

  • Use AI as instructed
  • Assign human oversight
  • Monitor AI operation
  • Report serious incidents
  • Conduct fundamental rights impact assessments (for some systems)
  • Inform people when they interact with AI

ISO 42001 Coverage: Addresses both provider and deployer perspectives through lifecycle approach.

General-Purpose AI Models (GPAI)

Special requirements for foundation models:

Standard GPAI Models

Must:

  • Provide technical documentation
  • Share information with downstream providers
  • Implement copyright policy
  • Publish training data summary

GPAI with Systemic Risk

Additional requirements:

  • Model evaluation
  • Adversarial testing
  • Tracking and reporting serious incidents
  • Ensuring cybersecurity protection

Relevance: Organizations using or building on GPAI models (such as those underlying ChatGPT or Claude) need governance frameworks.

Implementation Timeline

Understanding the AI Act timeline:

August 2024: AI Act enters into force

February 2025: Prohibitions on unacceptable risk AI systems apply

August 2025: Obligations for general-purpose AI models apply

August 2026: Most high-risk AI requirements apply

August 2027: Remaining requirements apply, including for high-risk AI embedded in regulated products and for pre-existing general-purpose AI models

Recommendation: Start ISO 42001 implementation now to prepare for regulatory requirements.

Penalties for Non-Compliance

The EU AI Act includes significant penalties:

Violation Type → Maximum Fine:

  • Prohibited AI systems → €35M or 7% of global annual turnover
  • Non-compliance with obligations → €15M or 3% of global annual turnover
  • Incorrect information to authorities → €7.5M or 1.5% of global annual turnover

Note: The higher amount applies in each case.

Risk Mitigation: ISO 42001 certification demonstrates due diligence and reduces non-compliance risk.

Other Regulatory Frameworks

US AI Regulation

Executive Order on AI (October 2023):

  • Safety and security standards
  • Privacy protection
  • Equity and civil rights
  • Consumer protection
  • Innovation support

Blueprint for an AI Bill of Rights:

  • Safe and effective systems
  • Algorithmic discrimination protections
  • Data privacy
  • Notice and explanation
  • Human alternatives and fallback

Sector-Specific Rules:

  • FDA for medical AI
  • NHTSA for autonomous vehicles
  • FTC for consumer protection
  • Equal opportunity laws for employment

China AI Regulation

Key Regulations:

  • Algorithm Recommendation Regulation (2022)
  • Deep Synthesis Regulation (2023)
  • Generative AI Measures (2023)

Focus Areas:

  • Content security and censorship
  • Data protection and localization
  • Algorithm transparency
  • User rights protection

UK Approach

Principles-Based Framework:

  • Safety, security, and robustness
  • Transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

Sector Regulators: Existing regulators apply AI principles in their domains.

International Harmonization

OECD AI Principles: Foundation for many national frameworks

ISO/IEC Standards:

  • ISO 42001 (AI Management Systems)
  • ISO 23894 (AI Risk Management)
  • ISO 5338 (AI Lifecycle)
  • ISO 22989 (AI Concepts and Terminology)

Global Convergence: Regulatory frameworks increasingly aligned around similar principles.

Practical Steps for Compliance

1. Classify Your AI Systems

Determine which risk category each AI system falls into (a triage helper is sketched after this list):

  • Review AI Act classification rules
  • Consider context and purpose
  • Document classification rationale
  • Update as systems or regulations evolve
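
The sketch below shows the shape such a triage could take in code. It is a coarse approximation for illustration only: the category sets are incomplete, and real classification requires reading the Act's annexes and documenting the rationale.

```python
# Coarse triage only; not legal advice. Category sets are incomplete
# illustrations of the risk tiers described earlier in this lesson.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"employment", "education", "credit", "law_enforcement",
                     "critical_infrastructure", "migration", "justice"}
TRANSPARENCY_USES = {"chatbot", "deepfake", "emotion_recognition"}

def classify(use_case: str, domain: str) -> str:
    """Map a use case and domain to an indicative AI Act risk tier."""
    if use_case in PROHIBITED_USES:
        return "unacceptable risk: prohibited"
    if domain in HIGH_RISK_DOMAINS:
        return "high risk: strict requirements apply"
    if use_case in TRANSPARENCY_USES:
        return "limited risk: transparency obligations"
    return "minimal risk: no specific AI Act obligations"

print(classify("resume_screening", "employment"))  # -> high risk
```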

2. Implement ISO 42001

Build an AI Management System:

  • Establish governance structure
  • Conduct risk assessments
  • Implement appropriate controls
  • Document processes and decisions
  • Monitor and improve continuously

3. Create Required Documentation

Develop comprehensive documentation:

  • Technical documentation for each high-risk AI
  • Risk management records
  • Data governance documentation
  • Testing and validation results
  • Transparency information for users

4. Establish Monitoring Systems

Implement continuous monitoring (an incident-triage sketch follows this list):

  • Performance monitoring
  • Bias detection and reporting
  • Incident tracking and response
  • User feedback collection
  • Post-market surveillance
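
As part of incident tracking, a structured record plus a simple triage rule keeps serious incidents on the regulator-reporting path, as sketched below. Field names and the seriousness criterion are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Incident:
    """Minimal post-market incident record; fields are illustrative."""
    system: str
    description: str
    serious: bool          # e.g., harm to health, safety, or fundamental rights
    occurred_at: datetime

def triage(incident: Incident) -> str:
    # Serious incidents trigger the regulator-reporting workflow; others
    # feed the continuous-improvement loop.
    if incident.serious:
        return "escalate: prepare report to market surveillance authority"
    return "log and review in next monitoring cycle"

inc = Incident("resume-screener", "systematic rejection of one cohort", True, datetime.now())
print(triage(inc))
```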

5. Build Compliance Capabilities

Develop organizational capabilities:

  • Train personnel on AI Act requirements
  • Establish compliance workflows
  • Create incident response procedures
  • Develop stakeholder communication
  • Plan for conformity assessment

Case Study: High-Risk HR AI System

Scenario: A company develops an AI system for screening job applicants.

Classification: High-risk AI (employment category)

EU AI Act Requirements:

  1. Risk management system addressing bias and discrimination
  2. Representative training data covering diverse candidates
  3. Technical documentation of model and decision logic
  4. Logging of decisions for auditing
  5. Transparency to candidates about AI use
  6. Human oversight of final hiring decisions
  7. Regular testing for accuracy and fairness
  8. Cybersecurity measures

ISO 42001 Implementation:

  • Establish AI governance with HR and ethics review
  • Conduct impact assessment on job candidates
  • Implement data quality controls for training data
  • Deploy bias detection and mitigation measures
  • Create model cards explaining AI capabilities
  • Design human-in-the-loop decision process
  • Monitor for disparate impact across groups (see the sketch after this list)
  • Document all processes and decisions
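
One common screen for disparate impact is the "four-fifths rule" from US employment practice: a group's selection rate should be at least 80% of the highest group's rate. The sketch below applies it to made-up screening outcomes.

```python
# Hypothetical screening outcomes per group: (selected, total).
outcomes = {"group_a": (40, 100), "group_b": (22, 100)}

rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best  # impact ratio relative to the best-treated group
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```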

Result: The system meets EU AI Act requirements and demonstrates best practice through ISO 42001 certification.

Summary and Key Takeaways

Regulatory Reality: AI regulation is here and growing. Organizations must adapt.

Risk-Based Approach: Focus compliance efforts on high-risk AI systems.

ISO 42001 as Foundation: Provides comprehensive framework supporting regulatory compliance.

Proactive Implementation: Start now to be ready for enforcement deadlines.

Continuous Process: Compliance is ongoing, not a one-time project.

Competitive Advantage: Responsible AI practices build trust and differentiation.

Global Perspective: Consider multiple regulatory frameworks for international operations.

Next Lesson: Explore AI ethics foundations and responsible AI principles beyond compliance.
