Strengthening Security and Integrity in AI Across Europe
The European Union is taking significant steps to ensure the safety and compliance of artificial intelligence (AI) products, following the principles of the European Quality Infrastructure (EQI). This approach, applied primarily through the EU AI Act, aims to establish a robust, risk-based ecosystem of quality assurance, conformity assessment, and market surveillance for general-purpose AI (GPAI) models, often referred to as foundation models.
Key elements of this approach include:
- Risk-based regulatory framework: The EU AI Act obliges providers of high-risk AI systems, and of GPAI models that pose systemic risk, to assess, mitigate, and continuously monitor those risks. This sets a high bar for safety, ethics, and compliance, aligning with EQI's goal of ensuring trust and reliability in technological products.
- Quality systems covering the entire AI lifecycle: Providers of high-risk AI are required to implement quality management systems that cover design, data governance, development, deployment, use, and ongoing surveillance phases. This comprehensive approach mirrors EQI’s quality assurance practices.
- Codes of Practice and Guidance: The EU is facilitating a GPAI Code of Practice, offering practical guidance in areas such as transparency, copyright, safety, and security. This ensures standardized good practice, aligned with EQI norms.
- Transparency through standardized documentation: Providers must disclose public summaries detailing the training data sources and processing methods of their models. This transparency facilitates oversight and accountability, principles at the heart of EQI infrastructure.
- Conformity assessment and market surveillance: National competent authorities conduct impartial conformity assessments and enforce compliance. An EU-level AI Office and an AI Board with Member State representatives oversee harmonized enforcement. These mechanisms reflect EQI’s system for conformity assessment, certification, and surveillance to uphold product safety and market fairness.
- Technical infrastructure for continuous compliance: Compliance with the AI Act demands real-time visibility of data provenance, model behaviour, and deployment context. This ongoing monitoring facilitates adaptability and early identification of compliance deviations, embodying EQI’s emphasis on ongoing quality control and risk management.
- Fines and enforcement timelines: Full enforcement with fines for non-compliance related to GPAI is set to start by August 2026, underscoring the regulatory weight behind the EQI framework’s application to general-purpose AI.
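The continuous-compliance point above implies a technical layer that records data provenance and deployment context and flags deviations from the configuration that was originally assessed. As a minimal sketch of what such a check might look like (the record fields, function names, and approved-configuration format are illustrative assumptions, not part of any EU standard):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Hypothetical audit-log entry tying a model output to its context."""
    model_id: str
    model_version: str
    dataset_hash: str          # fingerprint of the training-data snapshot
    deployment_context: str    # e.g. "credit-scoring", "chat-assistant"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def check_compliance(record: ProvenanceRecord, approved: dict) -> list[str]:
    """Flag deviations from the approved configuration.

    `approved` maps model_id -> {"version": ..., "dataset_hash": ...,
    "contexts": {...}}; any mismatch is a potential compliance
    deviation that should trigger human review.
    """
    baseline = approved.get(record.model_id)
    if baseline is None:
        return [f"unknown model: {record.model_id}"]
    issues = []
    if record.model_version != baseline["version"]:
        issues.append("model version differs from assessed version")
    if record.dataset_hash != baseline["dataset_hash"]:
        issues.append("training-data fingerprint changed")
    if record.deployment_context not in baseline["contexts"]:
        issues.append(f"unapproved context: {record.deployment_context}")
    return issues
```

In practice such checks would run continuously against deployment logs, so that a silent model update or an out-of-scope use surfaces as a reviewable event rather than being discovered after harm occurs.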
In addition to these measures, certification of quality management systems examines production processes and management structures to ensure safety, while accreditation within the EQI ecosystem guarantees the independence and competence of conformity assessment bodies. Periodic inspections of AI models already on the market are becoming essential, since models can develop new capabilities or deficiencies after deployment.
Independent audits or evaluations for AI models could assess data quality, model robustness, accuracy, and bias. Adversarial testing by independent experts can help uncover potentially dangerous features in AI models and identify how they could be misused.
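To make the audit criteria above concrete, here is a minimal sketch of two of the quantities an independent evaluator might compute from a model's predictions: overall accuracy, and a simple demographic-parity gap as one signal of potential bias. The metric choices and function names are illustrative; a real audit would combine several complementary measures.

```python
def accuracy(y_true: list[int], y_pred: list[int]) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred: list[int], groups: list[str]) -> float:
    """Largest difference in positive-prediction rate between groups.

    A large gap suggests the model treats groups differently and
    warrants closer inspection; it is one coarse signal, not proof
    of unlawful bias on its own.
    """
    counts: dict[str, tuple[int, int]] = {}  # group -> (n, n_positive)
    for pred, g in zip(y_pred, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + (pred == 1))
    rates = [pos / n for n, pos in counts.values()]
    return max(rates) - min(rates)
```

Adversarial testing goes a step further: rather than measuring average-case metrics like these, independent experts deliberately probe the model with crafted inputs to surface dangerous capabilities or misuse paths that ordinary evaluation would miss.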
Currently, AI lacks safety guarantees expected in other critical sectors due to the absence of mandatory independent, third-party testing for advanced AI products. The European Quality Infrastructure aims to address this gap, ensuring products and services are safe, reliable, and conform to functional and quality requirements.