Key messages from the JRC report on European AI standardisation

Based on the information provided in the JRC report, here are key points regarding the development and characteristics of harmonised standards for the EU AI Act:

Timeline and Progress

  • The EU AI Act entered into force on August 1st, 2024.
  • High-risk AI systems must comply with the Act’s requirements after a transition period of 2 or 3 years, i.e. from August 2026 or August 2027, depending on the type of system[1].
  • The European Commission has engaged with European Standardisation Organisations since April 2021.
  • The standardisation process has been slower than anticipated, with challenges in reaching consensus on new work items and their scope[1].

Characteristics of AI Standards

Standards supporting the AI Act are expected to have the following key attributes:

  1. Tailored to the AI Act’s objectives: Focus on risks to health, safety, and fundamental rights of individuals[1].
  2. System and product-centric: Cover all phases of the AI system lifecycle[1].
  3. Prescriptive and clear: Define explicit requirements for AI systems to meet[1].
  4. Broadly applicable: Provide horizontal requirements applicable across sectors and AI system types[1].
  5. State-of-the-art aligned: Address modern AI techniques and architectures[1].
  6. Cohesive and complementary: Ensure logical structure and capture interdependencies between requirements[1].

Key Areas of Standardisation

The European Commission has requested standardisation deliverables covering 10 concrete aspects of AI, including:

  1. Risk Management: Specify a risk management system for AI products and services[1].
  2. Data Governance and Quality: Cover both data management and dataset quality aspects[1].
  3. Record Keeping: Define requirements for tracing and recording events in AI systems[1] (see the illustrative sketch after this list).
  4. Transparency: Outline transparency information required for compliance[1].
  5. Human Oversight: Define requirements for selecting and implementing oversight measures[1].
  6. Accuracy: Support the selection of relevant accuracy metrics and thresholds[1].
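
As a purely illustrative sketch of what the record-keeping aspect (item 3 above) could mean in practice, the Python snippet below emits structured, machine-readable log entries for individual inference events. Neither the JRC report nor the standardisation request prescribes a log format or a programming language; every name in the example (InferenceRecord, model_version, input_digest, output_summary) is an assumption made for this illustration.

```python
import json
import logging
import time
import uuid
from dataclasses import dataclass, asdict

# Illustrative only: the AI Act and the JRC report do not prescribe a
# specific log schema; the field names below are assumptions.
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_system_records")


@dataclass
class InferenceRecord:
    """One traceable event in the AI system's operation."""
    event_id: str        # unique identifier for this event
    timestamp: float     # Unix time of the inference call
    model_version: str   # identifies the deployed model
    input_digest: str    # reference to the input, not the raw data
    output_summary: str  # compact description of the system's output


def log_inference(model_version: str, input_digest: str,
                  output_summary: str) -> InferenceRecord:
    """Record a single inference event as a structured JSON log line."""
    record = InferenceRecord(
        event_id=str(uuid.uuid4()),
        timestamp=time.time(),
        model_version=model_version,
        input_digest=input_digest,
        output_summary=output_summary,
    )
    logger.info(json.dumps(asdict(record)))
    return record


if __name__ == "__main__":
    # Hypothetical example: log one event for a credit-scoring model.
    log_inference("credit-model-1.3.0", "sha256:ab12cd34",
                  "score=0.42, decision=refer_to_human")
```

A production system would additionally need to address log retention, integrity, and access control, which fall outside this sketch and would be shaped by the eventual harmonised standards.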

Challenges and Considerations

  • Existing international standardisation efforts often focus on organisational objectives rather than individual risks, requiring a shift in approach for AI Act standards[1].
  • Standards must balance prescriptiveness with flexibility to accommodate various AI systems and sectors[1].
  • The rapid advancement of AI technology necessitates standards that can address state-of-the-art techniques[1].

These harmonised standards, once published in the Official Journal of the EU, will grant a legal presumption of conformity to AI systems developed in accordance with them[1].

Citations:
[1] https://publications.jrc.ec.europa.eu/repository/bitstream/JRC139430/JRC139430_01.pdf
