AI Act & Training Organisations: Obligations, High-Risk Cases and 2026 Compliance Plan
April 13, 2026 · ADC Team · 11 min read · AI Compliance


Training organisations and consulting firms are on the front line of the AI Act (Regulation 2024/1689). Not only must they comply like any other business, but their sector is directly targeted by high-risk use cases and formally prohibited practices. Here is what your organisation must put in place — and what it must absolutely avoid.

Key takeaways
  • Article 4 AI literacy obligation in force since 2 February 2025 — non-compliance risk: up to €7.5M or 1% of global annual turnover
  • Three high-risk educational AI categories (Annex III, point 3): learner admissions and selection, AI exam proctoring, and automated assessment systems
  • Prohibited practices since 2 February 2025 include subliminal manipulation of learners and biometric scoring of emotional states — penalty up to €35M or 7% of global turnover
  • Consulting firms that recommend non-compliant AI tools without flagging risks expose themselves to breach-of-duty-of-care litigation — AI Act compliance is now an expected professional competency

Your Dual Role Under the AI Act

A training organisation or consulting firm may be affected by the AI Act in two distinct capacities, with different obligations:

  • Provider: you develop or commission an AI tool under your brand — an adaptive learning platform, a course recommendation engine, a pedagogical assistant. You bear the heaviest obligations: technical documentation, registration, CE marking for high-risk systems.
  • Deployer: you use a third-party AI system in your activities — an online proctoring tool, an LMS with automated scoring, a learner support chatbot. You must verify the provider's compliance, inform your learners and maintain human oversight.

In most cases, training organisations are deployers — they purchase or subscribe to third-party solutions. But this position does not exempt you from responsibility: the deployer remains responsible for compliant use within its own context.

The Priority Obligation: Article 4 — AI Literacy (since February 2025)

The first obligation to meet — and the most immediate — is Article 4 of the AI Act, in force since 2 February 2025. It requires any organisation deploying AI systems to ensure that its staff have a sufficient level of AI literacy, taking into account:

  • Their technical level and roles
  • The AI systems they use or supervise
  • The people or processes these systems impact

What this means in practice for a training organisation or consulting firm

  • Your trainers who use AI tools (content generation, automated marking) must understand the limitations, biases and risks of these tools
  • Your sales teams who sell training with AI tools must be able to explain the safeguards in place
  • Your management and learning directors must be able to make informed decisions about adopting AI tools
  • Documentation of training received must be retained (in case of inspection)
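The "document and retain" point above can be kept as simply as a per-employee register. A minimal sketch, assuming an in-memory record (the `TrainingRecord` structure and its field names are illustrative, not prescribed by the Act):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    """One completed AI-literacy session for one staff member (illustrative schema)."""
    employee: str
    role: str            # e.g. "trainer", "sales", "management"
    course: str
    completed_on: date

def untrained_staff(all_staff: set[str], records: list[TrainingRecord]) -> set[str]:
    """Staff with no documented AI-literacy training, to prioritise for Article 4."""
    trained = {r.employee for r in records}
    return all_staff - trained

# Example: two of three staff members have a documented session
records = [
    TrainingRecord("Alice", "trainer", "AI literacy basics", date(2025, 3, 1)),
    TrainingRecord("Bob", "sales", "AI literacy basics", date(2025, 4, 12)),
]
print(untrained_staff({"Alice", "Bob", "Chloe"}, records))  # {'Chloe'}
```

In practice this register would live in your HRIS or LMS rather than in code; the point is that each entry ties a named person, a role and a dated course together, which is what an inspector would ask to see.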

Non-compliance risk: up to €7.5M or 1% of global annual turnover.

High-Risk Use Cases in Training (Annex III, point 3)

The AI Act explicitly classifies three categories of educational AI tools as high-risk systems — subject to the strictest documentation, human oversight and transparency requirements:

1. Learner Admissions and Selection

Any AI system that influences access to training is high-risk: application pre-screening algorithm, eligibility scoring, automated course pathway recommendation based on personal data. These systems must be documented, auditable and supervised by a human who can override the decision.

2. Exam Monitoring (AI Proctoring)

Automated remote exam monitoring tools — which detect suspicious behaviour through video analysis, movement tracking or audio analysis — are explicitly classified as high-risk. If you use an AI proctoring tool (ProctorU, Proctorio, ExamSoft, etc.), you must:

  • Obtain the provider's AI Act compliance documentation
  • Explicitly inform learners that an AI system is monitoring them
  • Allow a human to challenge and override automatic decisions
  • Keep a record of sessions for at least 6 months
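The retention duty in the last bullet is easy to get wrong when purge jobs run automatically. A minimal sketch of a retention check, assuming a hypothetical session store that maps session IDs to recording dates (the schema and the 183-day approximation of six months are assumptions, not values from the Act):

```python
from datetime import datetime, timedelta

MIN_RETENTION = timedelta(days=183)  # at least 6 months, per the requirement above

def sessions_safe_to_delete(sessions: dict[str, datetime], now: datetime) -> list[str]:
    """Return session IDs whose records are past the minimum retention window.

    Anything younger than MIN_RETENTION must be kept, whatever the purge policy says.
    """
    return [sid for sid, recorded in sessions.items()
            if now - recorded >= MIN_RETENTION]

# Example: only the session older than 6 months may be purged
sessions = {
    "exam-001": datetime(2025, 1, 10),
    "exam-002": datetime(2025, 9, 1),
}
print(sessions_safe_to_delete(sessions, datetime(2025, 10, 1)))  # ['exam-001']
```

Note that GDPR pulls in the opposite direction (data minimisation), so the same check should also enforce a maximum retention period agreed with your DPO.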

3. Automated Learning Assessment

Automated assessment systems — automatic assignment marking, AI-based skills scoring, certification scoring — are high-risk as soon as they influence significant decisions for the learner (module validation, certificate award, access to a higher level). Human oversight is not optional: an adverse AI decision must be reviewable by a human instructor.

Formally Prohibited Practices in Training (Article 5)

Since 2 February 2025, certain AI practices are completely prohibited. In the training sector, two bans are particularly important:

  • Emotion recognition in classrooms (physical or virtual): Absolute prohibition on using AI to analyse learners' emotions during a training session — whether via cameras in physical rooms, webcams during virtual classes, or audio analysis. This practice was promoted by some EdTech actors to measure "engagement" or "understanding" in real time. It is now illegal in Europe.
  • Biometric categorisation to infer protected characteristics: Analysing learners' biometric data (video, fingerprints during proctoring) to infer ethnic origin, political opinions or other protected characteristics is strictly prohibited. Some historical proctoring tools analysed apparent origin for "risk profiling" purposes — a practice now banned.

Penalty: violating a prohibited practice carries a fine of up to €35M or 7% of global turnover, whichever is higher.

The French Framework: CNIL, Arcom and the SREN Act

In France, the AI Act operates alongside a reinforced national framework. Three authorities are relevant for a training organisation:

  • CNIL: learner personal data (GDPR + AI Act) and traceability of algorithmic decisions. Maximum penalty: €20M or 4% of turnover (GDPR), plus €100,000 per day under injunction.
  • Arcom: AI-generated content distributed in training (fakes, deepfakes, disinformation). Maximum penalty: 6% of turnover (SREN Act).
  • National AI Act authority (being designated): overall AI Act compliance, covering high-risk systems, prohibited practices and AI literacy. Maximum penalty: €35M or 7% of turnover.

The SREN Act (Securing and Regulating the Digital Space, adopted in 2024) adds an obligation to label AI-generated content: if you produce training materials with generative AI tools (texts, images, videos), you must clearly identify them as such.

The Consulting Case: When Your Advice Engages Your Liability

For consulting firms that help clients deploy AI tools, the AI Act creates an indirect but real liability:

  • Recommending a non-compliant AI tool to a client without flagging it exposes the firm to litigation for breach of duty of care
  • Writing specifications for a high-risk AI system without documenting compliance requirements creates contractual liability
  • Digital transformation and data audit assignments involving AI tools must now systematically include an AI Act compliance component

AI Act compliance is thus becoming an expected professional competency for any digital transformation or data consultant.

5-Step Compliance Plan for a Training Organisation or Consulting Firm

  • Step 1 — AI tool mapping (Day 0 to 30): List all AI tools used or planned — AI-powered LMS, proctoring, chatbots, content generation tools, CRM with scoring. For each, identify your role (provider or deployer) and the risk category.
  • Step 2 — Article 4 staff training (Day 15 to 60): Deploy an AI literacy programme tailored to profiles (trainers, sales staff, management). Document the training delivered and retain certificates.
  • Step 3 — Supplier audit (Day 30 to 90): For each high-risk third-party tool, request AI Act compliance documentation. Verify that contracts include the required liability and transparency clauses.
  • Step 4 — Implementing human oversight (Day 60 to 120): For proctoring and automated assessment tools, define procedures for human review of AI decisions. Train teams on these procedures.
  • Step 5 — Documentation and ongoing governance: Maintain a register of algorithmic decisions, retain logs for a minimum of 6 months, and appoint an internal AI compliance officer.
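Steps 1 and 3 above connect naturally: the inventory built on Day 0 tells you which suppliers to audit first. A minimal sketch of that mapping, assuming a simple record per tool (the `AITool` structure, field names and risk labels are illustrative):

```python
from dataclasses import dataclass

@dataclass
class AITool:
    """One entry in the Step 1 inventory (illustrative schema)."""
    name: str
    role: str          # your role for this tool: "provider" or "deployer"
    risk: str          # "high", "limited", or "minimal"

def tools_needing_supplier_audit(inventory: list[AITool]) -> list[str]:
    """High-risk third-party tools: the Step 3 audit targets, in inventory order."""
    return [t.name for t in inventory if t.role == "deployer" and t.risk == "high"]

inventory = [
    AITool("Proctoring suite", "deployer", "high"),
    AITool("Assignment auto-marker", "deployer", "high"),
    AITool("Course chatbot", "deployer", "limited"),
    AITool("In-house exercise generator", "provider", "minimal"),
]
print(tools_needing_supplier_audit(inventory))
# ['Proctoring suite', 'Assignment auto-marker']
```

A spreadsheet serves the same purpose; what matters is that every tool carries both your role and its risk category, since those two fields determine which obligations apply.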

AutomationDataCamp: Compliant and Training Others

AutomationDataCamp applies these principles in its own organisation and in its courses:

  • Our AI tools (pedagogical assistants, exercise generators) are clearly identified as such in all our materials
  • No emotion recognition tools are used in our virtual classes
  • Our certifications include systematic human review of automated assessments
  • Our team has completed an AI literacy programme compliant with Article 4

Train your teams on AI Act compliance

Our AI Act for Training Organisations & Consulting Firms course (20h, certified, CPF-eligible — Personal Training Account, French government funding) covers: risk mapping, Article 4 in practice, supplier audit, and AI governance setup. Eligible for OPCO funding (French skills operators).


Related articles

AI Act: The Complete Guide for Businesses in 2026

4 risk levels, compliance timeline, governance obligations.


AI Risks & Sanctions: What Companies Really Face

€35M or 7% of turnover: a complete breakdown of AI Act fines.
