Faster sorting, the promise of objectivity, thoroughly screened applications, and plenty of time saved… AI loves to present itself as recruitment's faithful mirror. Yet we still need to check what it is actually reflecting.
On August 2, 2026, the European Artificial Intelligence Act reaches a major milestone: the rules applicable to high-risk AI systems listed in Annex III come into force, notably in employment and recruitment. They directly target tools used to publish targeted job ads, filter applications, or evaluate candidates.
The reality is that AI does not recruit in a vacuum. It often reflects less of an “ideal” candidate and more of the choices already embedded within an organization. Sometimes, it even amplifies them, bringing along all the risks of hiring discrimination that entails.
The regulation therefore mandates keeping one's eyes wide open, because non-compliance can come at a high price: a fine of up to €15 million or 3% of total worldwide annual turnover, whichever is higher, for failures related to high-risk AI systems.
Some companies have already begun to take a hard look at themselves. Albéa Tubes, a manufacturer of cosmetic and pharmaceutical packaging, has signed an agreement (with the support of an ærige alumnus) providing for, among other things, an AI systems registry, human oversight, and continuous, transparent social dialogue around these tools. This approach aligns with the French "DIAL-IA" initiative.
To anticipate these changes effectively, start by inventorying the AI tools currently in use and auditing your sorting criteria. Then train HR teams to limit potential biases, involve the Social and Economic Committee (CSE), and maintain genuine, ongoing human supervision.
Because in recruitment, the goal isn’t just to go faster. It’s about knowing what you are truly projecting.
When AI holds up the mirror,
ærige helps you see clearly.