AI IN THE WORKPLACE: DISCRIMINATION, RISKS AND THE FUTURE
Artificial Intelligence (AI) is no longer a futuristic concept—it is embedded in everyday workplace processes. From automated CV screening and facial recognition access controls to AI-driven performance reviews, these tools promise efficiency and objectivity. However, as adoption accelerates, so does scrutiny. Increasing evidence shows that AI systems can inadvertently discriminate against protected groups, creating significant legal and reputational risks for employers.
The Rising Tide of Claims
Employment Tribunal activity is climbing sharply. Between 2023 and 2024, total Tribunal receipts rose by 13% (from 86,000 to 97,000), and the caseload grew another 11% in the year to September 2025. Notably, disability discrimination claims rose by 42% over the same period. While official statistics do not yet isolate AI-related claims, both the Equality and Human Rights Commission and the Information Commissioner's Office (ICO) have flagged algorithmic bias as a key driver of emerging disputes.
Case Law Signals the Direction of Travel
The most high-profile example remains Manjang v Uber Eats UK Ltd (2301862/2021). In a 2022 preliminary judgment, the Employment Tribunal held that Uber's real-time facial recognition system indirectly discriminated against a Black courier on grounds of race. The algorithm repeatedly failed to match his selfies to stored images, triggering an automatic account suspension. Crucially, the absence of a manual override was treated as a significant factor in the breach of the Equality Act, underscoring that employers cannot rely on technology alone.
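The design lesson generalises. Below is a minimal sketch in Python of the safeguard the tribunal found missing; the match threshold, names and review queue are illustrative assumptions, not details of Uber's actual system. The key point is that a low-confidence automated match escalates to a human reviewer instead of triggering an automatic suspension.

from dataclasses import dataclass

MATCH_THRESHOLD = 0.80  # assumed confidence cut-off, illustrative only

@dataclass
class VerificationResult:
    worker_id: str
    match_score: float

def handle_verification(result: VerificationResult, review_queue: list) -> str:
    # A confident match passes automatically.
    if result.match_score >= MATCH_THRESHOLD:
        return "verified"
    # Crucially, no automatic suspension: a low-confidence match is
    # escalated for manual review, preserving the human override the
    # tribunal found to be missing.
    review_queue.append(result.worker_id)
    return "pending_human_review"

queue = []
print(handle_verification(VerificationResult("courier-42", 0.55), queue))
# -> pending_human_review; "courier-42" now awaits a trained reviewer

The decision the code encodes is organisational as much as technical: the system is allowed to say yes on its own, but never no.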
Historic cases also offer lessons. In Owen v Royal Bank of Scotland plc [2010], an absence-trigger policy indirectly discriminated against a disabled employee. Although the case predates AI as we know it, its principle applies to algorithmic equivalents, such as automated scoring systems that penalise part-time hours or medical absences without context. It is not a great leap to suggest that an insufficiently trained AI could apply its algorithms in a similar manner.
Emerging Risks
Employers face several emerging risks when deploying AI in workplace processes. One of the most significant is indirect discrimination, where automated scoring systems or rigid algorithms can disproportionately disadvantage disabled employees, carers, or those working flexible arrangements.
Another concern is bias, particularly in facial recognition and image-based verification tools, which remain vulnerable to inaccuracies caused by skewed or incomplete training data.
Finally, transparency gaps pose a serious challenge: when AI-driven decisions lack explainability, it becomes far harder for organisations to demonstrate compliance with the Equality Act and defend against discrimination claims.
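One practical mitigation for the transparency gap is to make every automated decision reconstructable after the fact. The sketch below shows one possible approach in Python: an append-only log capturing the inputs, model version, score and eventual human reviewer for each decision. The field names and the JSON Lines format are assumptions for illustration, not a prescribed standard.

import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    # One auditable record per automated employment decision.
    subject_id: str
    model_version: str
    inputs: dict        # the features the system actually scored
    score: float
    outcome: str
    reviewed_by: str = ""   # completed once a human checks the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    # Append-only JSON Lines log: easy to query if a claim later arises.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    subject_id="candidate-017",
    model_version="cv-screen-v2.3",
    inputs={"years_experience": 4, "skills_matched": 7},
    score=0.62,
    outcome="shortlisted",
))

A record of this kind is what allows an organisation to explain, months later, why a particular candidate was rejected and who signed the decision off.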
Top Tips for Compliance
Conduct AI-specific Equality Impact Assessments - Identify potential bias before deployment and document findings.
Ensure Meaningful Human Oversight - Every automated decision affecting recruitment or employment should be reviewed by a trained decision-maker.
Test and Audit for Bias Regularly - Implement structured bias testing and maintain clear records for accountability.
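On the third tip, one widely used first-pass heuristic is the "four-fifths rule". It originates in US regulatory guidance rather than the Equality Act, but it gives a simple signal: if any group's selection rate falls below 80% of the most-favoured group's rate, the outcome merits investigation. A minimal Python sketch, with illustrative data and group labels, follows.

def selection_rates(outcomes):
    # outcomes maps group label -> (number selected, total applicants)
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Flag any group whose selection rate falls below `threshold`
    # (four fifths) of the most-favoured group's rate.
    return {g: round(r / best, 3) for g, r in rates.items() if r / best < threshold}

sample = {
    "group_a": (48, 100),  # 48% selected
    "group_b": (30, 100),  # 30% selected -> ratio 0.625, flagged
}
print(adverse_impact_flags(sample))  # {'group_b': 0.625}

A flagged ratio is a prompt for investigation and documentation, not proof of discrimination; results should feed back into the Equality Impact Assessment described in the first tip.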
The Regulatory Horizon
2025 marks a turning point: AI governance is no longer optional; it is a legal and ethical imperative. With regulators sharpening their focus and claim volumes rising, organisations that act now will protect themselves from costly litigation and reputational harm. By embedding equality impact assessments, human safeguards and transparent bias testing into their compliance frameworks, employers can harness AI's benefits without falling foul of the Equality Act.

