Posted By Jessica Weisman-Pitts
Posted on August 18, 2021
By Tania Goodman and Patrick Kilgallon, Collyer Bristow
As technology continues to develop, many banking and financial services businesses have deployed artificial intelligence (AI) software to assist in areas such as recruitment, training, employee monitoring, disciplinary processes, and dismissals. UK legislation continues to play catch-up with these technological developments, and there is no specific legislation to deal with this growing issue. Instead, employers must turn to existing legal frameworks to understand how AI interacts with employment law and the risks it poses.
AI does not have a single agreed definition but can broadly be captured by the idea of making machines more intelligent – the concept that a machine could work in the same way as a human but more efficiently and with better results. Whilst the endgame is for AI to act without human involvement, creating an AI system requires human input. To programme a system to draw conclusions from data, there must be an analysis and understanding of human thought processes and how they precede action, and a method of describing that analysis in the form of instructions for the AI system. Consequently, it is not surprising that human biases are sometimes coded into the software.
Discrimination
The Equality Act 2010 protects UK employees from discrimination on the grounds of protected characteristics, such as sex, race, age, and disability, and this area of employment law is already displaying some tensions in relation to AI.
In March this year, Uber came under fire for its use of facial recognition software after evidence emerged that it was less accurate at recognising darker skin tones. The failure to recognise non-white faces resulted in some Uber drivers being banned from the platform and losing their income. In the United States, an AI system used to assist judges in sentencing decisions was found to have been trained on a flawed initial data set, with the result that the programme was twice as likely to falsely predict that black defendants would be repeat offenders. The AI had, in effect, become discriminatory.
Under UK law, system errors such as those described above would open employers up to a discrimination claim. If the AI system itself treats employees differently because of one of their protected characteristics this could result in a direct discrimination claim.
Employees are also protected against indirect discrimination, which broadly arises when a ‘provision, criterion or practice’ put in place by an employer disadvantages an employee because of a protected characteristic. As an AI system is based upon an algorithm or set of rules, it could be classified as a provision, criterion or practice and give rise to an indirect discrimination claim.
In its 2020 Worker Experience Report, the Trades Union Congress (TUC) found that employees did not widely understand the implications of AI used in the workplace. More worryingly, employers who had purchased and implemented AI products for their businesses often had little understanding of those implications either. Employers should therefore be very careful about deploying AI systems they do not understand, or they risk relying on an imperfect system that could give rise to discrimination claims.
Unfair Decisions
For a decision to dismiss an employee to be fair it must, under UK employment legislation, fall “within the range of reasonable responses” and, if needed, be explained or justified by a person. If that decision is driven by complex AI algorithms, the underlying data and reasoning may be inaccessible or difficult to explain.
If an employee is dismissed without being told what data was used to reach the decision, or how it was weighted, the dismissal is likely to be unfair.
AI is increasingly used to make decisions that impact on employees’ livelihoods, for example, performance reviews, disciplinary issues and dismissal.
‘Black box’ issues, where it proves difficult or impossible to explain the rules and logic behind an AI-driven decision, can cause employers very real problems. Employers cannot hide behind such issues, which leave them at risk of claims for unfair or constructive dismissal.
The future and advice for employers
AI will continue to develop and will likely outperform humans in some aspects of working life. However, the technology has wide implications and employers should be cautious. AI carries two notable risks: the human error of assuming AI is infallible, and the lack of transparency in its outcomes. Employers bringing in AI systems to assist with decision-making should set clear limits on how those systems are used.
The TUC's 2020 Worker Experience Report also found a lack of consultation with employees when AI systems were implemented. Employers should therefore involve employees at an early stage when deciding how AI can best be deployed in the business. Finally, employees should be able to access sufficient information about how an AI system is being used so they can be reassured that it is operating in a lawful, proportionate, and accurate way.
Tania Goodman is a Partner and Head of Employment Law and Patrick Kilgallon is an Associate in the Employment Law team at Collyer Bristow. Visit www.collyerbristow.com.