Is AI watching you?

While much of society has returned to a pre-pandemic state, the same is not true of remote and home working arrangements.

According to the ONS, before COVID-19 around 12% of UK workers reported working from home (either full or part-time). After spiking during COVID, that figure has settled at 40% (from survey data gathered in January and February 2023).

Remote working may be better for employees, but many employers still worry about the productivity of people ‘not in the office’. Small wonder that Gartner reports the number of large employers tracking their workforce has doubled since COVID and is expected to increase further.

Which brings us to another (at times overwhelming) news issue: artificial intelligence. Among its many applications, AI automation potentially makes employee productivity monitoring much easier and cheaper (and therefore more attractive to employers).

Employee monitoring includes observing computer activity, keystrokes, video, emails and other communications – even tracking eye movement. Add artificial intelligence to the mix and that data can be used to analyse behaviour patterns and flag unusual activity – potentially identifying security issues, breaches of company policy, or other suspicious acts. These behaviour patterns can be personalised to individual employees, meaning the system can identify when someone behaves in a way that is not normal for them. While this kind of functionality could potentially be used to spot and address stress and burnout, it can still feel highly intrusive to the workforce being monitored.
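
As a rough illustration of what ‘not normal for them’ could mean in practice, here is a minimal, hypothetical sketch of per-employee anomaly flagging. The data, metric and thresholds are invented for illustration only and do not represent any particular monitoring product.

```python
# Minimal sketch of per-employee anomaly flagging (hypothetical data and thresholds).
# Each employee gets their own baseline, so "unusual" means unusual *for that person*.
from statistics import mean, stdev

# Hypothetical daily activity counts (e.g. keystrokes or logged events) per employee.
history = {
    "alice": [5200, 4900, 5100, 5300, 5000],
    "bob":   [1200, 1350, 1100, 1250, 1300],
}

def is_unusual(employee: str, todays_count: int, z_threshold: float = 3.0) -> bool:
    """Flag today's activity if it deviates strongly from this employee's own baseline."""
    baseline = history[employee]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return todays_count != mu
    return abs(todays_count - mu) / sigma > z_threshold

# The same number can be normal for one person and anomalous for another:
print(is_unusual("bob", 5000))    # True  - far outside Bob's usual range
print(is_unusual("alice", 5000))  # False - well within Alice's usual range
```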

Which leads us to a recent report from the House of Commons Culture, Media and Sport Committee: Connected tech: smart or sinister? MPs on the Committee have suggested that employers should not use this kind of tracking technology without employees’ consent, so as to avoid feelings of betrayal, resentment and anxiety. Are they right to be worried? Well, a TUC report from 2022 found that 60% of employees feel they have been the subject of some kind of workplace surveillance or monitoring in the past year. The word “feel” implies that they don’t actually know for sure; that could mean employers are monitoring secretly, or simply that workers don’t trust their employer – either way, concerning.

So, should employers use AI and other software to monitor their workers?

On the one hand, it sounds perfectly reasonable. If you’re paying someone to do a job, you want to supervise the work and its results, and you can’t do that directly if they’re working from home. But any monitoring is by nature intrusive, so how much is too much? And what is the legal situation?

Is employee monitoring legal?

Post-Brexit, the UK legal position still rests on the EU’s GDPR, now retained in domestic legislation as the UK GDPR. A CIPD article from 2019 remains a good source of information and highlights three key questions:

  1. Do you have a lawful basis for using the data?
  2. Have you carried out a data protection impact assessment (DPIA)?
  3. Have you given your employees notice that they are being monitored?

For #1, the idea of ‘performance management’ is not enough – there must be a “legitimate interest” that is not overridden by the interests and fundamental rights of employees, and certain limits are suggested:

  • Monitor only in specific places, e.g. not “sensitive areas” such as bathrooms or prayer rooms.
  • No monitoring of personal files and communications.
  • Monitoring should be carried out on a sample basis and not be continuous.

The key points for employers are that notice should be given to employees and there should be transparency about how gathered data will be used. Covert monitoring should only happen in cases of suspected criminal activity, and even then there should be a blanket policy in place stating that covert monitoring may take place.

So, it’s legal, but is it ethical?

Is employee monitoring ethical?

The ethics question depends on what kind of business or company you are. Ask yourself:

  • Does monitoring fit with who you are as a business – your workplace culture, your stated values?
  • Who will have access to the information? How will it be protected from third parties, external partners or colleagues?
  • What actions do you intend to take based on the data you gather? In whose interests are those actions?

Should you monitor your employees?

Those in favour of using software and AI to monitor the workforce would cite the potential benefits: improved productivity and managed liability (as an employer, you can be liable for an employee’s actions). However, privacy remains a big issue (especially for those being monitored), the data you gather must be carefully and securely managed, and a feeling of being watched – often equated with not being trusted – tends not to boost workforce morale or employee engagement. And yet, monitoring of some sort has always taken place, if only for performance management purposes. Maybe it’s a question of degree?

At the end of the day, it’s a decision each employer must make for themselves. Whether it has a negative impact or not probably comes back to the point made by the Culture, Media and Sport Committee: consent is critical. And whether you get that consent or not probably depends on how you intend to monitor, the reasons for monitoring, and how you communicate and discuss these matters with the workforce.


If you’d like to explore the topic of AI in greater detail, you can book onto our open event 10 ways to save time using ChatGPT and AI, or give us a call on 01582 463463. We’re here to help!