Transparency in electronic monitoring: at home and in the office


Ontario has proposed new legislation that would require employers to tell their workers how they are being monitored electronically. Will this move toward enhanced transparency be enough to protect workers’ privacy?

The proposed law is timely, given the pandemic-driven shift in work from offices to remote locations.

“It is a positive move, but far from enough,” says artificial intelligence (AI) and labour regulation Professor Valerio De Stefano of York University’s Osgoode Hall Law School. “Transparency is important but what needs to be urgently addressed is the improper use of algorithmic management.”

The rise in remote work has increased the likelihood of employers constantly monitoring their employees through algorithmic management, which, De Stefano says, is a serious problem.

“The technology has introduced new problems such as intensified electronic surveillance and the reinforcement of some existing biases. This can be reduced if the law also restricts employers from using aspects of electronic monitoring that cause occupational health and safety issues.” As he says in a recent podcast on the topic, “if you are monitored and followed all of the time, you could develop various forms of stress and psychosocial issues.”

The proposed law would currently require employers of more than 25 workers to inform their employees about how their use of computers, cell phones, GPS systems and other electronic devices is being tracked.

Platform workers such as Amazon warehouse employees and Uber Eats drivers use GPS trackers as part of their jobs. Many of them are most likely unaware of what data these trackers are collecting, says De Stefano, who has well-rounded research expertise in AI, algorithmic management and the platform economy.

“The technology might be collecting data on how long a person takes to move from one location to another; how many times they visit the restroom during their shift; how long they chat with a coworker in the hallway, and so on,” says De Stefano.

He also notes that supervisors proctor both remote and in-person workers: they can remotely connect to an employee’s device, access its camera, monitor browsing data, log keystrokes and even predict whether an employee is considering quitting their job or planning to become a parent.

“Algorithmic management providers are even promising to map emotions of prospective employees during the recruitment process, to find out if they are lying about their credentials and so on,” he adds.

An emerging threat in the recruitment process is that companies that have made inroads against racial and gender bias might inadvertently revert to their previous ways by implementing AI in hiring.

“For example, a good candidate might be eliminated early in the selection process because their resumé indicated a few gaps in their work history. This could be because the AI determined them to be unfit for the job, based on previously collected data showing that candidates had been rejected due to breaks in employment,” explains De Stefano. “What the AI may not recognize is that the current candidate is a woman who might have been out of the workforce due to pregnancies.”