How to Protect Human Rights in an AI-Filled Workplace
For many, the biggest worry about AI in the workplace is: Will robots take our jobs?
It’s a valid concern. McKinsey & Company estimates that automation could eliminate 45 million jobs, or a quarter of the workforce, by 2030. While AI promises to create new jobs, like prompt engineers and AI ethicists, many of us also have qualms about how AI is being integrated into our fields. Should a bot host a podcast, write an article, or replace an actor? Can AI serve as a therapist, tutor, or car builder?
According to a Workday global survey, three out of four employees say their organization isn’t collaborating on AI regulation, and the same share reports that their company has yet to provide guidelines on responsible AI use. In a recent episode of The New Way We Work podcast’s mini-series exploring how AI is changing our jobs, I spoke with Lorena Gonzalez.
Gonzalez, the president of the California Federation of Labor Unions and a former assemblywoman, has written AI transparency legislation, including a law designed to prevent algorithms from denying workers break time. She believes that some of the most effective AI regulations address common issues across various workplace types.
Robot bosses and algorithmic management
Gonzalez’s initial bill on algorithmic management targeted warehouses. She explained, “We wanted to give workers the power to question the algorithm that was speeding up their quota.” She noted that the lack of human interaction was leading to an increase in warehouse injuries.
“What we started with in the warehouse bill, we’re really seeing expand throughout different types of work. When you’re dealing with an algorithm, even the basic experience of having to leave your desk or leave your station . . . to use the restroom, becomes problematic,” Gonzalez said. “Taking away the human element obviously has a structural problem for workers, but it has a humanity problem, as well.”
Privacy
Gonzalez is also working on bills focused on worker privacy. She pointed out that some companies are going beyond basic monitoring, using AI tools for things such as heat mapping. She’s also observed companies requiring employees to wear devices that track their conversations, even in traditionally private areas like break rooms and bathrooms. She further described companies monitoring how fast employees drive, even when off the clock.
Data collection and storage
A third area of focus for Gonzalez concerns data taken from workers without their knowledge, including through facial recognition tools. She stated, “You have a right to understand what is being taken by a computer or by AI as you’re doing the work, sometimes to replace you, sometimes to evaluate you.”
These issues emerged during the SAG-AFTRA strike last year, and Gonzalez noted that they take different forms across industries. “We’ve heard it from Longshoremen who say the computer works side-by-side to try to mimic the responses that the worker is giving,” she said. “The workers should have the right to know that they’re being monitored, that their data is being taken, and there should be some liability involved.”
Beyond specific AI regulations, Gonzalez believes business leaders should consult with their employees about how new technology will influence their jobs before implementation, not afterward. “Those at the very top get sold on new technology as being cool and being innovative and being able to do things faster and quicker and not really going through the entirety of what these jobs are and not really imagining what on a day-to-day basis that [a] worker has to deal with,” she said.
Listen to the full podcast episode for more of Gonzalez’s insights on AI regulation in industries such as healthcare and retail, and the crucial missing step in AI development from Silicon Valley.