More than 200 Facebook workers from around the world have accused the firm of forcing its content moderators back to the office despite the risks of contracting coronavirus, the BBC reports.
The claims came in an open letter that said the firm was “needlessly risking” lives to maintain profits.
They called on Facebook to make changes to allow more remote work and offer other benefits, such as hazard pay.
Facebook said “a majority” of content reviewers are working from home.
“While we believe in having an open internal dialogue, these discussions need to be honest,” a spokesperson for the company said.
“The majority of these 15,000 global content reviewers have been working from home and will continue to do so for the duration of the pandemic.”
In August, Facebook said staff could work from home until the summer of 2021.
But the social media giant relies on thousands of contractors, who officially work for other companies such as Accenture and CPL, to spot materials on the site that violate its policies, such as spam, child abuse and disinformation.
In the open letter, the workers said the call to return to the office had come after Facebook’s efforts to rely more on artificial intelligence to spot problematic posts had come up short.
“After months of allowing content moderators to work from home, faced with intense pressure to keep Facebook free of hate and disinformation, you have forced us back to the office,” they said.
“Facebook needs us. It is time that you acknowledged this and valued our work. To sacrifice our health and safety for profit is immoral.”
This letter gives a fascinating behind-the-scenes glimpse into what is happening at Facebook – and all is not well.
Mark Zuckerberg’s dream is that AI moderation will one day solve some of the platform’s problems.
The idea is that machine learning and sophisticated software will automatically pick up and block things like hate speech or child abuse.
Facebook claims that nearly 95% of offending posts are picked up before they are flagged.
Yet it’s still easy to find grim stuff on Facebook.
On Monday I published a piece showing the kinds of racist and misogynistic content aimed at Kamala Harris on the platform.
Facebook removed some of the content, but even though I flagged it, some of it is still there – a week after I reported it.
What this letter suggests is that AI is simply not working as well as Facebook executives would hope.
Of course, these are the voices of moderators – Facebook will have a different take.
You could also argue that the moderators have a vested interest in saying AI doesn't work.
But clearly, as the spotlight is well and truly on Facebook, there are internal problems that have now spilled out into the open.