A few weeks ago, Twitter CEO Elon Musk asked his remaining staff for a show of loyalty by prompting them to click a “yes” link in an email. By clicking yes, employees were telling Musk that they agreed to work longer hours in exchange for keeping their jobs. It was Musk’s way of seeing who on his existing team was truly ready to fall in line behind his “hardcore” efforts to build Twitter 2.0. Musk quickly learned how unattractive his offer was when an overwhelming number of employees declined to click yes. Among those rejecting Musk’s severe terms was apparently almost half of Twitter’s global team dedicated to preventing child sexual exploitation on the platform.
Three people familiar with Twitter’s current staffing told Bloomberg that when 2022 started, Twitter had 20 team members responsible for reviewing and escalating reports of child sexual abuse material (CSAM). Today, after layoffs and resignations, fewer than 10 specialists remain, forming what Bloomberg described as “an overwhelmed skeleton crew.” Despite Musk continually tweeting that blocking CSAM is Twitter’s top priority, and even going so far as to invite users to report CSAM directly to him, Musk may already be losing his battle to keep the material off Twitter.
“Musk didn’t create an environment where the team wanted to stay,” sources told Bloomberg.
The staff that Musk lost, according to Bloomberg, included child safety experts and former law enforcement officers in the US, Ireland, and Singapore. Sources said that this team was already working longer hours—before Musk asked employees to commit to more hours—just trying to keep up with the constant flow of user reports and legal requests.
These specialists removed CSAM, assisted in law enforcement investigations, and, relying on human judgment rather than artificial intelligence, identified accounts that were grooming minors or promoting attraction to minors as healthy.
Although Twitter recently removed some known hashtags used to spread CSAM, the move was not a complete or permanent solution, because hashtags change, and so does the coded language that abusers use to skirt automated content removal. Because these hashtags were removed after Musk’s takeover, it’s easy to credit him with the decision and read it as evidence of his commitment to blocking CSAM. However, sources told Bloomberg that the decision to remove the hashtags was made before Musk came on board.
According to Wired, only one child safety team member is left to handle all the reports coming from the Asia-Pacific region. That leaves Twitter with a single expert who must understand both the regional laws required to coordinate with law enforcement and the evolving non-English code words that abusers in the region use to evade detection.
In the early morning hours today, Musk was awake and tweeting, insisting that removing CSAM from Twitter “will forever be our top priority.”
It’s a responsibility that Musk can’t dodge. Unlike hate speech and misinformation, which in some cases only violate Twitter’s rules, CSAM is content that Musk is legally required to block on his platform.
That means his “freedom of speech, not freedom of reach” promise (a plan to contain non-criminal hate speech and misinformation simply by not promoting it to Twitter users) is not an acceptable strategy for dealing with CSAM. Lawmakers, especially in Europe, are cracking down on CSAM with new laws dictating how platforms approach online child safety. Twitter could keep losing money if it breaks those laws and racks up fines, which could reach as high as 10 percent of Twitter’s revenue. The UK’s Online Safety Bill even threatens to block platforms regionally in serious cases where CSAM cannot be adequately controlled.
Musk also seems to know that he cannot afford for Twitter to be dropped by the Apple App Store, which former head of Twitter Trust & Safety Yoel Roth wrote in The New York Times could happen if Twitter doesn’t prioritize protecting kids on the platform.
Twitter didn’t respond to Ars’ request for comment, but The Verge reported earlier this year that Twitter itself concluded in April that “Twitter cannot accurately detect child sexual exploitation and non-consensual nudity at scale.” In September, Reuters reported that some brands were dropping Twitter specifically because Twitter placed their ads next to CSAM. David Maddocks, the brand president of Cole Haan, summed up advertiser concerns to Reuters, saying, “We’re horrified.”
Experts told Wired that child safety organizations help social media platforms like Twitter automatically detect and remove a lot of CSAM, but that technology can’t replace human moderators, who have access to internal data that outside organizations don’t.
In September, when Twitter still had a spokesperson, Celeste Carswell told Reuters that Twitter was “investing more resources dedicated to child safety, including hiring for new positions to write policy and implement solutions.” Ars couldn’t find any current job listings for child safety positions, so the only update since then seems to be a slide Musk recently shared from a Twitter 2.0 presentation that broadly claimed, “We’re recruiting.”