In the last half of 2022 alone, many services—from game platforms designed with kids in mind to popular apps like TikTok or Twitter catering to all ages—were accused of endangering young users, exposing minors to self-harm and financial and sexual exploitation. Some kids died, their parents sued, and some tech companies were shielded from those legal challenges by Section 230. As regulators and parents alike continue scrutinizing how kids become hooked on favorite web destinations that could put them at risk of serious harm, pressure has mounted on tech companies to take more responsibility for protecting child safety online—pressure that is becoming increasingly hard to escape.
In the United States, shielding kids from online dangers is still a duty largely left up to parents, and some tech companies would prefer to keep it that way. But by 2024, a first-of-its-kind California online child-safety law is supposed to take effect, designed to shift some of that responsibility onto tech companies. California’s Age-Appropriate Design Code Act (AB 2273) will force tech companies to design products and services with child safety in mind, requiring age verification and limiting features like autoplay or minor account discoverability via friend-finding tools. That won’t happen, however, if NetChoice gets its way.
The tech industry trade association—with members including Meta, TikTok, and Google—this week sued to block the law, arguing in a complaint that the law is not only potentially unconstitutional but also poses allegedly overlooked harms to minors.
Some tech companies don’t like the California law, NetChoice said in a statement, alleging that it “violates the First Amendment” many times over and grants California “unchecked power to coerce moderation decisions the government prefers.” Because the law’s terms are purposefully vague and never really define what counts as “harmful,” the complaint alleges, even companies attempting to comply in good faith could find themselves charged with unforeseeable violations.
Some tech companies have already taken steps to tighten up online protections for young users this year. AB 2273 is based on a British online child-safety law passed last year that prompted many tech companies to change their policies, including Google, Instagram, Facebook, Pinterest, TikTok, Snap, and YouTube, The New York Times reported. None of these tech companies immediately responded to Ars’ request for comment.
California’s law goes further, however, by requiring tech companies to submit “Data Protection Impact Assessments” (DPIAs) detailing child-safety risks and mitigations before launching any new features. All covered companies must complete these DPIAs before AB 2273 goes into effect in July 2024 and then submit to biennial reviews thereafter.
These DPIAs are intended to increase accountability by prompting companies to consider how product features could cause harm to young users, then create timelines for mitigation efforts to prevent any harm identified. They also work to ensure that companies are actually enforcing their own posted policies, which NetChoice’s complaint specifically claims is unreasonable without the state defining the law in more concrete terms:
“AB 2273 unconstitutionally deputizes online service providers to act as roving Internet censors at the State’s behest. Providers must (i) assess the undefined risks their services and content ‘could’ pose to the ‘well-being’ and ‘best interests’ of children; (ii) devise a plan to prevent or mitigate any such risks; and (iii) develop, publish, and enforce terms of service and ‘community standards.’”
According to the complaint, NetChoice views the law as an improper attempt by the state to censor tech companies while threatening “crushing financial penalties” for any perceived violations—alleging that what constitutes a violation will be determined fully at the government’s discretion.
“Guessing wrong about what these provisions proscribe is prohibitively expensive—penalties for even negligent errors could exceed $20 billion,” the complaint said.