An anonymous reader quotes The Verge as saying: California Governor Gavin Newsom has signed legislation aimed at ensuring that web platforms monitor hate speech, extremism, harassment, and other unwanted behavior. Newsom signed AB 587 after it passed the state legislature last month, despite concerns that the bill could violate First Amendment speech protections. AB 587 requires social media companies to post their terms of service online and to submit a semiannual report to the state attorney general. The report must include details on whether the platform identifies and moderates several categories of content, including “hate speech or racism,” “extremism or radicalization,” “misinformation or disinformation,” harassment, and “foreign political interference.” It must also describe the platform’s automated content moderation, how many times people viewed content flagged for removal, and how that flagged content was handled. It’s one of several recent California measures aimed at regulating social media, including AB 2273, which is designed to tighten rules around children’s use of social media.
Newsom’s office called the law “a first-of-its-kind social media transparency measure” aimed at combating extremism. In a statement, he said, “California will not stand by when social media is used to spread hate and misinformation that threaten our communities and the nation’s founding values.” But the transparency measures are similar to those in several other proposals, including parts of two currently blocked laws in Texas and Florida. (Ironically, other parts of those bills are aimed at preventing companies from removing conservative content, which often runs afoul of hate speech and misinformation rules.) Courts haven’t necessarily concluded that the First Amendment blocks social media transparency rules, but the rules still raise red flags. Depending on how they are written, they can require companies to disclose unpublished moderation rules in ways that help bad actors game the system. And the bill singles out specific categories of “horrible but legal” content, such as racism and misinformation, that are harmful but often constitutionally protected, potentially putting a thumb on the scale of how platforms treat that speech.