A Los Angeles jury ordered Meta and YouTube to pay $3 million for designing addictive apps that harmed a child's mental health. This bellwether case highlights the legal exposure created by features like infinite scroll and recommendation algorithms, setting a precedent for future litigation.

A New Mexico jury has found Meta Platforms Inc. liable for deliberately misleading users about product safety and engaging in unethical trade practices, imposing a $375 million penalty. This historic verdict is the first of its kind, emerging from a state investigation that used decoy accounts to expose potential child exploitation on Meta's platforms. The ruling could set a precedent for numerous other lawsuits against social media companies.

OpenAI has open-sourced new prompt-based safety policies for developers, aimed at making AI applications safer for teenagers. This move comes as the company faces numerous lawsuits alleging that its ChatGPT product contributed to the deaths of young users. The policies address five categories of harm and were developed in collaboration with child safety organizations.