Ofcom has fined 4chan £450,000 for failing to implement age checks, £50,000 for neglecting risk assessments, and £20,000 for unclear terms of service under the UK's Online Safety Act. This highlights the critical need for online platforms serving UK users to adopt robust age assurance, proactive risk management for illegal content, and transparent policies. The move underscores Ofcom's strong enforcement powers, including potential business disruption measures for non-compliance.

OpenAI's delayed "adult mode" for ChatGPT is expected to launch with text-based "smut" conversations, not images or video. The rollout was postponed due to significant internal safety concerns, technical content moderation challenges, and an age-prediction system prone to misclassifying minors. This cautious, text-only strategy distinguishes it from more visual rival AI offerings.
On March 10, 2026, YouTube launched a pilot program offering government officials, political candidates, and journalists a new AI deepfake detection tool. Participants can verify their identity, access a dashboard to monitor AI-generated videos using their likeness, and report them for removal. The initiative addresses the growing challenge of synthetic media and strengthens the platform's content moderation efforts.
An in-depth review of the Oversight Board's critical recommendations for Meta's AI-generated content policies highlights the urgent need for dedicated rules, improved detection, and greater transparency to combat misinformation.

Social media platform X will suspend creators from its revenue-sharing program for 90 days if they post AI-generated videos of armed conflict without disclosure. The policy, announced by X's head of product Nikita Bier, aims to combat misinformation and keep authentic information visible during critical events. Repeat offenses will result in a permanent ban from the program.