Children under the age of 16 needed protecting and the moral argument wasn’t winning. Government regulation can change the terms of debate

On 10 December, the world watched as Australia enacted the first social media ban for under-16s. Whether it will have the desired effect of improving young people’s lives we are yet to find out. But what the ban has achieved already is clear.

Many politicians, along with academics and philosophers, have noted that self-regulation has not been an effective safeguard against the harms of social media – especially when the bottom line for people like Mark Zuckerberg and Elon Musk depends on keeping eyes on screens. For too long, these companies resisted, decrying censorship and prioritising “free speech” over moderation. The Australian government decided waiting was no longer an option. The social media ban, and similar regulation across the world, is now dragging tech companies kicking and screaming towards change. That it has taken the force of the law to ensure basic standards are met – such as robust age verification, teen-friendly user accounts and deactivation where appropriate – shows the moral argument alone was not enough.

While Malaysia, Denmark and Brazil are looking at similar bans, countries such as the UK have decided to see if platforms can be made safer before prohibiting young people from using them. To what extent this is possible remains an open question. Features such as infinite scroll – which encourages users to spend whole hours, even days, on their phones – and variable reward systems that mimic gambling, making these platforms feel like never-ending slot machines, have been deemed problematic enough for the state of California to plan to limit teenagers’ exposure to “addictive feeds” to one hour a day unless their parents allow otherwise. In the UK, there are currently no such limitations.