The federal government is seeking public feedback on whether the eSafety commissioner’s online enforcement powers should extend to cover hate speech, ‘pile-ons’ and deepfakes.
It comes amid a legal feud between eSafety and social media platform X, formerly known as Twitter, over the exercise of eSafety’s existing power to order the take-down of footage of violent crime.
X is promising to challenge an order that it remove footage of the stabbing of Bishop Mar Mari Emmanuel, which it has called an “overreach”.
Now Communications Minister Michelle Rowland, who has backed the eSafety commissioner in its dispute with X, has released an issues paper floating the possibility of new laws to further extend eSafety’s reach.
The government committed to review eSafety’s powers last November with a view to deciding whether they should be strengthened. Ms Rowland tasked senior public servant Delia Rickard with making independent recommendations for reform options. This issues paper is the first step in that direction.
It does not make specific recommendations, but it identifies for the first time the full scope of options under consideration.
One idea canvassed is harsher fines for online platforms or individuals who seriously or systematically fail to comply with eSafety’s orders. Currently, companies that refuse to follow content take-down orders can face daily fines of up to $782,500, and individuals who ignore directions about image-based abuse (such as the non-consensual sharing of intimate images) can face daily fines of up to $156,500.
But the issues paper noted “significantly higher” penalties were in place in Ireland and the UK, where platforms can be fined up to 10 per cent of their annual global turnover for failing to comply.
The government had already flagged its consideration of tougher penalties. It is also of the view that the laws, passed by the Coalition in 2021, have already fallen out of date and need to be updated. It has sped up Ms Rickard’s review process with this in mind.
“Our laws… are not set-and-forget,” Ms Rowland said. “[We need] to ensure these laws remain responsive to the rapidly changing digital environment.”
New frontiers for content regulation
To this end, the issues paper canvassed a wide array of online content which may be considered harmful but is not captured by current laws, and where new powers could be needed.
One item on the list was hate speech, which Ms Rickard noted was not unique to online spaces but could spread online “at a magnitude and order not seen before,” and could be a candidate for an anonymous complaints process similar to the one eSafety currently uses for cyber bullying.
A similar approach was floated for online ‘pile-ons’, which can be related to cyber bullying but are not neatly captured by existing laws.
Also on the list was technology-facilitated abuse — that is, the use of technology to facilitate offline abuse, in particular male violence against women.
Other candidates floated for regulation included ‘cyber flashing’ (the sharing of sexual material without the recipient’s consent), body image harms, self-harm promotion, the abuse of public figures and emerging harms related to artificial intelligence, such as pornographic ‘deepfakes’ and synthetic child sexual abuse material.
“While regulatory frameworks cannot address every potential online harm… there [may be] new or emerging harms that should be specifically addressed,” the paper argued.
The issues raised in the paper are open for public consultation, and a spokesperson for Ms Rowland told the ABC the government would not rule in or out any specific changes until it had received the final recommendations of the independent review.
“This is an opportunity for the community and civil society to have a role in reforms to strengthen our online safety laws, so they are fit for purpose in an ever-changing online environment,” Ms Rowland said.
The government is separately developing a pilot of age verification technology to limit child access to pornography online, which the Coalition wants to be more widely adopted.