Social media inquiry to put big tech under the microscope
The sickening online abuse suffered by women and girls will be interrogated when social media giants are put under the microscope in Canberra on Tuesday.
A parliamentary inquiry, which started in December, is building on world-first legislation to compel social media giants to unmask anonymous trolls in defamation cases, and is looking into toxic material on online platforms and the dangers it poses to Australians.
Scott Morrison, who announced the social media crackdown in November, said big tech companies created the platforms so had a responsibility to keep users safe.
“Keeping our kids safe online is no doubt one of the biggest topics of conversation at family dinner tables across Australia over the summer holidays,” the Prime Minister said.
“Online safety keeps parents awake at night, and for our kids the impacts can be far worse and longer lasting.”
Among those expected to appear at the public hearings over the next two weeks are AFL players, as well as representatives from Facebook, Twitter, TikTok, Google, Match.com, Apple, and former NSW sex crimes squad head John Kerlatec.
First up before the committee on Tuesday are Western Sydney University criminology associate professor Dr Michael Salter, lawyer and image-based abuse victims’ advocate Noelle Martin, and TV presenter Erin Molan.
Mr Morrison said it was “utterly unacceptable” that girls and women were more often the victims of “harmful and sickening conduct online”.
“This is why it’s so important that people like Erin bravely share their experiences, so we can hold big social media to account,” he said.
Communications Minister Paul Fletcher said the inquiry would give parents, community organisations, experts and victims of online abuse an opportunity to say what they expected from the tech companies.
“We expect these companies to respond,” Mr Fletcher said.
“Since we established the eSafety Commissioner in 2015, we’ve put big tech on notice that the days in which they could generate enormous revenues with little or no concern for users’ safety are over.”
In a submission to the inquiry, Twitter argues users are able to mute and block abusive accounts — saying it deployed 143 million anti-spam challenges to accounts causing trouble over a six-month period in 2020.
“Twitter works to prevent spam and fake accounts from harassing other people on the service both at the sign-up stage so they won’t be able to join, and by removing accounts that have been proven to cause trouble,” the submission says.
“The issues raised in the committee’s terms of reference are broad and complex, and cannot be fully explored in the time allotted for this inquiry.
“There’s a desire to deal with the companies and issues that are most commonly in the headlines today, without sufficient consideration of how this will impact the future of the internet or where different policy objectives might be creating contradictions.”