Some of the largest tech companies in the world have been criticised over policies and guidelines that they say aim to keep users safe online, appearing before a House of Representatives select committee that is looking into online safety.
Representatives of Google, TikTok and Meta – the company that owns Facebook and Instagram – were questioned Thursday by a parliamentary committee about bills that would hold them accountable for harassment and abuse online on their platforms.
The federal government wants to introduce laws that would force social media platforms to take down offending posts and, in some circumstances, reveal the identity of anonymous posters.
Social media companies say they have a commercial interest in keeping Australians safe online as they would otherwise lose users.
But Reset Australia – an organisation working to address digital threats to Australian democracy – says the public don’t want safety issues to be left to social media companies to regulate.
Data policy director Rys Farthing said social media companies needed to be legally held to account for the risks their platforms create, particularly for children.
“When you hold companies to account for the risks they are creating, you see platforms changing the way they develop and release their products, where safety and risk reduction are more central,” she said.
Google Australia representative Lucinda Longcroft said the company’s guidelines took context into account when evaluating content on their platforms.
“While I might personally find content objectionable, our guidelines are enforced by trained trust and safety employees who look both at the nature of the material … and the context,” she said.
Labor MP Tim Watts asked Ms Longcroft about Google’s ‘three strikes’ policy for YouTube, where accounts that post content against the company’s guidelines three times are shut down.
He referred to nine complaints he had filed against videos on the United Australia Party’s YouTube channel.
She said six videos were removed following the complaints, but the account was still active.
Ms Longcroft said that if multiple complaints were filed at the same time, they were grouped into a single “strike”.
Meanwhile, Meta policy chief Mia Garlick said any reports that Facebook put profits above its users’ safety were “categorically untrue”. She told the committee that safety was “at the core of our business”.
UAP MP Craig Kelly – who was banned from Facebook in 2021 for posting misleading content about COVID-19 – said the company had “blood on its hands” for blocking information about purported treatments for the virus from being published.
Ms Garlick said Facebook would take the same action against a user whether or not they were a public figure.
“When it comes to harmful health misinformation, [our policies] are applied across the board regardless of who is making the claims,” she said.
Meta and Twitter called on the Australian federal government to review the effectiveness of the regulation of the country’s digital platforms in light of the passage of the Online Safety Act, as well as anti-trolling and online privacy laws currently under review.
Both tech giants have made these demands in Social Media and Online Safety Select Committee submissions, with Twitter writing that the committee should conduct a review of Australia’s online safety space a year after its initial report.
The Select Committee on Social Media and Online Safety was formed late last year to investigate the practices of major tech companies and review evidence on the impact of social media platforms on Australians’ mental health.
The committee’s investigation was approved by the federal government with the intention of building on proposed social media legislation to “expose trolls”.
Twitter said that with the Online Safety Act only recently passed and the government’s probe running for just three months, there was not enough time to effectively implement digital platforms legislation.
“With the range of factors that need to be considered to holistically advance online safety, we therefore ask for the timeline to be extended for the Select Committee Inquiry into Social Media and Online Safety to allow for the effective introduction and implementation of the Online Safety Act 2021 (Cth) and to ensure meaningful consultation with the community,” Twitter wrote to the committee.
Meta, meanwhile, wrote in its submission that the federal government should make it mandatory to review new digital platforms legislation to ensure it is effective and fit for purpose, specifically citing “the significant amount of new legislation that has been enacted”.
“Policymakers should be alive to the risk of overlapping, duplicative or inconsistent rules across different laws,” Meta said.
Cracking down on big tech has been high on Prime Minister Scott Morrison’s agenda of late, with the Prime Minister last year calling social media platforms a “coward’s palace” and saying they would be viewed as publishers if they were unwilling to identify users who post foul and offensive content.