There was something of a social media storm in New Zealand on Monday, as Privacy Commissioner John Edwards attacked Facebook for refusing to accept fundamental changes to their platform.

In a series of tweets, since deleted given the "volume of toxic and misinformed traffic they prompted," Edwards said that "Facebook cannot be trusted. They are morally bankrupt pathological liars who enable genocide (Myanmar), facilitate foreign undermining of democratic institutions... [They] allow the live streaming of suicides, rapes, and murders, continue to host and publish the mosque attack video, allow advertisers to target 'Jew haters' and other hateful market segments, and refuse to accept any responsibility for any content or harm. They #DontGiveAZuck."

After last month's attack in Christchurch was live streamed and then shared, Facebook claimed that their systems failed to pick up the footage given the lack of relevant training data, but that such systems must and will improve.

Edwards refuted this, tweeting: "You didn't have any systems."

All about 'live'

Edwards was responding in part to an interview given by Facebook CEO Mark Zuckerberg last week, in which he commented on the events in New Zealand and the role Facebook had played, and in which he dismissed calls for radical change to the company's Facebook Live streaming service.

"Would a delay help, any delay of live streaming?" George Stephanopoulos asked him on Good Morning America.

"You know, it might, in this case," Zuckerberg admitted. "But it would also fundamentally break what live streaming is for people. Most people are live streaming, you know, a birthday party or hanging out with friends when they can't be together. And it's one of the things that's magical about live streaming is that it's bi-directional, right? So you're not just broadcasting. You're communicating. And people are commenting back. So if you had a delay that would break that."

In a later interview on Radio New Zealand, Edwards said that he found Zuckerberg's comments "pretty disingenuous. Maybe a delay until they sort out their AI would be a good thing. Maybe they just need to turn it off altogether."

Regulation is (finally) here

A week ago, Australia became the first country to introduce legislation that would hold social media executives personally liable for the content on their platforms, with significant fines and even up to three years' jail time on the table. "Internet platforms must take the spread of abhorrent violent material online seriously," explained Christian Porter, Australia's attorney general. "Platforms should not be weaponized for these purposes. This [legislation] is most likely a world first."

On Monday, the U.K. is due to kick off its own 'online harms' consultation period that could result in something similar, with a belated end to social media's exemption from responsibility for content. It is anticipated that the U.K. will introduce a dedicated regulator, funded by the tech companies themselves, as well as heavy company fines for breaches and even personal sanctions (including criminal charges) for executives who fail to comply.

Referring to the proposed regulation, U.K. Prime Minister Theresa May said that "it is time to do things differently, we have listened to campaigners and parents, and are putting a legal duty of care on internet companies to keep people safe."

"Tech companies have not done enough to protect their users and stop this shocking content from appearing in the first place," the country's Home Secretary said in a statement. "Our new proposals will protect U.K. citizens and ensure tech firms will no longer be able to ignore their responsibilities."

The prompt for the U.K. action has been the self-harm material shared on Facebook and other platforms, with the allegation that the platforms' algorithms even targeted vulnerable people with such material based on their clicks and likes. The problem isn't just terrorism or far-right hatred; it is much wider and deeper than that.

"This gives the lie to what Zuckerberg talked about, the greater good," Edwards said in his radio interview. "[Zuckerberg] can't tell us or won't tell us how many suicides are live streamed, how many murders, how many sexual assaults. I have asked Facebook exactly that and they don't have those figures or they won't give them to us."

"We recognize that the immediacy of Facebook Live brings unique challenges," Facebook had acknowledged in a blog post shortly after Christchurch. "We use artificial intelligence to detect and prioritize videos that are likely to contain suicidal or harmful acts." The challenge, though, is that those "AI systems are based on 'training data', which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video." And so they need to rely on moderators and user reports, but "during the entire live broadcast, we did not get a single user report."

When asked about the new wave of regulation, Edwards said that governments finally seem to be waking up to the danger and the need to act, and that "the legal protection... with no liability for content... what we are seeing around the world is a push back on that."


Comment: Yet the government only takes action when it impacts the political bottom line.

See: Social media censorship is way more dangerous than the censored material


Missing the point

Shortly after the Christchurch attacks, Edwards made headlines when he dismissed Facebook's silence as "an insult to our grief," accusing the company of failing to "mitigate the deep, deep pain and harm from the live-streamed massacre of our colleagues, family members and countrymen broadcast over your network."

In his own interview, Zuckerberg belatedly said of Christchurch, "that was a really terrible event. And we've worked with the police in New Zealand, and we still do." The senior silence at Facebook was not broken until two weeks after the attacks, in a letter from the company's COO Sheryl Sandberg.

Addressing the people of New Zealand, Sandberg wrote: "We have heard feedback that we must do more, and we agree. In the wake of the terror attack, we are taking three steps: strengthening the rules for using Facebook Live, taking further steps to address hate on our platforms, and supporting the New Zealand community."

Zuckerberg's more recent comments suggest the company believes it can stick with monitoring systems and preserve the 'magic' of Live. The fact is that engagement with video is higher than with other types of content, and live streaming opens the door to lengthier sessions than pre-recorded video does. Forget the 'magic'; think about the 'profits'.

Since then, Facebook has changed policy in another area, banning far-right 'white nationalism', although there are still almost daily reports of content that has escaped censure. But live streaming remains in place. Two weeks after Christchurch, Zuckerberg published an opinion piece in the Washington Post, speaking in general terms about the direction internet regulation might now take and seeming to abdicate corporate responsibility for policing the internet.

Regulators seem to agree; they appear keen to do things their own way.