On Thursday, Facebook and Twitter cast doubt on a US State Department claim that Russia has been secretly controlling thousands of fake accounts to spread disinformation about the coronavirus outbreak on social media.

"In general, our investigation hasn't substantiated this claim," Yoel Roth, Twitter's head of site integrity, said during a panel at the RSA security conference.

"We have asked (the State Department) for any evidence that they have to support this, and we haven't received anything yet," added Nathaniel Gleicher, Facebook's head of cybersecurity policy.

The discussion comes almost a week after the State Department told news agency AFP that it identified thousands of fake, Russia-linked accounts on Facebook, Instagram, and Twitter that are out to sow fear by falsely claiming that the US created the coronavirus strain.

But the social networks say US officials have not shared their findings with Facebook and Twitter. "We'd love to get a briefing on this," Roth said.

So far, Twitter has seen some Russian accounts tweet out medical disinformation about the coronavirus strain. But all these accounts were clearly marked as having ties to Russia, and included Kremlin-backed news agencies. "You can identify these accounts because they have names like Russia Today," Roth said. "But are there clandestine efforts on Twitter or on Instagram or on Facebook that are engaged in some sort of 2016 (presidential election) covert activity? Our experience thus far is no, we haven't identified anything like that."


Comment: Funny how they downplay one hoax by comparing it to another hoax! That's how routine anti-Russian hysteria has become.


Gleicher said Facebook works closely with the US Department of Homeland Security and the FBI to stop state-sponsored propaganda campaigns on social media. "There are a lot of organizations involved in this (information sharing), and so you will always see new organizations get involved."

The State Department declined to comment publicly on the AFP report, and, according to the BBC, Russia has denied accusations that it's spreading coronavirus disinformation online. But the discrepancy may create more public confusion at a time when the two social networks are already battling disinformation about the coronavirus outbreak.

"When you don't share the evidence behind it, but you make a broad claim, it becomes incredibly difficult to understand if anything is there. But the theory that something is there is off to the races," Gleicher said.

Evolving Threats

Overall, however, Roth acknowledged that Russia and other advanced threat actors are always going to deploy all the tools at their disposal to influence conversations around the world. "Since 2016, we have seen an evolution in the tactics used by these threat actors."


Comment: Just like every other major power: the U.S., China, Israel, Saudi Arabia...


But instead of massive "botnet style" armies of bogus accounts to spread disinformation, the bad guys have begun to favor quality over quantity. Twitter has seen a "continued shift across platforms to increasing investment in individual, high-value false accounts," Roth said.

Gleicher agreed. "We're seeing influence operations today coming from these threat actors looking more like threat actors from the 80s and 90s," meaning a small set of very trusted accounts.


Comment: There's one such group Gleicher probably can't even bring himself to acknowledge: the Twitter-approved blue-checkmarked accounts!


Those campaigns have also moved cross-platform. Camille Francois, chief innovation officer at social media analysis company Graphika, described how one Russian influence campaign stretched across 20 different platforms. "We think of this as a Google, Twitter, Facebook problem," said Francois. "It is not."

The wide breadth of the campaigns means they don't always look the same. "It does manifest differently on the platforms," said Toni Gidwani, an analyst lead at Google's Threat Analysis Group.

These more sophisticated attacks sometimes also target journalists or notable, trusted people to amplify disinformation. "We see more and more campaigns that are designed to reach out to authentic voices and do their work for them," said Gleicher.


Comment: For Moses' sake, that's exactly what the CIA does!


Attackers might, he said, create several blog posts that push a certain point of view. They might then use other personas to direct reporters to those messages. "We see ourselves being used as a stepping stone," he said.

Another new factor in the ongoing disinformation war is the rise of so-called disinformation as a service. "You now have the for-hire market," said Francois. "There's a cottage industry of disinformation for hire."

Defenders Get Better, Faster

One thing working in the social networks' favor: advanced campaigns are slow and require a lot of resources. It's "very expensive on the part of the threat actors...to create false organizations and get people to trust them over years and years," Gleicher said. "We see actors getting better, but more and more we see defenders getting better, faster."

Coronavirus report aside, the gathered panelists attributed their successes to improved communication and coordination, both between the individual companies and the government and among the companies themselves. "Overall there is a recognition that there is no one group... that can address these threats on their own," said Roth.

Facebook and Twitter investigate claims of influence campaigns and release that information once they feel they can definitively prove their case, Roth and Gleicher said. "The fact that Twitter didn't say anything is a less appealing headline, and harder to take comfort in," said Roth. "It's the most accurate thing we could possibly do."

Meanwhile, it's not just the Russians. The panelists pointed out the challenges of home-grown disinformation. "Domestic threat actors emerged as some of the primary voices who were sowing discord...in American elections," said Roth. "Americans are doing a great job of fighting amongst themselves."

Domestic misinformation presents the double problem of not only identifying it, but then handling it in a manner that doesn't impinge on ideals such as freedom of expression. The solution Gleicher proposed was to focus on defining and banning specific bad behaviors. That has sometimes proved challenging for social media companies, which have not always been consistent in dealing with content that violates their terms of service. When behavior is the criterion for taking action against a misinformation campaign, Gleicher said, "it doesn't matter who's behind it, it doesn't matter what they're saying."