© AP Photo / Jeff Chiu
Facebook's "automatic alternative text" feature, silently implemented by the IT giant across its social media platforms, may be used to track users and could potentially be abused by hackers and intelligence agencies, say cyber security experts, who suggest that EU authorities may roast the tech giant for violating the bloc's data protection rules.

A massive outage and loading problems unexpectedly revealed that Facebook's AI adds a text description to every photo posted on its social media platforms, including Instagram.

According to Facebook, this feature, called "automatic alternative (alt) text", uses "object recognition technology to create a description of a photo for the blind and vision-loss community".
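During the outage, these machine-generated descriptions were briefly visible in place of the photos themselves, because alt text lives in the page's markup as an attribute on the image tag. The sketch below, using only Python's standard library, shows how such a description can be pulled out of HTML; the sample tag and its wording are illustrative assumptions, not an exact reproduction of Facebook's markup:

```python
from html.parser import HTMLParser

# Hypothetical sample of the kind of markup users reported seeing:
# the "alt" attribute holds the AI-generated description.
SAMPLE_HTML = '<img src="photo.jpg" alt="Image may contain: 2 people, smiling, outdoor">'

class AltTextExtractor(HTMLParser):
    """Collects the alt attribute of every <img> tag it encounters."""

    def __init__(self):
        super().__init__()
        self.alt_texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "alt" and value:
                    self.alt_texts.append(value)

parser = AltTextExtractor()
parser.feed(SAMPLE_HTML)
print(parser.alt_texts[0])  # prints the machine-generated description
```

The point of the sketch is simply that the description is ordinary text embedded in the page, readable by any client or scraper, which is why it surfaced the moment the images failed to load.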

Cyber security experts, however, believe that there is more to the tech giant's software than meets the eye.

Facebook Uses Facial Recognition AI for Data Mining, Advertising & Tracking of Users

"There are many purposes: text versions of the images are used to improve the user experience of the platform. For example, they can be used to make the site accessible to people who are unable to see the images (i.e. blind or partially sighted people)", explains Pierluigi Paganini, CTO at Cybaze and member of the ENISA ETL group. "Another scenario sees the use of AI-based systems to identify people in a picture and use the collected information to build a relationship graph for a person of interest. These graphs could be used for advertising purposes, but we cannot ignore that potential abuses could open the door to surveillance".

According to Paganini, the latter scenario is particularly disturbing because Facebook's alt text feature "could be abused by intelligence agencies and law enforcement for dragnet surveillance".

Charles R. Smith, CEO of Software Inc and encryption security expert, elaborated that "the system appears to be an automated photo tag programme, providing text content to describe the picture for other programmes which do text-based data mining".

"Facebook already has the facial recognition system in place and is currently using it for data mining, advertising and tracking of users", he said. "Expect law enforcement agencies (and intelligence agencies) to obtain access to Facebook's database - either via the courts or illegally by hacking".

Smith believes that the technology could be used for various purposes: one of them is censorship, but a more likely use is "tracking". According to the programmer, although Facebook currently appears uninterested in political applications and is simply making money on advertising, it may one day weaponise this feature "against dissidents or political opponents".


Comment: Too late. They appear to be quite interested in political applications, and there's no reason why this feature wouldn't be used in the same way (e.g. censorship of different political views).


Facebook logo
© AP Photo / Tony Avelar. Attendees stick notes on a Facebook logo at F8, Facebook's developer conference, Tuesday, April 30, 2019, in San Jose, Calif.
Facebook's Alt Text Feature Might Violate EU Data Protection Laws

For his part, Kevin Curran, professor of cyber security at the Department of Computing, Engineering & Built Environment at Ulster University, suggested that the Silicon Valley giant's "automatic alternative text" feature violates EU laws.

"It seems that Facebook may be using technology to classify image contents", he said. "Facebook, however, will find that this practice comes under close scrutiny, as this is forbidden in Europe".

He emphasised that "the General Data Protection Regulation (GDPR) laws in Europe prevent such a practice - or at the very least, the practice of facial recognition being used on images would have to be explicitly outlined in their terms and consented to".

"It is hard to see how they can continue to practice such an intrusive technique", Curran remarked.

In 2012, the tech giant promised European regulators "that it would forgo using facial recognition software and delete the data used to identify Facebook users by their pictures", according to The New York Times.

EU regulators voiced concerns in 2011 over the tech firm's "tag suggestion", a feature that automatically matched pictures with names, and biometric facial recognition technology used by the company since 2010. They insisted that Facebook had not obtained people's consent for their images to be scanned and identified. As a result, the "Tag Suggest" feature was banned in Europe.

"We will continue to work together to ensure we remain compliant with European data protection law", Facebook asserted to European authorities.

'Social Media Users Should be Aware of Privacy More Than Ever'

Recent developments, however, indicate that the Silicon Valley giant does not appear to be walking the talk.

"The consequence as ever for users is that there is no guarantee whatsoever that our image data and descriptions will not wind up in the hands of others outside of the circle we decided to share them in", Kevin Curran highlighted. "In fact, a healthy attitude is to presume that any data or images you release to a third party will eventually leak. The only defence you have is strong encryption, and also not to send data beyond the walls of your residence".

According to the professor, social media users "should be aware of privacy more than ever".

"Facebook stores more information on users than they know", Curran warned. "Facebook, of course, is in the ad business and currently takes a significant proportion of global ad revenue. Facebook is not forthcoming, however, in telling us what type of information it collects, or how much".

The tech giant has repeatedly faced criticism over its controversial policies. On 2 July 2019, Germany's Federal Office of Justice announced that it had fined Facebook 2 million euros over the tech giant's failure to meet transparency rules about how it tackles offensive content.

Earlier, in May 2019, the tech giant was accused of apparent political bias after removing numerous accounts of right-wing bloggers and media pundits. In 2018, Facebook found itself in a heap of trouble after it emerged that it had exposed the data of up to 87 million users, without their direct consent, to Cambridge Analytica, a British political consulting firm involved in data mining.