Facebook is using bots to simulate how users behave when updates occur on the social media platform.
The MIT Technology Review on Wednesday, April 15, cited a paper by Facebook engineers titled "WES: Agent-based User Interaction Simulation on Real Infrastructure." The paper describes how Facebook created a scaled-down version of the platform, called WW, to simulate user behavior with the help of bots.
In WW, hard-coded and machine-learning-based bots act as users with different goals or agendas, which play out differently depending on the scenario the engineers set up.
For instance, a scammer might be trying to exploit users. In this scenario, the bots acting as scammers try to find the best targets to scam, while the target bots are hard-coded with the vulnerable behaviors most commonly exhibited by real users. Thousands of these small scenarios run simultaneously in WW.
As the scenarios play out, the WW system adjusts various settings, judges which combination of parameters produces the most desirable community behavior, and then automatically recommends the corresponding changes to Facebook's platform developers to improve the user experience.
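The loop described above — run many scammer/target scenarios under different platform settings, then recommend the setting that yields the best community outcome — can be sketched as a toy simulation. This is only an illustrative model; the names (`run_scenario`, the `friction` parameter) and the scoring logic are assumptions for the sketch, not details from Facebook's paper.

```python
import random

def run_scenario(friction, rng):
    """Simulate one scammer/target encounter.

    `friction` is a hypothetical tunable platform setting (0..1);
    higher friction makes a successful scam less likely in this toy model.
    """
    scam_skill = rng.random()            # how convincing the scammer bot is
    target_vulnerability = rng.random()  # hard-coded "vulnerable" behavior
    # The scam succeeds when skill and vulnerability overcome platform friction.
    return scam_skill * target_vulnerability > friction

def evaluate(friction, n_scenarios=10_000, seed=0):
    """Fraction of scenarios in which the scammer bot succeeds."""
    rng = random.Random(seed)
    hits = sum(run_scenario(friction, rng) for _ in range(n_scenarios))
    return hits / n_scenarios

# Sweep candidate settings and "recommend" the one with the fewest
# successful scams — a stand-in for WW's automatic recommendations.
candidates = [0.1, 0.3, 0.5, 0.7, 0.9]
best = min(candidates, key=evaluate)
print(best)
```

In the real system the scenarios run on Facebook's actual infrastructure rather than a random-number model, and the search is over far richer parameters, but the shape of the optimization — simulate, score, recommend — is the same.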
Facebook says that while the current application of WW is aimed at preventing bad actors from using Facebook to violate community guidelines, the platform could also be adapted to test for other things, such as increasing engagement or improving other specific metrics.
Comment: RT writer Helen Buyinski explains why this is such a concerning technological development:
While the writers have cloaked their and their bots' activities in several layers of academic language, the report reveals their creations are interacting through the real-life Facebook platform, not a simulation. The bots are set up to model different "negative" behaviors - scamming, phishing, posting 'wrongthink' - that Facebook wants to curtail, and the simulation allows Facebook to tweak its control mechanisms for suppressing these behaviors.
Even though the bots are technically operating on real-life Facebook, with only the thinnest veil of programming separating them from real-world users, the researchers seem convinced enough of their ability to keep fantasy and reality separate that they feel comfortable hinting in the paper at new and different ways of invading Facebook users' privacy.