A Blog by Jonathan Low

 

Apr 17, 2020

Facebook Built System To Simulate Users' Expected Behavior

Yes, humans are that predictable. The most serious concern is who Facebook will sell the data to, and for what purposes. JL

Karen Hao reports in MIT Technology Review:

Facebook built a scaled-down version of its platform to simulate user behavior. It helps engineers identify and fix undesired consequences of new updates before they’re deployed. Bots interact directly with the back-end code, which allows them to coexist with real users and more accurately simulate different scenarios. The company is using it to test features that would make it harder for bad actors to violate the platform’s guidelines, but it also sees other potential applications, such as testing how platform updates might affect engagement and other metrics.
The context: Like any software company, the tech giant needs to test its product any time it pushes updates. But the debugging methods that normal-size companies use aren’t really enough when you’ve got 2.5 billion users. Such methods usually focus on checking how a single user might experience the platform and whether the software responds to that user’s actions as expected. In contrast, as many as 25% of Facebook’s major issues emerge only when users begin interacting with one another. It can be difficult to see how the introduction of a feature or an update to a privacy setting might play out across billions of user interactions.
SimCity: In response, Facebook built a scaled-down version of its platform to simulate user behavior. Called WW, it helps engineers identify and fix the undesired consequences of new updates before they’re deployed. It also automatically recommends changes that can be made to the platform to improve the community experience.
Bot doppelgangers: Facebook simulates hundreds to thousands of its users at a time with a mix of hard-coded and machine-learning-based bots. The latter are trained with reinforcement learning, learning through trial and error to optimize their behavior toward some objective. The bots are then made to play out different scenarios, such as a scammer trying to exploit other users or a hacker trying to access someone’s private photos. In a scamming scenario, for example, the scammer bots are given the objective of finding the best targets to scam, while the target bots are hard-coded with the most common vulnerable behaviors exhibited by real users. Each scenario may have only a few bots acting it out, but the system is designed to run thousands of different scenarios in parallel.
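Facebook hasn’t published WW’s internals, but the mechanics described above are easy to sketch. The following toy Python snippet, with every name and number invented for illustration, pairs a hard-coded TargetBot behavior with a ScammerBot that learns by simple epsilon-greedy trial and error which targets pay off; one such one-step scenario stands in for the many richer ones WW runs in parallel.

    import random
    from dataclasses import dataclass

    @dataclass
    class TargetBot:
        # Hard-coded vulnerable behavior: whether this user engages with strangers.
        accepts_strangers: bool

        def respond_to_scam(self) -> bool:
            # A vulnerable target falls for the approach 60% of the time (invented rate).
            return self.accepts_strangers and random.random() < 0.6

    class ScammerBot:
        # Learns by epsilon-greedy trial and error which targets to approach.
        def __init__(self, n_targets: int, epsilon: float = 0.1):
            self.values = [0.0] * n_targets   # running payoff estimate per target
            self.counts = [0] * n_targets
            self.epsilon = epsilon

        def pick_target(self) -> int:
            if random.random() < self.epsilon:               # explore
                return random.randrange(len(self.values))
            return max(range(len(self.values)),              # exploit best estimate
                       key=self.values.__getitem__)

        def update(self, target: int, reward: float) -> None:
            self.counts[target] += 1
            self.values[target] += (reward - self.values[target]) / self.counts[target]

    def run_scenario(n_targets: int = 10, steps: int = 500) -> float:
        targets = [TargetBot(accepts_strangers=random.random() < 0.3)
                   for _ in range(n_targets)]
        scammer = ScammerBot(n_targets)
        successes = 0.0
        for _ in range(steps):
            i = scammer.pick_target()
            reward = 1.0 if targets[i].respond_to_scam() else 0.0
            scammer.update(i, reward)
            successes += reward
        return successes / steps

    if __name__ == "__main__":
        print(f"scam success rate in one scenario: {run_scenario():.2f}")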
Automatic design: While the scenarios play out, the system automatically adjusts different parameters in the simulation, such as the bots’ privacy settings or the constraints on their actions. With every adjustment, it evaluates which combination of parameters achieves the most desired community behavior, and then recommends the best version to Facebook’s platform developers.
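At its simplest, that adjust-evaluate-recommend loop is a search over simulation parameters. Here is a minimal sketch, assuming a hypothetical grid of settings and an invented simulated_harm_rate() standing in for running a batch of bot scenarios under each configuration:

    import itertools
    import random

    # Hypothetical knobs a simulation might sweep; the values are invented.
    PARAM_GRID = {
        "default_privacy": ["public", "friends_only"],
        "messages_per_hour_cap": [20, 50, 100],
        "new_account_friend_limit": [10, 50],
    }

    def simulated_harm_rate(params: dict) -> float:
        # Toy stand-in for running thousands of scenarios under these
        # settings and measuring how often scams succeed.
        rate = 0.30
        if params["default_privacy"] == "friends_only":
            rate -= 0.08
        rate += params["messages_per_hour_cap"] / 1000     # looser caps help scammers
        rate += params["new_account_friend_limit"] / 1000
        rate += random.gauss(0, 0.01)                      # simulation noise
        return max(rate, 0.0)

    def recommend_settings(grid: dict):
        # Evaluate every combination and keep the one with the lowest harm rate.
        best_params, best_rate = None, float("inf")
        for combo in itertools.product(*grid.values()):
            params = dict(zip(grid.keys(), combo))
            rate = simulated_harm_rate(params)
            if rate < best_rate:
                best_params, best_rate = params, rate
        return best_params, best_rate

    if __name__ == "__main__":
        params, rate = recommend_settings(PARAM_GRID)
        print(f"recommended settings: {params} (harm rate {rate:.3f})")

WW presumably explores the parameter space far more cleverly than this brute-force sweep, but the recommend step has the same shape: score each configuration on a community-health metric and surface the winner to developers.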
Hidden from view: To create as realistic a simulation as possible, WW is actually built directly on the live platform rather than on a separate testing version, another key difference from most testing schemes. The bots, however, stay behind the scenes. While a typical user interacts with Facebook through a front-end interface, such as a profile and other website features, the bots interact directly with the back-end code. This allows them to coexist with real users and more accurately simulate different scenarios on the platform, without real users mistakenly interacting with them.
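The article doesn’t describe Facebook’s actual isolation mechanism, but one plausible reading is a visibility flag at the back end: bots are tagged as shadow accounts, can see live state, and are filtered out of anything served to real users. A hypothetical sketch:

    # Real users fetch content through the front end; shadow bots call the
    # back end directly and are tagged so their actions never surface to people.
    class Backend:
        def __init__(self):
            self.posts = []

        def create_post(self, author: str, text: str, is_shadow: bool) -> None:
            self.posts.append({"author": author, "text": text, "shadow": is_shadow})

        def feed(self, viewer_is_shadow: bool) -> list:
            # Bots see everything (including each other), so scenarios play out
            # against live state; real users only see non-shadow posts.
            return [p for p in self.posts
                    if viewer_is_shadow or not p["shadow"]]

    backend = Backend()
    backend.create_post("alice", "vacation photos!", is_shadow=False)
    backend.create_post("scam_bot_7", "click this link", is_shadow=True)

    print([p["author"] for p in backend.feed(viewer_is_shadow=False)])  # ['alice']
    print([p["author"] for p in backend.feed(viewer_is_shadow=True)])   # both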
Future perfect: Right now the company is using it to test and improve features that would make it much harder for bad actors to violate the platform’s community guidelines. But it also sees other potential applications for the system, such as testing how platform updates might affect engagement and other metrics.
