The social networking site Facebook is using bots in a testing environment to study how to fight harassment on the Internet. Facebook researchers are developing new technology that they hope will support the company's ongoing artificial intelligence efforts to prevent harassment on its platforms. In a web-enabled simulation (WES), an army of bots is programmed to mimic the bad behavior of real users, let loose in a testing environment, and Facebook engineers then work out how best to stop them.
WES has three main aspects, researcher Mark Harman said in a statement. First, it uses machine learning to train bots to simulate real human behavior on Facebook. Second, WES can automate bot interactions at large scale, from thousands up to millions of bots. Finally, WES deploys the bots on Facebook's actual production code, which allows them to interact with each other and with real Facebook content, but not with real users.
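Facebook has not published WES's implementation, but the two core ideas in that description can be illustrated with a short, hypothetical sketch: bots driven by a policy that, in a real system, would be learned from human behavior, and a sandbox rule that keeps simulated users isolated from real ones. All names here (SimulatedUser, SandboxedPlatform, and so on) are invented for illustration and are not Facebook's internals.

```python
# Hypothetical sketch of the WES ideas: ML-driven bot behavior, large-scale
# automation, and strict isolation from real users. Names are illustrative.
from dataclasses import dataclass, field
import random

@dataclass
class SimulatedUser:
    """A bot whose action policy would, in a real WES, come from a trained
    ML model; here it is faked with a fixed action distribution."""
    user_id: str
    is_bot: bool = True
    policy: dict = field(default_factory=lambda: {
        "search": 0.5, "visit_page": 0.3, "send_message": 0.2
    })

    def choose_action(self) -> str:
        actions, weights = zip(*self.policy.items())
        return random.choices(actions, weights=weights)[0]

class SandboxedPlatform:
    """Wraps the 'production' API so bots can only ever reach other bots."""
    def __init__(self, users):
        self.users = {u.user_id: u for u in users}

    def interact(self, actor: SimulatedUser, target_id: str, action: str):
        target = self.users.get(target_id)
        # Isolation rule: a simulated user may never touch a real account.
        if target is None or not target.is_bot:
            raise PermissionError("bots may only interact with other bots")
        return f"{actor.user_id} -> {action} -> {target_id}"

# Scaling the simulation amounts to spawning more bots.
bots = [SimulatedUser(f"bot_{i}") for i in range(1000)]
world = SandboxedPlatform(bots)
print(world.interact(bots[0], "bot_1", bots[0].choose_action()))
```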
In the test environment, known as WW, bots attempt misconduct such as trying to buy and sell prohibited items like guns and drugs. A bot can use Facebook much like a normal person would, conducting searches and visiting pages. Engineers can then check whether the bots manage to bypass protections and violate Community Standards. The plan is for engineers to find patterns in the results of these tests and use that data to test ways of making it harder for users to violate Community Standards.
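WW's internals are likewise not public; the following hypothetical sketch only illustrates the testing pattern described above: bots replay violation attempts against a stand-in integrity check, and the harness collects the attempts that slip through so engineers can look for patterns. The keyword list and function names are assumptions, not Facebook's actual rules.

```python
# Hypothetical sketch of a WW-style test: "bad actor" bots try to list
# prohibited items, and the harness records which attempts bypass the
# platform's protections. All rules and names are invented for illustration.
PROHIBITED_KEYWORDS = {"gun", "firearm", "drugs"}

def listing_allowed(title: str) -> bool:
    """Stand-in for the real integrity checks in production code."""
    return not any(word in title.lower() for word in PROHIBITED_KEYWORDS)

def run_misconduct_scenario(attempts):
    """Replay labeled bot listing attempts and collect successful bypasses."""
    bypasses = []
    for title, is_violation in attempts:
        if is_violation and listing_allowed(title):
            # The bot slipped past the filter; engineers would study
            # patterns in these cases to harden the protections.
            bypasses.append(title)
    return bypasses

attempts = [
    ("vintage firearm for sale", True),   # should be blocked
    ("f1rearm, cheap", True),             # obfuscated spelling may evade naive rules
    ("used bicycle", False),              # legitimate listing
]
print(run_misconduct_scenario(attempts))  # -> ['f1rearm, cheap']
```

The obfuscated-spelling case shows the kind of pattern such tests are meant to surface: a naive keyword filter passes it, flagging a gap for engineers to close.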
Facebook has long said it is developing methods to prevent harassment, criminal activity, misinformation, and other types of abuse on the platform. At the Facebook F8 2018 conference, Chief Technology Officer Mike Schroepfer said the company was investing heavily in artificial intelligence research, seeking to make it work at large scale without human supervision.