The internet behemoth said its efforts in artificial intelligence (AI) and other automated filtering processes have become so sophisticated that 99 percent of the terror content related to the Islamic State militant group and al Qaeda that’s removed from the site is detected even before being flagged by a human. And once Facebook identifies photos, videos or text related to terror, it successfully removes 83 percent of the copies it finds elsewhere on the site within an hour.
Facebook declined to share specific data on how much terrorist content it actually deletes on a daily basis, so the overall scale of removals remains unclear.
Monika Bickert, Facebook’s head of global policy management, and Brian Fishman, head of counterterrorism policy, explained the developments in a release, noting that the company has partnered with numerous other online platforms to share anti-terror data.
That’s notable, considering that the internet in general ― and social media sites in particular ― has become a prime recruiting ground for terrorists.
Facebook and other companies use “hashes” ― unique identifiers computed from a piece of content ― to find and remove terror material across multiple websites. A propaganda video first uploaded to Facebook, for instance, could easily be deleted from YouTube and Twitter after Facebook identifies and shares the video’s hash.
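The mechanism described above can be illustrated with a minimal sketch. The function names and the in-memory `shared_hash_db` below are hypothetical stand-ins for the industry’s shared hash database, and real systems typically use perceptual hashes that survive re-encoding; plain SHA-256 is used here only to keep the example self-contained, and it matches byte-identical copies only.

```python
import hashlib

# Hypothetical shared blocklist of hashes for known terror content.
# In practice this would be an industry-wide database, not a local set.
shared_hash_db = set()

def hash_content(data: bytes) -> str:
    """Return a hex digest identifying this exact file.

    Simplification: SHA-256 only matches byte-identical copies;
    production systems use perceptual hashes to catch re-encodes.
    """
    return hashlib.sha256(data).hexdigest()

def flag_and_share(data: bytes) -> str:
    """One platform identifies terror content and shares its hash."""
    digest = hash_content(data)
    shared_hash_db.add(digest)
    return digest

def is_known_terror_content(data: bytes) -> bool:
    """Other platforms check uploads against the shared hashes."""
    return hash_content(data) in shared_hash_db

# One platform flags a propaganda video; others can now block re-uploads.
video = b"...propaganda video bytes..."
flag_and_share(video)
print(is_known_terror_content(video))  # True
```

Because only the hash is shared, platforms can coordinate takedowns without exchanging the underlying content itself.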
Despite the success, Fishman told the Wall Street Journal that Facebook still faces an uphill battle.
“One of the dangers there is that we’re dealing with a nimble set of organizations that frequently change the way that they behave,” Fishman explained. “We need to keep training our machines so that they stay current.”
In addition to rooting out ISIS propaganda, Facebook is using AI to detect and reach out to people sharing suicidal thoughts.