Facebook has spoken for the first time about the artificial intelligence programmes it uses to deter and remove terrorist propaganda online
by Kate McCann
Facebook has spoken for the first time about the artificial intelligence programmes it uses to deter and remove terrorist propaganda online, after the platform was criticised for not doing enough to tackle extremism.
The social media giant also revealed it is employing 3,000 extra people this year to trawl through posts and remove those that break the law or the site’s community guidelines.
It also plans to boost its “counter-speech” efforts, encouraging influential voices to condemn and call out terrorism online to prevent people from being radicalised.
In a landmark post titled “Hard Questions”, Monika Bickert, Director of Global Policy Management, and Brian Fishman, Counterterrorism Policy Manager, explained that Facebook has been developing artificial intelligence to detect terror videos and messages before they are posted live and prevent them from appearing on the site.
The pair state: “In the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online. We want to answer those questions head on.”
Explaining how Facebook works to stop extremist content being posted, the post continues: “We are currently focusing our most cutting edge techniques to combat terrorist content about ISIS, Al Qaeda and their affiliates, and we expect to expand to other terrorist organizations in due course.
“When someone tries to upload a terrorist photo or video, our systems look for whether the image matches a known terrorism photo or video. This means that if we previously removed a propaganda video from ISIS, we can work to prevent other accounts from uploading the same video to our site.
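The matching described in the quote above is a form of content hashing. Facebook has not published its implementation, and production systems use perceptual hashes that survive re-encoding and cropping, but the basic idea can be sketched with exact file hashes (all names and data here are illustrative):

```python
import hashlib

# Hypothetical blocklist: digests of files previously removed as propaganda.
KNOWN_PROPAGANDA_HASHES = {
    hashlib.sha256(b"previously-removed-video-bytes").hexdigest(),
}

def is_known_propaganda(upload_bytes: bytes) -> bool:
    """Return True if the upload exactly matches a previously removed file."""
    digest = hashlib.sha256(upload_bytes).hexdigest()
    return digest in KNOWN_PROPAGANDA_HASHES

print(is_known_propaganda(b"previously-removed-video-bytes"))  # True
print(is_known_propaganda(b"new-unrelated-video-bytes"))       # False
```

An exact hash like this only catches byte-identical re-uploads; the advantage of the perceptual hashing real platforms use is that visually similar copies of a removed video also match.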
“We have also recently started to experiment with using AI to understand text that might be advocating for terrorism.”

Facebook also detailed how it is working with other platforms, clamping down on accounts being re-activated by people who have previously been banned from the site and identifying and removing clusters of terror supporters online.
The social media platform, which is used by billions of people around the world, also explained it employs thousands of people to check posts and has a dedicated counter-terrorism team.
“Our Community Operations teams around the world — which we are growing by 3,000 people over the next year — work 24 hours a day and in dozens of languages to review these reports and determine the context. This can be incredibly difficult work, and we support these reviewers with onsite counseling and resiliency training,” it said.
Facebook came under pressure from ministers after a number of recent terror attacks for failing to do more to tackle and remove extremist posts.
Amber Rudd, the Home Secretary, said earlier this year: “Each attack confirms again the role that the internet is playing in serving as a conduit, inciting and inspiring violence, and spreading extremist ideology of all kinds.
“But we can’t tackle it by ourselves … We need [social media companies] to take a more proactive and leading role in tackling the terrorist abuse of their platforms.”