In a blog post yesterday, social media giant Facebook said it will use artificial intelligence (AI) to find and remove terrorist content before other users see it, after the platform was criticised for not doing enough to tackle extremism. Homer Strong, Director of Data Science at Cylance, commented below.
Homer Strong, Director of Data Science at Cylance:
“Both the confidence score and the decision of a sufficiently sophisticated AI can be bypassed using adversarial learning techniques. A terrorist who is blocked by Facebook is more likely to switch to some other platform than to bypass the AI, but Facebook can never completely remove terrorist content.”
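To illustrate the kind of bypass Strong describes, below is a minimal sketch of an adversarial perturbation against a toy content classifier. It is not Facebook's system or any real moderation model: the logistic classifier, its weights, the five-dimensional feature vector, and the 0.5 "flag" threshold are all hypothetical, chosen only to show how a small, targeted change to the input can flip both the model's confidence and its decision.

```python
import numpy as np

# Hypothetical trained weights and bias for a toy logistic "flag this content" model.
w = np.array([2.0, -1.0, 0.5, 1.5, -0.5])
b = -0.3

def confidence(x):
    """Probability the toy classifier assigns to the 'flag this content' class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A made-up feature vector for a post the model would confidently flag.
x = np.array([1.0, 0.2, 1.0, 1.1, 0.1])
print(f"original:    confidence={confidence(x):.3f}, flagged={confidence(x) > 0.5}")

# Fast-gradient-style step: nudge each feature against the gradient of the score
# (for a linear model, the gradient of w @ x + b with respect to x is just w),
# so the decision flips while the change to each feature stays bounded by eps.
eps = 0.9
x_adv = x - eps * np.sign(w)
print(f"adversarial: confidence={confidence(x_adv):.3f}, flagged={confidence(x_adv) > 0.5}")
```

Run as written, the original input is flagged with roughly 97% confidence, while the perturbed input drops to roughly 21% and is no longer flagged, which is the sense in which both the confidence and the decision can be manipulated.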
The opinions expressed in this article belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.