It has been reported today that the Home Office has announced new technology, to be made available to all internet platforms, that could stop the majority of Isis videos from reaching the internet by analysing a video file’s audio and images during the upload process and rejecting extremist content.
The government hasn’t ruled out forcing technology companies to implement it by law. IT security experts commented below.
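In broad terms, a filter like this sits in the upload path itself: the platform scores each incoming file with a trained classifier and rejects it before publication if the score crosses a threshold. The sketch below illustrates that flow only; the Home Office tool’s actual interface is not public, so every function name and threshold here is an assumption.

```python
# Hypothetical sketch of an upload-time content filter. The real tool's API
# is not public; classify_extremist_content and REJECT_THRESHOLD are assumed.

def classify_extremist_content(video_bytes: bytes) -> float:
    """Stand-in for a model that scores a video's audio track and image frames.

    Returns a probability-like score in [0, 1]; a real system would decode
    the file and run trained audio and image classifiers over it.
    """
    return 0.0  # placeholder: this sketch treats every upload as clean


REJECT_THRESHOLD = 0.9  # assumed cut-off above which an upload is blocked


def handle_upload(video_bytes: bytes) -> str:
    """Gate a file during the upload process, as the article describes."""
    score = classify_extremist_content(video_bytes)
    return "rejected" if score >= REJECT_THRESHOLD else "accepted"
```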
Bill Evans, Senior Director at One Identity:
“On the surface, this seems like a wonderful idea – protecting the homeland from the insidious content distributed via cyberspace by our enemies in the war on terror – but it is indeed a slippery slope. To be sure, there is evidence that this technology has proven effective; the tool can automatically detect 94% of Daesh propaganda with 99.995% accuracy. However, there are longer-term risks to using this type of technology to “automagically” find and remove “terrorist” content.
The most pressing risk relates to how this new technology could be used in the future. Most ordinary people today would likely agree on what constitutes “terrorist content.” However, if the government were to mandate the use of this technology, we would be at the mercy of what amounts to censors deciding where that line actually lies. Ads from rival political parties? Information about views that are not congruent with a certain set of beliefs? These could all end up being removed from view by a government-mandated tool. Likely? Probably not, but possible.
It’s best for the government to work with the private sector to enhance the efficacy of this type of technology and then encourage the various internet platform vendors to use it. In a political system that enshrines free speech as a basic right of the people, that’s a much better deployment scenario than invoking the ‘thought police’.”
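The figures Evans cites are worth unpacking, because even 99.995% accuracy leaves room for error at internet scale. A back-of-the-envelope calculation, assuming that accuracy implies a 0.005% false-positive rate and an illustrative volume of one million uploads a day (the volume is our assumption, not a figure from the announcement):

```python
# Back-of-the-envelope arithmetic on the quoted figures. The upload volume is
# an assumption for illustration; only the two rates come from the article.

daily_uploads = 1_000_000            # assumed uploads per day on a large platform
false_positive_rate = 1 - 0.99995    # 0.005%, from the quoted 99.995% accuracy
detection_rate = 0.94                # 94% of Daesh propaganda caught, as quoted

wrongly_flagged = daily_uploads * false_positive_rate
print(f"Legitimate videos wrongly flagged per day: {wrongly_flagged:.0f}")   # -> 50
print(f"Propaganda videos missed per 100: {100 * (1 - detection_rate):.0f}") # -> 6
```

Fifty wrongful removals a day per million uploads may sound small, but across every platform the government wants the tool deployed on, it is exactly the kind of error rate that motivates the oversight both commentators call for.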
Lee Munson, Security Researcher at Comparitech.com:
“In theory, the creation of a tool to combat extremist material on the web has to be a good thing, but practical implementation and theory are often not the same, especially when governments are involved.
While removing ISIS material is to be applauded, the artificial intelligence system favoured by the Home Office could potentially be used to seek out any type of material – and who is to say what is offensive and what sits quite happily within our own laws protecting freedom of speech?
Without the correct levels of oversight – and we’ve seen recently how that angle has proved problematic with the Investigatory Powers Act – machine learning could be used to quash all kinds of legitimate videos and other types of content that have their place in a free and democratic society.
The only way to ensure such a system does not overstep its remit, therefore, is to infuse it with the right level of human interaction, not only to guard against false positives but also to police any potential abuse of power and censorship.
Given how government works, its already proven inability to understand technology in the terrorism sphere, and a public concerned about privacy and security, fair implementation and self-policing will be of paramount importance and, in my opinion, almost impossible to guarantee.”
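Munson’s point about human interaction is commonly implemented as a two-threshold triage: scores the model is nearly certain about are actioned automatically, while ambiguous ones are routed to a human moderator rather than removed silently. A minimal sketch of that pattern, with both thresholds assumed purely for illustration:

```python
# Hypothetical two-threshold triage illustrating human-in-the-loop oversight.
# Both thresholds are assumptions; real systems tune them against review capacity.

AUTO_BLOCK = 0.99    # assumed: near-certain matches are removed automatically
HUMAN_REVIEW = 0.60  # assumed: ambiguous scores go to a moderator, not the bin


def triage(score: float) -> str:
    """Route a classifier score to an action."""
    if score >= AUTO_BLOCK:
        return "block"
    if score >= HUMAN_REVIEW:
        return "queue_for_human_review"  # a person guards against false positives
    return "allow"


# A borderline video is escalated for review rather than silently censored.
print(triage(0.72))  # -> queue_for_human_review
```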
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.