AI Lie Detector Developed For Airport Security

By ISBuzz Team, Writer, Information Security Buzz | Aug 06, 2019 12:47 am PST

It has been reported that a group of researchers is quietly commercialising an artificial-intelligence-driven lie detector, which they hope will be the future of airport security. Discern Science International is the start-up behind a deception-detection tool named the Avatar, which features a virtual border guard that asks travellers questions. The machine, which has been tested by border services and in airports, is designed to make the screening process at border security more efficient, and to weed out people with dangerous or illegal intentions more accurately than human guards can. But its development also raises questions about whether a person’s propensity to lie can be accurately measured by an algorithm.

Expert Comments
Hugo van Den Toorn, Manager, Offensive Security
August 6, 2019 9:01 am

Could a technology like this be at risk of getting hacked?
This is absolutely a risk, as with any technology, but here the potential fallout is bigger. Several attacks come to mind: manipulating or rewriting the code to alter the detection; blocking the traffic, so that even though a traveller is clearly lying, the Avatar cannot notify anyone; or creeping into the brain of the Avatar to capture all the information it is processing.
What kind of security risks should be considered?
We need to be aware of both ‘offensive’ cyber-attacks against the Avatar and the ‘digital’ development side of things. As with any new product, we need to make sure its functionality works as intended (is the Avatar a good enough border security guard?) and, on top of that, test its security from an attacker’s perspective (can anyone break or manipulate the Avatar?). Beyond that, the Avatar would be a goldmine for cyber criminals, holding recordings of individuals being questioned along with analyses of their expressions. The functional requirements might at this point be an even bigger risk to adopting this technology. It is not just false positives (stopping too many people on suspicion of lying); false negatives might be an even bigger threat in this case (letting too many lying people through), as the sketch below illustrates. What if cultural differences, languages, accents, dialects and unique individual characteristics can throw the agent off? Can we ‘hack’ the Avatar by sending someone with blonde hair and blue eyes, because the Avatar only knows dark hair and brown eyes? Or can we simply manipulate the Avatar by feeding it lies, as hackers did to Microsoft’s chatbot Tay?
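To see why false negatives dominate the risk calculus at border-screening volumes, it helps to work through the confusion-matrix arithmetic. The sketch below is illustrative only: the passenger volume, base rate of deception and error rates are assumptions, not figures from the AVATAR trials.

```python
# Illustrative confusion-matrix arithmetic for a border-screening classifier.
# All numbers are assumptions for the sake of the example, not AVATAR data.

travellers_per_day = 50_000   # assumed daily passenger volume at a large airport
deceptive_rate = 0.001        # assumed base rate: 1 in 1,000 travellers is deceptive

false_positive_rate = 0.05    # honest travellers wrongly flagged
false_negative_rate = 0.20    # deceptive travellers wrongly cleared

deceptive = travellers_per_day * deceptive_rate
honest = travellers_per_day - deceptive

false_positives = honest * false_positive_rate       # needless secondary screening
false_negatives = deceptive * false_negative_rate    # threats waved through

print(f"Honest travellers wrongly flagged per day: {false_positives:.0f}")
print(f"Deceptive travellers cleared per day:      {false_negatives:.1f}")
```

At such a low base rate, even a modest false-positive rate floods secondary screening with thousands of honest travellers, while each false negative is a potential threat waved through, which is why the false-negative side carries the greater weight here.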
Do you expect the technology to take off?
Not yet, I hope. Although it is a promising technology, I would expect a very slow adoption rate and an extensive testing period. There are too many open questions at this point to warrant adoption for, at most, a 31% increase in accuracy. Until it has become proven technology and has also been security-checked by multiple recognised cybersecurity vendors, I think the Avatar and the human border guards will have to work in teams to offer the best of both worlds.

Igor Baikalov, Chief Scientist
August 6, 2019 8:52 am

It is a very interesting and promising application of AI, and I’m surprised at the scepticism it encounters. It mimics the same techniques used by interrogators all over the world: analysis of micro-expressions, voice and verbal responses. With proper training, I’d expect AVATAR to perform a lot better than an average human interrogator, since it’ll be more observant, as machines are, less tired or distracted, and less biased. It’s unlikely that an AI trained on micro-expressions would give preferential treatment to a pretty young woman over an unkempt, bearded, middle-aged man. The tests seem to support this expectation, with humans performing just slightly better than random choice, at 54%, and the AI hitting accuracy over 80%. Additional biometric sensors, like the ones used on exercise machines to measure your pulse, could give it a serious edge over even the most experienced human interrogator.
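To put those figures in concrete terms, the following minimal sketch compares the expected number of misclassified interviews at the two accuracy levels cited; the 1,000-interview sample size is an arbitrary assumption.

```python
# Expected misclassifications at the accuracy levels cited above.
# The 1,000-interview sample size is an arbitrary assumption.

interviews = 1_000
human_accuracy = 0.54    # humans: barely better than chance, per the cited tests
avatar_accuracy = 0.80   # AVATAR: reported to exceed 80%

human_errors = interviews * (1 - human_accuracy)
avatar_errors = interviews * (1 - avatar_accuracy)

print(f"Human errors per {interviews} interviews:  {human_errors:.0f}")   # 460
print(f"AVATAR errors per {interviews} interviews: {avatar_errors:.0f}")  # 200
```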

I wouldn’t worry much about false positives: this system is not replacing human guards, it augments their ability to detect deception and significantly reduces their workload, allowing them to focus on truly suspicious subjects. As long as the results of human vetting are fed back into the system, it will continually improve its accuracy, and aggregating models trained at different locations by a variety of guards will further improve its predictive power. Care should be taken over the quality of the training feedback, so as not to inadvertently poison the model by introducing, for example, racial or gender bias.
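One possible shape for that feedback loop is sketched below: each location’s model is updated incrementally with labels confirmed by human vetting, and a simple ensemble averages the per-location models’ deception scores. This is a hypothetical design using scikit-learn’s SGDClassifier and its partial_fit method, not a description of AVATAR’s actual architecture.

```python
# Hypothetical sketch of the feedback loop described above: each location's
# model learns online from human-vetted labels, and predictions are averaged
# across locations. Not based on AVATAR's actual design.
import numpy as np
from sklearn.linear_model import SGDClassifier

CLASSES = np.array([0, 1])  # 0 = truthful, 1 = deceptive

def make_location_model() -> SGDClassifier:
    """One incrementally trainable model per border location."""
    return SGDClassifier(loss="log_loss", random_state=0)

def feed_back(model: SGDClassifier, features: np.ndarray, vetted_labels: np.ndarray):
    """Update a location's model with labels confirmed by human guards.
    Quality control matters here: biased or adversarial labels would
    poison the model, exactly the risk noted above."""
    model.partial_fit(features, vetted_labels, classes=CLASSES)

def aggregate_score(models: list[SGDClassifier], traveller: np.ndarray) -> float:
    """Average the deception probability across all location models."""
    probs = [m.predict_proba(traveller.reshape(1, -1))[0, 1] for m in models]
    return float(np.mean(probs))
```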

Can the AVATAR be beaten? Absolutely, just like the best interrogators and the most accurate polygraph machines. No single system or control should be relied on completely to prevent threats; defence in depth is what will keep us safe. Will this technology take off? I surely hope so: it has the potential to improve security and streamline the screening process.
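That defence-in-depth point can be made quantitative: if independent screening layers each miss a threat with some probability, the chance that a threat evades all of them is the product of the individual miss rates. The rates below are assumptions for illustration, apart from the miss rate implied by the 54% human accuracy cited above.

```python
# Illustrative defence-in-depth arithmetic: the probability that a threat
# evades every independent screening layer is the product of the layers'
# individual miss rates. The rates below are assumptions for illustration.
from math import prod

layer_miss_rates = {
    "AVATAR interview": 0.20,
    "human guard":      0.46,   # implied by ~54% human accuracy, cited above
    "document checks":  0.30,
}

combined_miss = prod(layer_miss_rates.values())
print(f"Chance of evading all layers: {combined_miss:.1%}")  # ~2.8%
```

The independence assumption is optimistic: layers that fail in correlated ways (for example, a shared cultural blind spot) push the combined miss rate back up, which is why diversity between the layers matters as much as each layer’s accuracy.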

