Police Officers Raise Concerns About ‘Biased’ AI Data – Comments

By ISBuzz Team, Writer, Information Security Buzz | Sep 17, 2019 07:59 am PST

Police officers have raised concerns about using “biased” artificial-intelligence tools, a report commissioned by one of the UK government’s advisory bodies reveals. The study warns such software may “amplify” prejudices, meaning some groups could become more likely to be stopped in the street and searched.

Expert Comment
Zach Jarvinen, Head of Product Marketing, AI and Analytics
September 17, 2019 3:56 pm

As we move into an era in which organisations rely more and more on machine-enabled decision making, we must confront the ethical questions raised by AI head on. This is especially important for public sector organisations, whose decisions have a long-term – and often profound – impact on the lives of citizens and the culture of a country.

The best way to prevent bias in AI systems is to embed ethical principles at the data-collection phase. This must begin with a sample of data large enough to yield trustworthy insights and minimise subjectivity. A robust system capable of collecting and processing the richest and most complex sets of information, spanning both structured and unstructured data such as textual content, is therefore necessary to generate the most accurate insights.
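As a rough illustration of that kind of collection-stage check, the sketch below counts how well each demographic group is represented before any model is trained on the data. It is a minimal example, not anything from the report: the `ethnicity` field and the `MIN_GROUP_SIZE` threshold are assumptions chosen purely for illustration.

```python
from collections import Counter

# Hypothetical minimum per-group sample size; real thresholds would
# depend on the analysis being run and the population in question.
MIN_GROUP_SIZE = 1_000

def audit_representation(records, group_key="ethnicity"):
    """Report how many records each demographic group contributes,
    flagging groups too small to yield trustworthy insights."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    for group, n in sorted(counts.items()):
        flag = "UNDER-SAMPLED" if n < MIN_GROUP_SIZE else "ok"
        print(f"{group:<20} n={n:>7} share={n / total:6.1%} {flag}")
```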

Data collection principles should be overseen by teams representing a rich blend of views, backgrounds, and characteristics (race, gender, etc.). In addition, organisations should consider having an HR or ethics specialist working in tandem with data scientists to ensure that AI recommendations align with the organisation’s cultural values.

Of course, even a preventive approach like the one outlined above can never entirely safeguard data against bias. It is therefore critical that results are examined for signs of prejudice. Any noteworthy correlations among race, sexuality, age, gender, religion and similar factors should be investigated. If a bias is detected, mitigation strategies such as adjusting sample distributions can be applied.
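A minimal sketch of what such an audit-and-mitigate step could look like, assuming the records sit in a pandas DataFrame with hypothetical `ethnicity` and `searched` columns; the four-fifths (0.8) threshold is one widely used heuristic for disparate impact, not a figure taken from the report:

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of positive outcomes (e.g. being stopped and searched) per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest group selection rate; values
    well below 1.0 (commonly below 0.8) warrant investigation."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates.min() / rates.max()

def group_weights(df: pd.DataFrame, group_col: str) -> pd.Series:
    """One mitigation: weight each record inversely to its group's
    frequency so that every group contributes equally in training."""
    freq = df[group_col].value_counts(normalize=True)
    return df[group_col].map(lambda g: 1.0 / (len(freq) * freq[g]))

# Hypothetical usage on stop-and-search records:
# if disparate_impact(df, "ethnicity", "searched") < 0.8:
#     df["weight"] = group_weights(df, "ethnicity")
```

Reweighting is only one way of adjusting sample distributions; over- or under-sampling the affected groups would serve the same purpose.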

With the stakes so high, it is vital that public bodies start out with a clear goal that aligns with ethical values and routinely monitor AI practices and outcomes.
