Expert Comment on Amending Human Review of AI Decisions

Following the news that the UK government is proposing amendments to GDPR that would remove the right to human review of AI decisions, cybersecurity experts commented below.

Expert Comments

September 11, 2021
Andy Patel
Researcher
F-Secure

Decisions made by automated systems, and especially by machine learning-based algorithms, can be prone to error and bias. Concrete examples of such errors have already been demonstrated in systems used in insurance, hiring, education, and law enforcement. It is impossible to include every possible real-world scenario and corner case in the data used to train and validate machine learning algorithms. Article 22 of the GDPR safeguards individuals against this problem; as such, removing the provisions of Article 22 from UK law is not only dangerous but also a step in the wrong direction.

