Yesterday, the European Commission released its own guidelines calling for “trustworthy AI.” According to the EU, AI should adhere to the basic ethical principles of respect for human autonomy, prevention of harm, fairness and accountability. The guidelines include seven requirements — listed below — and call particular attention to protecting vulnerable groups, like children and people with disabilities. They also state that citizens should have full control over their data.
The European Commission recommends using an assessment list when developing or deploying AI, but the guidelines aren’t meant to be — or interfere with — policy or regulation. Instead, they offer a loose framework. This summer, the Commission will work with stakeholders to identify areas where additional guidance might be necessary and figure out how to best implement and verify its recommendations. In early 2020, the expert group will incorporate feedback from the pilot phase. As we develop the potential to build things like autonomous weapons and fake news-generating algorithms, it’s likely more governments will take a stand on the ethical concerns AI brings to the table.
Corin Imai, Senior Security Advisor at DomainTools:
“The list of guidelines that the EU has compiled seems to be heading precisely in that direction, with a focus on privacy, transparency and technical safety. Although not legally binding, providing an ethical framework for the future development of AI technology is the first step toward creating a product that people can trust.”
The opinions expressed in this article belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.