Yesterday, the European Commission released its own guidelines calling for “trustworthy AI.” According to the EU, AI should adhere to the basic ethical principles of respect for human autonomy, prevention of harm, fairness and accountability. The guidelines include seven requirements — listed below — and call particular attention to protecting vulnerable groups, like children and people with disabilities. They also state that citizens should have full control over their data.
The European Commission recommends using an assessment list when developing or deploying AI, but the guidelines aren’t meant to be — or interfere with — policy or regulation. Instead, they offer a loose framework. This summer, the Commission will work with stakeholders to identify areas where additional guidance might be necessary and figure out how to best implement and verify its recommendations. In early 2020, the expert group will incorporate feedback from the pilot phase. As we develop the potential to build things like autonomous weapons and fake news-generating algorithms, it’s likely more governments will take a stand on the ethical concerns AI brings to the table.
Corin Imai, Senior Security Advisor at DomainTools:
“Artificial Intelligence is a valuable tool for enterprises looking to make the IT security function’s workload more manageable and to reduce the time that employees spend on mundane, time-consuming and non-cost-effective tasks. A significant number of security professionals I’ve spoken to say that AI is a trusted security tool in their organisation and that it improves IT staff’s ability to do their jobs. This suggests that AI and automation are already an integral part of many enterprises, and it is only natural that regulatory bodies would be looking at ways to keep their future development under control, as they would with any technology that has the potential to impact not only the digital realm, but the reality of private individuals.
The list of requirements that the EU has compiled seems to be heading precisely in that direction, with a focus on privacy, transparency and technical safety. Although not legally binding, providing an ethical framework for the future development of AI technology is the first step toward creating a product that people can trust.”
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.