New ethics guidelines for artificial intelligence put citizens at its core

Posted in News

The EU High-Level Expert Group on AI recently presented its ethics guidelines for trustworthy artificial intelligence. This follows the publication of the guidelines’ first draft in December 2018, on which more than 500 comments were received through an open consultation.

According to the guidelines, trustworthy AI should be:

  1. lawful – respecting all applicable laws and regulations
  2. ethical – respecting ethical principles and values
  3. robust – both from a technical perspective and with regard to its social environment

The guidelines put forward a set of seven key requirements that AI systems should meet in order to be deemed trustworthy. A specific assessment list aims to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and to foster sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

Next Steps

A piloting process will be set up to gather practical feedback on how the assessment list, which operationalises the key requirements, can be improved. All interested stakeholders can already register their interest to participate in the piloting process, which will be kicked off in summer 2019.

Moreover, a forum discussion has been set up to foster the exchange of best practices on the implementation of trustworthy AI.

Following the piloting phase and building on the feedback received, the High-Level Expert Group on AI will review the assessment lists for the key requirements in early 2020. Based on this review, the Commission will evaluate the outcome and propose any next steps.

All relevant information on the document, as well as the next steps towards the review of the assessment list, can be found on the new European AI Alliance page dedicated to the guidelines.

For more information:

Communication: “Building trust in human-centric artificial intelligence”

AI ethics guidelines

Factsheet artificial intelligence

High-Level Expert Group on AI

European AI Alliance

Artificial Intelligence: A European Perspective

Artificial Intelligence Watch
