Commission unveils 7-step strategy to build trust in AI

On 8 April 2019, the European Commission (EC) published a communication entitled “Building Trust in Human-Centric Artificial Intelligence”, presenting the next steps in its work on ethical guidelines for artificial intelligence (AI).

The purpose of the communication is to launch a comprehensive piloting phase involving stakeholders to test the practical implementation of ethical guidance for AI development and use. In its communication, the Commission unveils seven key requirements to ensure trustworthy AI applications:

  • Human agency and oversight – ensuring that AI systems support individuals in making better, more informed choices in accordance with their goals.
  • Technical robustness and safety – requiring algorithms to deal with errors or inconsistencies during all life cycle phases of the AI system, and to adequately cope with erroneous outcomes.
  • Privacy and data governance – guaranteeing privacy and data protection at all stages of an AI system’s life cycle and ensuring high-quality AI systems.
  • Transparency – ensuring the traceability of AI systems so as to log and document both the decisions made by the systems and the entire process that yielded those decisions.
  • Diversity, non-discrimination and fairness – establishing diverse design teams and setting up mechanisms to ensure participation, in particular of citizens, in AI development.
  • Societal and environmental well-being – taking into account the impact of AI on the environment and on other sentient beings.
  • Accountability – ensuring responsibility and accountability for AI systems and their outcomes, both before and after their implementation.

The Commission also published an assessment list to help stakeholders check whether the requirements are fulfilled.

The Commission will now launch a targeted piloting phase to obtain feedback from stakeholders. This exercise will focus in particular on the assessment list developed by the high-level expert group on AI, to help verify whether each of the key requirements is fulfilled.

At the beginning of 2020, based on the evaluation of feedback received during the piloting phase, the AI high-level expert group will review and update the ethics guidelines published in December 2018. The Commission will also evaluate the outcome of the review and propose any next steps.

The Commission considers that the trustworthiness of AI should be ensured through the implementation of ethical guidelines built on the existing regulatory framework, and that these guidelines should be applied by developers, suppliers and users of AI in the internal market. However, the proposed guidelines are non-binding and therefore do not create any new legal obligations.

Background

In December 2018, the European Commission presented a coordinated plan on artificial intelligence to foster the development and use of AI in Europe. In addition, the High-Level Expert Group on Artificial Intelligence released the first draft of its Ethics Guidelines on the development and use of AI.

Following a stakeholder consultation and meetings with Member State representatives, the AI expert group delivered a revised document to the Commission in March 2019. Overall, stakeholders have so far welcomed the ethical guidelines.


Date: 06/05/2019