Responsible AI – Key Themes, Concerns & Recommendations for European Research and Innovation

A new report entitled “Responsible AI – Key Themes, Concerns & Recommendations for European Research and Innovation” has been released by the HUB4NGI project. It presents the results of a consultation with cross-disciplinary experts in and around the subject of Responsible Artificial Intelligence (AI).

“Responsible AI” is an umbrella term for investigations into legal, ethical and moral standpoints of autonomous algorithms or applications of AI whose actions may be safety-critical or impact the lives of citizens in significant and disruptive ways.

Artificial Intelligence is playing a major role in the Internet’s ongoing transformation. For this transformation to support people’s real needs and deliver better value to individuals and to society, it is of utmost importance to ensure a “responsible” approach to AI.

This document’s purpose is to provide input into the advisory processes that determine European support both for research into Responsible AI and for innovation using AI that takes issues of responsibility into account. It summarises the results of a consultation with cross-disciplinary experts in and around the subject of Responsible AI.

The chosen methodology for the consultation is the Delphi Method, a well-established approach that aims to determine consensus, or to highlight differences, through iterative questioning of a panel of selected consultees. The consultation has resulted in key recommendations, grouped into several main themes:

  • Ethics (ethical implications for AI & autonomous machines and their applications);
  • Transparency (considerations regarding transparency, justification and explicability of AI & autonomous machines’ decisions and actions);
  • Regulation & Control (regulatory aspects such as law, and how AI & automated systems’ behaviour may be monitored and if necessary corrected or stopped);
  • Socioeconomic Impact (how society and the economy are impacted by AI & autonomous machines);
  • Design (design-time considerations for AI & autonomous machines) and
  • Responsibility (issues and considerations regarding moral and legal responsibility for scenarios involving AI & autonomous machines).

The recommendations arising from the panel are discussed and compared with other recent European studies into similar subjects. Overall, the studies broadly concur on the main themes and differ only on specific points. The recommendations are presented in a stand-alone section, “Summary of Key Recommendations”, which serves as an Executive Summary.

The report is available here.