Taking responsibility for
Responsible Artificial Intelligence

A free online symposium hosted by the initiative ‘RRI in Horizon Europe’

Wed 16 Dec 2020, 14.30 – 16.30 CET

Video recording to be published very soon

Concept for the webinar

Throughout this year, Artificial Intelligence (AI) has dominated the science policy agenda in Brussels and far beyond. Numerous studies, committees and public consultations[1] have culminated in the conclusion that we are, in effect, part of an ongoing “social experiment”. Whether the technologies in question prove responsible and accountable will therefore depend entirely on how they are researched and designed, regulated and deployed.

Standard approaches to risk assessment may not fully capture important ethical implications (many of which will not be quantifiable, and some not even observable). Research and innovation funders and promoters therefore need to explicitly require responsibility in AI programmes and projects. How can we ensure that AI research and innovation take democratic values sufficiently into account? How are citizens protected from impacts of AI of which they may not even be aware? What does this mean for applying principles of precaution? To what extent can we trust the research communities and industry to regulate themselves when it comes to creating level playing fields?

Shoshana Zuboff, who coined the phrase ‘Surveillance Capitalism’, has warned against “marching naked into the digital century without the charters of rights, legal frameworks, regulatory paradigms, and institutions necessary to ensure a digital future that is compatible with democracy”.

How to ‘get dressed’ for the policy challenges described above will be discussed with key actors in the field at our two-hour online symposium (registration is free and open to all).

To get in touch about this event, please feel free to use the following GoogleForms link. If this does not work for you technically, please contact Prof. Gerber directly: a.gerber@inscico.eu


14.30 – 14.40: Welcome: Ellen-Marie Forsberg

14.40 – 15.00: Kick-off: Max Erik Tegmark.  Life 3.0: Being Human in the Age of Artificial Intelligence

15.00 – 15.15: Virginia Dignum.  Statutory Regulation Safeguarding Social and Ethical Responsibility of AI

15.15 – 15.30: Luc Steels.  History Lessons for the Future: Toward a Human-Centric Governance of AI

15.30 – 15.45: Cecilie Mathiesen.  Integrating RRI in Funding on Emerging Technologies

15.45 – 16.00: Walter van de Velde.  Towards responsible AI in Horizon Europe

16.00 – 16.30: Discussion with the panel and the audience.  
Is Europe AI-ready? Are we equipped for responsible research, development and use of AI in Europe? What more is needed? Moderated by Alexander Gerber and Ellen-Marie Forsberg

About the speakers

Max Erik Tegmark, Massachusetts Institute of Technology (MIT): The Swedish-American physicist, professor at MIT and co-founder of the Future of Life Institute has been commissioned by Elon Musk to investigate existential risks of advanced AI. His international bestseller “Life 3.0”, published three years ago, has triggered a vigorous debate ranging from journals like Nature and Science to the popular media.

Virginia Dignum, University of Umeå: The professor of Social and Ethical Artificial Intelligence at the University of Umeå (Sweden) is an expert on the statutory regulation of decisions made by agent systems on moral questions. She is a member of the European Commission’s High-Level Expert Group on AI, a Fellow of the European Artificial Intelligence Association (EURAI), and associated with the Delft University of Technology. Professor Dignum recently published a book on “Responsible Artificial Intelligence” with Springer.

Luc Steels, Catalan Institute for Research and Advanced Studies (ICREA): Steels was founding director (in 1983) of the Artificial Intelligence lab at the Free University of Brussels (VUB) and of the Sony Computer Science Laboratory in Paris (in 1996). Currently, he is research professor at the Catalan Institute for Research and Advanced Studies (ICREA) in Barcelona. He co-founded the European Observatory on Society and Artificial Intelligence and is Scientific Director of the EU FET project MUHAI (Meaning and Understanding in Human-centric AI).

Cecilie Mathiesen, Research Council of Norway (RCN): The senior adviser at RCN, who holds a doctorate in biochemistry, works on making the Responsible Research and Innovation concept concrete in nanotech and medical projects, both at RCN and in European R&I funding collaborations. She is RCN’s representative in relevant ERA-Nets, Norway’s expert representative in the FET part of the programme committee, and a delegate to the FET Flagships Board of Funders in Horizon 2020.

Walter van de Velde, European Innovation Council: Originally an AI researcher himself, with experience both in academia and in industrial innovation, Walter van de Velde has worked for the European Commission for many years now. One of his key responsibilities at the moment is the interface between the upcoming ‘Horizon Europe’ Framework Programme and the European Innovation Council.

Ellen-Marie Forsberg & Alexander Gerber (Moderators & Hosts): Both Ellen-Marie Forsberg (NORSUS) and Alexander Gerber (INSCICO) have coordinated research projects on Responsible Research and Innovation (RRI), including in the context of AI. Together with Siri Granum Carson (NTNU), they have run an engagement project for the past year, funded by the Norwegian Research Council, to mainstream RRI in the next EU Framework Programme.

[1] The European Commission’s White Paper in February proclaimed a strategy towards “ecosystems of excellence and trust” in AI, followed by a Report on safety and liability, a public consultation until June, and the European Parliament’s decision to establish a new special committee on AI. Meanwhile, a High-Level Expert Group had published its Guidelines, leading into a piloting process with over 350 stakeholders and, finally, an Assessment List for developing ‘Trustworthy AI’, released in July. This list defines seven core requirements, from privacy to accountability, and from transparency to societal well-being. To clarify how to move “From ethics to policy”, the Parliament’s scientific foresight unit STOA commissioned TU Delft to conduct a study, from which some of the conclusions above are quoted.
The newly established European Innovation Council (EIC) is also orienting a prime ‘Strategic Challenge’ toward self-developing and aware artificial systems.
