Recommendations on the European Commission’s White Paper on Artificial Intelligence


The European Commission launched a Consultation on Artificial Intelligence on February 19, 2020, with the aim of gathering feedback from all interested parties. The following paper sets out some of the actions that would be essential to implement in order to facilitate the creation of a European Ecosystem of Excellence and Trust in AI.
Recommendation 1 – Establishing the Principles of an AI Regulatory Framework
The development of a legal and technical regulatory framework for AI in the EU should focus on three principles (the “AI Principles”):
Understanding the potential of AI. At this stage we can distinguish three types of AI: Basic AI, which is capable of autonomous data gathering and analysis, task and process automation, document understanding, machine learning, etc.; Autonomous AI, which is capable of executing tasks while also autonomously learning to become more efficient, fast and accurate, at a level beyond what we currently have at the Basic AI stage; and Advanced AI, which will be able to process and generate independent solutions in the field in which it is deployed, as well as independently use various technologies currently operated by humans, whether to achieve objectives set by those humans or to pursue efficiency and process improvements defined by its own learning tools. It is imperative that we distinguish between these types even at this early stage, through research and analysis of already existing data and systems.
Preparing for the adoption and implementation of full-scale AI Systems (of all types) in Europe, including for the disruption this will cause; and putting in place the safeguards required to maintain the designed AI framework. Such measures will be essential to ensure the integrity of EU-developed AI Systems when these are correlated or integrated with regional or even global AI Systems. The goal is to ensure compliance with the minimal legal and technical principles and controls established by the EU. This will enable a truly ethical, trustworthy and secure development and adoption of AI in the EU, consistent with the values and rights of EU citizens. We need to ensure that AI Systems designed in the US or Asia, when integrated with EU-developed systems or used within the EU, apply the same core values in their autonomous decision-making; otherwise we risk repeating the problems we now face with privacy and the ineffective post-factum frameworks put in place to address them, such as Safe Harbor or Privacy Shield.

Recommendation 2 – Creating a European Artificial Intelligence Agency (EAIA)
Current efforts to develop a legal and technical regulatory framework seem fragmented and insufficient given the potential impact of AI on society. A more efficient and productive approach would be to set up a dedicated EU agency, the European Artificial Intelligence Agency (EAIA). Given the envisaged importance of AI and its disruptive capabilities for society and day-to-day life, it is vital to allocate the right resources and people to the proper development of this industry. An EU agency would be able to tackle issues not only in their cross-border dimension but also in their inter-institutional dimension within the EU itself (through local EAIA departments). Thus, in line with the above recommendations, the EU should consider adopting a regulation on artificial intelligence that would have direct applicability across the EU.

Recommendation 3 – EU AI Objectives
The main objectives of the EAIA would be:
Cooperation. Working together with leading companies, start-ups, research labs, etc. to create certain technical and legal standards for the development and implementation of AI solutions.
Institutional Understanding. Developing institutional knowledge and skills at EU level from the start by having EAIA and its departments pioneer mechanisms to learn new skills and competencies relevant to the new technical developments.
Evolutionary AI-Based Society. We are currently in the presence of Basic AI Systems, which are rapidly evolving towards Autonomous AI Systems, with Advanced AI Systems as the ultimate destination; their capabilities and impact we can only suspect at this point, by combining separate elements of technologies which are now in independent development. It will take time for a more integrated ecosystem to take shape: comprehensive interlinked systems currently called the Internet of Things, autonomous hardware powered by Autonomous AI Systems and other physical machinery, neuromorphic solutions, quantum computing, the evolution and possible impact of cryptocurrencies, the implementation of blockchain contracts, the future of computational law, and so on. It is still unclear how all these elements will fit together: whether some will be excluded, absorbed into another category, or compete with one another. The impact of blockchain on how contracts are executed, of cryptocurrency on how payments are made, or the question of whether computational law has a future in ensuring mechanical legal compliance independent of human intervention, are matters that require granular analysis of each branch of these and other new technologies, while simultaneously taking a holistic approach so that the bigger picture is not overlooked. The EAIA would be tasked with understanding, and contributing to, how all the pieces of this puzzle fit together.
Coordinated Adoption. Implementing various pre-AI tools, with the goal of ultimately adopting real AI technologies, should be the roadmap for the digital transformation of every Member State authority, and the EAIA would be the right institution to lead it. For this purpose, a single strategy document should be adopted, outlining the principles and framework for every public acquisition in order to ensure consistency and the required technical synergies.

Recommendation 4 – AI Minimum Standard: European Values
The current wording of the AI White Paper only includes the possibility of promoting EU values when working and collaborating on international AI projects. However, unequivocal adherence to a minimum of the EU's legal and technical standards should be a precondition for any international cooperation on AI matters: a legally binding minimum standard from which the Union will not deviate in its initiatives, either internally, in relation to its own Member States, or externally, in relation to third parties, when collaborating on AI projects.

Recommendation 5 – Updating the seven Key Requirements and the categories impacted by AI as defined by the AI High-Level Expert Group (HLEG)
The AI HLEG stated in its guidelines that, in order to be trustworthy, any AI System should comply with seven key requirements grounded in fundamental rights and ethical principles (the "seven key requirements"). The HLEG further identifies the different stakeholders involved in the AI System life cycle to which the seven key requirements apply, categorized as (the "AI Stakeholders"):

  • developers (those who research, design and/or develop AI Systems);
  • deployers (public or private organizations that use AI Systems within their business processes and to offer products and services to others);
  • end-users (those engaging with the AI System, directly or indirectly); and
  • broader society (all others that are directly or indirectly affected by AI Systems).

However, stating that all seven key requirements apply to all of the above AI Stakeholders does not take into account their respective roles in the life cycle of an AI System. It should be made clear which of the seven key requirements each AI Stakeholder should focus on and apply. In particular: developers need to know which of the seven key requirements apply to their design; deployers must verify both that the design of the AI Systems they use meets the key requirements applicable to developers and that their own use of the system complies with the key requirements applicable to them; and end-users and society in general should always be informed about the compliance of the systems they use with all seven key requirements (see Fig. 1).
Fig. 1. The requirements applicable to each category and the mutual-control relationship between categories, highlighting that the AI Stakeholders should bear different responsibilities, requirements and liability. A useful analogy is the separation of powers in a democratic system into legislative, executive and judicial branches: each branch ensures that the others do not abuse their powers, while each has its own clearly defined responsibilities. Applying this mechanism ensures clarity over which requirements apply to each category and over the extent of the potential liability inherent in a given category and the stakeholders forming it (joint or separate liability of stakeholders within the same or different categories, depending on the requirements applicable to them and their actual role).
It is safe to assume that only some principles can reasonably be identified and enforced at this point in the evolution of AI, while others will need to be developed once AI Systems reach their next stage of evolution. The key requirements should therefore be correlated with the three AI stages of development, anticipating these future principles and anchoring them in the existing key requirements.

To address this, additional requirements should be added to the seven key requirements, as follows:

  • Hard-coded values: Similar to the principle of privacy by design, we should identify other human rights or values that might be affected by unpredictable technological advancements and ensure that these are protected as well.
  • Learning parameters: We need to establish the right rules for learning, along with the safeguards and limitations needed to avoid infringing the rights of EU citizens. Such rules should always comply with all of the above key principles and requirements; in other words, instead of remedying breaches by AI after the fact, we would be preventing them.
  • Most Favourable Technique ("MFT"): We need to clarify the available development techniques and the priority of their use in view of their possible impact on individuals' rights. This means that the development technique that does not negatively affect individuals' rights, the transparency of the process, explainability, or any other key requirement applicable to developers should be the one used. When two or more broadly similar development options exist but some of them do not conform with the applicable key requirements, those should not be used if a compliant alternative is available. If there is no compliant alternative, an analysis considering all applicable key requirements, regardless of AI Stakeholder category, should determine which MFT from another category can be used to limit or reduce the possible negative impact.

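The MFT selection procedure described above amounts to a filter-then-fallback rule: prefer techniques that satisfy every applicable key requirement, and only if none exists, fall back to the least harmful option. A minimal sketch, assuming illustrative names and a simplified impact score (none of which are part of any proposed standard):

```python
from dataclasses import dataclass

@dataclass
class Technique:
    """A candidate development technique and the key requirements it satisfies.
    Fields are illustrative; the requirements catalogue would be set by the regulator."""
    name: str
    satisfied_requirements: set  # key requirements this technique complies with
    impact_score: float          # lower = less negative impact on individuals' rights

def select_mft(candidates, applicable_requirements):
    """Most Favourable Technique selection:
    1. Prefer techniques satisfying every requirement applicable to the
       developer's stakeholder category.
    2. If none complies fully, fall back to the candidate with the least
       negative impact, regardless of category."""
    compliant = [t for t in candidates
                 if applicable_requirements <= t.satisfied_requirements]
    if compliant:
        # Among fully compliant options, still prefer the least impactful one.
        return min(compliant, key=lambda t: t.impact_score)
    # No compliant alternative: limit the possible negative impact instead.
    return min(candidates, key=lambda t: t.impact_score)

# Illustrative usage
reqs = {"transparency", "explainability"}
options = [
    Technique("black-box model", {"accuracy"}, impact_score=0.8),
    Technique("interpretable model", {"transparency", "explainability"}, 0.2),
]
best = select_mft(options, reqs)
```

Here `select_mft` returns the interpretable model, since it is the only candidate meeting both applicable requirements.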
Nevertheless, all seven key requirements, and the AI Stakeholders to which they apply as described by the AI HLEG, are premised on and focused purely on human actions and human instructions or involvement in the development and design of AI Systems. This human-centric premise misses an important purpose of AI regulation: AI Systems also perform independent actions at all stages of their development (from Basic to Advanced AI Systems).

Recommendation 6 – Designing the EU’s Regulatory Framework for AI Systems
The Commission is considering building a regulatory framework for AI around the concept of "high-risk" AI Systems. The White Paper also states that the criteria and their applicability are fluid, and that applications deemed non-high-risk in "safe" sectors might nevertheless become subject to high-risk status if they could have an impact on individuals. This carve-out strips the entire high-risk versus "safe" assessment of its value, as it achieves exactly the opposite of what the regulator intended: a clear framework and legal certainty. Therefore, instead of using this risk assessment, which is discretionary in placing certain sectors in the high-risk category in their entirety and which also contains a generic exception that can make anything subject to it, thereby regulating uncertainty, the Commission should focus on simply clarifying what it wants to safeguard and protect. It should set out a very clear set of instructions regarding the values it wants to protect, leaving it to developers and deployers to find creative means of ensuring compliance with the EU's values and its citizens' fundamental rights and liberties while continuing to innovate in their respective fields.

A possible solution could be to define an evolving system of rights and values which can be clearly applied to the development process, similar to the General Data Protection Regulation's privacy-by-design principle when developing new solutions, or the right to be forgotten when an existing solution needs to be capable of removing certain information from public access. Human and AI developers need to easily understand what the regulator requires to be considered, and why, both at the development stage and post-development. In this way, the system can focus on "hard coding" the seven key requirements so that they become legally and technically applicable principles in the development of any AI System, irrespective of the sector or of how it will be used. As long as an AI System has the key requirements embedded in some form during the development or post-development process, it should be able to exist without the need for any further special or extra regulation. The key requirements need to be translated into technical capabilities, as follows:

  • Human oversight = privacy by design, possibility of choice, accurate information and source, data quality, possibility of rejection, deletion, change, etc.;
  • Technical safety = overriding & security controls, accuracy & reliability;
  • Transparency & Accountability = explainability & traceability, audit;
  • Non-discrimination = accurate data & prevention of unfair manipulation of data;
  • Environmental impact = sustainability via low resource use & energy saving;
  • Prior Version Reversal = reversal by default to a prior uncorrupted version of an AI System;
  • Most Favourable Technique = establishing the right techniques to be used when creating AI Systems;
  • Learning Parameters = establishing the right rules used by an AI System to improve.
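The requirement-to-capability mapping above could be expressed as machine-checkable compliance metadata, so that gaps in an AI System's embedded capabilities are identifiable automatically. A minimal sketch, where every identifier is an illustrative assumption rather than a proposed standard:

```python
# Key Requirement -> technical capabilities, mirroring the mapping in the text.
# All names are illustrative, not part of any regulatory proposal.
KEY_REQUIREMENT_CAPABILITIES = {
    "human_oversight": ["privacy_by_design", "choice", "accurate_source",
                        "data_quality", "rejection", "deletion", "change"],
    "technical_safety": ["overriding_controls", "security_controls",
                         "accuracy", "reliability"],
    "transparency_accountability": ["explainability", "traceability", "audit"],
    "non_discrimination": ["accurate_data", "no_unfair_data_manipulation"],
    "environmental_impact": ["low_resource_use", "energy_saving"],
    "prior_version_reversal": ["rollback_to_uncorrupted_version"],
    "most_favourable_technique": ["technique_selection_rules"],
    "learning_parameters": ["constrained_learning_rules"],
}

def missing_capabilities(system_capabilities):
    """Return, per key requirement, the capabilities a system has not yet embedded."""
    gaps = {}
    for requirement, needed in KEY_REQUIREMENT_CAPABILITIES.items():
        absent = [c for c in needed if c not in system_capabilities]
        if absent:
            gaps[requirement] = absent
    return gaps

# A system that embeds every listed capability has no compliance gaps.
all_caps = {c for caps in KEY_REQUIREMENT_CAPABILITIES.values() for c in caps}
gaps_for_full_system = missing_capabilities(all_caps)
```

A checklist of this kind would let deployers verify developer-side compliance (as argued under Recommendation 5) without inspecting the system's internals.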

The European Union has a choice: to be bold and transformative, or complacent and expectant. In the first scenario the EU would take an aggressive approach to closing the existing gap with its main counterparts, such as the US and China, and take the lead in maximizing its existing potential. In the second, it would maintain its international status quo as a slow-responding, bureaucratic entity that continues to be a top provider of talent and technology to the US and other international markets, falling in line with the direction those markets set for AI development, evolution and adoption. The stakes are very high and could lead to a shift in global markets if the EU chooses the first scenario. According to the McKinsey Global Institute, if Europe on average develops and diffuses AI according to its current assets and digital position relative to the world, it could add some €2.7 trillion, or 20 percent, to its combined economic output by 2030; if Europe were to catch up with the US AI frontier, a total of €3.6 trillion could be added to collective GDP in this period. These findings are based on an average estimated effort; if the EU were to take an aggressive stance on designing and executing an AI strategy, the result would be truly transformative for its single digital market. It would also represent a unique achievement, considering the EU's challenges in providing a single viable framework for its Member States in various fields of cooperation and integration, compared with the US and China, which face fewer complex internal challenges from an institutional point of view. The current technological and economic gaps can be overcome only through innovation, which needs to start from an agile legal framework and lean institutional mechanisms that set the right premises for technical development and a trustworthy European AI ecosystem.
