Regulating future AI systems - enabling future AI innovation through an increased level of legal certainty in technology-neutral regulation

Academy of Finland project 09/2022-08/2026, grant number: 347221

In the AI-REG project we study ambiguity that arises from the European Artificial Intelligence Act (AI Act), proposed by the European Commission in April 2021. The AI Act would regulate the development and use of AI systems in Europe.

The underlying goal of the AI Act is to ensure that Europeans can trust AI systems. The AI Act distinguishes four risk classes of AI systems: unacceptable, high, limited, and minimal risk. AI systems judged to pose unacceptable risk would be outright forbidden, and high-risk AI systems would have to conform to requirements both during the development of the system and during its use. The AI Act is thus expected to heavily affect the development and use of high-risk AI systems in particular. However, the AI Act is also ambiguous in many parts, beginning with the definition of what will be classified as an AI system, or how a specific system would be classified as high-risk rather than low-risk. Ambiguity means that something can be interpreted in several different ways, and ambiguity in legal regulation is potentially problematic because it reduces legal certainty.

The AI Act is a horizontal and broad regulatory proposal. In the AI-REG project we have two focus areas: 1) high-risk AI system use in public sector organizations, and 2) development of high-risk AI-based healthcare technology.

In the AI-REG project, we study:

  • What kinds of ambiguities arise from the AI Act
  • Which legal formulations and requirements cause the ambiguity
  • Which characteristics of AI systems make a system subject to ambiguity
  • How regulation affects the development of AI-based healthcare technology and AI system use in public sector organizations

In the AI-REG project, we take a qualitative research approach:

  • Analysis of the AI Act proposal, its subsequent versions, and statements on the proposal to identify ambiguities and the legal formulations they arise from.
  • Interviews in Finland, Sweden, and Norway with 1) public sector organizations that use AI systems, and 2) organizations that develop AI-based healthcare technology.
  • Collection of examples of AI systems to work out how to characterize AI systems in light of the law and to develop a taxonomy.

The People

Principal Investigator: Karin Väyrynen

Project researchers: Arto Lanamäki, Fanny Vainionpää

University of Oulu

Faculty of Information Technology and Electrical Engineering

INTERACT Research Unit