Human Oversight

The turbulence recently reported at OpenAI, apparently triggered by a letter from the CEO to the OpenAI board, signals that serious concerns exist regarding the (further) development of artificial intelligence (AI). The associated risks are perceived as an existential threat, even to humanity as a whole. Prominent figures have repeatedly warned that humans could lose control over AI. The public debate on this has intensified at the governmental level; the so-called Bletchley Declaration is one example. "Human oversight" is thus becoming an existential factor and a decisive principle. In a series of blog posts, we want to take a closer look at the topic of "human oversight" and the resulting implications for supervisory boards.

For a better understanding, let us first consider the term "human oversight". In what is presumably the current final version of the EU Artificial Intelligence Act, including the amendments proposed in June, human oversight is defined as follows:


Article 14 – Human oversight

1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they be effectively overseen by natural persons as proportionate to the risks associated with those systems. Natural persons in charge of ensuring human oversight shall have sufficient level of AI literacy in accordance with Article 4b and the necessary support and authority to exercise that function, during the period in which the AI system is in use and to allow for thorough investigation after an incident.

2. Human oversight shall aim at preventing or minimising the risks to health, safety, fundamental rights or environment that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter and where decisions based solely on automated processing by AI systems produce legal or otherwise significant effects on the persons or groups of persons on which the system is to be used.

3. Human oversight shall be ensured through either one or all of the following measures:

    (a) identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;
    (b) identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the user.

4. For the purpose of implementing paragraphs 1 to 3, the high-risk AI system shall be provided to the user in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate to the circumstances:

    (a) be aware of and sufficiently understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
    (b) remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
    (c) be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system and the interpretation tools and methods available;
    (d) be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;
    (e) be able to intervene in the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure that allows the system to come to a halt in a safe state, except if the human interference increases the risks or would negatively impact the performance in consideration of generally acknowledged state-of-the-art.

5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been verified and confirmed by at least two natural persons with the necessary competence, training and authority.

Unsurprisingly, "human oversight" is specified with regard to the required knowledge, here "AI literacy", that those responsible and acting are expected to possess. This provision is also likely to affect the supervisory work and competence profiles of supervisory boards[1]. An interesting question in this context is what minimum knowledge of artificial intelligence supervisory board members must have in order to fulfil their oversight duties. This will be addressed in a further blog post.
