As AI-made decisions become increasingly frequent, explainable AI is a growing field of computer science and legal research. Yet it remains unclear what an explanation should be when legal texts require a decision-maker to explain its decisions; for instance, no legal definition of the notion of 'explanation' exists at this stage. In this talk, Michael Lognoul intends to discuss elements common to existing legal requirements on the explanation of decisions, in order to propose common features of such requirements and to help shape the content of potential future obligations regarding the explanation of AI-made decisions.
About the speaker
Michael Lognoul has been a lecturer, teaching assistant, and legal researcher at the Law Faculty of UNamur since 2017. He is a member of the Research Center on Information, Law and Society (CRIDS) and of the Namur Digital Institute (NADI). He has particular expertise in EU law and ICT law (LLM), notably regarding competition law and intellectual property law in the information society. His research focuses mainly on artificial intelligence, from both the legal and technical sides, and he is currently pursuing a PhD on the explainability of AI decision-making. He also studies the interactions of AI with several fields of law, notably consumer protection and the data economy.
Target Audience
Legal scholars and computer scientists active in AI, as well as anyone interested in AI policy. The question of how AI-made decisions are explained is of paramount importance (i) to ensure that recipients of decisions are able to contest them where appropriate (access to justice), and (ii) to make sure that any future legal requirements on this matter are sound from a technical perspective.
The training and use of machine learning models go hand in hand with the processing of vast amounts of data, some of which may qualify as personal data under Art. 4(1) of the EU General Data Protection Regulation (GDPR). Moreover, the development and deployment of these techniques usually require the collaboration of multiple actors playing a wide variety of roles, from data providers to software developers to end users. This, in turn, raises the questions of (i) who has to comply with the legal requirements stemming from the GDPR and (ii) which principles and obligations they must comply with. In his talk, Pierre Dewitte will discuss the key legal concepts needed to address both questions, and provide an overview of the most relevant provisions computer scientists should keep in mind when training and/or using machine learning models.
About the speaker
Pierre Dewitte joined the KU Leuven Centre for IT & IP Law (CiTiP) in October 2017, where he conducts interdisciplinary research on privacy engineering, smart cities and algorithmic transparency. Among other initiatives, his main research track seeks to bridge the gap between software engineering practices and data protection regulation by creating a common conceptual framework for both disciplines and by providing decision and trade-off support for technical and organizational mitigation strategies in the software development life cycle.
Target Audience
This talk should be of interest to (i) legal scholars with no background in EU data protection law, or who are familiar with the topic but have not looked at it through the lens of machine learning, and (ii) computer scientists involved in the development and/or use of any AI system. Anyone interested in data protection and AI policy is also welcome, as the presentation will touch upon broader topics such as the so-called ‘right to explanation’.
Data is increasingly used as an asset from which value can be extracted, which has led many to lament the misalignment of the law with data-intensive technological and business developments such as AI. With the exception of personal data protection, the legal framework applicable to data exchange and processing for AI has indeed mainly consisted of legacy frameworks dealing with data only indirectly. This situation has been changing fast in recent times, with the EU legislator extremely active in structuring data exchange institutions and apportioning the (economic) value of data. There is, however, no (or hardly any) 'AI data'-specific regulation. On the one hand, many sectoral data-sharing legal regimes have been adopted while, on the other, the Data Governance Act and Data Act proposals are designed to provide a horizontal data-related framework. In her talk, Charlotte Ducuing will outline the legal framework applicable to data exchange and processing at EU level, providing a comprehensive and synthetic picture of the legislation (though partly still in the making) and focusing on how it applies to AI development and use. She will also identify the main (expected) trends in EU data law.
About the speaker
Charlotte Ducuing is a PhD researcher at the KU Leuven Centre for IT & IP Law (CiTiP). Her doctoral research, started in 2021, deals with the regulation of data as an economic resource in EU law. In this respect, she has also been involved since 2021 in an interdisciplinary KU Leuven C2 research project (with Duurzaam Materialenbeheer) on the datafication of the circular economy through the notion of 'materials passports'.
Target Audience
This talk should be of interest to legal scholars interested in, but with little or no background in, the regulation of data at EU level; to computer scientists involved in the development and/or use of AI systems; and to businesses involved in AI-based activities. Anyone interested in the EU ‘data law’ emerging from new data-specific economic legislation is welcome to join.