The LUE OLKi Project presents its 4th seminar on philosophical aspects of computer science.
Karoline Reinhardt, researcher at the Eberhard Karls Universität Tübingen, Internationales Zentrum für Ethik/IZEW, will give a talk entitled “Dimensions of trust in AI Ethics”.
- The event will take place online.
- Both the lecture and the discussion will be in English.
- To contact the organisation team, please fill in this form.
Abstract
Due to the extensive progress of research in Artificial Intelligence (AI), as well as its deployment and application, the public debate on AI systems has gained momentum in recent years. With the publication of the Ethics Guidelines for Trustworthy AI (2019), the notions of trust and trustworthiness gained particular attention within AI ethics debates: despite an apparent consensus that AI should be trustworthy, it is less clear what trust and trustworthiness entail in the field of AI. In this paper, I give a detailed overview of the notion of trust employed in AI Ethics Guidelines thus far.
Based on that, I assess their overlaps and their omissions from the perspective of practical philosophy. I argue that AI Ethics currently tends to overload the notion of trustworthiness, which thus runs the risk of becoming a buzzword that cannot be operationalized into a working concept for AI research. What is needed, instead, is an approach that is also informed by findings of research on trust in other fields, for instance in the social sciences and humanities, especially in practical philosophy. In this paper I sketch out which insights from political philosophy and social philosophy might be particularly helpful here. The concept of “institutionalised mistrust” will play a special role.