Moral agency without responsibility? Analysis of three ethical models of human-computer interaction in times of artificial intelligence (AI)

Authors

  • Alexis Fritz, Katholische Universität Eichstätt-Ingolstadt, Germany
  • Wiebke Brandt, Katholische Universität Eichstätt-Ingolstadt, Germany
  • Henner Gimpel, Fraunhofer-Institut für Angewandte Informationstechnik FIT, Germany
  • Sarah Bayer, FIM Kernkompetenzzentrum, Germany

DOI:

https://doi.org/10.3384/de-ethica.2001-8819.20613

Keywords:

human-computer interaction, responsibility, technical philosophy, Luciano Floridi, Deborah G. Johnson, Peter-Paul Verbeek

Abstract

Philosophical and sociological approaches in technology have increasingly shifted toward describing AI (artificial intelligence) systems as ‘(moral) agents,’ while also attributing ‘agency’ to them. It is only in this way – so their principal argument goes – that the effects of technological components in a complex human-computer interaction can be understood sufficiently in phenomenological-descriptive and ethical-normative respects. By contrast, this article aims to demonstrate that an explanatory model only achieves a descriptively and normatively satisfactory result if the concepts of ‘(moral) agent’ and ‘(moral) agency’ are exclusively related to human agents. Initially, the division between symbolic and sub-symbolic AI, the black box character of (deep) machine learning, and the complex relationship network in the provision and application of machine learning are outlined. Next, the ontological and action-theoretical basic assumptions of an ‘agency’ attribution regarding both the current teleology-naturalism debate and the explanatory model of actor network theory are examined. On this basis, the technical-philosophical approaches of Luciano Floridi, Deborah G. Johnson, and Peter-Paul Verbeek will all be critically discussed. Despite their different approaches, they tend to fully integrate computational behavior into their concept of ‘(moral) agency.’ By contrast, this essay recommends distinguishing conceptually between the different entities, causalities, and relationships in a human-computer interaction, arguing that this is the only way to do justice to both human responsibility and the moral significance and causality of computational behavior.

References

Akrich, Madeleine and Bruno Latour. ‘A Summary of a Convenient Vocabulary for the Semiotics of Human and Nonhuman Assemblies’, in Shaping Technology/ Building Society. Studies in Sociotechnical Change, edited by Wiebe E. Bijker and John Law. Cambridge, Mass.: The MIT Press, 1992, pp. 259-264.

Anderson, Michael and Susan Leigh Anderson. ‘Machine Ethics. Creating an Ethical Intelligent Agent’, AI Magazine 28:4 (2007), pp. 15-26.

Belliger, Andréa and David J. Krieger. ‘Einführung in die Akteur-Netzwerk-Theorie’, in ANThology. Ein einführendes Handbuch zur Akteur-Netzwerk-Theorie, edited by Andréa Belliger and David J. Krieger. Bielefeld: transcript, 2006, pp. 13-50.

Biran, Or and Kathleen McKeown. ‘Human-Centric Justification of Machine Learning Predictions’, Proceedings of International Joint Conferences on Artificial Intelligence (2017), pp. 1461-1467.

Budnik, Christian. ‘Handlungsindividuation’, in Handbuch Handlungstheorie. Grundlagen, Kontexte, Perspektiven, edited by Michael Kühler and Markus Rüther. Stuttgart: J. B. Metzler Verlag, 2016, pp. 60-68.

Callon, Michel. ‘Einige Elemente einer Soziologie der Übersetzung: Die Domestikation der Kammmuscheln und der Fischer der S. Brieuc-Bucht’, in ANThology. Ein einführendes Handbuch zur Akteur-Netzwerk-Theorie, edited by Andréa Belliger and David J. Krieger. Bielefeld: transcript, 2006, pp. 135-174.

[Original English version: Callon, Michel. ‘Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St. Brieuc Bay’, in Power, Action and Belief: A New Sociology of Knowledge?, edited by John Law. London: Routledge, 1986, pp. 196-233.]

Callon, Michel. ‘Techno-ökonomische Netzwerke und Irreversibilität’, in ANThology. Ein einführendes Handbuch zur Akteur-Netzwerk-Theorie, edited by Andréa Belliger and David J. Krieger. Bielefeld: transcript, 2006, pp. 309-342.

[Original English version: Callon, Michel. ‘Techno-Economic Networks and Irreversibility’, in A Sociology of Monsters? Essays on Power, Technology and Domination, edited by John Law. London/New York: Routledge, 1991, pp. 132-161.]

Carpenter, Julia. ‘Google’s Algorithm Shows Prestigious Job Ads to Men, But Not to Women. Here’s Why That Should Worry You’, The Washington Post (July 6, 2015), online at https://www.washingtonpost.com/news/the-intersect/wp/2015/07/06/googles-algorithm-shows-prestigious-job-ads-to-men-but-not-to-women-heres-why-that-should-worry-you/ (accessed 2019-11-10).

Castelvecchi, Davide. ‘Can we open the black box of AI?’, Nature 538:7623 (2016), pp. 20-23.

Corbett-Davies, Sam, Emma Pierson, Avi Feller and Sharad Goel. ‘A Computer Program Used for Bail and Sentencing Decisions was Labeled Biased Against Blacks. It’s Actually Not That Clear’, The Washington Post (October 17, 2016), online at www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas (accessed 2019-11-10).

Crnkovic, Gordana Dodig and Baran Çürüklü. ‘Robots: Ethical by Design’, Ethics and Information Technology 14:1 (2012), pp. 61-71.

Davidson, Donald. ‘Handlungen, Gründe und Ursachen’, in Gründe und Zwecke. Texte zur aktuellen Handlungstheorie, edited by Christoph Horn and Guido Löhrer. Berlin: Suhrkamp, 2010, pp. 46-69.

Dressel, Julia and Hany Farid. ‘The Accuracy, Fairness, and Limits of Predicting Recidivism’, Science Advances 4:1 (2018).

Flores, Anthony W., Kristin Bechtel and Christopher T. Lowenkamp. ‘False Positives, False Negatives, and False Analyses: A Rejoinder to “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks”’, Federal Probation Journal 80:2 (2016), pp. 38-46.

Floridi, Luciano and Jeff W. Sanders. ‘Artificial Evil and the Foundation of Computer Ethics’, Ethics and Information Technology 3 (2001), pp. 55-66.

Floridi, Luciano and Jeff W. Sanders. ‘On the Morality of Artificial Agents’, Minds and Machines 14 (2004), pp. 349-379.

Floridi, Luciano. ‘Faultless Responsibility: On the Nature and Allocation of Moral Responsibility for Distributed Moral Actions’, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374 (2016), Issue 2083.

Floridi, Luciano. ‘Levels of Abstraction and the Turing Test’, Kybernetes 39 (2010), pp. 423-440.

Fong, Ruth C. and Andrea Vedaldi. ‘Interpretable Explanations of Black Boxes by Meaningful Perturbation’, Proceedings of the IEEE International Conference on Computer Vision (2017), pp. 3429-3437.

Häußling, Roger. Techniksoziologie. Eine Einführung. Opladen, Toronto: Verlag Barbara Budrich, 2019.

Hern, Alex. ‘Google's Solution to Accidental Algorithmic Racism: Ban Gorillas’, The Guardian (January 12, 2018), online at https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people (accessed 2019-11-10).

Horn, Christoph and Guido Löhrer. ‘Einleitung: Die Wiederentdeckung teleologischer Handlungserklärungen’, in Gründe und Zwecke. Texte zur aktuellen Handlungstheorie, edited by Christoph Horn and Guido Löhrer. Berlin: Suhrkamp, 2010, pp. 7-45.

Johnson, Deborah G. and Mario Verdicchio. ‘AI, Agency and Responsibility: The VW Fraud Case and Beyond’, AI & SOCIETY (2018), online at https://doi.org/10.1007/s00146-017-0781-9 (accessed 2019-11-15).

Johnson, Jim. ‘Mixing Humans and Nonhumans Together: The Sociology of a Door-Closer’, Social Problems 35:3 (1988), pp. 298-310.

Kamp, Georg. ‘Basishandlungen’, in Handbuch Handlungstheorie. Grundlagen, Kontexte, Perspektiven, edited by Michael Kühler and Markus Rüther. Stuttgart: J. B. Metzler Verlag, 2016, pp. 69-77.

Kaplan, Andreas and Michael Haenlein. ‘Siri, Siri, in My Hand: Who's the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence’, Business Horizons 62:1 (2019), pp. 15-25.

Lacave, Carmen and Francisco J. Díez. ‘A Review of Explanation Methods for Bayesian Networks’, The Knowledge Engineering Review 17:2 (2002), pp. 107-127.

Latour, Bruno. ‘Social Theory and the Study of Computerized Work Sites’, in Information Technology and Changes in Organizational Work, edited by W. J. Orlikowski and Geoff Walsham. London: Chapman and Hall, 1996, pp. 295-307.

Latour, Bruno. ‘Technology is Society Made Durable’, in A Sociology of Monsters? Essays on Power, Technology and Domination, edited by John Law. London/New York: Routledge, 1991, pp. 103-131.

Latour, Bruno. Pandora’s Hope. Essays on the Reality of Science Studies. Cambridge, Mass.: Harvard Univ. Press, 1999.

Latour, Bruno. Reassembling the Social. An Introduction to Actor-Network-Theory. Oxford: Oxford University Press, 2007.

Latour, Bruno. Wir sind nie modern gewesen. Versuch einer symmetrischen Anthropologie. Berlin: Akad.-Verl., 1995.

[Original English version: Latour, Bruno. We Have Never Been Modern. Cambridge, Mass.: Harvard Univ. Press, 1993.]

Mitchell, Tom M. Machine Learning. Boston, Mass.: WBC/McGraw-Hill, 1997.

Montavon, Grégoire, Wojciech Samek and Klaus-Robert Müller. ‘Methods for Interpreting and Understanding Deep Neural Networks’, Digital Signal Processing 73 (2018), pp. 1-15.

Quitterer, Josef. ‘Basishandlungen und die Naturalisierung von Handlungserklärungen’, in Soziologische Handlungstheorie. Einheit oder Vielfalt, edited by Andreas Balog and Manfred Gabriel. Opladen: Westdeutscher Verlag, 1998, pp. 105-122.

Rammert, Werner. Technik – Handeln – Wissen. Zu einer pragmatistischen Technik- und Sozialtheorie. Wiesbaden: Springer VS, 2016 [2007].

Reuters. ‘Amazon Ditched AI Recruiting Tool that Favored Men for Technical Jobs’, The Guardian (October 11, 2018), online at https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine (accessed 2019-11-10).

Ricken, Friedo. Allgemeine Ethik. Stuttgart: W. Kohlhammer, 2013 [1983].

Runggaldier, Edmund. Was sind Handlungen? Eine philosophische Auseinandersetzung mit dem Naturalismus. Stuttgart: W. Kohlhammer, 1996.

Russell, Stuart J. and Peter Norvig. Artificial Intelligence. A Modern Approach. Boston: Pearson, 2016.

Schlosser, Markus. ‘Agency’, The Stanford Encyclopedia of Philosophy (2015), online at https://plato.stanford.edu/entries/agency/ (accessed 2019-11-15).

Schulz-Schaeffer, Ingo. Sozialtheorie der Technik. Frankfurt am Main: Campus-Verlag, 2000.

Sehon, Scott R. ‘Abweichende Kausalketten und die Irreduzibilität teleologischer Erklärungen’, in Gründe und Zwecke. Texte zur aktuellen Handlungstheorie, edited by Christoph Horn and Guido Löhrer. Berlin: Suhrkamp, 2010, pp. 85-111.

Verbeek, Peter-Paul. ‘Beyond Interaction: A Short Introduction to Mediation Theory’, Interactions 22 (2015), pp. 26-31.

Verbeek, Peter-Paul. ‘Designing the Morality of Things: The Ethics of Behaviour-Guiding Technology’, in Designing in Ethics, edited by Jeroen van den Hoven, Seumas Miller and Thomas Pogge. New York: Cambridge Univ. Press, 2017, pp. 78-94.

Verbeek, Peter-Paul. ‘Materializing Morality. Design Ethics and Technological Mediation’, Science, Technology, & Human Values 31 (2006), pp. 361-380.

Verbeek, Peter-Paul. ‘Some Misunderstandings About the Moral Significance of Technology’, in The Moral Status of Technical Artefacts, edited by Peter Kroes and Peter-Paul Verbeek. Dordrecht: Springer, 2014, pp. 75-88.

Verbeek, Peter-Paul. Moralizing Technology. Understanding and Designing the Morality of Things. Chicago: Univ. of Chicago Press, 2011.

Wieser, Matthias. Das Netzwerk von Bruno Latour. Die Akteur-Netzwerk-Theorie zwischen Science & Technology Studies und poststrukturalistischer Soziologie. Bielefeld: transcript, 2012.

Zhu, Jichen, Antonios Liapis, Sebastian Risi, Rafael Bidarra and G. Michael Youngblood. ‘Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation’, IEEE Conference on Computational Intelligence and Games (2018), pp. 1-8.

Published

2020-06-30

How to Cite

Fritz, A., Brandt, W., Gimpel, H. and Bayer, S. (2020) “Moral agency without responsibility? Analysis of three ethical models of human-computer interaction in times of artificial intelligence (AI)”, De Ethica, 6(1), pp. 3–22. doi: 10.3384/de-ethica.2001-8819.20613.

Section

Articles