Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency

Tollon, Fabio (2019-12)

Thesis (MA)--Stellenbosch University, 2019.


ENGLISH ABSTRACT: The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. whether they are moral patients) or whether they can be sources of moral action (i.e. whether they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary condition for an entity to qualify as a moral agent. This view claims that because artificial agents (AAs) lack sentience, they cannot be proper subjects of moral concern and hence cannot be considered moral agents. I raise conceptual and epistemic issues with regard to the sense of sentience employed on this view, and I argue that the Organic View does not succeed in showing that machines cannot be moral patients. Irrespective of this failure, I also argue that the entire project is misdirected, in that moral patiency need not be a necessary condition for moral agency. Moreover, I claim that whereas machines may conceivably be moral patients in the future, there is a strong case to be made that they are (or will very soon be) moral agents. Whereas it is often argued that machines cannot be agents simpliciter, let alone moral agents, I claim that this argument is predicated on a conception of agency that makes unwarranted metaphysical assumptions even in the case of human agents. Once I have established the shortcomings of this “standard account”, I move to elaborate on other, more plausible, conceptions of agency, on which some machines clearly qualify as agents. Nevertheless, the argument is still often made that while some machines may be agents, they cannot be moral agents, given their ostensible lack of the requisite phenomenal states.
Against this thesis, I argue that the requirement of internal states for moral agency is philosophically unsound, as it runs up against the problem of other minds. In place of such intentional accounts of moral agency, I provide a functionalist alternative, which makes conceptual room for the existence of AMAs. The implications of this thesis are that at some point in the future we may be faced with situations for which no human being is morally responsible, but a machine may be. Moreover, this responsibility holds, I claim, independently of whether the agent in question is “punishable” or not.

AFRIKAANSE OPSOMMING: This thesis aims to develop a philosophically justified account of Artificial Moral Agency (AMA). The question of the moral status of Artificial Intelligence (AI) usually involves two questions: the moral concern to which such systems are entitled (that is, whether they are moral patients) and whether such systems can be the source of moral action (that is, whether they are moral agents). The Organic Approach to Ethical Status holds that being a moral patient is a precondition for being a moral agent. It is then further argued that artificial agents (AAs) are not sentient and consequently cannot be moral patients; it follows that they also cannot be moral agents. The understanding of “sentience” employed here is, however, conceptually and epistemically suspect, and I consequently argue that the Organic View does not provide sufficient proof that machines cannot be moral patients. Regardless of this finding, I further argue that the assumption on which the whole project rests is mistaken: being a moral patient is not a necessary precondition for being a moral agent. I further argue that, while machines may become moral patients in the future, they will certainly be moral agents (or even already are). It is often argued that machines cannot even be agents, let alone moral agents. I argue, however, that this view presupposes an understanding of “agency” that rests on unjustified metaphysical assumptions, even in the case of human agency. I discuss these shortcomings and then propose a more plausible view of agency, one that also leaves room for machine agency. While some thinkers concede that machines can indeed be agents, they nevertheless maintain that machines fall short as moral agents, since they do not possess the requisite phenomenal capacities.
This requirement is, however, undermined by the problem of other minds: we simply cannot establish whether anyone else (whether human or machine) possesses such phenomenal capacities. Against such intentional understandings of moral agency I propose a functionalist understanding, which makes room for machines as moral agents. My findings imply that in the future we will find ourselves in situations for which no human being is morally responsible, but a machine is. This responsibility is not affected by the machine’s capacity to be punished.
