Department of Philosophy
Browsing Department of Philosophy by advisor "De Villiers-Botha, Tanya"
- Item: Attenuating the problem of moral luck : how moral luck either does not exist or does not create a paradox for our moral systems (Stellenbosch : Stellenbosch University, 2020-03) Bock, Ivan; De Villiers-Botha, Tanya; Stellenbosch University. Faculty of Arts and Social Sciences. Dept. of Philosophy.
  ENGLISH ABSTRACT: In the 1970s, Bernard Williams and Thomas Nagel formally introduced the problem of moral luck. Moral luck can be understood as the seeming paradox between the control principle and the moral judgements we confer on others. The control principle states that an agent can be held morally responsible for an action if, and only if, said agent had control over it. Contrary to this, we often do judge people for many things out of their control. The consequences of our actions, the circumstances we find ourselves in, and our own characters are all things we either wholly or partially lack control over; yet we hold people responsible for these things. This lack of control and the accompanying moral judgements are what is referred to as “moral luck”, and we must therefore conclude either that agents cannot be held responsible for their actions, or that we can hold people responsible for things out of their control, both of which are framed as problems. Here, I will attempt to provide a solution to the problem of moral luck. I will do this by discussing some of the most influential writings on the problem, with each section of the thesis focusing on a separate type of luck and addressing the mistakes philosophers have made in inferring that moral luck is real. I will argue that each type of moral luck only exists because we have misunderstood important concepts, and that once we revise our conceptions of control, agency, and responsibility, the problem of moral luck disappears. In particular, I will argue that 1) resultant luck is only a problem because we focus on the consequences of actions rather than the intentions of the agent, 2) circumstantial luck is only a problem because we fallaciously transfer the luck of the world onto moral considerations, and 3) constitutive luck is only a problem because we misapply the concept of control to character. The thesis will also include a section on the relevant implications, both theoretical and practical, should I be successful in solving the paradox. My conclusion will thus be, contrary to the thesis of moral luck, that we can still hold agents morally responsible without having to reject the control principle; however, this is only possible if we accept revisions to important moral concepts.
- Item: Autonomous weapons systems: the permissible use of lethal force, international humanitarian law and arms control (Stellenbosch : Stellenbosch University, 2017-12) Herbert, Carmen Kendell; De Villiers-Botha, Tanya; Stellenbosch University. Faculty of Arts and Social Sciences. Dept. of Philosophy.
  ENGLISH SUMMARY: This thesis examines both the ethical and legal issues associated with the use of fully autonomous weapons systems. It addresses, firstly, the question of whether an autonomous weapon may lawfully use lethal force against a target in armed conflict, given the constraints of International Humanitarian Law, and, secondly, the question of the appropriate loci of responsibility for the actions of such machines. The dissertation first clarifies the terminology associated with autonomous weapons systems, which includes a discussion of artificial intelligence, the difference between automation and autonomy, and the difference between partially and fully autonomous systems. The structure is such that the legal question of the permissible use of lethal force is addressed first, including a discussion of the current International Humanitarian Law requirements of proportionality and distinction. Thereafter follows a discussion of potential candidates for responsibility (and consequently liability) for the actions of autonomous weapons that violate the principles of International Humanitarian Law. Addressing these questions is critical if we are to decide whether to use these weapons and how we could use them in a manner that is both legal and ethical. The position taken here is that the use of autonomous weapons systems is inevitable, and thus that the best strategy to ensure compliance with International Humanitarian Law is to forge arms control measures that address the associated issues explored in this dissertation. The ultimate aim in asking these legal and ethical questions is to draw attention to areas where the law is currently underequipped to deal with this new technology, and thus to make recommendations for future legal reform to control the use of autonomous weapons systems and ensure compliance with the existing principles of International Humanitarian Law.
- Item: Being harmed and harming: government responsibility for inadequate healthcare (Stellenbosch : Stellenbosch University, 2021-12) Komu, Philbert Joseph; Roodt, Vasti; De Villiers-Botha, Tanya; Stellenbosch University. Faculty of Arts and Social Sciences. Dept. of Philosophy.
  ENGLISH ABSTRACT: Despite the importance of the concept of harm in formal and applied ethics, the concept itself has received comparatively little attention. This dissertation aims to develop a concept of harm that can carry the weight of the moral arguments that rely on it. It is generally considered wrong to harm others, and good to act in ways that avoid, prevent, or lessen harm to them. Yet we are often hard-pressed to say what it is for a thing to be harmed, or to cause harm. Traditionally, there are non-comparative accounts of harm, whose generalised view is that harms are at the same time intrinsic bads, and comparative accounts, which commonly view harms as events that leave us worse off than we historically were before their occurrence or than we would counterfactually have been had the events not occurred. On the one hand, both of these accounts are inconsistent with some of our moral intuitions about harm; on the other, they do accept – but fail to show why – it is impossible to think of harms that are not bad for their victims in some respect. In this dissertation, I defend the concept of harm as prudential disvalue, which coherently holds that harms are bad for their victims from their own perspective. In order to avoid being radically subjective and including trivial things under the definition of harm, I adopt the “appealing-life view”, arguing that harms are things that detract from the appeal-worthiness of being in someone’s position. I then apply this revised concept of harm to a real-world example. I show why the lack of access to adequate healthcare services in Tanzania is a harm, in what respects the Tanzanian government is responsible for this harm, and why this harm is not justified, which renders the government morally blameworthy. In the domain of formal moral theory, the dissertation contributes to the scholarly literature on the problem of harm. In the field of applied ethics, the dissertation helps us to understand not only the nature of the hardships endured by people within an inadequately-resourced and -managed healthcare system, but also the responsibility for the harm suffered as a result of these institutional failures.
- Item: Complexity and hermeneutic phenomenology (Stellenbosch : Stellenbosch University, 2008-12) Collender, Michael; Van der Merwe, W.; De Villiers-Botha, Tanya; Stellenbosch University. Faculty of Arts and Social Sciences. Dept. of Philosophy.
  This thesis argues that the study of the brain as a system, which includes the disciplines of cognitive science and neuroscience, is a kind of textual exegesis, like literary criticism. Drawing on research in scientific modeling in the 20th and early 21st centuries, advances in nonlinear science, work in both cognitive science and neuroscience, and the writings of Aristotle, Saussure, and Paul Ricoeur, I argue that the parts of the brain have multiple functions, just as words have multiple uses. Ricoeur, through Aristotle, argues that words only have meaning in the act of predication, the sentence. Likewise, a brain act must corporately employ a certain set of parts in the brain system. Using Aristotle, I make the case that human cognition cannot be reduced to mere brain events, because the parts, the whole, and the context are integrally important to understanding the function of any given brain process. It follows that to understand any given brain event we need to know the fullness of human experience as lived experience, not lab experience. Science should progress from what is best known to what is least known. The methodology of reductionist neuroscience does the exact opposite, at times leading to the denial of personhood or even intelligence. I advocate that the relationship between the phenomenology of human experience (which Merleau-Ponty famously explored) and brain science should be that of data to model. When neuroscience interprets the brain as separated from the lived human world, it “reads into the text”, in a sense. The lived human world must intersect intimately with whatever the brain and body are doing. The cognitive science research project has traditionally required the researcher to artificially segment human experience into its pure material constituents and then reassemble it. Is the creature reanimated at the end of these dissections really human consciousness? I suggest that we not assemble the whole out of the parts; rather, human brain science should be an exegesis inward. Brain activities are thus aspects of human acts, because they are performed by humans, as humans, and interpreting them is a human activity.
- Item: Complexity, peacebuilding and coherence : implications of complexity for the peacebuilding coherence dilemma (Stellenbosch : Stellenbosch University, 2012-12) De Coning, Cedric Hattingh; De Villiers-Botha, Tanya; Jordaan, Barney; Stellenbosch University. Faculty of Arts and Social Sciences. Dept. of Philosophy.
  ENGLISH ABSTRACT: This dissertation explores the utility of Complexity studies for improving our understanding of peacebuilding and the coherence dilemma, which is regarded as one of the most significant problems facing peacebuilding interventions. Peacebuilding is said to be complex, and this study investigates what this implies and asks whether Complexity could be of use in improving our understanding of the assumed causal link between coherence, effectiveness and sustainability. Peacebuilding refers to all actions undertaken by the international community and local actors to consolidate the peace – to prevent a (re)lapse into violent conflict – in a given conflict-prone system. The nexus between development, governance, politics and security has become a central focus of the international effort to manage transitions, and peacebuilding is increasingly seen as the collective framework within which these diverse dimensions of conflict management can be brought together. The coherence dilemma refers to the persistent gap between policy-level assumptions about the value and causal role of coherence in the effectiveness of peacebuilding and empirical evidence to the contrary from peacebuilding practice. The dissertation argues that the peacebuilding process is challenged by enduring and deep-rooted tensions and contradictions, and that there are thus inherent limits and constraints on the degree to which coherence can be achieved in any particular peacebuilding context. On the basis of the application of the general characteristics of Complexity to peacebuilding, the following three recommendations reflect the core findings of the study: (1) Peacebuilders need to concede that they cannot, from the outside, definitively analyse complex conflicts and design ‘solutions’ on behalf of a local society. Instead, they should facilitate inductive processes that allow knowledge to emerge from the local context, and such knowledge needs to be understood as provisional and subject to a continuous process of refinement and adaptation. (2) Peacebuilders have to recognise that self-sustainable peace is directly linked to, and influenced by, the extent to which a society has the capacity, and space, to self-organise. For peace consolidation to be self-sustainable, it has to be the result of a home-grown, bottom-up and context-specific process. (3) Peacebuilders need to acknowledge that they cannot defend the choices they make on the basis of pre-determined models or lessons learned elsewhere. The ethical implications of their choices have to be considered in the local context, and the effects of their interventions – intended and unintended – need to be continuously assessed against the lived experience of the societies they are assisting. Peacebuilding should be guided by the principle that those who will have to live with the consequences should have the agency to make decisions about their own future. The art of peacebuilding lies in pursuing the appropriate balance between international support and home-grown solutions. The dissertation argues that the international community has, to date, failed to find this balance. As a result, peacebuilding has often contributed to the very societal weaknesses and fragilities that it was meant to resolve. On the basis of these insights, the dissertation concludes with a call for a significant re-balancing of the relationship between international influence and local agency, where the role of the external peacebuilder is limited to assisting, facilitating and stimulating the capacity of the local society to self-organise. The dissertation thus argues for reframing peacebuilding as something that must be essentially local.
- Item: The deliberate design argument for the predictive success of science (Stellenbosch : Stellenbosch University, 2023-07) Knoetze, Fred; De Villiers-Botha, Tanya; Stellenbosch University. Faculty of Arts and Social Sciences. Dept. of Philosophy.
  ENGLISH ABSTRACT: In this thesis I offer an antirealist, non-truth-based account of the predictive success of science. This is in direct contrast to classic scientific realism, in which predictive success is attributed to the approximate truth of scientific theories. I start by giving an overview of the history of scientific realism, the role of the no-miracles argument, and several critiques of scientific realism. The critiques include both traditional antirealist arguments against realism, like the underdetermination of theory by evidence, and more contemporary critiques, like the base-rate fallacy. Following these critiques, I begin to lay out an alternative to a truth-based account of predictive success. Instead of focusing on the approximate truth of our theories, I suggest that the scientific method itself acts as a kind of cognitive tool. I define what a cognitive tool is and how it might develop with reference to three theories: radical constructivism, evolutionary epistemology, and pragmatism. I argue that the scientific method as a cognitive tool is aimed not at delivering approximately true theories, but rather at delivering theories that enable us to reliably causally influence the external world. Having established a potential alternative account of the predictive success of science, I elaborate on what I call the deliberate-design argument. I distinguish this from other antirealist explanations, specifically van Fraassen’s constructive empiricism and surrealism. I then establish the metaphysical, epistemological, and semantic stances of this explanation for predictive success. Metaphysically, I argue that the mind-independent world is primarily causally accessible. Epistemically, I argue that we can know that our theories can lead to predictive success, but not that they are approximately true. Semantically, I argue that the primary purpose of theories is to provide reproducible steps for the successful causal influence of external reality. I then address some anticipated objections, including: whether the scientific method selects for anything but approximate truth, the value of novel predictive success for establishing a theory’s approximate truth and, lastly, the threat of epistemic relativism. Ultimately, this thesis is intended to argue against classical scientific realism and the role approximate truth plays in its explanation of predictive success. The deliberate design argument is intended as an antirealist alternative explanation of predictive success that does not require our theories to be approximately true of the external world.
- Item: The moral community and moral consideration : a pragmatic approach (Stellenbosch : Stellenbosch University, 2015-04) Stephens, Christopher; De Villiers-Botha, Tanya; Stellenbosch University. Faculty of Arts and Social Sciences. Dept. of Philosophy.
  ENGLISH ABSTRACT: The aim of this thesis is to argue for a new metric for determining the moral status of another being. Determining this status is of foundational importance in a number of legal, political, and ethical concerns, including but not limited to animal rights, the treatment of criminals, and the treatment of the psychologically afflicted. This metric will be based upon one’s capacity to morally consider others. In other words, in order to have full moral status, one must be able to have moral concern for others and act upon this concern to even a minimal degree. In doing so, one will be considered to belong to a “moral community”, which affords the member a certain set of rights, privileges, and duties towards other community members. Arguing for the existence of such a community achieves the pragmatic aspect of this thesis. I argue that morality is geared towards group-survival strategies that have been evolutionarily selected for, and thus that by organizing societal structures around the tools with which nature has armed us, we may maximize the powers and capacities of community members. In order to achieve these aims, I defend a concept of morality as based in emotion and requiring certain neurological structures, which gives the first set of criteria for identifying potential members of the moral community. I then discuss the issue of identifying the capacity for morality in non-human minds, arguing that we may infer moral capacities from behaviour. In summary, the findings of this thesis are, first, that morality is essentially emotional in nature and is a product of the nature of our neurological system, although rational processes and enculturation shape particular moral sensitivities and priorities. Second, one can infer the existence of moral capacities in animals from their behaviour, and, at the risk of engaging in anthropomorphism, to deny these capacities completely entails solipsism. Thirdly, and most importantly, those who are capable of morally considering others ought to be afforded full moral status themselves and be brought into a “moral community” wherein special rights, freedoms, and privileges allow the members to most efficiently contribute to the community, maximizing the powers and benefits of the community.
- Item: Moral encounters of the artificial kind : towards a non-anthropocentric account of machine moral agency (Stellenbosch : Stellenbosch University, 2019-12) Tollon, Fabio; De Villiers-Botha, Tanya; Stellenbosch University. Faculty of Arts and Social Sciences. Dept. of Philosophy.
  ENGLISH ABSTRACT: The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary condition for an entity to qualify as a moral agent. This view claims that because artificial agents (AAs) lack sentience, they cannot be proper subjects of moral concern and hence cannot be considered to be moral agents. I raise conceptual and epistemic issues with regards to the sense of sentience employed on this view, and I argue that the Organic View does not succeed in showing that machines cannot be moral patients. Nevertheless, irrespective of this failure, I also argue that the entire project is misdirected in that moral patiency need not be a necessary condition for moral agency. Moreover, I claim that whereas machines may conceivably be moral patients in the future, there is a strong case to be made that they are (or will very soon be) moral agents. Whereas it is often argued that machines cannot be agents simpliciter, let alone moral agents, I claim that this argument is predicated on a conception of agency that makes unwarranted metaphysical assumptions even in the case of human agents. Once I have established the shortcomings of this “standard account”, I move to elaborate on other, more plausible, conceptions of agency, on which some machines clearly qualify as agents. Nevertheless, the argument is still often made that while some machines may be agents, they cannot be moral agents, given their ostensible lack of the requisite phenomenal states. Against this thesis, I argue that the requirement of internal states for moral agency is philosophically unsound, as it runs up against the problem of other minds. In place of such intentional accounts of moral agency, I provide a functionalist alternative, which makes conceptual room for the existence of AMAs. The implications of this thesis are that at some point in the future we may be faced with situations for which no human being is morally responsible, but a machine may be. Moreover, this responsibility holds, I claim, independently of whether the agent in question is “punishable” or not.
- Item: Privacy as a common good in the age of big data (Stellenbosch : Stellenbosch University, 2022-04) Roux, Josephine Anne; De Villiers-Botha, Tanya; Stellenbosch University. Faculty of Arts and Social Sciences. Dept. of Philosophy.
  ENGLISH SUMMARY: In this thesis, I support the claim that Big Data poses a significant threat to liberal democracy through its violation of citizens’ privacy, and I consequently argue that in order to address this threat, it is necessary to re-assess the way that we value privacy in a liberal society. Big Data's role in the erosion of liberal democracy has increasingly been raised in the media and the philosophical literature, but the precise role that violations of privacy play in undermining democracy is not always clearly spelt out. I unpack the claims made in this regard and show that the central issue is that Big Data threatens democracy because of its unique ability to undermine citizens’ autonomy. I go on to show that it is able to do so thanks to its unprecedented, large-scale and consistent invasions of privacy. That is to say, I show how, by invading privacy, Big Data can and does undermine autonomy. And, as I will argue, without an autonomous citizenry, liberal democracy cannot thrive. Having made the argument that democracy is under threat because of Big Data’s erosion of autonomy through privacy-invasions, I go on to assess arguments for valuing privacy as a public good. I show the limitations that stem from viewing privacy as a public good, and I conclude that in the Age of Big Data, it is crucial that we view privacy as a common good instead. I argue that the traditional evaluation of privacy as an individual good is a central obstacle in the struggle to address privacy-invasions in the Age of Big Data. Hence, in order to protect our privacy and, ultimately, liberal democracy, we need to reconceive of the value of privacy.
- Item: Toward a naturalistically explicable folk psychology (Stellenbosch : Stellenbosch University, 2019-12) Hougaard, Deryck Simon; De Villiers-Botha, Tanya; Stellenbosch University. Faculty of Arts and Social Sciences. Dept. of Philosophy.
  ENGLISH ABSTRACT: This thesis investigates the gap between what our science says about mental states and how many theorists and everyday people have characterised our conception of them. I argue that looking at our folk psychology (FP) in light of an understanding of real-world, current science yields beneficial philosophical results. FP, in the iteration that I shall be concerned with here, refers to every person’s ability to apply reason explanations to conspecifics’ behaviour. I focus predominantly on the underlying processes that we allegedly pick out in our folk psychologising, those of beliefs and desires (the propositional attitudes). The reason for this focus lies in the gap between our intuitive beliefs about and understanding of our mental processes and the picture painted by the empirical sciences. I first explore some issues concerning traditional theorising on the topic, before discussing current scientific research into our cognitive processes in the form of predictive processing (PP), as advocated by Friston (2003, 2008, 2010), Hohwy (2013), and Clark (2016). PP depicts our brains not as passive, stimulus-driven organs, but as active constructors of our environment. An implication of this approach is that the way in which we represent the environment within our minds is different to how it is typically conceived within traditional FP. I also explore Hutto’s (2008a; Hutto & Myin 2013a, 2017) claim that our minds are wired to be attuned to the environment in terms of minimal content, which allows me to develop a minimal conception of representation in terms of content, one in which direct correspondence between mental states and the supposed representation of the environment need not obtain for the mental to do causally efficacious work. I conclude that the beliefs and desires utilised in FP are socio-cultural impositions upon the neural substrate with no counterpart in reality. This has clear implications for our understanding of how we think about mental states within the cognitive sciences and philosophy of mind, if what we are aiming toward is a clarification of just what the mental is. Additionally, these new insights may ensure that the cognitive sciences are better informed about what it is that is being explained and where to focus further research concerning the mental.