by Benjamin P. Goldfein 18C
University of St Andrews ‘19, MLitt (Bobby Jones Fellow)
As a Philosophy major and Ethics minor, I used to struggle frequently with whether I would have been better off pursuing a STEM degree. This feeling stemmed from being constantly confronted by the narrative that the humanities are neither as rigorous nor as impactful as the hard sciences. However, I have since come to realize the value of the humanities beyond my own personal interest. I am indebted to the Fox Center for giving me the invaluable opportunity to surround myself with scholars who share the importance of the humanities with their colleagues and their communities.
This experience changed my perspective on humanistic study. Namely, humanities-focused scholars should not be 'on defense,' feeling the need to justify humanistic inquiry. Instead, we should be 'on offense,' taking pride in our work and its impact. Learning this lesson has been especially powerful, for it has instilled in me the confidence that I can pursue a career in Philosophy knowing that I will be able to use my research to positively affect people beyond academia. It has also influenced my decision to apply for a PhD in Philosophy, which I hope to begin in the Fall of 2019.
I have since been able to take these lessons and apply them to my own endeavors. For example, in my thesis, Virtuous Artificial Intelligence, I champion a philosophical approach to examining the socio-ethical implications of artificial intelligence (AI). I also challenge the notion that technology research involves only the hard sciences: research that concerns humanity must recognize its underlying humanistic elements in order to have the most desirable impact on society.
I organized my thesis into three distinct chapters: 1) Problematizing Artificial Intelligence, 2) Regarding Moral Sentience, and 3) Considering Ethical Systems. I begin by asserting that we must understand how to conceive of AI before we can sketch a blueprint of a future with person and non-person morally-sentient beings. I then argue that, due to recent explosive advancements in AI, we will soon reach a point when persons' and AIs' moral sentiences are indistinguishable. When this happens, AIs should abide by a system of ethics to ensure the protection of person and non-person morally-sentient beings. Furthermore, I assert that ethical morally-sentient AIs would necessarily follow a system of Aristotelian virtue ethics. Following a virtue-ethics system would equip AIs to balance competing obligations through creative, wise decision-making, or corrective prohairesis. I conclude by showing how an AI virtue ethicist would be able to learn prima facie virtues, thus lifting the supererogatory burden from programmers to code the 'perfectly-ethical' AI.
Now that my time at Emory is coming to a close, I can proudly say that being at the Fox Center has been one of the highlights of my undergraduate career. I am humbled to have been welcomed to such an inspiring community of scholars, and I am excited to continue the legacy of Bill and Carol Fox with promise and pride.
Benjamin Goldfein is a senior majoring in Philosophy and minoring in Ethics. His senior honors thesis champions a philosophical approach to examining the socio-ethical implications of artificial intelligence. Namely, Ben focuses on how recent technological advancements force us to reconsider what it means to 'be human' with regard to our personal identities and our relationships with other morally-sentient beings.