Zach Gudmunsen

Profile

I am researching whether artificial intelligences can have intuitions, particularly ethical intuitions.

AI systems are very capable, but humans have experiences, such as intuitions, that are difficult to understand from a first-person perspective and resistant to analysis. These experiences would be very valuable for an AI system, particularly my main focus: ethical intuitions. If an AI system can have ethical intuitions, it will be more trustworthy and effective when given tasks that require integration with humans. However, our typical approach of using computational methods to reproduce the functions of human abilities is unlikely to succeed here, because we lack an understanding of the exact mechanisms of intuitions.

My project explores two paths out of this: either we refine our understanding of intuitions and approximate them with our typical computational methods, or we develop an AI system that generates its own kind of ‘intuitions’ by simulating an environment that encourages ‘intuitive’ problem solving. If intuitions prove to be genuinely opaque to humans, the latter will be more effective. Where the barrier lies between what we can approximate with traditional AI systems and what requires other methods has important consequences for how we should plan progress in AI technology and where our focus should lie – this project aims to be a step towards defining that barrier.

Funding

I am grateful to the IDEA Centre for awarding me a studentship in applied ethics.

Research interests

Metaethics, Ethics, Ethics of Technology, Artificial Intelligence, Epistemology, Philosophy of Mind, Computationalism

Secondary interests:

Neoplatonism, Aesthetics

Qualifications

  • BA in Philosophy at the University of Liverpool (2013-2016)
  • MA in Philosophy at Bilkent University (2016-2018)