Gabriela Arriagada Bruneau
Professor Vincent C. Müller – Technical University of Eindhoven (TU/e) – Inter-disciplinary Applied Ethics Centre (IDEA) – The Alan Turing Institute
Professor Mark Gilthorpe – Leeds Institute for Data Analytics (LIDA) – The Alan Turing Institute
One of the main discussions in the fields of data science and AI concerns how we can deal with bias and fairness, particularly given big data’s capacity to reflect societal biases. Biased datasets pose a threat: when used to train algorithmic models, they replicate or even escalate societal biases, making AI systems ‘unfair’ and discriminatory against minority groups, particularly when used in decision-making processes. Most efforts to overcome unfairness have considered technical de-biasing fixes, merely hinting at broader ethical implications without further theorizing how human/ethical notions of bias and fairness influence their technical counterparts.
My research aims to contribute to this debate by developing an ethical framework for data science based on the following analysis:
1. The theoretical distinctions on how to define/understand bias and fairness and their relation.
I claim that bias belongs to a different category from unfairness: AI systems, algorithms, or datasets cannot be unfair, but merely biased. My proposal is that the use and application of these terms have been conflated, with multiple references (technical and ethical) coexisting. This lack of theorization leaves bias defined merely as a contrast concept to fairness, overlooking elements of ethical fairness that are needed to develop robust ethical frameworks.
2. The practical and ethical concerns related to discrimination and injustice (inequality, data gaps, and systemic issues) can benefit from this theoretical distinction, helping to address inequality in the field.
The problem with the current bias/fairness conflation is that many technical solutions incorporate the limitations that bias-centric approaches entail. Introducing a firm notion of ethical fairness as a concept distinct from bias makes it possible to establish causal links to responsibility, judgement, and action that ‘bias-fixes’ cannot, enabling a broader and more profound analysis of the origins of injustice and its impact on data-driven endeavours.
3. Provide guidelines to deal with the elements of fairness and bias that unfold within the field, processes, and outcomes of data science.
Finally, the thesis not only works on a theoretical level to provide much-needed conceptual engineering, but also offers practical guidelines for applying these concepts. Furthermore, distinguishing bias from fairness, and introducing ethical fairness as a term in its own right, facilitates the discussion and analysis of proximal concepts such as transparency, interpretability, and explainability, which are often discussed within the scope of fairness but lack the clarity to differentiate the causal moral chain driving their impact.
Arriagada-Bruneau, G., Gilthorpe, M., & Müller, V. C. (2020). The ethical imperatives of the COVID-19 pandemic: A review from data ethics. Veritas, 46, 13-36. http://dx.doi.org/10.4067/S0718-92732020000200013
Arriagada-Bruneau, G. (2018). Do we have moral obligations towards future people? Addressing the moral vagueness of future environmental scenarios. Veritas, 40, 49-65. https://dx.doi.org/10.4067/S0718-92732018000200049
Denis G. Arnold (ed.) 2009, Ethics and the Business of Biomedicine. Cambridge: Cambridge University Press, in Dilemata Nº 20 (2016), pp. 125-131.
Susan Lufkin Krantz, Refuting Peter Singer’s Ethical Theory: The Importance of Human Dignity, Westport: Praeger, 155 pp., in Aporia N° 7 (2014), pp. 93-97.
in Spanish: http://ojs.uc.cl/index.php/aporia/issue/view/37
David A. Crocker, Enfrentando la desigualdad y la corrupción: Agencia, empoderamiento y desarrollo democrático [Original Title: Confronting inequality and corruption: Agency, empowerment, and democratic development], Veritas Nº34 (2016), pp. 65-76.
THE IDEA POD – Inter-disciplinary Applied Ethics Centre new podcast
I am a presenter on the IDEA Pod, a fortnightly podcast that explores and interrogates applied ethics across a range of contemporary issues. Here you can hear interviews on topical concerns for our society and, of course, for us as an applied ethics centre. Guests range from fellow postgraduate researchers and master’s students to professionals from the private and public sectors, covering a variety of topics such as medical ethics, data and technology, artificial intelligence, philosophy of love, aesthetics, and more.
Bringing Philosophy of Science and Technology closer to the Spanish-speaking world: FICICO
With fellow Latin American PhD students from the University of Cambridge and the University of Bristol, I launched an outreach initiative called FICICO (the Spanish acronym for Philosophy, Science, and Community) to feature fellow Hispanic academic researchers in the discipline, present their work to the public in their native language, and promote talks and workshops for the general public. This is an attempt to expand access to high-quality research beyond the Anglocentric academic world. You can find our interviews here: https://www.sochific.cl/ficico?lang=es (Spanish only)
My other profiles
Applied Ethics, Data Ethics, AI Ethics, Ethics and Technology, Bias, Fairness, Equality.
- MSc in Philosophy - University of Edinburgh
- BA in Philosophy - Pontifical Catholic University of Chile