Kathleen A. Creel

Research Interests

My work broadly concerns philosophy of machine learning, ethics of AI, and general philosophy of science. I am interested in how humans can best use computation to understand themselves and their world. How can we gain scientific understanding with opaque, black-box computational methods? What ways of explaining machine learning best serve scientific and public life? How should supposedly autonomously generated concepts inspire us to revise our own? In answering these sorts of questions, I connect traditional topics in philosophy of science such as explanation, reference, and natural kinds with a practice-based approach to the study of methods in contemporary machine learning. By examining these epistemic and normative questions, I outline more fruitful uses of machine learning for human flourishing.

Philosophy of Machine Learning

Deep neural networks are often thought to be opaque black boxes. However, the sense in which they are opaque is philosophically interesting, since every component of the system can be individually surveyed. What would it mean for such a system to be transparent? Is transparency required for trust? I explore explanatory strategies for opaque machine learning both as it is used for science and as it is used in public life. For an example of this work see my paper on transparency below.
I am also interested in how machine learning influences scientific and social categories. When new features crosscut our existing scientific or social categories, how do we or should we revise our understanding of the world and its contents?

Ethics of Artificial Intelligence

Answers to epistemic questions about transparency and explanation are relevant to the use of algorithmic decision-making in public life. I consider case studies of non-state use of automated decision-making, such as automated hiring systems and loan approval algorithms. What implications do adversarial examples and other known features of deep learning systems have for transparency and fairness? A paper from this project (abstract) has been accepted at ACM FAccT 2021.
Whether or not these systems are black boxes, if they are treated as such we may come to trust them on a testimonial basis. I explore questions of machine testimony and of appropriate trust in automated decision-making systems.

General Philosophy of Science

Machine learning is only one species of a genus of scientific methods for finding patterns in data. This pattern-finding capacity is often thought to support the discovery of scientific phenomena, or the recognition of patterns that reflect activity and causal processes in the world rather than noise or instrument-caused artifacts of the data. In my work in general philosophy of science, I investigate how scientists distinguish signal from noise and phenomena from artifacts. I am also interested in the normativity of scientific beliefs. Arguments for popular forms of scientific explanation, such as mechanistic explanation, implicitly rely on normative theories of epistemic reason-giving. I am interested in borrowing tools from metaethics to examine the nature and normativity of scientific belief formation.

History of Philosophy

Google’s Ali Rahimi has called machine learning a "new alchemy": a pre-paradigmatic science whose notable successes outstrip the scientific theory meant to explain them. Early modern "natural philosophers" like Bacon, Boyle, and du Châtelet faced a similar gap between their practical ability to predict or control and their capacity to explain those successes with existing scientific theories. In this gap flowered an integrated pursuit of observation, experimentation, epistemology, and metaphysics. Lessons from this period, especially the methodological pursuits of Scottish Enlightenment scientists such as Joseph Black and James Hutton, inform my work.
Likewise, machine learning holds out the promise, or perhaps illusion, that our technology-enhanced capacities can outstrip the human: that we can get outside ourselves. My research considers how to make automated decision-making systems more fair and just while grounding them in a naturalistic understanding of human sympathy and social relationships. This work focuses on early modern sentimentalists such as David Hume, Adam Smith, and Sophie de Grouchy. For example, Hume's theory of justice and the caprice of power underlies my most recent manuscript, "The Algorithmic Leviathan," on arbitrariness in automated decision-making systems.


Transparency in Complex Computational Systems
Philosophy of Science, Volume 87, Issue 4 (October 2020)

Abstract: Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have suggested treating opaque systems instrumentally, but computer scientists developing strategies for increasing transparency are correct in finding this unsatisfying. Instead, I propose an analysis of transparency as having three forms: transparency of the algorithm, the realization of the algorithm in code, and the way that code is run on particular hardware and data. This targets the transparency most useful for a task, avoiding instrumentalism by providing partial transparency when full transparency is impossible.