Kathleen A. Creel

Research Interests

My work broadly concerns philosophy of machine learning, ethics of AI, and general philosophy of science. I am interested in how humans can best use computation to understand themselves and their world. How can we gain scientific understanding with opaque, black-box computational methods? What ways of explaining machine learning best serve scientific and public life? How should supposedly autonomously generated concepts inspire us to revise our own? In answering these sorts of questions, I connect traditional topics in philosophy of science such as explanation, reference, and natural kinds with a practice-based approach to the study of methods in contemporary machine learning. By examining these epistemic and normative questions, I outline more fruitful uses of machine learning for human flourishing.

Philosophy of
Machine Learning

Deep neural networks are often thought to be opaque black boxes. However, the sense in which they are opaque is philosophically interesting, since every component of the system can be individually surveyed. What would it mean for such a system to be transparent? Is transparency required for trust? I explore explanatory strategies for opaque machine learning both as it is used in science and as it is used in public life. For an example of this work, see my paper on transparency below.
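As a rough illustration of the point about surveyability (a minimal PyTorch sketch of my own, not drawn from the paper below), every parameter of a network can be enumerated and printed individually, yet the resulting list of numbers does not by itself amount to an explanation of the model's behavior:

```python
# Illustration only: each weight and bias of a network is individually
# accessible, but component-level access is not the same as understanding.
import torch
import torch.nn as nn

# A small feed-forward network standing in for a much larger model.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

# Component-level "surveyability": every parameter can be listed and inspected.
total = 0
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
    total += param.numel()

print(f"{total} individually inspectable parameters")
```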
I am also interested in how machine learning influences scientific and social categories. When new features crosscut our existing scientific or social categories, how do we or should we revise our understanding of the world and its contents?

Ethics of
Artificial Intelligence

Answers to epistemic questions about transparency and explanation are relevant to the use of algorithmic decision-making in public life. I consider case studies of non-state use of automated decision-making, such as automated hiring systems and loan approval algorithms. What implications do adversarial examples and other known features of deep learning systems have for transparency and fairness? A paper from this project has been accepted at ACM FAccT 2021.
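For readers unfamiliar with adversarial examples, the sketch below shows the standard fast-gradient-sign recipe in PyTorch. It is purely illustrative (the toy classifier and the value of epsilon are my own assumptions, not from the paper): a small, targeted perturbation of the input, built from the model's own gradients, can change a trained model's prediction even though the input looks essentially unchanged.

```python
# Illustration only: a fast-gradient-sign perturbation on a toy classifier.
# With a trained model and a small epsilon, such a perturbation can flip the
# prediction; with this untrained stand-in the flip is not guaranteed.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                      # stand-in classifier
x = torch.randn(1, 10, requires_grad=True)    # stand-in input
label = torch.tensor([0])

loss = nn.CrossEntropyLoss()(model(x), label)
loss.backward()

epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()           # small, gradient-guided nudge

print("original prediction: ", model(x).argmax().item())
print("perturbed prediction:", model(x_adv).argmax().item())
```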
Whether or not these systems are truly black boxes, if they are treated as such, we may come to trust them on a testimonial basis. I explore questions of machine testimony and of appropriate trust in automated decision-making systems.

General Philosophy of Science

Machine learning is only one species of a genus of scientific methods for finding patterns in data. This pattern-finding capacity is often thought to support the discovery of scientific phenomena, or the recognition of patterns that reflect activity and causal processes in the world rather than noise or instrument-caused artifacts of the data. In my work in general philosophy of science, I investigate how scientists distinguish signal from noise and phenomena from artifacts. I am also interested in the normativity of scientific beliefs. Arguments for popular forms of scientific explanation such as mechanistic explanation implicitly rely on normative theories of epistemic reason-giving. I am interested in borrowing tools from metaethics to examine the nature and normativity of scientific belief formation.

History of Philosophy

Google’s Ali Rahimi has called machine learning a "new alchemy": a pre-paradigmatic science whose notable successes outstrip the scientific theory meant to explain them. Early modern "natural philosophers" like Bacon, Boyle, and du Châtelet faced a similar gap between their practical ability to predict or control and their capacity to explain those successes with existing scientific theories. In this gap flowered an integrated pursuit of observation, experimentation, epistemology, and metaphysics. Lessons from this period, especially the methodological pursuits of Scottish enlightenment scientists such as Joseph Black and James Hutton, inform my work.
Likewise, machine learning holds out the promise, or perhaps illusion, that our technology-enhanced capacities can outstrip the human -- that we can get outside ourselves. My research considers how to make automated decision-making systems more fair and just while grounding them in a naturalistic understanding of human sympathy and social relationships. This work focuses on early modern sentimentalists such as David Hume, Adam Smith, and Sophie de Grouchy. For example, Hume's theory of justice and the caprice of power underlies my most recent manuscript, "The Algorithmic Leviathan," concerning arbitrariness in automated decision-making systems.

Papers

Transparency in Complex Computational Systems
Philosophy of Science (October, 2020, Volume 87 Issue 4)

Abstract: Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have suggested treating opaque systems instrumentally, but computer scientists developing strategies for increasing transparency are correct in finding this unsatisfying. Instead, I propose an analysis of transparency as having three forms: transparency of the algorithm, the realization of the algorithm in code, and the way that code is run on particular hardware and data. This targets the transparency most useful for a task, avoiding instrumentalism by providing partial transparency when full transparency is impossible.

The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision Making, with Deborah Hellman
ACM FAccT 2021; Canadian Journal of Philosophy (forthcoming)

Abstract: Automated decision-making systems implemented in public life are typically standardized. One algorithmic decision-making system can replace thousands of human deciders. Each of the humans so replaced had her own decision-making criteria: some good, some bad, and some arbitrary. Is such arbitrariness of moral concern? We argue that an isolated arbitrary decision need not morally wrong the individual whom it misclassifies. However, if the same algorithms are applied across a public sphere, such as hiring or lending, a person could be excluded from a large number of opportunities. This harm persists even when the automated decision-making systems are "fair" on standard metrics of fairness. We argue that such arbitrariness at scale is morally problematic and propose technically informed solutions that can lessen the impact of algorithms at scale and so mitigate or avoid the moral harms we identify.
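The following toy simulation (my own sketch, not from the paper, with all quantities chosen arbitrarily) illustrates the arbitrariness-at-scale worry: when many deciders each have their own idiosyncratic biases, no single applicant is shut out everywhere, but when one standardized model is applied across every opportunity, the same applicants are excluded from all of them.

```python
# Toy simulation: 100 applicants face 1,000 opportunities, decided either by
# 1,000 independent human deciders or by one standardized algorithm.
import random

random.seed(0)
applicants = list(range(100))
n_opportunities = 1000

# Many human deciders: each has different arbitrary dislikes, so an applicant
# rejected by one decider is usually accepted by others.
rejections_humans = {a: 0 for a in applicants}
for _ in range(n_opportunities):
    disliked = random.sample(applicants, k=20)   # this decider's arbitrary dislikes
    for a in disliked:
        rejections_humans[a] += 1

# One standardized algorithm applied everywhere: the same 20 applicants are
# rejected from every single opportunity.
algorithmic_dislikes = set(random.sample(applicants, k=20))
rejections_algo = {a: (n_opportunities if a in algorithmic_dislikes else 0)
                   for a in applicants}

print("max rejections under diverse deciders:", max(rejections_humans.values()))
print("max rejections under one algorithm:  ", max(rejections_algo.values()))
```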

On the Opportunities and Risks of Foundation Models, with Rishi Bommasani et al. (see full author list at link)
arXiv (2021)

Abstract: AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.

Clinical decisions using AI must consider patient values, with Jonathan Birch, Abhinav Jha, and Anya Plutynski
Nature Medicine (2022)

Abstract: Built-in decision thresholds for AI diagnostics are ethically problematic, as patients may differ in their attitudes about the risk of false-positive and false-negative results, which will require that clinicians assess patient values.
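A small sketch (my own, not from the paper, with simulated scores and thresholds chosen only for illustration) shows why a built-in threshold embeds a value judgment: moving a diagnostic model's cutoff trades missed cases against false alarms, and how that trade-off should be struck depends on how a particular patient weighs the two kinds of error.

```python
# Illustration only: the same risk-score model under three different decision
# thresholds. Lower thresholds miss fewer cases but raise more false alarms.
import random

random.seed(0)

# Simulated risk scores: diseased patients tend to score higher than healthy ones.
diseased = [random.gauss(0.7, 0.15) for _ in range(1000)]
healthy  = [random.gauss(0.4, 0.15) for _ in range(1000)]

for threshold in (0.3, 0.5, 0.7):
    false_negatives = sum(score < threshold for score in diseased)
    false_positives = sum(score >= threshold for score in healthy)
    print(f"threshold {threshold:.1f}: "
          f"{false_negatives} missed cases, {false_positives} false alarms")
```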