Conference Program


April 14th, 2022



9:00 - 10:30 AM

Contemporary thought about social change is conditioned in large part by two dogmas. One is a belief in some fundamental cleavage between individualistic routes to social change, by shaping “hearts and minds,” and structuralist routes to social change, by focusing on systems and institutions. The other dogma is foundationalism: the belief that change begins with education or wealth distribution or spiritual revival or [fill in your preferred approach]. Both dogmas, I shall argue, are ill-founded. One effect of abandoning them is, as we shall see, a blurring of the supposed boundary between “micro” and “macro” social sciences. Another effect is a shift toward holism about interventions: changing minds requires changing systems, and changing systems requires changing minds.

Michael Brownstein

10:40 - 11:40 AM

Large language models (LLMs) reflect, and can potentially perpetuate, social biases in language use. Conceptual engineering aims to revise our concepts to eliminate such bias. We show how machine learning and conceptual engineering can be fruitfully brought together to offer a new perspective on what conceptual engineering involves and how it can be implemented. Specifically, we show that LLMs reveal bias in the prototypes associated with concepts, and that LLM "de-biasing" can serve conceptual engineering projects that aim to revise such conceptual prototypes. At present, these de-biasing techniques primarily involve tactical approaches, requiring bespoke interventions based on choices made by the algorithm's designers. Thus, conceptual engineering through de-biasing will include making choices about what kind of "normative training" an LLM should receive, especially with respect to different notions of bias.

Rachel Rudolph, Elay Shech, and Mike Tamir
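
As a concrete illustration of the kind of prototype bias the abstract describes, the sketch below probes a masked language model for gendered associations with occupation words. This is a minimal sketch, not the authors' method; the model name (bert-base-uncased), the probe sentences, and the pronoun targets are assumptions chosen only for illustration.

```python
# Illustrative sketch (not the authors' method): probing gendered prototype
# associations in a masked language model via Hugging Face transformers.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

probes = [
    "The nurse said that [MASK] would be late.",
    "The engineer said that [MASK] would be late.",
]

for sentence in probes:
    # Restrict scoring to the two candidate pronouns we want to compare.
    results = unmasker(sentence, targets=["he", "she"])
    scores = {r["token_str"]: r["score"] for r in results}
    print(sentence)
    print(f"  P(he) = {scores.get('he', 0.0):.4f}, P(she) = {scores.get('she', 0.0):.4f}")
```

De-biasing interventions of the sort discussed in the talk would aim to shift such prototype probabilities; how they should be shifted is precisely the normative question the authors raise.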

11:40 AM - 1:00 PM

Lunch

1:00 - 2:30 PM

Much has been said and written about biases in data and how to "fix" them. It is unwise to focus only on the biases in data and not on the complex processes that generate the data. In this talk, I will briefly discuss complex systems and how studying them and their processes gives us insights into biases. If time permits, I will also present a study of how AI researchers describe and respond to the negative impacts of their work on society.

Tina Eliassi-Rad

2:40 - 3:40 PM

Paradigmatically, bias involves a person’s prejudicial mental attitudes towards a certain group. In these paradigm cases, we intuitively attribute some sort of moral failing to the biased agent. However, there are other non-paradigm types of action that I argue are best conceptualized as biased, too. For instance, a person working for a nonprofit, who has no prejudicial mental attitudes, might follow guidelines that result in differential resource distributions between men and women. Or, an algorithm that predicts prison recidivism might generate outputs that are biased against certain racial groups.

These examples indicate the need for a broad definition of bias. Social cognition research suggests people respond defensively when they are given feedback that their behavior is biased (e.g., Vitriol & Moskowitz, 2021). This defensiveness is sensible, given that people also respond to evidence of bias with moral outrage. However, outrage is lessened when the bias is attributed to an algorithm rather than a person (Bigman et al., 2022). In this paper, I suggest that we are relatively ill-equipped to detect bias when it operates on a systemic rather than a personal level. This tendency creates issues, as we may assess inequitable distributions as merely unfortunate, not biased.

I argue that Mohseni et al.’s experimental economics paper on the Red King effect provides evidence that inequitable distributions between majority and minority groups can arise without prejudicial attitudes, and that construing these distributions as biased is productive. I also evaluate plausible explanations for our decreased moral outrage regarding biased distributions resulting from algorithmic rather than human decision-making. Ultimately, I argue that understanding bias more expansively will allow us to combat systemic bias more effectively.

Manon Andre de St Amant

3:40 - 4:00 PM

Coffee Break

4:00 - 5:30 PM

How deep are the bounds on human thinking and feeling, and how do they shape social judgments? For the past 40 years, I have studied attitudes, beliefs, and identities that operate relatively outside an individual’s conscious awareness or control. We now know that decisions guided by implicit cognition may not be in one’s own interest and are often at odds with consciously expressed values. Using this dissociation as a starting point, we have explored facets of the nature of implicit social cognition: its universality and cultural variations, its developmental origins, its neural underpinnings, and its stability and malleability. New evidence from many labs now demonstrates that regional implicit bias is robustly correlated with socially significant outcomes in domains such as health, education, and law enforcement. I will focus on research underway in the lab using NLP approaches (word embeddings) to interrogate large language corpora (Common Crawl, Google Books) in their role as creators and reflectors of mental content in individual minds.

Mahzarin Banaji
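
For readers unfamiliar with the word-embedding approach mentioned in the abstract above, the sketch below computes a WEAT-style effect size in the spirit of the Word Embedding Association Test (Caliskan et al., 2017). It is illustrative only: the toy random vectors stand in for embeddings that, in actual studies, would be trained on corpora such as Common Crawl or Google Books, and the word sets are placeholders.

```python
# Illustrative WEAT-style association test. The toy vectors below are invented
# for demonstration; real analyses use embeddings trained on large corpora.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B, vecs):
    # Mean similarity of word w to attribute set A minus attribute set B.
    return (np.mean([cosine(vecs[w], vecs[a]) for a in A])
            - np.mean([cosine(vecs[w], vecs[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vecs):
    # Standardized difference in association between target sets X and Y.
    x_assoc = [association(x, A, B, vecs) for x in X]
    y_assoc = [association(y, A, B, vecs) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

# Toy embeddings (purely illustrative placeholders).
rng = np.random.default_rng(0)
vocab = ["flower", "insect", "music", "weapon",
         "pleasant", "unpleasant", "love", "hate"]
vecs = {w: rng.normal(size=50) for w in vocab}

score = weat_effect_size(
    X=["flower", "music"], Y=["insect", "weapon"],
    A=["pleasant", "love"], B=["unpleasant", "hate"], vecs=vecs,
)
print(f"WEAT effect size: {score:.3f}")
```

A positive effect size indicates that the first target set is more strongly associated with the first attribute set than the second target set is; with meaningful embeddings, this quantity tracks the implicit associations the talk describes.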

6:00 - 7:30 PM

Conference Dinner


April 15th, 2022



9:00 - 10:30 AM

In this talk, I set forth inquiry into the content of a prohibition on racial discrimination, commonly understood as forbidding action that is “based on” race, via two broad questions: What is a consideration of race a consideration of? And what is it for an action to be on the basis of that consideration? Beginning with the matter of what constitutes race-based action brings out two conceptual hinges in analyses of racial discrimination, one in race and one in reason, that comprise what I call the race as a reason problem. There are not only different notions of practical reason that may be implicated in the stricture not to act on the basis of race (as a reason) but also different conceptions of the concept of race that are relevant to these different notions of reasons for acting. Drawing out these two sites of ambiguity generates an illuminating taxonomy of discrimination on the basis of race and casts the descriptive and normative dimensions of the problem as distinct but closely connected. Furthermore, I propose the race as a reason framing in the vein of a methodological intervention that clarifies points of divergence among theorists of discrimination. Addressing head-on the two hinges of race and of reason identifies with greater precision those loci of theorizing that give rise to dispute, thereby clearing the way for more fruitful debate.

Lily Hu

10:40 - 11:40 AM

Recent empirical work attempts to investigate how implicit biases target those facing intersectional oppression. This is welcome, since early work on implicit biases focused on single axes of discrimination, such as race, gender, or age. However, the success of such empirical work on how biases target those facing intersectional oppressions depends on adequate conceptualisations of intersectionality and empirical measures that are responsive to these conceptualisations. Surveying prominent recent empirical work, we identify failures in conceptualisations of intersectionality that inform the design of empirical measures. These failures generate unsupported conclusions about the kinds of biases that those experiencing multiple oppressions face, and render proposed interventions to combat biases useless at best, harmful at worst. We also diagnose unwarranted assumptions about how stereotypes combine in complex concepts: first, that when ‘simple’ social concepts combine, the complex concepts inherit the associated stereotypes of their simpler constituent concepts; second, that studies which focus on cognition about single social categories are investigating ‘simple’ social concepts (cf. Goff and Khan 2013). We tease out recommendations to guide future investigations into biases that target those who experience multiple oppressions.

Jim Chamberlain, Jules Holroyd, Ben Jenkins, and Robin Scaife

11:40 AM - 1:00 PM

Lunch

1:00 - 2:00 PM

I sketch a ‘material theory of machine learning’. I show how Norton’s (2021) material theory of induction can be fruitful for evaluating machine-learning applications by emphasizing that facts, not formal schemas, warrant inductive inferences. This theory has three upshots. First, contrary to formal approaches to ‘algorithmic bias’, it explicitly demands the search for, and assessment of, domain-specific facts prior to domain-specific applications. Second, it emphasizes that facts in certain domains can be ‘thick’. Third, it argues that the warrants one should possess for making inductive inferences in ‘thick’ domains must be not only epistemic but also normative.

Eugene Y.S. Chua

2:10 - 3:40 PM

The term “bias,” as it is generally used, connotes something bad: a prejudging that is unfair or illegitimate, or an improper favoring of some viewpoint or some person’s interest over others’. But fundamentally, “bias” means simply a “bent or tendency; an inclination in temperament or outlook…” I argue that “bias,” in this neutral sense, plays a constructive, indeed absolutely crucial, role in human life. There are two areas of human life where this is especially true: first, in our cognitive/epistemic life, in our attempts to come to know our world; and second, in our “affective” lives, in our relations and emotional connections to other human beings. When human biases are bad, I contend, it is not because they are biases, but because they are operating in the wrong contexts. In the cases that concern those of us interested in social justice, there are many contexts in which biases, both epistemic and normative, are problematic, but the problem lies with the environment, not the biases.

Louise Antony

3:40 - 4:00 PM

Coffee Break

4:00 - 5:30 PM

In this session, a discussion among our commentators-at-large will draw together the themes of the conference, providing a space for reflection with the audience on emerging questions and directions for future research.

Featuring: Erin Beeghly, Liam Kofi Bright, Kathleen Creel, Carolina Flores, Branden Fitelson, Benedek Kurdi, Tom Kelly, Gary Lupyan, Raphaël Millière, and Wayne Wu

6:00 - 7:30 PM

Conference Dinner