Andreas Bunge
Interactions between propositional and associative processes in prejudice
In this paper, I present a dual-process account of prejudice, according to which reportable stereotypes and attitudes (i.e., explicit prejudice) are based on mental propositions, whereas stereotypes and attitudes measured by indirect psychological tests (i.e., implicit prejudice) are associative in nature. I develop this account in response to a recently proposed position, according to which implicit prejudice is based on propositions (Mandelbaum, 2015). I elaborate on different ways in which propositional and associative processes influence each other, and point out the implications for strategies to reduce implicit bias.
According to the predominant view in psychology, implicit biases are based on mental associations. For example, people may associate black people with violence and therefore identify an ambiguous object in a black person’s hand as a gun rather than as a tool. It is usually assumed that mental associations are acquired and changed through associative learning, which involves repeated exposure to co-occurring stimuli. Recently, the associative view of implicit bias has come under attack. Mandelbaum (2015) argues that at least sometimes certain one-shot interventions are sufficient to change implicit biases. He points to experimental results showing that arguments, logical inferences, or exposure to peer opinions can alter the expression of implicit biases. He claims that these effects, of what he calls logical and evidential interventions, can only be explained if we assume that implicit biases are based on propositional structures (or structured beliefs), such as ‘blacks are violent’.
I argue that Mandelbaum’s argument rests on a mistake. The influence of logical and evidential interventions on implicit bias is only an indirect one. Logical and evidential interventions can indeed change propositional mental structures, but these structures at best give rise to explicit prejudice, i.e., reportable attitudes and stereotypes. However, the propositional structures may in turn exert a top-down influence on associations, the actual basis of implicit bias. I have identified three possible mechanisms of top-down influence from propositional thought on associations in the empirical literature:
Francesco Chiesa
Difference and the Limits of Respect: The Case of Implicit Bias
The idea that people, however different, should be treated fairly and with respect has become a commonly invoked tenet in contemporary liberal democracies. This paper offers an argument on the normative insufficiency of the idea of respect as an adequate response to the fact of diversity. I argue that respect can be merely behavioural: X can pay outward respect to Y (through actions, attitudes, verbal and body expressions) in a way that Y perceives nothing but respect from X, and, at the same time, X can hold devaluing and disesteeming beliefs about Y. If X is successful in not making her disesteeming perspective count in her outward relation with Y, X respects Y. Although we can normatively require people to change their outward behaviours, it seems to be normatively unwarrantable to require them to acquire certain respectful beliefs if they do not spontaneously hold them. It can be desirable, but it could hardly be warranted as a duty of justice. It seems unwarrantable for both epistemological reasons (the untenability of full ‘doxastic voluntarism’, which sees one’s beliefs as an outcome of one’s mere choice) and normative reasons (liberalism being at least suspicious of the idea of prescribing what one should think, rather than do). It follows that the account of respect on which most literature on the recognition of difference grounds its views is only able to bind outwardly expressible actions/utterances/attitudes. This, at first sight, does not seem to be a problem, since people seem able, and are required by respect, to restrain themselves from making their devaluing evaluative perspective count in the way they outwardly treat others. However, it is a problem, since devaluing perspectives and beliefs can be, and often are, expressed however successful one is in behaving respectfully (in holding devaluing beliefs back).
Consider the case of ‘implicit bias’, in which a devaluing belief such as a negative stereotype about a certain group is automatically associated with members of that group and affects one’s outward behaviour in a way that typically happens below the level of full consciousness. Since implicit bias operates below the level of full consciousness, we cannot advance a normative requirement that the biases held by people and institutions should change (‘ought implies can’). So we can imagine a world that is fully respectful in its recognitive relations and in which people whose trait/culture/status/identity differs from the prevailing standards and social norms are nonetheless structurally disadvantaged. Although implicit bias is not under people’s full normative control, we know that implicit biases tend to be learned from the societies in which we live. It follows
that, if we take unfair structural disadvantage to be something liberal political institutions should eliminate, political institutions have a duty that goes beyond respect and should intervene on those social structures that nourish unfair implicit biases and, more generally, on those social structures which ground structural disadvantage. One way to pursue this aim is to make sure that people live in a society where they can compete for the relevant others’ esteem on an equal footing. This justifies support for policies that the ideal of equal respect alone would not support, and might even regard with suspicion: policies aiming to intervene on the shared horizons of standards and values in order to make sure people are given equal opportunities for esteem.
Noel Dominguez
The Blameworthiness of Implicit Bias: A Constitutivist Solution
The general problem facing philosophers trying to explain the blameworthiness of implicit biases is the same one facing philosophers trying to give an account of the blameworthiness of omissions: there are times that actions seem blameworthy, but don’t obviously have the typical properties that make actions blameworthy (I’ll call these properties blamemakers). When philosophers try to account for the blameworthiness of an action that doesn’t obviously have any blamemakers, two approaches are generally considered. On the tracing approach, philosophers accept that the case in question does not contain a blamemaker, and so attempt to find some previous action the blameworthy action can be connected (or “traced”) to that does have a blamemaker, and then claim that the blameworthiness of the previous action grounds the blameworthiness of the action under discussion. On the fine-graining approach, philosophers claim that, despite appearances, the action in question really does have a blamemaker, but this blamemaker is only clearly present once we make a further distinction concerning what kinds of properties make an action blameworthy. In this essay, I’ll argue that while both of these kinds of attempts have been made to ground the blameworthiness of implicit biases, both are implausible for reasons that do not turn too heavily on what our blamemakers actually are. As a result, all existing attempts to ground the blameworthiness of implicit biases have failed.
This is not because implicit biases are not blameworthy, but because philosophers have been primarily trying to develop different versions of attributionist (attitude-based or evaluative judgment-centered) or volitionist (intentional or voluntarily-done) blamemakers for implicit bias, when these two approaches share a false assumption I’ll call realism about blamemakers: the idea that when an action is blameworthy, this is due to a real property of the action. As both volitionists and attributionists share this assumption, neither of them will produce a convincing account of the blameworthiness of omissions – the most fundamental problem has to do with the fact that omissions are absences, and you can’t explain the blameworthiness of some absence if what makes something blameworthy is some presence (like a bad attitude or a bad volition). You either have to insert a presence into an absence (and then it isn’t an absence), or connect the absence to some blameworthy presence, and then you aren’t explaining why the absence is blameworthy. To remedy this, I’ll consider projectivism about blamemakers: the idea that when an action is blameworthy, it is not because of some property of the action, but because of how our normative commitments allow us to see the action. On this understanding, to say an act is blameworthy is not to say it has some bad property, but that the action falls short of our normative expectations for actions of this kind.
On my constitutivist understanding of projectivism about blamemakers, an act is blameworthy not on the basis of the properties it has, but on whether it meets the required standard for actions of that kind given our relation to one another. For example, if I am your friend and friends are supposed to treat each other equally, then your inability to treat me equally because of an implicit bias you have constitutes an instance of your failing to treat me as the constitutive standards of friendship require, and so you are blameworthy for your lapse. This approach not only avoids the problems presented by previous theories attempting to explain the blameworthiness of implicit biases, but also has several independently attractive features for the blameworthiness of our actions in general.
Jules Holroyd
What do we want from a model of implicit bias?
I undertake three tasks in this paper: first, I set out some desiderata for evaluating models of implicit bias; second, I set out some test cases which bring to the fore a largely ignored class of discriminatory behaviour, in which implicit bias has a role; third, I consider how recent models of implicit bias fare in dealing with these test cases, in light of these desiderata. I argue that inadequacies in each of the models are brought to light. Whilst these conclusions are negative, I hope to nonetheless make a positive contribution to the debate in presenting a clearer articulation of what we should want an account of implicit cognition to do.
Hunter
Philosophers Behaving Badly: The systemic failures of “Experimental Philosophy”
The movement known as “Experimental Philosophy” — which is just over a decade old — attempts to answer questions about “how human beings actually happen to be.” To do so, its proponents borrow tools from cognitive science and social psychology to investigate the “psychological processes underlying people’s intuitions about central philosophical issues.” The promise of experimental philosophy is succinctly put in the following passage: The real measure of a research program depends on whether the program generates exciting new discoveries. . . For our part, we think that experimental philosophy has already begun to produce surprising and illuminating results. The thing to do now is just to cast off our methodological chains and go after the important questions with everything we’ve got. While the experimental philosophy movement once seemed promising, the reality over the last decade has illustrated otherwise. The movement has been plagued with questions about its methodology and whether the results generated by experimental studies are philosophically significant. While there is much debate about these concerns, a substantial critique has been missing from the entire debate. This paper will argue that the experimental philosophy movement has a systemic problem that greatly undermines and hinders its success. The movement systemically excludes the voices of marginalised people, especially from the standpoint of the participants in the experiments. Experimental philosophers have not taken seriously enough the notion that their methodology and survey parameters have ignored particular groups of individuals in ways that are harmful to the communities of people whom they have ignored.
To bring to light the serious issue that the experimental philosophy movement faces, this paper uses both quantitative and qualitative data from the past ten years of experimental philosophy publications. Quantitatively speaking, the movement systemically excludes any data on marginalised persons, solely because, on aggregate, it does not even allow participants to self-identify in socially significant ways (i.e., “race/ethnicity” and “gender”). Qualitatively, a number of vignettes and proclamations made on the basis of problematic empirical results can be a source of epistemic harm for marginalised communities. The inability to reflect groupings that social science deems important inhibits efforts to make both experimental philosophy and academic philosophy (in general) more inclusive. Additionally, the questions taken as “central” to the movement systemically exclude philosophical questions that arise from the lived experiences of persons in marginalised communities.
In conclusion, this paper offers a number of solutions to positively change the experimental philosophy movement. The benefits of addressing the movement’s systemic issues are the following: more inclusiveness in the discipline, more robust results from experiments, and the potential to introduce different research projects that appeal to underrepresented groups. The suggested changes will have a positive reverberating effect both on the discipline and on what we can know about groups that have been — for the vast majority of the history of analytic philosophy — implicitly or overtly ignored.
Sarah Jones
Title TBC
Philosophy’s climate, now riddled with the consequences of implicit bias, is a paradigm of both privilege and discrimination. Its women and other minority faculty are demonstrably disadvantaged by the continued influence of gender and race bias. The discipline’s outcomes (rates of advancement, patterns of hiring, etc.) are at once clearly premised upon the compounding of advantage and disadvantage and are symptomatic of structural injustice. While there is much agreement about the causes of the disparate and unjust patterns in the discipline’s outcomes, there is much disagreement about the appropriate remedy. Jennifer Saul has maintained that we ought not to hold individuals blameworthy for their implicit biases, given that a) such biases are the outcome of a sexist culture and associative psychological processes outside the agent’s control, and b) holding individuals accountable for such biases is counter-productive. Jules Holroyd disagrees. She argues that there are good reasons to believe that biases are at least partly the result of explicit and voluntary beliefs, and subject to (at least long-range) agential control. Moreover, Holroyd argues, it might be productive to regard the biased agent as responsible and blameworthy; holding individuals accountable or blameworthy for their biases creates norms, expectations, and patterns of behavior. This, in turn, significantly impacts the climate of the discipline.
In my paper, I defend and extend Holroyd’s arguments. Specifically, I argue that holding individuals responsible for their biases is not only supported by recent experimental evidence about the nature of implicit biases, it is also morally required by the seriousness of the harms caused by such biases. Given the extent and nature of the wrongs perpetuated by the operations of such bias, it is necessary to hold accountable the individuals who perpetuate these patterns of privilege and discrimination. Moreover, because recent studies suggest that individuals with low internal motivation to regulate their biases are unlikely to respond without bias in the absence of some external motivation, holding individuals accountable for their biases is potentially the only effective strategy for inhibiting such individuals from acting on their biases: explicit accountability, I argue, will provide the needed (external) motivation for otherwise unmotivated (harmful) individuals. Hence, regarding others as blameworthy is only a first step. We must also construct policies through which individual accountability for bias will be enforced. Indeed, given the dramatically unjust outcomes of implicit bias, the construction of such policies should be considered a moral imperative.
Our collective failure to develop such explicit measures signals a collective failure to take seriously enough the consequences of implicit bias. Worse yet, that failure results in an implicit belief that these harms are not all that bad: were they that bad, we would surely hold the worst offenders formally accountable. Yet the impact of such biases is that bad. As Sally Haslanger states, “it is very hard to find a place in philosophy that isn’t actively hostile to women and minorities.” Hence, I argue, the nature and extent of these harms imposes -- as yet unmet -- moral obligations. It imposes obligations on each of us as agents to take measures to control our biases, and it imposes obligations on each of us as members of the discipline to develop explicit accountability measures. We must make institutional (e.g., departmental) policy match the seriousness of discrimination’s outcomes. Failure to hold offenders accountable is morally comparable to the failure to take appropriate measures to curb our own biases. Arguably, both the absence of explicit accountability measures and the absence of a collective insistence on such measures render us all complicit and blameworthy in the continued compounding of disadvantage for women and other minorities in philosophy.
Ian Kidd
Can We Retain Confidence in Philosophy in the Light of Implicit Bias?
This talk explores several different senses in which implicit bias might threaten our confidence in philosophy, relating this to similar claims made in recent history of philosophy. I suggest that we can think of these challenges to philosophical confidence as invitations to cultivate a sense of humility. If conceived of and responded to in appropriate ways, implicit bias can be a source of deep humility, of a sort wholly congenial to the philosophical enterprise.
Emma McClure
How Best to Use the IAT: The Moral to Draw from the Moral Responsibility Debate
Since the mid-2000s, philosophical accounts of moral responsibility have begun to incorporate the Implicit Association Test (IAT). However, philosophical discussions of implicit bias are complicated by shifts in the prevailing psychological consensus. One such shift occurred in 2013, when Oswald et al. published a meta-analysis of 46 race-IAT studies. Oswald et al. challenged the dominant assumption that IAT scores correlated strongly with discriminatory behavior. In 2015, Greenwald, Banaji, and Nosek— three prominent proponents of the IAT—admitted that correlations with behavior were significantly lower than earlier research had suggested. Nevertheless, they proposed a new avenue for future research: perhaps the small (4%) correlation between IAT scores and discrimination “represents potential for discriminatory impacts with very substantial societal significance.” As Oswald et al. point out in their 2015 response, such a contention “needs to be evaluated empirically rather than simply stipulated.” Thus, psychologists have a new question to study: How important is the 4%?
Given this shift in psychological research, it is unsurprising that many philosophical accounts of implicit bias need to be revised. More intriguing, however, are the philosophical accounts that have survived such major changes in empirical research— and what these accounts can teach us about how best to incorporate psychological studies in future philosophical debates.
To explore this question, I will consider two recent philosophical articles: Neil Levy’s “Consciousness, Implicit Attitudes and Moral Responsibility” (2014) and Chloë Fitzgerald’s “A Neglected Aspect of Conscience: Awareness of Implicit Attitudes” (2014). Both philosophers use psychological studies to challenge the dominant paradigms in their respective subfields, but while Levy’s arguments are undercut by new psychological developments, Fitzgerald’s arguments remain solidly grounded in empirical research. I suggest that these varied outcomes are more than mere coincidence. Rather, I take Levy and Fitzgerald to represent two different methods of engaging with psychological research. The continued success of Fitzgerald’s approach is a mark in its favor.
Levy includes a wide range of IAT research in his paper—from Payne’s Gun/Tool IAT, to Nier’s Bogus Pipeline research, to Dasgupta’s Ingroup Favoritism study. While this list of citations is impressive, the breadth of his research may actually have worked against him. These researchers administered different types of IAT in distinct settings. Levy combines the results of these three studies and uses them to support sweeping generalizations about the behavioral impact of implicit bias: “When we are under time pressure, stressed, tired or distracted, our actions often reflect our implicit attitudes, even when our explicit attitudes conflict with them. Unfortunately, many of our most significant actions occur under these kind of pressures.” As I’ll emphasize, the Oswald et al. meta-analysis has undercut such general claims.
In contrast, Fitzgerald focuses on a cluster of interrelated studies, rather than appealing to a wide range of minimally connected findings. She cites IAT research on medical professionals and then draws out the bioethical implications. Fitzgerald’s philosophical theorizing retains the original focus of these psychological studies—doctor/patient interaction. Since she does not generalize to other, unstudied interactions, her empirical foundation remains secure despite new meta-analytical data. Thus, her account of bioethical responsibility survives the recent shift in psychological consensus.
Although I hesitate to make recommendations based upon so small a sample, these differing levels of success suggest a promising line for future philosophical engagement with developing psychological research. By discussing the philosophical implications of particular studies (or interconnected ones), we can make small, lasting advances in the philosophical debate on moral responsibility. In this case, as in the old fable, slow and steady wins the race.
Aaron Meskin & Sheila Lintott
Art and Implicit Bias
How might a work of art function to corrupt or edify its audiences? On one standard view, a work of art is ethically corrupting if it has as a predictable consequence that its audience will behave unethically after being exposed to it. Debates about the ethics of pornography, for instance, often focus on the question of whether watching violent and misogynistic pornography causes viewers to act in violent or misogynistic ways. Such debates, however, run in circles because the data on the behavioral effects of consuming media are decidedly inconclusive. Some studies suggest that pornography causes such undesirable behavior, others that it reduces it. On another standard view, propositionalism, a work of art is ethically corrupting if it is likely to inculcate moral falsehoods. For example, Triumph of the Will might be said to convey the message: “Some humans are inherently superior to others.” Although generally more convincing than a consequentialist account, this model of understanding art’s corrupting power hardly seems to be the full story. Teaching moral falsehoods is only one way to corrupt, and many morally problematic works do not convey moral falsehoods. Another way art is sometimes ethically evaluated is for the emotional engagement it invites. Identificationism is the view that art is morally corrupting if it disposes its audience to identify – to feel with and for – a morally corrupt character. This view functions as a nice supplement to propositionalism, but neither alone nor coupled with propositionalism does it fully explain the ways art might morally corrupt.
Today’s social psychology studies psychological mechanisms that were undetectable just a few decades ago. The instrument known as the Implicit Association Test (IAT) provides a way to measure the presence of biases of which a person is unaware. Moreover, there is good reason to think that these implicit attitudes influence our behavior, and not always to good effect.
There is significant evidence that implicit attitudes are not stable—various contextual factors, most notably for our purposes, imagining, can reinforce or reduce implicit biases. A wide range of artwork traffics in imaginings, and hence this opens up a number of new ways to understand the ethical power of art.
Consider, for example, how a work of art might corrupt by leading us to imagine in some detail women as catty and conniving, which might reinforce implicit associations between women and those negative attributes (e.g., in the popular American TV show Gossip Girl). Alternatively, a work of art might edify in virtue of its tendency to diminish implicit prejudices. For example, representations of independent, sexy, and vivacious young women with disabilities might reduce implicit prejudice against the disabled (e.g., in the reality TV show Push Girls). Such works might be positively ethically evaluated for their tendency to falsify implicit attitudes, making them available for conscious critical reflection, or for their ability to present counter-stereotypical exemplars. Finally, artworks may employ implicit prejudice to powerfully reinforce preexisting stereotypes or negative implicit attitudes. For example, a horror movie’s representation of a homosexual monster (e.g., in The Silence of the Lambs) plausibly appeals to the audience’s implicit disgust at homosexuality to intensify their fear and disgust, and may be ethically evaluable in virtue of this fact.
The persistence of implicit biases helps explain why, in a cultural context where virtually everyone claims freedom from discriminatory beliefs and attitudes, significant inequality and segregation persist. Art, especially mass art, may serve to instill, reinforce, complicate, and lay bare these implicit biases. Ethical evaluation of art should therefore attend to the ways in which art interacts with our implicit attitudes.
Helen Morley
Title TBC
Abstract to follow
Jennifer Saul & Katharine Jenkins (Sheffield)
The Pragmatics of Inclusivity: Visual and Linguistic Cues to Group Membership
Many philosophers are, quite appropriately, engaging in efforts to make their syllabi less overwhelmingly white, male, cis‐gendered, middle class, heterosexual, non‐disabled, etc. However, some of the goals of these efforts cannot be achieved unless the social groups to which authors belong are communicated to students. This paper explores the political implications of various different ways of communicating such social group membership. We argue that despite the positive aims behind communicating group membership, attempts at such communication are liable to carry deeply problematic implications. The possibility of using visual cues rather than linguistic cues as a way to avoid these problematic implications is considered, but it does not succeed in all cases. Finally, we argue that this difficulty is not restricted to syllabi, but also affects broader efforts at integration, and communication about such efforts.
Sophie Stammers
Explicitly unclear: the illusion of the exclusive control mechanism
We tend to think that explicit prejudice is (i) more easily eliminable in the face of the epistemic and moral reasons there are for refraining from prejudice, and (ii) a more obvious candidate for moral culpability, than implicit bias. This rests on the supposition that implicit biases, and the actions which they mediate, are somehow importantly different from the beliefs, preferences and prejudices which exist in the explicit realm, and the actions which they guide. One way of filling out this supposed difference is to claim that we exert a different kind of control over actions which are guided by explicit preferences and prejudices, compared with those guided by implicit biases. Indeed, cognitive psychologists sometimes describe implicitly biased actions as the result of ‘automatic’ processes, which contrast with the ‘controlled’ processes by which explicit states guide actions. In this paper, I will demonstrate that the argument that there is a distinction between implicitly biased actions and actions guided by explicit preferences/prejudices, on the basis of a difference in the kind of control that we exert over each, stands on shaky ground. In fact, I argue that there is no robust distinction between the control that we exert over implicitly biased actions and the control that we exert over actions guided by explicit preferences/prejudices. To reach this conclusion, I first establish, by appealing to recent empirical findings, that we do have a form of (what has been called) ‘indirect’ control over our implicitly biased actions, in the form of (i) implementation intentions, (ii) exposure to counter-stereotypical exemplars, and (iii) convincing argument primers. I then address my opponents who claim that there is a special kind of ‘direct’ control which we exert over actions guided by explicit states, but not over those mediated by implicit bias.
Here, I show that we exert direct control over neither the acquisition of implicit biases nor the acquisition of explicit states, but that we do exert indirect control over the acquisition of both. I then show that there are at least some actions guided by explicit states which are not under our direct control, by appeal to three examples: ‘akratic gin drinking marker’, ‘excited quiz player’, and ‘explicit (but ignorant) bigot’. These examples are selected to demonstrate that there is a double dissociation between explicit states and what we might typically think of as explicitly guided actions, such that: 1) Explicit states can mediate action implicitly, and, moreover, can fail to mediate action at all. Sometimes we appear to exert control, sometimes we don’t. 2) Implicit states can mediate actions explicitly. Sometimes we appear to exert control, sometimes we don’t. Consequently, I argue that there is no control mechanism that is at once exclusive to explicit beliefs, preferences and prejudices, and the actions which they guide, and which is not also at work when implicit states and implicit biases guide actions. I conclude with some upshots for the practicalities of controlling and eliminating implicitly (and explicitly!) biased behaviours, as well as with the consequences for moral culpability for both implicit bias and explicit prejudice.
Robin Zheng
Bias, Structure, and Injustice: Collective Accountability for Implicit Bias
Philosophers and activists have taken great interest in the phenomenon of implicit bias because of the important role it appears to play in explaining persistent social inequalities. Recently, however, Sally Haslanger (2015) has argued that social inequalities are best explained in terms of social structures rather than the attitudes and actions of individuals, and hence that
philosophical efforts to respond morally and practically to implicit bias are largely misplaced relative to the goal of enacting structural change. Along parallel lines, Chad Lavin (2011) has argued that the concept of responsibility itself—because it prioritizes the particular actions of particular individuals—is inadequate for dealing with enduring, unjust background conditions such as poverty, racism, and other oppressions.
I argue against Haslanger (2015) and Lavin (2011) that understanding and developing practices of responsibility for implicit bias can play an important role in rectifying structural injustice. To see this, I claim, it is necessary to distinguish between two different concepts of responsibility: attributability and accountability. We are responsible for our actions in the attributability sense only when those actions reflect our identities as moral agents, i.e. when they are attributable to us. We are responsible in the accountability sense when it is appropriate for others to enforce certain expectations and demands on those actions, i.e. to hold us accountable for them. Drawing on Iris Marion Young’s (2011) social connection model of responsibility, I show that even though implicit biases and structural injustice may not be attributable to any particular individual, we can still accountable for collectively organizing to transform the social conditions that give rise to them in the first place. I develop this conception of accountability by appealing to role-based ideals (e.g. being a good teacher, a good parent, neighbor, citizen, friend, etc.) which are distributed across the moral community and which give rise to the expectation that we act through these roles to change our local social structures.
Finally, I offer a number of real-world examples of responses to implicit bias, drawn from each of Patricia Hill Collins’ (1999) four domains of oppression: structural, disciplinary, hegemonic, and interpersonal. These examples demonstrate that explanations invoking implicit bias need not compete with structural explanations, since implicit bias is not only enabled by but is itself an enabler of structure. This is because implicit associations are one of the mechanisms by which agents have and make use of their knowledge of cultural schemas (which, along with resources, constitute the building blocks of social structure). In other words, understanding and addressing implicit bias is a way of understanding and addressing the workings of social structure, since implicit biases are the “grease that oils the machine,” so to speak. Practices of accountability for blocking and eliminating implicit bias, then, figure into larger struggles against structural injustice because they represent starting points for transforming unjust social structures.
Andreas Bunge
Interactions between propositional and associative processes in prejudice
In this paper, I present a dual-process account of prejudice, according to which reportable stereotypes and attitudes (i.e., explicit prejudice) are based on mental propositions, whereas stereotypes and attitudes measured by indirect psychological tests (i.e., implicit prejudice) are associative by nature. I develop this account in response to a recently proposed position, according to which implicit prejudice is based on propositions (Mandelbaum, 2015). I elaborate on different ways in which propositional and associative processes influence each other, and point out the implications for strategies to reduce implicit bias.
According to the predominant view in psychology, implicit biases are based on mental associations. For example, people may associate black people with violence and therefore identify an ambiguous object in a black person’s hand as a gun rather than as a tool. It is usually assumed that mental associations are acquired and changed through associative learning, which involves repeated exposure to co-occurring stimuli. Recently, the associative view of implicit bias has come under attack. Mandelbaum (2015) argues that at least sometimes certain one-shot interventions are sufficient to change implicit biases. He points to experimental results showing that arguments, logical inferences or exposure to peer opinions can alter the expression of implicit biases. He claims that these effects, of what he calls logical and evidential interventions, can only be explained if we assume that implicit biases are based on propositional structures (or structured beliefs), such as ‘blacks are violent’.
I argue that Mandelbaum’s argument rests on a mistake. The influence of logical and evidential interventions on implicit bias is only an indirect one. Logical and evidential interventions can indeed change propositional mental structures, but these structures at best give rise to explicit prejudice, i.e., reportable attitudes and stereotypes. However, the propositional structures may in turn exert a top-down influence on associations, the actual basis of implicit bias. I have identified three possible mechanisms of top-down influence from propositional thought on associations in the empirical literature:
- The repeated co-occurrence of mental representations in propositional thought can create and change associations between these representations, just as the repeated co-occurrence of external stimuli can change associations.
- Over time, we can acquire different, and sometimes even opposing, clusters of associations regarding one and the same group. Propositional thought processes can influence which of these clusters becomes activated in a given situation.
- Associative processes can give rise to affective responses that enter our awareness. By way of propositional reasoning, we may reject these affective responses as a proper basis for judgment and behavior.
I argue that these mechanisms provide an alternative explanation for the data that Mandelbaum brings forward to support his propositional account of implicit bias.
Francesco Chiesa
Difference and the Limits of Respect: The Case of Implicit Bias
The idea that people, however different, should be treated fairly and with respect has become a commonly invoked tenet in contemporary liberal democracies. This paper argues that the idea of respect is normatively insufficient as a response to the fact of diversity. I argue that respect can be merely behavioural: X can pay outward respect to Y (through actions, attitudes, verbal and body expressions) in a way that Y perceives nothing but respect from X, and, at the same time, X can hold devaluing and disesteeming beliefs about Y. If X is successful in not making her disesteeming perspective count in her outward relation with Y, X respects Y. Although we can normatively require people to change their outward behaviours, it seems normatively unwarrantable to require them to acquire certain respectful beliefs if they do not spontaneously hold them. It can be desirable, but it could hardly be warranted as a duty of justice. It seems unwarrantable for both epistemological reasons (the untenability of full ‘doxastic voluntarism’, which sees one’s beliefs as an outcome of one’s mere choice) and normative reasons (liberalism being at least suspicious of the idea of prescribing what one should think, as opposed to do). It follows that the account of respect on which most literature on the recognition of difference grounds its views is only able to bind outwardly expressible actions/utterances/attitudes. At first sight, this does not seem to be a problem, since people seem able, and are required by respect, to restrain themselves from making their devaluing evaluative perspective count in the way they outwardly treat others. However, it is a problem, since devaluing perspectives and beliefs can be expressed, and often are expressed, however successful one is in behaving respectfully (in holding devaluing beliefs back).
Consider the case of ‘implicit bias’, in which a devaluing belief such as a negative stereotype about a certain group is automatically associated with members of that group and affects one’s outward behaviour in a way that typically happens below the level of full consciousness. Since implicit bias operates below the level of full consciousness, we cannot advance a normative requirement that the biases held by people and institutions should change (‘ought implies can’). So we can imagine a world fully respectful in its recognitive relations in which people whose trait/culture/status/identity differs from the prevailing standards and social norms are nonetheless structurally disadvantaged. Although implicit bias is not under people’s full normative control, we know that implicit biases tend to be learned from the societies in which we live. It follows
that, if we take unfair structural disadvantage to be something liberal political institutions should eliminate, political institutions have a duty that goes beyond respect and should intervene on those social structures that nourish unfair implicit biases and, more generally, on those social structures which ground structural disadvantage. One way to pursue this aim is to make sure that people live in a society where they can compete for the relevant others’ esteem on an equal footing. This justifies support for policies that the ideal of equal respect alone would not support, and might even regard with suspicion: policies aiming to intervene on the shared horizons of standards and values in order to make sure people are given equal opportunities for esteem.
Noel Dominguez
The Blameworthiness of Implicit Bias: A Constitutivist Solution
The general problem facing philosophers trying to explain the blameworthiness of implicit biases is the same one facing philosophers trying to give an account of the blameworthiness of omissions: there are times when actions seem blameworthy, but don’t obviously have the typical properties that make actions blameworthy (I’ll call these properties blamemakers). When philosophers try to account for the blameworthiness of an action that doesn’t obviously have any blamemakers, two approaches are generally considered. On the tracing approach, philosophers accept that the case in question does not contain a blamemaker, and so attempt to find some previous action to which the blameworthy action can be connected (or “traced”) that does have a blamemaker, and then claim that the blameworthiness of the previous action grounds the blameworthiness of the action under discussion. On the fine-graining approach, philosophers claim that, despite appearances, the action in question really does have a blamemaker, but this blamemaker is only clearly present once we make a further distinction concerning what kinds of properties make an action blameworthy. In this essay, I’ll argue that while both of these kinds of attempts have been made to ground the blameworthiness of implicit biases, both are implausible for reasons that do not turn too heavily on what our blamemakers actually are. As a result, all existing attempts to ground the blameworthiness of implicit biases have failed.
This is not because implicit biases are not blameworthy, but because philosophers have been primarily trying to develop different versions of attributionist (attitude-based or evaluative judgment-centered) or volitionist (intentional or voluntarily-done) blamemakers for implicit bias when these two approaches share a false assumption I’ll call realism about blamemakers, or the idea that when an action is blameworthy, this is due to a real property of the action. As both volitionists and attributionists share this assumption, neither of them will produce a convincing account of the blameworthiness of omissions – the most fundamental problem has to do with the fact that omissions are absences, and you can’t explain the blameworthiness of some absence if what makes something blameworthy is some presence (like a bad attitude or a bad volition). You either have to insert a presence into an absence (and then it isn’t an absence), or connect the absence to some blameworthy presence, and then you aren’t explaining why the absence is blameworthy. To remedy this, I’ll consider projectivism about blamemakers, or the idea that when an action is blameworthy, it is not because of some property of the action, but because of how our normative commitments allow us to see the action. On this understanding, to say an act is blameworthy is not to say it has some bad property, but that the action falls short of our normative expectations for actions of this kind.
On my constitutivist understanding of projectivism about blamemakers, an act is blameworthy not on the basis of the properties it has, but on whether it meets the required standard for action of that kind given our relation to one another. For example, if I am your friend and friends are supposed to treat each other equally, then your inability to treat me equally because of an implicit bias you have constitutes an instance of your failing to treat me as the constitutive standards of friendship require, and so you are blameworthy for your lapse. This approach not only avoids the problems presented by previous theories attempting to explain the blameworthiness of implicit biases, but also has several independently attractive features for the blameworthiness of our actions in general.
Jules Holroyd
What do we want from a model of implicit bias?
I undertake three tasks in this paper: first, I set out some desiderata for evaluating models of implicit bias; second, I set out some test cases which bring to the fore a largely ignored class of discriminatory behaviour, in which implicit bias has a role; third, I consider how recent models of implicit bias fare in dealing with these test cases, in light of these desiderata. I argue that inadequacies in each of the models are brought to light. Whilst these conclusions are negative, I hope to nonetheless make a positive contribution to the debate in presenting a clearer articulation of what we should want an account of implicit cognition to do.
Hunter
Philosophers Behaving Badly: The systemic failures of “Experimental Philosophy”
The movement known as “Experimental Philosophy” — which is just over a decade old — attempts to answer questions about “how human beings actually happen to be.” To do so, proponents of experimental philosophy borrow tools from cognitive science and social psychology to investigate the “psychological processes underlying people’s intuitions about central philosophical issues.” The promise of experimental philosophy is succinctly put in the following passage: The real measure of a research program depends on whether the program generates exciting new discoveries. . . For our part, we think that experimental philosophy has already begun to produce surprising and illuminating results. The thing to do now is just to cast off our methodological chains and go after the important questions with everything we’ve got. While the experimental philosophy movement once seemed promising, the reality over the last decade has illustrated otherwise. The movement has been plagued by questions about methodology and about whether the results generated by experimental studies are philosophically significant. While there is much debate about these concerns, a substantial critique has been missing from the entire debate. This paper will argue that the experimental philosophy movement has a systemic problem that greatly undermines and hinders the success of the movement. The experimental philosophy movement systemically excludes the voices of marginalised
people, especially so from the standpoint of the participants in the experiments. Experimental philosophers have not taken seriously enough the notion that their methodology and survey parameters have ignored particular groups of individuals in ways that are harmful to the communities of people whom they have ignored.
To bring to light the serious issue that the experimental philosophy movement faces, this paper uses both quantitative and qualitative data from the past ten years of experimental philosophy publications. Quantitatively speaking, the experimental philosophy movement systemically excludes any data on marginalised persons—solely because the movement, on aggregate, does not even allow participants to self-identify in socially significant ways (i.e., “race/ethnicity” and “gender”). Qualitatively, a number of vignettes and proclamations made on the basis of problematic empirical results can be a source of epistemic harm for marginalised communities. The inability to reflect groupings that social science deems important inhibits the move to make both experimental philosophy and academic philosophy (in general) more inclusive. Additionally, the questions taken as “central” to the movement systemically exclude philosophical questions that arise from the lived experiences of persons in marginalised communities.
In conclusion, this paper offers a number of solutions to positively change the experimental philosophy movement. The benefits of addressing the systemic issues in the experimental philosophy movement are the following: more inclusiveness in the discipline, more robust results from experiments, and the potential to introduce different research projects that appeal to under-represented groups. The suggested changes to the movement will have a positive reverberating effect on both the discipline and upon what we can know about groups that have been — for the vast majority of history in analytic philosophy — implicitly or overtly ignored.
Sarah Jones
Title TBC
Philosophy’s climate, now riddled with the consequences of implicit bias, is a paradigm of both privilege and discrimination. Its women and other minority faculty are demonstrably disadvantaged by the continued influence of gender and race bias. The discipline’s outcomes (rates of advancement, patterns of hiring, etc.) are at once clearly premised upon the compounding of advantage and disadvantage and are symptomatic of structural injustice. While there is much agreement about the causes of the disparate and unjust patterns in the discipline’s outcomes, there is much disagreement about the appropriate remedy. Jennifer Saul has maintained that we ought not to hold individuals blameworthy for their implicit biases, given that a) such biases are the outcome of a sexist culture and associative psychological processes outside the agent’s control, and b) holding individuals accountable for such biases is counter-productive. Jules Holroyd disagrees. She argues that there are good reasons to believe that biases are at least partly the result of explicit and voluntary beliefs, and subject to (at least long-range) agential control. Moreover, Holroyd argues, it might be productive to regard the biased agent as responsible and blameworthy; holding individuals accountable or blameworthy for their biases creates norms, expectations, and patterns of behavior. This, in turn, significantly impacts the climate of the discipline.
In my paper, I defend and extend Holroyd’s arguments. Specifically, I argue that holding individuals responsible for their biases is not only supported by recent experimental evidence about the nature of implicit biases, it is also morally required by the seriousness of the harms caused by such biases. Given the extent and nature of the wrongs perpetuated by the operations of such bias, it is necessary to hold accountable the individuals who perpetuate these patterns of privilege and discrimination. Moreover, because recent studies suggest that individuals with low internal motivation to regulate their biases are unlikely to respond without bias in the absence of some external motivation, holding individuals accountable for their biases is potentially the only effective strategy for inhibiting such individuals from acting on their biases: explicit accountability, I argue, will provide the needed (external) motivation for otherwise unmotivated (harmful) individuals. Hence, regarding others as blameworthy is only a first step. We must also construct policies through which individual accountability for bias will be enforced. Indeed, given the dramatically unjust outcomes of implicit bias, the construction of such policies should be considered a moral imperative.
Our collective failure to develop such explicit measures signals a collective failure to take seriously enough the consequences of implicit bias. Worse yet, that failure results in an implicit belief that these harms are not all that bad: were they that bad, we would surely hold the worst offenders formally accountable. Yet, the impact of such biases is that bad. As Sally Haslanger states, “it is very hard to find a place in philosophy that isn’t actively hostile to women and minorities.” Hence, I argue, the nature and extent of these harms imposes -- as yet unmet -- moral obligations. It imposes obligations on each of us as agents to take measures to control our biases, and it imposes obligations on each of us as members of the discipline to develop explicit accountability measures. We must make institutional (e.g., departmental) policy match the seriousness of discrimination’s outcomes. Failure to hold offenders accountable is morally comparable to the failure to take appropriate measures to curb our own biases. Arguably, both the absence of explicit accountability measures and the absence of a collective insistence on such measures render us all complicit and blameworthy in the continued compounding of disadvantage for women and other minorities in philosophy.
Ian Kidd
Can We Retain Confidence in Philosophy in the Light of Implicit Bias?
This talk explores several different senses in which implicit bias might threaten our confidence in philosophy, relating this to similar claims made in recent history of philosophy. I suggest that we can think about these challenges to philosophical confidence in terms of invitations to cultivate a sense of humility. If conceived and responded to in appropriate ways, implicit bias can be a source of deep humility, of a sort wholly congenial to the philosophical enterprise.
Emma McClure
How Best to Use the IAT: The Moral to Draw from the Moral Responsibility Debate
Since the mid-2000s, philosophical accounts of moral responsibility have begun to incorporate the Implicit Association Test (IAT). However, philosophical discussions of implicit bias are complicated by shifts in the prevailing psychological consensus. One such shift occurred in 2013, when Oswald et al. published a meta-analysis of 46 race-IAT studies. Oswald et al. challenged the dominant assumption that IAT scores correlated strongly with discriminatory behavior. In 2015, Greenwald, Banaji, and Nosek— three prominent proponents of the IAT—admitted that correlations with behavior were significantly lower than earlier research had suggested. Nevertheless, they proposed a new avenue for future research: perhaps the small (4%) correlation between IAT scores and discrimination “represents potential for discriminatory impacts with very substantial societal significance.” As Oswald et al. point out in their 2015 response, such a contention “needs to be evaluated empirically rather than simply stipulated.” Thus, psychologists have a new question to study: How important is the 4%?
Given this shift in psychological research, it is unsurprising that many philosophical accounts of implicit bias need to be revised. More intriguing, however, are the philosophical accounts that have survived such major changes in empirical research— and what these accounts can teach us about how best to incorporate psychological studies in future philosophical debates.
To explore this question, I will consider two recent philosophical articles: Neil Levy’s “Consciousness, Implicit Attitudes and Moral Responsibility” (2014) and Chloë Fitzgerald’s “A Neglected Aspect of Conscience: Awareness of Implicit Attitudes” (2014). Both philosophers use psychological studies to challenge the dominant paradigms in their respective subfields, but while Levy’s arguments are undercut by new psychological developments, Fitzgerald’s arguments remain solidly grounded in empirical research. I suggest that these varied outcomes are more than mere coincidence. Rather, I take Levy and Fitzgerald to represent two different methods of engaging with psychological research. The continued success of Fitzgerald’s approach is a mark in its favor.
Levy includes a wide range of IAT research in his paper—from Payne’s Gun/Tool IAT, to Nier’s Bogus Pipeline research, to Dasgupta’s Ingroup Favoritism study. While this list of citations is impressive, the breadth of his research may actually have worked against him. These researchers administered different types of IAT in distinct settings. Levy combines the results of these three studies and uses them to support sweeping generalizations about the behavioral impact of implicit bias: “When we are under time pressure, stressed, tired or distracted, our actions often reflect our implicit attitudes, even when our explicit attitudes conflict with them. Unfortunately, many of our most significant actions occur under these kind of pressures.” As I’ll emphasize, the Oswald et al. meta-analysis has undercut such general claims.
In contrast, Fitzgerald focuses on a cluster of interrelated studies, rather than appealing to a wide range of minimally connected findings. She cites IAT research on medical professionals and then draws out the bioethical implications. Fitzgerald’s philosophical theorizing retains the original focus of these psychological studies—doctor/patient interaction. Since she does not generalize to other, unstudied interactions, her empirical foundation remains secure despite new meta-analytical data. Thus, her account of bioethical responsibility survives the recent shift in psychological consensus.
Although I hesitate to make recommendations based upon so small a sample, these differing levels of success suggest a promising line for future philosophical engagement with developing psychological research. By discussing the philosophical implications of particular studies (or interconnected ones), we can make small, lasting advances in the philosophical debate on moral responsibility. In this case, as in the old fable, slow and steady wins the race.
Aaron Meskin & Sheila Lintott
Art and Implicit Bias
How might a work of art function to corrupt or edify its audiences? On one standard view, a work of art is ethically corrupting if it has as a predictable consequence that its audience will behave unethically after being exposed to it. Debates about the ethics of pornography, for instance, often focus on the question of whether watching violent and misogynistic pornography causes viewers to act in violent or misogynistic manners. Such debates, however, run in circles because data on the behavioral effects of consuming media are decidedly inconclusive. Some studies suggest that pornography causes, others that it reduces, such undesirable behavior. On another standard view, propositionalism, a work of art is ethically corrupting if it is likely to inculcate moral falsehoods. For example, Triumph of the Will might be said to convey the message: “Some humans are inherently superior to others.” Although generally more convincing than a consequentialist account, this model of understanding art’s corrupting power hardly seems to be the full story. Teaching moral falsehoods is only one way to corrupt, and many morally problematic works do not convey moral falsehoods. Another way art is sometimes ethically evaluated is for the emotional engagement it invites. Identificationism is the view that art is morally corrupting if it disposes its audience to identify – to feel with and for – a morally corrupt character. This view functions as a nice supplement to propositionalism, but neither alone nor coupled with propositionalism does it fully explain the ways art might morally corrupt.
Today’s social psychology studies psychological mechanisms that were undetectable just a few decades ago. The instrument known as the “Implicit Association Test” (IAT) provides a way to measure the presence of biases of which a person is unaware. Moreover, there is good reason to think that these implicit attitudes influence our behavior, and not always to good effect.
There is significant evidence that implicit attitudes are not stable—various contextual factors, most notably for our purposes, imagining, can reinforce or reduce implicit biases. A wide range of artwork traffics in imaginings, and hence this opens up a number of new ways to understand the ethical power of art.
Consider, for example, how a work of art might corrupt by leading us to imagine in some detail women as catty and conniving, which might reinforce implicit associations of women and those negative attributes (e.g., in the popular American TV show Gossip Girl). Alternatively, a work of art might edify in virtue of its tendency to diminish implicit prejudices. For example, representations of independent, sexy, and vivacious young women with disabilities might reduce implicit prejudice against the disabled (e.g., in the reality TV show Push Girls). Such works might be positively ethically evaluated for their tendency to falsify implicit attitudes, making them available for conscious critical reflection, or for their ability to present counter-stereotypical exemplars. Finally, artworks may employ implicit prejudice to powerfully reinforce preexisting stereotypes or negative implicit attitudes. For example, a horror movie’s representation of a homosexual monster (e.g., in Silence of the Lambs) plausibly appeals to the audience’s implicit disgust at homosexuality to intensify their fear and disgust and may be ethically evaluable in virtue of this fact.
The persistence of implicit biases helps explain why, in a cultural context where virtually everyone claims freedom from discriminatory beliefs and attitudes, significant inequality and segregation persist. Art, especially mass art, may serve to instill, reinforce, complicate, and lay bare these implicit biases. Ethical evaluation of art should therefore attend to the ways in which art interacts with our implicit attitudes.
Helen Morley
Title TBC
Abstract to follow
Jennifer Saul & Katharine Jenkins (Sheffield)
The Pragmatics of Inclusivity: Visual and Linguistic Cues to Group Membership
Many philosophers are, quite appropriately, engaging in efforts to make their syllabi less overwhelmingly white, male, cis‐gendered, middle class, heterosexual, non‐disabled, etc. However, some of the goals of these efforts cannot be achieved unless the social groups to which authors belong are communicated to students. This paper explores the political implications of various different ways of communicating such social group membership. We argue that despite the positive aims behind communicating group membership, attempts at such communication are liable to carry deeply problematic implications. The possibility of using visual cues rather than linguistic cues as a way to avoid these problematic implications is considered, but it does not succeed in all cases. Finally, we argue that this difficulty is not restricted to syllabi, but also affects broader efforts at integration, and communication about such efforts.
Sophie Stammers
Explicitly unclear: the illusion of the exclusive control mechanism
We tend to think that explicit prejudice is (i) a more easily eliminable affair in the face of both the epistemic and moral reasons that there are for refraining from prejudice, and (ii) a more obvious candidate for moral culpability, than implicit bias. This rests on the supposition that implicit biases, and the actions which they mediate, are somehow importantly different from the beliefs, preferences and prejudices which exist in the explicit realm, and the actions which they guide. One way of filling out this supposed difference is to claim that we exert a different kind of control over actions which are guided by explicit preferences and prejudices, compared with those guided by implicit biases. Indeed, cognitive psychologists sometimes describe implicitly biased actions as the result of ‘automatic’ processes, which contrast with the ‘controlled’ processes by which explicit states guide actions. In this paper, I will demonstrate that the argument that there is a distinction between implicitly biased actions, and actions guided by explicit preferences/prejudices, on the basis of a difference in the kind of control that we exert over each, stands on shaky ground. In fact, I argue that there is no robust distinction between the control that we exert over implicitly biased actions, and the control that we exert over actions guided by explicit preferences/prejudices. To reach this conclusion, I first establish that we do have a form of (what has been called) ‘indirect’ control over our implicitly biased actions, by appealing to recent empirical findings, in the form of (i) implementation intentions, (ii) exposure to counter-stereotypical exemplars, and (iii) convincing argument primers. I then address my opponents who claim that there is a special kind of ‘direct’ control which we exert over actions guided by explicit states, but not over those mediated by implicit bias.
Here, I show that we exert direct control over neither the acquisition of implicit biases nor the acquisition of explicit states, but that we do exert indirect control over the acquisition of both. I then show that there are at least some actions guided by explicit states which are not under our direct control, by appeal to three examples: ‘akratic gin drinker’, ‘excited quiz player’, and ‘explicit (but ignorant) bigot’. These examples are selected to demonstrate that there is a double dissociation between explicit states and what we might typically think of as explicitly guided actions, such that: 1) Explicit states can mediate action implicitly, and, moreover, can fail to mediate action at all. Sometimes we appear to exert control, sometimes we don’t. 2) Implicit states can mediate actions explicitly. Sometimes we appear to exert control, sometimes we don’t. Consequently, I argue that there is no control mechanism that is at once exclusive to explicit beliefs, preferences and prejudices, and the actions which they guide, and which is not also at work when implicit states and implicit biases guide actions. I conclude with some upshots for the practicalities of controlling and eliminating implicitly (and explicitly!) biased behaviours, as well as with the consequences for moral culpability for both implicit bias and explicit prejudice.
Robin Zheng
Bias, Structure, and Injustice: Collective Accountability for Implicit Bias
Philosophers and activists have taken great interest in the phenomenon of implicit bias because of the important role it appears to play in explaining persistent social inequalities. Recently, however, Sally Haslanger (2015) has argued that social inequalities are best explained in terms of social structures rather than the attitudes and actions of individuals, and hence that philosophical efforts to respond morally and practically to implicit bias are largely misplaced relative to the goal of enacting structural change. Along parallel lines, Chad Lavin (2011) has argued that the concept of responsibility itself—because it prioritizes the particular actions of particular individuals—is inadequate for dealing with enduring, unjust background conditions such as poverty, racism, and other oppressions.
I argue against Haslanger (2015) and Lavin (2011) that understanding and developing practices of responsibility for implicit bias can play an important role in rectifying structural injustice. To see this, I claim, it is necessary to distinguish between two different concepts of responsibility: attributability and accountability. We are responsible for our actions in the attributability sense only when those actions reflect our identities as moral agents, i.e. when they are attributable to us. We are responsible in the accountability sense when it is appropriate for others to enforce certain expectations and demands on those actions, i.e. to hold us accountable for them. Drawing on Iris Marion Young’s (2011) social connection model of responsibility, I show that even though implicit biases and structural injustice may not be attributable to any particular individual, we can still be accountable for collectively organizing to transform the social conditions that give rise to them in the first place. I develop this conception of accountability by appealing to role-based ideals (e.g. being a good teacher, a good parent, neighbor, citizen, friend, etc.) which are distributed across the moral community and which give rise to the expectation that we act through these roles to change our local social structures.
Finally, I offer a number of real-world examples of responses to implicit bias, drawn from each of Patricia Hill Collins’ (1999) four domains of oppression: structural, disciplinary, hegemonic, and interpersonal. These examples demonstrate that explanations invoking implicit bias need not compete with structural explanations, since implicit bias is not only enabled by but is itself an enabler of structure. This is because implicit associations are one of the mechanisms by which agents have and make use of their knowledge of cultural schemas (which, along with resources, constitute the building blocks of social structure). In other words, understanding and addressing implicit bias is a way of understanding and addressing the workings of social structure, since implicit biases are the “grease that oils the machine,” so to speak. Practices of accountability for blocking and eliminating implicit bias, then, figure into larger struggles against structural injustice because they represent starting points for transforming unjust social structures.