MOTIVATED SKEPTICISM IN THE EVALUATION OF POLITICAL BELIEFS

Charles S. Taber & Milton Lodge
Department of Political Science
Stony Brook University, NY 11794

ABSTRACT: We propose a model of motivated skepticism that helps explain when, how, why, and under what conditions citizens are prone to be biased political information processors. We report the results of two experimental studies that explore how citizens evaluate arguments about two political issues, affirmative action and gun control, to test hypotheses predicting motivated reasoning. As predicted, when participants (Ps) are presented with a balanced set of pro and con arguments, we find strong evidence of a prior attitude effect: attitudinally congruent arguments are evaluated as stronger than attitudinally incongruent arguments. When reading the pro and con arguments, Ps counter-argue the contrary arguments and uncritically bolster supporting arguments, evidence of a disconfirmation bias. We also find a confirmation bias, the seeking out of confirmatory evidence, when Ps are free to self-select the source of the arguments they read. Both the confirmation and disconfirmation biases lead to attitude polarization, the strengthening of the t2 over the t1 attitude, especially among those with the strongest priors and the highest level of political sophistication. We conclude with a discussion of the normative implications of these findings for rational behavior in a democracy.
So convenient a thing is it to be a rational creature, since it enables us to find or make a reason for everything one has a mind to.
-- Ben Franklin

Physicists do it (Glanz, 2000). Psychologists do it (Kruglanski & Webster, 1996). Even political scientists do it (cites withheld to protect the guilty among us). Research findings confirming a hypothesis are accepted more or less at face value, but when confronted with contrary evidence we become "motivated skeptics" (Kunda, 1990), mulling over possible reasons for the "failure," picking apart possible flaws in the study, recoding variables, and only when all the counter-arguing fails do we rethink our beliefs. Whether this systematic bias in how we deal with evidence is rational is debatable, the philosopher of science (e.g., Popper) saying "no," the good Reverend Bayes saying "yes." One negative consequence of this practice is that bad theories and weak hypotheses, like prejudices, persist longer than they should.

But what about ordinary citizens? Politics is contentious (Iyengar & Kinder, 1987; Newman, Just, & Krigler, 1992). In the marketplace of ideas, citizens are confronted daily with arguments designed either to bolster their opinions or to challenge their prior beliefs and attitudes (Gamson, 1992). To the extent that ordinary citizens reason as scientists do, the consequences would be similar: holding on to one's beliefs and attitudes longer and more strongly than the evidence warrants. It would be foolish to push this analogy too hard, since scientific practice has built-in safeguards, such as peer review and double-blind experiments, to prevent bad ideas from driving good ones out of the marketplace, whereas there are certainly fewer and weaker controls to protect ordinary folks from themselves when they think and reason.
Ideally, one's prior beliefs and attitudes, whether scientific or social, should "anchor" the evaluation of new information; then, depending on how credible some piece of evidence is, impressions should be adjusted upward or downward (Anderson, 1981). The "simple" Bayesian updating rule would be to increment the overall evaluation if the evidence is positive and decrement the original belief or attitude if the evidence is contrary. Assuming one has established an initial belief (attitude or hypothesis), normative models of human decision-making imply or posit a two-step updating process, beginning with the collection of belief-relevant evidence, followed by the integration of the new information with the prior to produce an updated judgment. Critically important in such normative models is the requirement that the collection and integration of new information be kept independent of one's prior judgment (for a useful discussion of this normative requirement in Bayesian theory, see Evans & Over, 1996).

All well and good, and normatively right, but empirically off base if demonstrations in psychology (Lord, Ross, & Lepper, 1979) and behavioral decision theory (Baron, 1994) are to be believed. These studies show repeatedly that one's priors unduly influence what evidence is sought out and how new, particularly contrary, evidence is comprehended, evaluated, and weighted. The basic finding across domains, issues, and situations is that people are "motivated skeptics": they are prone to accept at face value evidence that is congruent with their prior beliefs but apt to denigrate and hypercritically evaluate evidence contrary to their priors (Ditto & Lopez, 1992; Koehler, 1993). The result is anchoring with insufficient adjustment to contrary information.

In this paper we report the results of two experiments showing that citizens are prone to overly accommodate supportive evidence while dismissing out of hand evidence that challenges their prior attitudes. On reading a balanced set of pro and con arguments about affirmative action or gun control, we find that rather than moderating or simply maintaining their original attitudes, citizens, especially those who feel most strongly about the issue and are the most sophisticated, strengthen their attitudes in ways not warranted by the evidence.
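To make the normative benchmark above concrete, the two-step updating process can be written in standard Bayesian odds form; this is a textbook formalization in line with the Evans & Over discussion, not notation taken from this paper:

\[
\frac{P(H \mid e)}{P(\neg H \mid e)}
\;=\;
\underbrace{\frac{P(e \mid H)}{P(e \mid \neg H)}}_{\text{likelihood ratio}}
\;\times\;
\underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
\]

The independence requirement is that the likelihood ratio be assessed without reference to the prior odds. Motivated skepticism violates precisely this: the prior leaks into the middle term, inflating it for congruent evidence and deflating it for incongruent evidence, which produces the anchoring with insufficient adjustment noted above.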
A Theory of Motivated Political Reasoning

Our starting premise (following Kunda, 1987, 1990) is that all reasoning is motivated. While citizens are always constrained to some degree to be accurate, they are typically unable to control their preconceptions, even when encouraged to be objective. This tension between the drives for accuracy and belief perseverance underlies all human reasoning. Keeping things simple and focusing on reasoning about things political, citizens are goal-oriented (Chaiken & Trope, 1999). Their motives fall into two broad categories:

1. Accuracy goals, which motivate them to seek out and carefully consider relevant evidence so as to reach a correct or otherwise best conclusion (Baumeister & Newman, 1994; Fiske & Taylor, 1991), and

2. Partisan goals, which motivate them to apply their reasoning powers in defense of a prior, specific conclusion (Kruglanski & Webster, 1996).

Apart from the ideal worlds of philosophy and fiction, neither of these goals ever operates entirely independently of the other, and each ranges from weak to strong. At one pole of what is surely a motivational continuum is the prototypic "rational decision maker" (someone no doubt like you and me) who, on confronting new information, is motivated to reach the right conclusion by conducting a thorough search and balanced evaluation of the evidence at hand. "Partisan reasoning," by contrast, occurs when citizens evaluate evidence in ways that allow them to maintain or even bolster their attitudes in the face of contradictory evidence. The critical questions concern when partisan biases will overwhelm the objective quality of the evidence and why many a seemingly good rationalist turns into a motivated skeptic when evaluating political candidates and issues.
Three psychological mechanisms underlie our theory of motivated political reasoning (Lodge & Taber, 2000, forthcoming; Taber, Lodge, & Glathar, 2001). First, the hot cognition hypothesis posits that all social concepts that have been evaluated in the past become "affectively charged," positively or negatively tagged, with the affective charge linked directly to the concept in memory (Bargh, 1997; Fazio, Sanbonmatsu, Powell, & Kardes, 1986; Fiske & Neuberg, 1990). Accordingly, the sociopolitical world is characterized by affect-laden beliefs, what Abelson (1963) calls "hot cognitions." Our second subtheory, on-line processing, claims that the evaluative affect attached to concepts in memory is updated spontaneously upon exposure to new information about the memory object (Lodge, McGraw, & Stroh, 1989; Lodge & Stroh, 1993). When new information is before your eyes, so to speak, you spontaneously update your impression of the object; in the normative Bayesian version, by incrementing for positive information and decrementing for negative information (Anderson & Hubert, 1963; Lodge, Steenbergen, & Brau, 1995). Finally, our model asserts the primacy of affect, in the sense that affect is faster and earlier (in both cognitive and evolutionary time) than cold cognition (Zajonc, 1980, 1984). Neurophysiological evidence suggests that the "affect system" (LeDoux, 1994, 1996; Damasio, 1994, 2002) forms a "quick and dirty" pathway in the service of approach-avoidance behavioral responses. Automatic affective responses come to mind quickly and spontaneously, in all likelihood entering the evaluation process moments before any cognitive considerations, thereby signaling the affective coloration of the object (e.g., Bassili & Roy, 1998; Crites, Cacioppo, Gardner, & Berntson, 1995; Lavine, Thomsen, Zanna, & Borgida, 1998; Marcus, 1988, 2000; Rahn, 2000; Lodge & Taber, 2000, forthcoming).

The problem for normative theory is that one's prior attitudes can easily and unduly direct the collection, comprehension, interpretation, and evaluation of evidence in ways that bias
judgments. If our theory holds true, at the moment of recognizing an object one's affective tally is automatically called up, triggering a series of largely nonconscious processes that drive the interpretation, comprehension, and evaluation of the evidence (Bargh, 1997; Ditto & Lopez, 1992; Edwards & Smith, 1996; Kunda, 1990).

Surprisingly, given the widespread acceptance of selective attention, exposure, and judgment processes throughout the social sciences, the empirical evidence from social psychology is far more mixed and qualified than is often believed. The empirical status of selective attention and, in particular, selective exposure can best be characterized as uncertain (Abelson, Aronson, McGuire, Newcomb, Rosenberg, & Tannenbaum, 1968; Eagly & Chaiken, 1993, 1998; Freedman & Sears, 1965; Frey, 1986; Greenwald, Banaji, Rudman, Farnham, Nosek, & Mellott, 2002; Kunda, 1990; Lord, 1992; Pomerantz, Chaiken, & Tordesillas, 1995; Wicklund & Brehm, 1976). Certainly one goal of the work we report here is to test experimentally the various selectivity hypotheses within the context of political information processing, though it is important to keep in mind that, unlike much of the work in psychology motivated by dissonance theory, we explain selective biases as the product of automatic hot cognition.

Selective information processes are particularly important because of their impact on subsequent attitudes and behavior and because of their implications for the distribution of aggregate public opinion (Zaller, 1992). Theoretically, we should expect attitude polarization: those holding strong prior attitudes become attitudinally more extreme on reading pro and con arguments because they assimilate congruent evidence uncritically but vigorously counter-argue incongruent evidence (Ditto & Lopez, 1992; Rucker & Petty, 2004). Unfortunately, the empirical pedigree of this classic expectation is even more dubious than that of the various selectivity hypotheses. The most cited support for attitude polarization comes from the 1979 Lord, Ross,
and Lepper study of attitudes toward the death penalty, but even this evidence is unconvincing because it is based on subjective rather than direct measures of polarization. Rather than comparing t1 and t2 measures of attitudes, Lord et al. asked subjects to report subjectively whether their attitudes had become more extreme after evaluating pro and con evidence on the efficacy of capital punishment. Moreover, numerous attempts to replicate polarization using direct t1 and t2 measures of social and political attitudes have failed (e.g., Kuhn & Lao, 1996; Miller, McHoskey, Bane, & Dowd, 1993; Pomerantz, Chaiken, & Tordesillas, 1995).[1]

Footnote 1: Rucker and Petty (2004) have recently found polarization of attitude certainty as a result of counter-arguing, but notably they did not find evidence of more extreme attitudes.

We believe that attitude polarization has been elusive in psychological research for at least two reasons. First, we suspect that the arguments and evidence used in many of these studies failed to arouse sufficient partisan motivation to induce much biased processing. Since most of the work in the cognitive dissonance tradition did not consider the strength of prior affect to be critical, little effort was made to create stimuli that would elicit strong affective responses. Some research, for example, relied on syllogistic arguments that are hard to understand (e.g., Oakhill & Johnson-Laird, 1985); other research used oversimplified policy statements comprising a single, stylized premise and conclusion. For example (Edwards & Smith, 1996):

PREMISE: Implementing the death penalty means there is a chance that innocent people will be sentenced to death.
CONCLUSION: Therefore, the death penalty should be abolished.

While the conclusion may in some sense follow from the premise, this type of policy argument is not particularly engaging, and hence should be less likely to trigger a defensive response. In our theory, selective biases and polarization are triggered by an initial (and uncontrolled) affective response. That is, motivated reasoning in our view is the result of hot cognition; by contrast, most of the work on selectivity and polarization in social psychology uses rather cold arguments and rests on theories of cold cognition (most commonly, dissonance theory). In our motivated reasoning experiments, we use statements and arguments taken directly from political interest groups, which are far more contentious and more in line with contemporary political discourse (Ailes, 1995; Ansolabehere & Iyengar, 1995); these arguments often generate strong affective responses (see Figure 1, below, for an example argument).

The second and more difficult problem for those seeking to find attitude polarization is the weak measurement of attitude change and the severe scale constraints that ensue. Researchers have typically (e.g., Edwards & Smith, 1996) relied on a single item, presented pre- and post-task, to measure attitude extremity and change. The problem, in addition to the weak reliability of a single item, is that while the theory holds that those with the most extreme attitudes are the most prone to become even more extreme at t2, detecting any such change is thwarted by the upper and lower bounds of the scale and by regression to the mean. We employ a six-item additive scale to measure attitudes at t1 and t2, which improves measurement reliability and reduces the number of respondents at or near the scale limits at t1 (a direct polarization test of this form is sketched below, following the hypotheses).

Based on our theory of affect-driven motivated reasoning, we posit three mechanisms of partisan or biased processing:

• Hypothesis 1: a prior attitude effect, whereby people who feel strongly about an issue, even when encouraged to be objective and leave their preferences aside, will evaluate supportive arguments as stronger and more compelling than contrary arguments;

• Hypothesis 2: a disconfirmation bias, such that people will spend more time and cognitive resources denigrating and counter-arguing attitudinally incongruent than congruent arguments; and
• Hypothesis 3: a confirmation bias, such that when free to choose what information they will attend to, people will seek out confirming over disconfirming arguments.

Because each of these mechanisms deposits more supporting than repudiating evidence in mind, we predict:

• Hypothesis 4: attitude polarization, whereby attitudes will become more extreme, even when people have been exposed to a balanced set of pro and con arguments.

At first glance, our theory might suggest we are arguing that people are closed-minded, consciously deceiving themselves to preserve their prior beliefs. On the contrary, a key argument we make (with supporting evidence in Lodge & Taber, forthcoming) is that people are largely unaware of the power of their priors. It is not that they simply lie to themselves. Rather, they try hard to be fair-minded, or at least to preserve the "illusion of objectivity" (Pyszczynski & Greenberg, 1987), but they are frequently unable to do so. On the other hand, as the persuasion literature clearly shows (Petty & Wegener, 1998) and as attested to in the study of voting behavior (Aldrich, Sullivan, & Borgida, 1989; Rabinowitz & MacDonald, 1989), even those committed to their positions can be persuaded by strong and credible counter-evidence (Festinger, 1957). But the research we report suggests that, once attitudes have become crystallized, persuasion is difficult. We try our best to appear reasonable, to ourselves as well as to others, but as motivated skeptics we frequently fall unsuspecting into the "mind traps" posited in Hypotheses 1-3. This asymmetrical skepticism, reflected in the type of thoughts that come to mind as we read pro and con arguments, deposits in mind all the evidence needed to justify and bolster our priors with a clear conscience (Ditto, Scepansky, Munro, Apanovitch, & Lockhart, 1998).
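Returning to the measurement point above, a direct (rather than subjective) polarization test is simple to state concretely. The sketch below, in Python with hypothetical scores and our own variable names (pos_t1, pos_t2), not the authors' analysis code, folds a [0,1] position measure at the neutral midpoint and asks whether extremity grows from t1 to t2:

```python
import numpy as np
from scipy import stats

def extremity(position):
    """Fold a [0, 1] position score at the neutral midpoint (0.5) so that
    higher values mean a more extreme attitude, regardless of side."""
    return 2 * np.abs(np.asarray(position, dtype=float) - 0.5)

# Hypothetical pre/post position scores (0 = strongly con, 1 = strongly pro).
pos_t1 = np.array([0.10, 0.85, 0.50, 0.30, 0.95, 0.40])
pos_t2 = np.array([0.05, 0.90, 0.55, 0.20, 0.99, 0.35])

# Direct polarization test: did extremity increase from t1 to t2?
change = extremity(pos_t2) - extremity(pos_t1)
t_stat, p_val = stats.ttest_rel(extremity(pos_t2), extremity(pos_t1))
print(f"mean change in extremity: {change.mean():+.3f} "
      f"(paired t = {t_stat:.2f}, p = {p_val:.3f})")
```

Folding at the midpoint matches the procedure described in footnote 3 below; the paired comparison is the "direct t1 and t2 measures" test that the subjective self-reports in Lord et al. do not provide.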
Being a motivated reasoner takes effort (Lavine, Borgida, & Sullivan, 2000; Pomerantz, Chaiken, & Tordesillas, 1995); hence we expect Hypotheses 1-4 to be conditional on the strength of one's prior attitude (motive) and on one's level of political sophistication (opportunity):

• Hypothesis 5: an attitude strength effect, such that those citizens voicing the strongest policy attitudes will be most prone to motivated skepticism; and

• Hypothesis 6: a sophistication effect, such that the politically knowledgeable, because they possess greater ammunition with which to counter-argue incongruent facts, figures, and arguments, will be more susceptible to motivated bias than will unsophisticates.

Of course, because people are arguing with themselves, the fight is fixed. An unsophisticated person lacks the cognitive resources to counter-argue and is therefore as likely to stand pat as to be buffeted first by one side and then by the other (Zaller, 1992; Zaller & Feldman, 1992), and by both weak and strong arguments (Cobb & Kuklinski, 1997; Petty, Cacioppo, & Goldman, 1981). Should the attitude strength and sophistication hypotheses be supported, we further expect that those who feel the strongest and know the most will also show the strongest attitude polarization.

Experiments on the Mechanisms of Biased Reasoning

Two experiments were carried out to test these six hypotheses. Participants (Ps) were recruited from introductory political science courses at Stony Brook University. Their participation, for which they received course credit, consisted of a single session lasting less than one hour (Study 1: N=126, 59 male, 70 white, 64 Democrat, 34 Republican; Study 2: N=136, 68 male, 64 white, 61 Democrat, 21 Republican). Since the two experiments share the same basic design, differing in but one manipulation, we describe them together.

On entering the laboratory, Ps were seated individually at computers in separate experimental rooms and instructed that they would be participating in a study of public opinion.
Their first task was to evaluate a number of contemporary political issues, among them a battery of items tapping their attitudes on either affirmative action or gun control (with the sample split into two conditions by random assignment). These items, presented both before and after the experimental task, serve as our basic measures of prior and posterior attitudes. For both affirmative action and gun control, the attitude measures included four items designed to measure attitude strength (recorded on 100-point sliding response scales) and six items that measure attitude position (9-point agree/disagree Likert items; see Appendix for the items). Both variables were constructed by summing the items (recoded for direction) and rescaling to [0,1], with responses below the midpoint indicating "weak" or "con," respectively.[2] In keeping with prior research (for an overview, see Petty & Krosnick, 1995), strength and position are independent attitudinal dimensions, such that some respondents took extreme positions on the issues without feeling strongly about those positions (and, conversely, some moderates rode the fence with conviction).[3]

Footnote 2: Both scales are reliable. The attitude extremity scale produced the following standardized item alphas, with subscripts indicating prior (t1) or posterior (t2) measurement: for affirmative action in Study 1, α1 = .80 and α2 = .87; for gun control in Study 1, α1 = .75 and α2 = .72; for affirmative action in Study 2, α1 = .82 and α2 = .93; for gun control in Study 2, α1 = .77 and α2 = .89. The comparable alphas for the attitude strength scale were: for affirmative action in Study 1, α1 = .90 and α2 = .92; for gun control in Study 1, α1 = .91 and α2 = .94; for affirmative action in Study 2, α1 = .93 and α2 = .93; for gun control in Study 2, α1 = .91 and α2 = .90.

Footnote 3: The strongest correlation obtained between attitude strength and position (with position folded at the neutral point so that higher values indicate more extreme attitudes) was for gun control in Study 2 (r = .20).
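In outline, the scale construction just described, and the standardized item alpha reported in footnote 2, can be computed as follows. This is a sketch under assumed inputs (a respondents-by-items response matrix and the 1-9 Likert range), not the authors' code:

```python
import numpy as np

def build_scale(items, reverse_coded=(), lo=1, hi=9):
    """Sum a respondents-by-items matrix of Likert responses, recoding
    reverse-keyed items for direction, and rescale the total to [0, 1]
    (values below .5 indicating 'weak' or 'con')."""
    items = np.asarray(items, dtype=float).copy()
    for j in reverse_coded:                       # flip reverse-keyed items
        items[:, j] = hi + lo - items[:, j]
    total = items.sum(axis=1)
    k = items.shape[1]
    return (total - k * lo) / (k * (hi - lo))     # [k*lo, k*hi] -> [0, 1]

def standardized_alpha(items):
    """Standardized item alpha, k*rbar / (1 + (k-1)*rbar), where rbar is
    the mean inter-item correlation (the reliability form reported in
    footnote 2)."""
    r = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    k = r.shape[0]
    rbar = (r.sum() - k) / (k * (k - 1))          # mean off-diagonal correlation
    return k * rbar / (1 + (k - 1) * rbar)
```

The rescaling step keeps every scale on a common [0,1] metric, which is what allows the regression coefficients reported later to be read as effects across the full range of each variable.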
After completing the attitude battery for the first time, Ps were introduced to and given practice using an information board designed to track their search for pro or con information about affirmative action (or gun control in the other condition). They were instructed to view information in an even-handed way so that they could explain the issue to other students (such instructions have been found to enhance accuracy motivation, which works against our research hypotheses; see the Appendix for the exact wording of the instructions). Our information board presented a matrix of 16 hidden policy arguments, which Ps could view only by clicking on a button in the matrix (see Figure 1a). We can be sure that participants knew which arguments would favor and which would oppose the issue, since the four arguments in each row of the matrix were clearly attributed to a known source (in these studies, a pro interest group, a con interest group, and the two political parties), and because Ps were explicitly told each group's position on the issue as part of their instructions. Moreover, they were required to correctly place each group on the issues before they could open the information board (errors sent them back to the instructions page, where group positions were explained), and at any time while the information board was open Ps could remind themselves of the group positions by hovering over group names with the mouse pointer. Rows and columns were randomized; Ps viewed 8 arguments with no time limit, but could not view the same argument a second time; the computer recorded the order and viewing time for each argument selected.[4] This task provides our test for the confirmation bias: the prediction that people, especially those who feel the strongest and know the most, will seek out confirmatory evidence and avoid what they suspect might be disconfirming evidence. All Ps then completed the attitude battery a second time (so as to measure t1 to t2 attitude change).

Footnote 4: These restrictions were introduced to ensure that the information environments were comparable across participants. We did not want participants reading all the information on the board (and possibly working systematically from the upper left box); conversely, we did not want participants to opt out after reading a single argument.
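For concreteness, the core logic of the information board might be sketched as follows. The source labels and the randomly simulated "clicks" are our own stand-ins (the authors' software presumably handled real mouse events), so treat this as an illustration of the recorded quantities — selection order, source, and viewing time — rather than their implementation:

```python
import random
import time

# Assumed row labels; the real studies used a pro group, a con group,
# and the two parties, named per issue.
SOURCES = ["pro group", "con group", "Democratic Party", "Republican Party"]

def run_information_board(arguments, n_views=8):
    """arguments: dict mapping each source to its list of four argument
    texts. Rows (sources) and columns are presented in randomized order;
    exactly n_views cells are opened, none twice; the order and viewing
    time of every selection are logged."""
    rows = random.sample(SOURCES, k=len(SOURCES))                # randomize rows
    board = {s: random.sample(arguments[s], k=4) for s in rows}  # randomize columns
    opened, log = set(), []
    while len(log) < n_views:
        cell = (random.choice(rows), random.randrange(4))  # stand-in for a click
        if cell in opened:                                 # no second viewing
            continue
        opened.add(cell)
        start = time.monotonic()
        text = board[cell[0]][cell[1]]  # argument displayed here; a real
        time.sleep(0.01)                # board would wait for the reader
        log.append({"order": len(log) + 1, "source": cell[0],
                    "argument": text, "seconds": time.monotonic() - start})
    return log
```

The confirmation-bias test then reduces to counting how many of the eight selections came from attitudinally congruent sources, conditioned on attitude strength and sophistication.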
[Figure 1 about here]

To improve external validity as well as realism, the arguments used in our experiments were drawn from print and on-line publications of real issue-relevant interest groups (including the NRA, the NAACP, the Brady Anti-Handgun Coalition, and the platforms of the Republican and Democratic parties). To control for such alternative explanations of the prior attitude effect as the "argument length = strength" or "complexity = strength" heuristics (Petty & Cacioppo, 1981; Cobb & Kuklinski, 1997), the arguments were edited to have similar complexities (sentence length, average number of syllables, words per sentence, sentences per argument, reading level, and so forth; see Appendix) and were pre-tested on student samples.

A substantial set of demographic questions followed the information board task, including all the usual suspects: party identification, ideological self-placement, race, gender, etc., and, most important for our purposes, a 17-item general political knowledge scale (asking, e.g., "What proportion of Congress is needed to override a presidential veto?"). Our measure of political sophistication is the proportion of correct responses, which for many subsequent analyses we subject to a tertile split (so that we may contrast the top and bottom thirds of the sample).
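A minimal sketch of the sophistication measure and the tertile split; the variable names and the 0/1 coding of the 17 knowledge items are our assumptions:

```python
import numpy as np

def sophistication(answers):
    """Political sophistication as the proportion of the 17 knowledge
    items answered correctly (answers: respondents x 17 matrix, 0/1)."""
    return np.asarray(answers, dtype=float).mean(axis=1)

def tertile_split(scores):
    """Label each respondent's third of the distribution so the top and
    bottom thirds can be contrasted in later analyses."""
    lo, hi = np.quantile(scores, [1 / 3, 2 / 3])
    return np.where(scores <= lo, "low",
                    np.where(scores >= hi, "high", "middle"))
```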
The second part of the experiments, testing for a disconfirmation bias, began with a third administration of the attitude battery as described above, but with the issues flipped across conditions. That is, Ps who rated affirmative action for the information board task now rated gun control, and vice versa. After completing the attitude pre-test measures, Ps were asked to rate the strength of eight arguments, four pro and four con (presented sequentially in random order; see Figure 1b for a sample strength rating box). Again, Ps were instructed to be even-handed and told that they would be asked to explain the controversy to other students (to maximize accuracy goals). This argument strength rating task was followed by the post-test attitude battery and a recognition memory test. In addition (this is the only significant difference between Studies 1 and 2), the Ps in Study 2 completed a thoughts-listing task for 2 pro and 2 con affirmative action or gun control arguments. That is, immediately after rating each of these 4 arguments, participants in Study 2 were asked to list the thoughts they had had while rating the strength of that argument.

Results

Judgments of argument strength. Our first hypothesis, the prior attitude effect, points to the difficulty people have in putting aside their prior feelings and prejudices when evaluating evidence, even when pro and con arguments have been presented to them in a balanced manner, and even when, as here, Ps are instructed repeatedly to "set their feelings aside," to "rate the arguments fairly," and to be as "objective as possible." As an initial test of the prior attitude effect (Hypothesis 1), we compare the average strength ratings for pro-attitudinal and counter-attitudinal arguments, expecting Ps to rate congruent arguments as stronger than incongruent arguments. Arguments were rated on a [0,100] scale, with larger values denoting stronger ratings.

[Figure 2 about here]

Figure 2 displays the results in sets of four bars, broken down by study, issue, sophistication, and strength of prior attitudes. Dark bars represent average strength ratings for pro arguments, light bars for con arguments; the first pair of bars shows the responses of proponents of the issue, and the second pair the responses of opponents. The prior attitude bias is
indicated wherever we see higher ratings for congruent than for incongruent arguments. In other words, we expect proponents to rate pro arguments more highly than they rate con arguments (with the opposite pattern for opponents). Clearly, the prior attitude effect is systematic and robust among sophisticates and those who feel the strongest, despite our best efforts to motivate even-handedness (and despite the fact that across these samples and prior pretest samples, the 8 arguments for each issue have statistically equivalent average strength ratings). By contrast with the most knowledgeable and most "crystallized" thirds of our sample, the least sophisticated respondents and those with the weakest prior attitudes on these issues show little or no prior attitude effect.

[Table 1 about here]

Table 1 reports regression analyses of the impact of prior attitudes on argument strength ratings, with contrasts for the least and most sophisticated thirds of our samples and for those with the weakest and strongest priors.[5] Each P's overall rating of the strength of the arguments (our dependent variable) was computed as the sum of ratings of the pro arguments minus the sum of ratings of the con arguments, recoded to [0,1]. To test for a prior attitude bias, we regressed these argument strength ratings on attitude extremity at time 1 (as measured by the 6-item Likert scale described above, recoded to [0,1]). Significant, positive coefficients support the hypothesis: Ps

Footnote 5: Though we believe the display of contrasts in Table 1 presents our results most transparently, the proper tests are interactive. All of the contrasts for affirmative action shown in Table 1, when run as proper interaction models, yield significant results for the interaction term. The interactions for gun control are (obviously) not significant for Study 1, where both sophisticates and non-sophisticates were biased; the sophistication interaction is marginally significant for gun control in Study 2 (p < .1), but the attitude strength interaction is not.
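In outline, the dependent variable just described and the interaction test mentioned in footnote 5 look like the following sketch. The data are synthetic and the column names are ours; nothing here reproduces the actual Table 1 estimates:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def strength_dv(pro, con):
    """Sum of pro-argument ratings minus sum of con-argument ratings
    (each argument rated 0-100), recoded to [0, 1]; .5 means the pro
    and con sets were rated as equally strong."""
    pro, con = np.asarray(pro, float), np.asarray(con, float)
    k = pro.shape[1]                          # 4 pro and 4 con arguments
    diff = pro.sum(axis=1) - con.sum(axis=1)  # range [-100k, +100k]
    return (diff + 100 * k) / (200 * k)

# Synthetic data for illustration only. In the real analysis,
# df["strength"] would come from strength_dv(pro_ratings, con_ratings).
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({"extremity": rng.uniform(0, 1, n),
                   "soph": rng.uniform(0, 1, n)})
df["strength"] = (0.5 + 0.2 * (df.extremity - 0.5) * df.soph
                  + rng.normal(0, 0.05, n))

# A positive coefficient on extremity is the prior attitude effect; the
# extremity:soph interaction is the "proper test" noted in footnote 5.
model = smf.ols("strength ~ extremity * soph", data=df).fit()
print(model.params)
```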