Von Fehlinformationen lernen [Learning from misinformation]

  • Dan, V.

Abstract

Numerous actors view with concern the spread of false or misleading messages online, including conspiracy theories, rumors, and fake news. Corrections are issued to set the record straight for those exposed to misinformation and to dispel the misbeliefs it has created. However, corrections do not always realize their full potential and thus fail to provide sufficient clarification. This article aims to help increase the effectiveness of corrections and makes an unconventional proposal to that end: corrections should harness the same psychological mechanisms as the misinformation they seek to correct. To identify these mechanisms, I extract from previous studies those properties of misinformation that, according to the current state of research, explain the great attention misinformation attracts, its memorability, its high perceived truthfulness, and its rapid spread. The subsequent juxtaposition with the characteristics of corrections reveals numerous differences from misinformation with regard to message design, communicators or sources, and distribution channels. For each of these differences, I weigh how corrections could learn from misinformation to increase their effectiveness, and to what extent doing so would be defensible from a normative point of view. This leads to six concrete suggestions for the design and distribution of corrections.
The pervasiveness of online misinformation (that is, false or misleading messages presented as true) has prompted intensified efforts to correct it. By providing alternative true explanations for misunderstood phenomena, corrections seek to enable those who were misled to update their faulty mental model. In the updated model, the misinformation (e.g., a false causal claim such as “if A, then B”) is supplemented with a negation label (“false”) and the correct piece of information (“if AA, then B”). When asked to judge something related to the topic at hand, individuals holding an updated model should be able to retrieve all stored claims from memory (“if AA [not ‘if A’; that is false], then B”). Yet corrections do not always succeed in prompting such an update. Instead, they may fail to reach those who were misled; reach but fail to convince them (misbelief persistence); or reach and convince audiences but prove unable to promote an update of the mental model. Failure to reach the desired outcome is not solely a function of how corrections are made and distributed; factors such as audiences’ cognitive ability and motivation also play a role. However, because these individual-level factors lie outside the sphere of influence of those issuing corrections, we focus on identifying areas for improvement that relate to the corrections themselves. To this end, we suggest that corrections should use the same psychological mechanisms as the misinformation they seek to correct. To identify the relevant psychological mechanisms, previous studies on misinformation were closely examined. We collected evidence on precisely which properties of misinformation are associated with how much attention it attracts, its memorability, its perceived credibility, and the speed at which it spreads.
This revealed the following five properties, with the appeal of each explained by the seven psychological mechanisms given in parentheses: (1) negative valence (negativity bias), (2) provision of simple explanations of complex phenomena (preference for complete albeit faulty mental models, transportation), (3) use of visual “evidence” for the claims made (seeing is believing/realism heuristic/truthiness effect), (4) viral distribution in social media (illusory truth effect), and (5) compatibility with the values and norms of the audience (motivated reasoning, confirmation bias [believing is seeing]).
Although corrections deal with the same issues as the claims they seek to correct, they usually have less news value than the misinformation does. Because neutrality is held in high esteem, and rightly so, corrections are often (1) sober and complex, (2) purely text-based, (3) issued by official/expert sources, (4) distributed in highbrow channels, (5) focused on the rebuttal of the false/misleading claim rather than on telling an alternative true story, and (6) lacking references to widespread values and norms. Juxtaposing these characteristics with those of misinformation in the previous paragraph reveals considerable differences regarding message design and distribution. We carefully examine which of these differences should be leveled out to improve the impact of corrections, as well as the extent to which and precisely how this could be accomplished. We recognize the unconventional nature of suggesting that one should learn from one’s antagonist, and we acknowledge that actors issuing corrections may have qualms about relying on a design, sources, and distribution channels similar to those of the misinformation they seek to correct. In response to this concern, we clarify the scientific basis of each suggestion and offer specific examples of how each criterion can be met without jeopardizing actors’ reputations.
The following recommendations are offered: We suggest making corrections (1) more captivating by featuring people whose misbeliefs could be corrected and addressing how they felt upon learning that they had been misled (e.g., anger, fear, disappointment). Furthermore, we propose that (2) a narrative format and easy-to-understand language be used whenever possible. A simple rebuttal does not suffice; misinformation should be countered with an accessible story based on correct information. In addition, it seems advisable that alternative true explanations (3) be supported with still or moving images. These can be graphics indicating the truth value of a statement (e.g., politifact.com’s Truth-O-Meter), still or moving images showing key elements of the true information, or, if available, visuals providing evidence for the correct alternative explanation. Moreover, we recommend that corrections be (4) distributed in the same channels as the misinformation to be corrected, whether on YouTube, Instagram, or Twitter. In this way, corrections can benefit from two characteristics of the internet: the longevity of messages online (the internet never forgets) and their rapid dissemination. We also advise that (5) sources similar to those distributing the misinformation (e.g., regular people, influencers) be used in addition to expert sources. This two-tier strategy should draw more attention to the correction and increase its perceived truthfulness.
After all, the perceived credibility of a source does not result from its expertise alone but also from its perceived honesty, integrity, and impartiality. Non-expert sources should make corrections more relatable. Finally, we recommend that corrections (6) address values and norms and include self-affirming elements. This is because a person may reject corrections that entail positions incompatible with his or her value system. While some investigations of each of these criteria have been published, none of the studies currently available has attempted to address all of them at once. In this way, this paper outlines an agenda for future research focused on increasing the impact of corrections.

Citation (APA)

Dan, V. (2021). Von Fehlinformationen lernen. Publizistik, 66(2), 277–294. https://doi.org/10.1007/s11616-021-00667-y
