Journal Club
Our Research Team meets regularly to discuss relevant literature and review its application to our work.
If you would like to recommend a paper for discussion related to human rights, deepfakes, user-generated evidence or open source evidence, contact us here.
Fact-checker warning labels are effective even for those who distrust fact-checkers
Meeting
27th September 2024
Authors
Cameron Martel and David G. Rand
Nature Human Behaviour (2024). https://doi.org/10.1038/s41562-024-01973-x
Summary
Warning labels from professional fact-checkers are one of the most widely used interventions against online misinformation. But are fact-checker warning labels effective for those who distrust fact-checkers? Here, in a first correlational study (N = 1,000), we validate a measure of trust in fact-checkers. Next, we conduct meta-analyses across 21 experiments (total N = 14,133) in which participants evaluated true and false news posts and were randomized to either see no warning labels or to see warning labels on a high proportion of the false posts. Warning labels were on average effective at reducing belief in (27.6% reduction), and sharing of (24.7% reduction), false headlines. While warning effects were smaller for participants with less trust in fact-checkers, warning labels nonetheless significantly reduced belief in (12.9% reduction), and sharing of (16.7% reduction), false news even for those most distrusting of fact-checkers. These results suggest that fact-checker warning labels are a broadly effective tool for combatting misinformation.
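The headline percentages come from pooling label effects across the 21 experiments. As a rough illustration of how estimates from several experiments can be combined, here is a plain inverse-variance (fixed-effect) pooling sketch; the effect sizes and standard errors are made-up numbers, not the authors' data or their actual meta-analytic model:

```python
# Hypothetical per-experiment effects of warning labels on belief in false
# headlines, with standard errors -- illustrative values only, not the
# article's data or its meta-analytic model.
effects = [-0.30, -0.22, -0.35, -0.18]
std_errors = [0.08, 0.10, 0.07, 0.12]

weights = [1 / se**2 for se in std_errors]  # precision (inverse-variance) weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
```

More precise experiments get more weight, and the pooled estimate is what summary figures such as the reported average reductions are ultimately based on.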
A systematic account of probabilistic fallacies in legal fact-finding
Meeting
3rd May 2024
Authors
Christian Dahlman
The International Journal of Evidence & Proof, 28(1), 45-64. https://doi.org/10.1177/13657127231209019
Summary
Evidence scholars have observed probabilistic fallacies in legal fact-finding and given them names since the 1980s (for example ‘Prosecutor's Fallacy’ and ‘Defense Attorney's Fallacy’). This has produced a rather unorganised list of over a dozen different probabilistic fallacies. In this article, the author proposes a systematic account where the observed probabilistic fallacies are organised in categories. Hierarchical relations between probabilistic fallacies are highlighted, and some fallacies are renamed to reflect the category they belong to and their relation to other fallacies in that category. All fallacies are precisely defined and illustrated with examples from real cases where they were committed by fact-finders. The result is a list of 12 probabilistic fallacies organised into 7 categories.
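The best known of these, the Prosecutor's Fallacy, is commonly described as transposing a conditional: the probability of the evidence given innocence is treated as if it were the probability of innocence given the evidence. The minimal Bayes'-rule sketch below shows how far apart the two quantities can lie; the match probability and suspect pool are illustrative assumptions, not figures from the article:

```python
# Illustrative assumptions (not from the article): a forensic match with a
# 1-in-10,000 random match probability, and a pool of 100,000 people who
# could have left the trace, exactly one of whom is the true source.
p_match_given_innocent = 1 / 10_000   # P(E | innocent): what the expert reports
p_match_given_guilty = 1.0            # the true source always matches
prior_guilty = 1 / 100_000            # before the match, any member of the pool could be the source
prior_innocent = 1 - prior_guilty

# Bayes' rule: P(innocent | E) = P(E | innocent) * P(innocent) / P(E)
p_match = (p_match_given_innocent * prior_innocent
           + p_match_given_guilty * prior_guilty)
p_innocent_given_match = p_match_given_innocent * prior_innocent / p_match

print(f"P(match | innocent) = {p_match_given_innocent:.2%}")   # 0.01%
print(f"P(innocent | match) = {p_innocent_given_match:.1%}")   # ~90.9%
```

With these numbers roughly ten innocent members of the pool would be expected to match alongside the one true source, so a reported match probability of 0.01% still leaves innocence about ten times more likely than guilt; equating the two conditionals is exactly the transposition the fallacy names.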
Addressing Disinformation Online
Meeting
15th March 2024
Authors
Max Mawby and team at Thinks Insight and Strategy
Study reported by the i newspaper and Gorilla.sc
Summary
"There is also evidence from a study into social media suggesting many British voters could be vulnerable to misinformation campaigns [...] [Findings] suggest that many voters now believe there are more underhand injustices in our democratic process. It reveals a high potential vulnerability among voters to AI-produced 'deepfakes' on social media. And the study also includes startling results from a poll of British adults which found that nearly a third (30 per cent) thought that UK elections are more likely to be 'manipulated' or 'rigged' than 'free and fair'."
(This summary was taken from the report by the i newspaper.)
The Liar’s Dividend: Can Politicians Use Deepfakes and Fake News to Evade Accountability?
Meeting
19th January 2024
Authors
Kaylyn Jackson Schiff, Daniel S. Schiff, and Natalia Bueno
Paper available at SocArXiv
Abstract
This study addresses the phenomenon of misinformation about misinformation, or politicians “crying wolf” over fake news. Strategic and false allegations that stories are fake news or deepfakes may benefit politicians by helping them maintain support in the face of information damaging to their reputation. We posit that this concept, known as the “liar’s dividend,” works through two theoretical channels: by invoking informational uncertainty or by encouraging oppositional rallying of core supporters. To evaluate the implications of the liar’s dividend, we use three survey experiments detailing hypothetical politician responses to video or text news stories depicting real politician scandals. We find that allegations of misinformation raise politician support, while potentially undermining trust in media. Moreover, these false claims produce greater dividends for politicians than longstanding alternative responses to scandal, such as remaining silent or apologizing. Finally, false allegations of misinformation pay off less for videos (“deepfakes”) than text stories (“fake news”).
Deepfake Detection by Human Crowds, Machines, and Machine-informed Crowds
Meeting
8th December 2023
Authors
Matthew Groh, Ziv Epstein, Chaz Firestone, and Rosalind Picard
Proceedings of the National Academy of Sciences, Vol. 119(1), doi: 10.1073/pnas.2110013119
Abstract
The recent emergence of machine-manipulated media raises an important societal question: How can we know whether a video that we watch is real or fake? In two online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary human observers with the leading computer vision deepfake detection model and find them similarly accurate, while making different kinds of mistakes. Together, participants with access to the model’s prediction are more accurate than either alone, but inaccurate model predictions often decrease participants’ accuracy. To probe the relative strengths and weaknesses of humans and machines as detectors of deepfakes, we examine human and machine performance across video-level features, and we evaluate the impact of preregistered randomized interventions on deepfake detection. We find that manipulations designed to disrupt visual processing of faces hinder human participants’ performance while mostly not affecting the model’s performance, suggesting a role for specialized cognitive capacities in explaining human deepfake detection performance.
Perception and deception: Exploring individual responses to deepfakes across different modalities
Meeting
13th October 2023
Authors
Saifuddin Ahmed and Hui Wen Chua
Heliyon, Vol. 9(10), doi: 10.1016/j.heliyon.2023.e20383
Abstract
This study is one of the first to investigate the relationship between modalities and individuals’ tendencies to believe and share different forms of deepfakes (also deep fakes). Using an online survey experiment conducted in the US, participants were randomly assigned to one of three disinformation conditions: video deepfakes, audio deepfakes, and cheap fakes to test the effect of single modality against multimodality and how it affects individuals’ perceived claim accuracy and sharing intentions. In addition, the impact of cognitive ability on perceived claim accuracy and sharing intentions between conditions is also examined. The results suggest that individuals are likelier to perceive video deepfakes as more accurate than cheap fakes, but not audio deepfakes. Yet, individuals are more likely to share video deepfakes than cheap and audio deepfakes. We also found that individuals with high cognitive ability are less likely to perceive deepfakes as accurate or share them across formats. The findings emphasize that deepfakes are not monolithic, and associated modalities should be considered when studying user engagement with deepfakes.
A Picture Paints a Thousand Lies? The Effects and Mechanisms of Multimodal Disinformation and Rebuttals Disseminated via Social Media
Meeting
28th April 2023
Authors
Michael Hameleers, Thomas E. Powell, Toni G.L.A. Van Der Meer, and Lieke Bos
Political Communication, Vol. 37(2), pp. 281-301
Abstract
Today’s fragmented and digital media environment may create a fertile breeding ground for the uncontrolled spread of disinformation. Although previous research has investigated the effects of misinformation and corrective efforts, we know too little about the role of visuals in disinformation and fact checking. Against this backdrop, we conducted an online experiment with a diverse sample of U.S. citizens (N = 1,404) to investigate the credibility of textual versus multimodal (text-plus-visual) disinformation, and the effects of textual and multimodal fact checkers in refuting disinformation on school shootings and refugees. Our findings indicate that, irrespective of the source, multimodal disinformation is considered slightly more credible than textual disinformation. Fact checkers can help to overcome the potential harmful consequences of disinformation. We also found that fact checkers can overcome partisan and attitudinal filters – which points to the relevance of fact checking as a journalistic discipline.
Scottish Jury Research: Findings from a Large-Scale Mock Jury Study
Meeting
2nd December 2022
Authors
Rachel Ormston, James Chalmers, Fiona Leverick, Vanessa Munro, Lorraine Murray
Scottish Government Social Research (2019)
Summary
This study is the largest of its kind ever undertaken in the UK, involving 64 mock juries and 969 individual participants. The research team staged jury deliberations between May and September 2018, in venues in central Glasgow and Edinburgh. Jurors were recruited to be broadly representative of the Scottish population aged 18-75 in terms of gender, age, education and working status. This meant that the mock juries were similar in demographic composition to the actual population eligible for jury service. In order to assess the effect of the Scottish jury system’s unique features on decision-making, juries varied in terms of the number of verdicts available to them (two or three), jury size (12 or 15) and the size of majority they were required to reach (simple majority or unanimity). Each jury watched a video of either a mock rape trial or a mock assault trial, lasting approximately one hour. Short clips from the two trials are available to watch online. Jurors completed a brief questionnaire recording their initial views on the verdict, before deliberating as a group for up to 90 minutes and returning a verdict (if the jury had been able to arrive at one). After returning their verdict, jurors completed a final questionnaire covering their beliefs about the not proven verdict and views about the deliberation process, as well as their final views on the verdict. The data generated from the study included: quantitative data from the questionnaires; quantitative and qualitative data from the deliberations (which were filmed, audio recorded and analysed by the researchers); and quantitative ‘metadata’ on each jury (e.g. length of deliberations, verdict returned, etc.).
Fooled twice: People cannot detect deepfakes but think they can
Meeting
14th October 2022
Authors
Nils C. Köbis, Barbora Doležalová, and Ivan Soraperra
iScience, Vol. 24(11), 103364
Abstract
Hyper-realistic manipulations of audio-visual content, i.e., deepfakes, present new challenges for establishing the veracity of online content. Research on the human impact of deepfakes remains sparse. In a pre-registered behavioral experiment (N = 210), we show that (1) people cannot reliably detect deepfakes and (2) neither raising awareness nor introducing financial incentives improves their detection accuracy. Zeroing in on the underlying cognitive processes, we find that (3) people are biased toward mistaking deepfakes as authentic videos (rather than vice versa) and (4) they overestimate their own detection abilities. Together, these results suggest that people adopt a “seeing-is-believing” heuristic for deepfake detection while being overconfident in their (low) detection abilities. The combination renders people particularly susceptible to be influenced by deepfake content.
Trust in User-Generated Information on Social Media during Crises: An Elaboration Likelihood Perspective
Meeting
15th September 2022
Authors
L. G. Pee and Jung Lee
Asia Pacific Journal of Information Systems, Vol. 26(1), pp. 1-21
Abstract
Social media are increasingly being used as a source of information during crises such as natural disasters and civil unrest. Nevertheless, there have been concerns about the quality and truthfulness of user-generated information. This study seeks to understand how users form trust in information on social media. Based on the elaboration likelihood model and the motivation, opportunity, and ability framework, this study proposes and empirically tests a model that identifies the information processing routes through which users develop trust, and the factors influencing the use of the routes. Findings from a survey of Twitter users seeking information about the Fukushima Daiichi nuclear crisis indicate that personal relevance and level of anxiety moderate individuals’ use of information processing routes. This study extends the theorization of trust in user-generated information. The findings also suggest practical approaches for managing social media during crises.
A Signal Detection Approach to Understanding the Identification of Fake News
Meeting
15th July 2022
Authors
Cédric Batailler, Skylar M. Brannon, Paul E. Teas and Bertram Gawronski
Perspectives on Psychological Science 2022, Vol. 17(1), pp. 78–98
Abstract
Researchers across many disciplines seek to understand how misinformation spreads with a view toward limiting its impact. One important question in this research is how people determine whether a given piece of news is real or fake. In the current article, we discuss the value of signal detection theory (SDT) in disentangling two distinct aspects in the identification of fake news: (a) ability to accurately distinguish between real news and fake news and (b) response biases to judge news as real or fake regardless of news veracity. The value of SDT for understanding the determinants of fake-news beliefs is illustrated with reanalyses of existing data sets, providing more nuanced insights into how partisan bias, cognitive reflection, and prior exposure influence the identification of fake news. Implications of SDT for the use of source-related information in the identification of fake news, interventions to improve people’s skills in detecting fake news, and the debunking of misinformation are discussed.
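In SDT terms, the two aspects are separated by computing a sensitivity index d′ and a response criterion c from the hit rate (judging real news to be real) and the false-alarm rate (judging fake news to be real). A minimal sketch of that decomposition follows; the helper function and the example rates are illustrative assumptions, not code or values from the article's reanalyses:

```python
from statistics import NormalDist

def signal_detection(hit_rate, false_alarm_rate):
    """Return (d_prime, criterion_c) from a hit rate and a false-alarm rate.

    Hit = judging a real headline to be real; false alarm = judging a fake
    headline to be real. d' indexes the ability to tell real from fake news;
    c indexes the overall bias toward answering "real" regardless of veracity
    (negative c = inclined to accept headlines as real).
    """
    z = NormalDist().inv_cdf  # probit (inverse standard-normal CDF)
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion_c = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion_c

# Illustrative rates only (not from the article): a participant who rates
# 80% of real headlines and 35% of fake headlines as real.
d_prime, c = signal_detection(0.80, 0.35)
print(f"d' = {d_prime:.2f}, c = {c:.2f}")  # d' = 1.23, c = -0.23
```

Under these illustrative rates the participant discriminates real from fake news moderately well (d′ ≈ 1.23) while also showing a slight bias toward accepting headlines as real (c ≈ −0.23), the two components that a single percent-correct score conflates and that the article's SDT reanalyses disentangle.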