
Journal Club

Our Research Team meets regularly to discuss relevant literature and review its application to our work.

If you would like to recommend a paper for discussion related to human rights, deepfakes, user-generated evidence or open source evidence, contact us here.

Perception and deception: Exploring individual responses to deepfakes across different modalities

13th October 2023


Saifuddin Ahmed and Hui Wen Chua

Heliyon, Vol. 9(10), doi: 10.1016/j.heliyon.2023.e20383


This study is one of the first to investigate the relationship between modalities and individuals' tendencies to believe and share different forms of deepfakes. Using an online survey experiment conducted in the US, participants were randomly assigned to one of three disinformation conditions: video deepfakes, audio deepfakes, and cheap fakes, to test the effect of single modality against multimodality and how it affects individuals' perceived claim accuracy and sharing intentions. In addition, the impact of cognitive ability on perceived claim accuracy and sharing intentions between conditions is also examined. The results suggest that individuals are likelier to perceive video deepfakes as more accurate than cheap fakes, but not audio deepfakes. Yet, individuals are more likely to share video deepfakes than cheap and audio deepfakes. We also found that individuals with high cognitive ability are less likely to perceive deepfakes as accurate or to share them across formats. The findings emphasize that deepfakes are not monolithic, and associated modalities should be considered when studying user engagement with deepfakes.

A Picture Paints a Thousand Lies? The Effects and Mechanisms of Multimodal Disinformation and Rebuttals Disseminated via Social Media

28th April 2023


Michael Hameleers, Thomas E. Powell, Toni G.L.A. van der Meer, and Lieke Bos

Political Communication, Vol. 37(2), pp. 281-301


Today’s fragmented and digital media environment may create a fertile breeding ground for the uncontrolled spread of disinformation. Although previous research has investigated the effects of misinformation and corrective efforts, we know too little about the role of visuals in disinformation and fact checking. Against this backdrop, we conducted an online experiment with a diverse sample of U.S. citizens (N = 1,404) to investigate the credibility of textual versus multimodal (text-plus-visual) disinformation, and the effects of textual and multimodal fact checkers in refuting disinformation on school shootings and refugees. Our findings indicate that, irrespective of the source, multimodal disinformation is considered slightly more credible than textual disinformation. Fact checkers can help to overcome the potential harmful consequences of disinformation. We also found that fact checkers can overcome partisan and attitudinal filters – which points to the relevance of fact checking as a journalistic discipline.

Scottish Jury Research: Findings from a Large-Scale Mock Jury Study

December 2022

This study is the largest of its kind ever undertaken in the UK, involving 64 mock juries and 969 individual participants. The research team staged jury deliberations between May and September 2018, in venues in central Glasgow and Edinburgh. Jurors were recruited to be broadly representative of the Scottish population aged 18-75 in terms of gender, age, education and working status. This meant that the mock juries were similar in demographic composition to the actual population eligible for jury service. In order to assess the effect of the Scottish jury system's unique features on decision-making, juries varied in terms of the number of verdicts available to them (two or three), jury size (12 or 15) and the size of majority they were required to reach (simple majority or unanimity). Each jury watched a video of either a mock rape trial or a mock assault trial, lasting approximately one hour. Short clips from the two trials are available to watch online. Jurors completed a brief questionnaire recording their initial views on the verdict, before deliberating as a group for up to 90 minutes and returning a verdict (if the jury had been able to arrive at one). After returning their verdict, jurors completed a final questionnaire covering their beliefs about the not proven verdict and views about the deliberation process, as well as their final views on the verdict. The data generated from the study included: quantitative data from the questionnaires; quantitative and qualitative data from the deliberations (which were filmed, audio recorded and analysed by the researchers); and quantitative 'metadata' on each jury (e.g. length of deliberations, verdict returned, etc.).

Fooled twice: People cannot detect deepfakes but think they can

October 2022


Nils C. Köbis, Barbora Doležalová and Ivan Soraperra

iScience, Vol. 24, 103364

Hyper-realistic manipulations of audio-visual content, i.e., deepfakes, present new challenges for establishing the veracity of online content. Research on the human impact of deepfakes remains sparse. In a pre-registered behavioral experiment (N = 210), we show that (1) people cannot reliably detect deepfakes and (2) neither raising awareness nor introducing financial incentives improves their detection accuracy. Zeroing in on the underlying cognitive processes, we find that (3) people are biased toward mistaking deepfakes as authentic videos (rather than vice versa) and (4) they overestimate their own detection abilities. Together, these results suggest that people adopt a "seeing-is-believing" heuristic for deepfake detection while being overconfident in their (low) detection abilities. The combination renders people particularly susceptible to being influenced by deepfake content.

Trust in User-Generated Information on Social Media during Crises: An Elaboration Likelihood Perspective

September 2022


L. G. Pee and Jung Lee

Asia Pacific Journal of Information Systems, Vol. 26(1), pp. 1-21


Social media are increasingly being used as a source of information during crises such as natural disasters and civil unrest. Nevertheless, there have been concerns about the quality and truthfulness of user-generated information. This study seeks to understand how users form trust in information on social media. Based on the elaboration likelihood model and the motivation, opportunity, and ability framework, this study proposes and empirically tests a model that identifies the information processing routes through which users develop trust, and the factors influencing the use of those routes. Findings from a survey of Twitter users seeking information about the Fukushima Daiichi nuclear crisis indicate that personal relevance and level of anxiety moderate individuals' use of information processing routes. This study extends the theorization of trust in user-generated information. The findings also suggest practical approaches for managing social media during crises.

A Signal Detection Approach to Understanding the Identification of Fake News


15th July 2022


Cédric Batailler, Skylar M. Brannon, Paul E. Teas and Bertram Gawronski

Perspectives on Psychological Science, Vol. 17(1), pp. 78-98


Researchers across many disciplines seek to understand how misinformation spreads with a view toward limiting its impact. One important question in this research is how people determine whether a given piece of news is real or fake. In the current article, we discuss the value of signal detection theory (SDT) in disentangling two distinct aspects in the identification of fake news: (a) ability to accurately distinguish between real news and fake news and (b) response biases to judge news as real or fake regardless of news veracity. The value of SDT for understanding the determinants of fake-news beliefs is illustrated with reanalyses of existing data sets, providing more nuanced insights into how partisan bias, cognitive reflection, and prior exposure influence the identification of fake news. Implications of SDT for the use of source-related information in the identification of fake news, interventions to improve people’s skills in detecting fake news, and the debunking of misinformation are discussed.
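The two aspects the abstract distinguishes correspond to the standard equal-variance SDT measures: sensitivity (d′), the ability to tell real from fake news, and criterion (c), the bias toward calling items "real" regardless of veracity. As a minimal illustrative sketch (the hit and false-alarm rates below are hypothetical, not taken from the paper), treating "judging real news as real" as a hit and "judging fake news as real" as a false alarm:

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, false_alarm_rate: float) -> tuple[float, float]:
    """Equal-variance signal detection measures.

    Hit: a real news item judged as real.
    False alarm: a fake news item judged as real.
    Returns (d_prime, criterion): d' indexes discrimination ability;
    c indexes response bias (negative = liberal bias toward "real").
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Two hypothetical readers who accept the same share of real news (80%)
# but differ in how often they also accept fake news:
d1, c1 = sdt_measures(hit_rate=0.80, false_alarm_rate=0.20)  # discerning
d2, c2 = sdt_measures(hit_rate=0.80, false_alarm_rate=0.60)  # truth-biased
```

The point of the decomposition is visible in the example: the second reader's lower d′ and negative c show that identical acceptance of real news can mask both poorer discrimination and a bias to judge everything as real, which raw accuracy alone would conflate.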
