
Trust in Evidence in an Era of Deepfakes

This blog post, written by TRUE PhD students Rebecca, Ruben, and Anne, is cross-posted from the Academy of Social Sciences blog.




In this piece, Rebecca Jenkins, Ruben Lamers James, and Anne Hausknecht (Swansea University) discuss the Trust in User-Generated Evidence (TRUE) project and set out policy recommendations for raising awareness of the potential impact that synthetic content, or ‘deepfakes’, can have on trust in real, authentic user-generated evidence used in prosecutions of human rights violations.


On 15 November 2023, a French court issued an arrest warrant against Syrian President Bashar al-Assad and three associates for complicity in war crimes and crimes against humanity over the use of chemical weapons against civilians. It is the latest in a series of recent cases using “user-generated evidence” (i.e., information captured by ordinary users through their personal digital devices) to pursue accountability for atrocity crimes. User-generated evidence is increasingly being used in prosecutions before the International Criminal Court, as well as in the domestic prosecution of international crimes in several countries around the world, including Finland, Germany, and Sweden.


Simultaneously, the public is increasingly confronted with deepfakes: hyper-realistic images, videos, or audio recordings created using machine learning technology. Deepfakes are only likely to become more advanced and harder to detect as the technology to generate them progresses. And the biggest societal danger posed by deepfakes might not be that faked footage will be believed, but that real footage will be dismissed as possibly fake. In other words, people risk dismissing real evidence as fake while accepting fake evidence as true.


To help us understand the impact of deepfakes on trust in user-generated evidence, the TRUE project combines research from legal, psychological, and linguistic perspectives. For example, we have recorded a mock trial – presided over by Sir Howard Morrison KC, a former judge of the International Criminal Court – which will be shown to research participants acting as mock jurors. Using applied linguistics, we will analyse jury deliberations to find out what terms jurors use to indicate the perceived (un)trustworthiness of user-generated evidence, and how they voice concerns that the evidence might be faked or manipulated.


The TRUE project also uses online psychology studies to test how lay people evaluate user-generated evidence. We investigate the extent to which people’s (real or perceived) exposure to deepfakes affects trust in audio-visual user-generated evidence and judgments of its credibility and reliability. For example, when evaluating a real piece of user-generated evidence from the conflict in Syria, participants did not show the default disbelief and scepticism reported in the existing literature; instead, they verified the information thoroughly, examining elements such as background voices, bystander witnesses, and the technical quality of the video, much as investigative journalists do. The project also aims to develop methods to improve people’s ability to correctly detect deepfakes, such as assessing how people update their beliefs following feedback from others, or through interventions aimed at improving digital literacy.


We are also analysing a database of legal cases from domestic and international courts. This database details which types of user-generated evidence are being used, by whom, at what stage of the trial the evidence is introduced, whether allegations of deepfakes appear, and how those allegations are dealt with. We are exploring the extent to which allegations of deepfakes and narratives of (mis)trust have appeared in criminal trials, and investigating which measures have been taken to counteract those allegations.


One aspect of particular interest is the notion of the ‘liar’s dividend’: the ability to decry real evidence as fake simply because deepfakes exist. We have seen this play out in one of the January 6th Capitol Hill cases, where a defendant claimed that the evidence against him could be a deepfake; in a lawsuit against Tesla over statements made by Elon Musk, where Tesla’s lawyers argued that a video of Musk could easily be a fake; and at the European Court of Human Rights in Ukraine and the Netherlands v. Russia, where Russia attempted to undermine the user-generated evidence presented against it. With our research, we hope to increase awareness of the potential impact that deepfakes can have on trust in real, authentic user-generated evidence, and to highlight the technical and procedural measures that could be taken in response.


Policy Recommendations

As technology advances, both the quantity and the quality of deepfakes are likely to increase. Awareness of deepfakes, the ability to spot them, and policies on the demarcation of synthetic content are therefore essential. The following policy recommendations are based on an examination of best practices in the handling of deepfake content in different countries, as well as insights gained from a series of interviews and conversations with legal practitioners:


  1. Governments should make digital literacy a priority and invest in educating people about the steps they can take to verify the information they consume. This applies not just to the legal sphere, but to society as a whole. Enabling individuals – from laypeople to journalists to policymakers and beyond – to spot deepfakes and to evaluate user-generated evidence based on its source, history, digital features, and content can only strengthen democratic practices.

  2. Steps should be taken to ensure the provenance of user-generated content. This could include the preservation of metadata and the promotion of the Coalition for Content Provenance and Authenticity (C2PA) standard, which encodes details about the origin of digital content that can be checked automatically to detect image or video manipulation (a simplified sketch of this kind of check appears after this list).

  3. Policies requiring watermarking to label AI-generated content should be introduced. The US executive order of October 2023 on managing the risks of AI, which requires federal agencies to use watermarking to label AI-generated content, could act as a model.

  4. While the UK Online Safety Act criminalises the sharing of, or threatening to share, deepfake pornography, creating such content is not a crime. Creating deepfake pornography, including deepfake child sexual abuse images, should therefore be criminalised, and police and lawyers should be trained to detect such content and to prosecute offenders.
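
To give a flavour of what an automated provenance check (recommendation 2) involves, the short Python sketch below pairs a piece of footage with a signed record of its origin and content hash, and flags any later alteration. This is a deliberately simplified illustration rather than the actual C2PA specification: real C2PA manifests use certificate-based digital signatures and a standardised format, and the key, source label, and function names here are invented for the example.

```python
# Toy provenance check in the spirit of C2PA (illustrative only).
# A shared-secret HMAC stands in for the certificate-based signatures
# that real C2PA manifests use.
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-issuer-key"  # hypothetical key for this sketch


def make_manifest(media_bytes: bytes, source: str) -> dict:
    """Record the content hash and origin, then 'sign' the record."""
    record = {
        "source": source,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify(media_bytes: bytes, manifest: dict) -> bool:
    """Re-hash the media and check both the record signature and the hash."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected_sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    record_intact = hmac.compare_digest(expected_sig, manifest["signature"])
    media_intact = hashlib.sha256(media_bytes).hexdigest() == manifest["sha256"]
    return record_intact and media_intact


if __name__ == "__main__":
    video = b"\x00\x01 raw video bytes \x02"
    manifest = make_manifest(video, source="witness-phone-camera")
    print(verify(video, manifest))              # True: footage matches its manifest
    print(verify(video + b"edited", manifest))  # False: manipulation detected
```

The point of the sketch is simply that, once provenance details are recorded and signed at the moment of capture, any subsequent edit to the footage (or to the record itself) can be detected automatically, which is what makes preserving metadata and adopting standards such as C2PA worthwhile.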


About the authors

The authors are Rebecca Jenkins (PhD Candidate in Applied Linguistics, Swansea University), Ruben Lamers James (PhD Candidate in Psychology, Swansea University), and Anne Hausknecht (PhD Candidate in Law, Swansea University). All three are researchers on the TRUE project; you can find out more about them on the project website.


Image Credit: Scott Graham, Unsplash
