The advent of deepfake technology, or the manipulation of images, video, text, or audio using artificial intelligence (AI), has prompted many well-founded and wide-ranging concerns for society, including implications for the anti-fraud and investigative professions. Other recent forms of generative AI, such as ChatGPT and its variants, have only exacerbated these concerns. This presentation will cover the current capabilities and applications of deepfake technology to help the audience understand the risks associated with the technology, as well as how to address those risks and prepare for future impacts.

Learning Objectives

  • Understand the current capabilities and applications of deepfake and generative AI technologies
  • Identify signs of AI manipulation in imagery and scenarios in which investigators might encounter manipulated imagery
  • Consider the implications of deepfake and generative AI technologies for litigation and the admissibility of evidence


Mason Wilder is a research manager for the ACFE. In this role, he oversees the creation and updating of ACFE materials for continuing professional education, works on research initiatives such as the Report to the Nations and benchmarking reports, conducts trainings, writes for ACFE publications, and responds to member and media requests.

Prior to joining the ACFE, Wilder worked in corporate security intelligence and investigations for over a decade, specializing in background and due diligence investigations, as well as intelligence analysis for international physical security and crisis response. Throughout his career, he has relied heavily on open-source intelligence gathering and analysis to support investigations, security operations, executive protection, kidnap-for-ransom and maritime piracy response, emergency evacuations, and risk assessments.