A Review on Explainable AI for Deepfake Detection Leveraging Hybrid Deep Learning Techniques

Authors: Hitarth H. Raval, Mehul S. Patel, Shweta D. Parmar
Published Online: July 1, 2025
DOI: http://doi.org/10.63766/spujstmr.24.000034
Abstract

The advent of deepfake technology, built on advances in generative artificial intelligence, poses a substantial threat to the integrity and trustworthiness of digital media. Deepfakes, hyper-realistic synthetic images, videos, and audio generated with techniques such as Generative Adversarial Networks (GANs), are widely exploited to create fake content that is increasingly indistinguishable from reality. This work investigates the intersection of Explainable Artificial Intelligence (XAI) and deepfake detection, emphasizing the importance of transparency and interpretability in this field. We provide a detailed analysis of existing deepfake detection strategies, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and hybrid and multimodal approaches. The paper further emphasizes the value of integrating XAI techniques to enhance model interpretability, reliability, and robustness, thereby enabling more transparent and ethical AI systems. In addition, we assess the evaluation metrics and benchmark datasets used in deepfake detection research and discuss the limitations of current models. Finally, the paper outlines future research directions, advocating continuous innovation and interdisciplinary collaboration to mitigate the pervasive threat posed by deepfake technology.

Keywords: Deepfake, Generative Adversarial Networks (GAN), Hybrid Models, Deep Learning, Explainable AI (XAI)
Pages: 69-75