Abstract
In today's digital landscape, the proliferation of AI-generated deepfakes poses a significant threat to content authenticity. This project aims to develop a deepfake detection system that leverages explainable AI (XAI) techniques. By combining interpretable machine learning models with neural networks, the proposed solution will analyze video and image content to accurately identify AI-manipulated fabrications. Explainable AI methods, such as feature attribution and saliency maps, will enhance the system's transparency and trustworthiness. The system will be accessible through a web-based interface, empowering online users to verify content authenticity and mitigate the spread of disinformation.
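To illustrate the saliency-map idea mentioned above, the following is a minimal sketch, not part of the proposed system: it approximates a gradient-based saliency map by measuring how much a detector's score changes when each pixel is perturbed. The `toy_score` function is a stand-in assumption for a real detector model.

```python
import numpy as np

def saliency_map(score_fn, image, eps=1e-3):
    """Approximate |d score / d pixel| for every pixel via finite differences."""
    base = score_fn(image.astype(float))
    sal = np.zeros_like(image, dtype=float)
    it = np.nditer(image, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        bumped = image.astype(float).copy()
        bumped[idx] += eps  # nudge one pixel
        sal[idx] = abs(score_fn(bumped) - base) / eps
    return sal

# Toy "detector" (hypothetical): scores an image by its centre-pixel
# intensity, so the saliency map should highlight the centre pixel.
def toy_score(img):
    return float(img[2, 2])

img = np.random.rand(5, 5)
sal = saliency_map(toy_score, img)
print(np.unravel_index(sal.argmax(), sal.shape))  # → (2, 2)
```

In a real pipeline, `score_fn` would be the detector's deepfake probability and the gradient would be computed analytically through the network rather than by finite differences; the resulting map shows users *which* regions drove the decision.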