ABSTRACT
The rapid advancement of deep learning has led to the creation of realistic
deepfakes, sparking concerns about identity theft, scams, privacy breaches, and the
spread of misinformation, impacting public trust and psychological well-being.
Addressing these challenges necessitates a blend of technological innovations,
legislative actions, and public awareness initiatives. Deepfakes are generated using
algorithms such as Generative Adversarial Networks (GANs) and autoencoders. The
fabricated images are so lifelike that detecting them with the naked eye is nearly
impossible. Deepfakes can be used to spread misinformation, manipulate public
opinion, damage reputations, and even create fraudulent content. Deepfakes blur
the lines between reality and fiction, making it increasingly challenging to discern
truth from falsity, which can undermine trust in media and institutions. Therefore,
it is crucial to construct a model capable of accurately differentiating
between genuine and altered media. This prototype explores the efficacy of deep
learning in detecting manipulated images and videos, utilizing advanced
architectures such as VGG19, MobileNet, ResNet, MesoNet, CNN, and Xception.
VGG19 is chosen for image prediction on the Celeb-DF dataset due to its superior
accuracy over the other models evaluated, while MobileNet is selected for video
prediction on the DFDC dataset for its balance of accuracy and efficiency compared
to alternatives such as VGG19 and MesoNet.