Abstract

This work addressed the growing misuse of AI-generated material in online media by proposing a deep learning approach to identify GAN-generated fake facial photos. By exploiting intra- and inter-channel spatial as well as frequency-domain discrepancies in GAN-generated images, a novel design that combines Convolutional Neural Networks (CNNs) with a Cross-Band Co-occurrence Convolutional Network (CBCCNet) was introduced to improve detection rates. A mixed dataset of real and fake face images, standardized to 256 × 256 pixels and enriched with six-channel cross-band features, was used in the study. After the images were preprocessed and normalized, feature extraction techniques based on channel correlation, spatial consistency, texture patterns, and color distribution were applied. The proposed model was trained and validated on a balanced set, with early stopping triggers and iterative checkpoints to ensure convergence without overfitting. The model achieved an accuracy of 97.6%, a precision of 99.2%, and a recall of 96%, results supported by precision-recall curves, ROC curves, and confusion matrices. These metrics demonstrated the model's strong generalization and its ability to confidently distinguish the subtle anomalies found in fake images. In addition to recommending further research to integrate adversarial robustness, explainability, and model compression for edge computing, this study validated the value of multi-band image analysis in improving the robustness of classification.
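As one illustration of the preprocessing step described above, the sketch below shows how a six-channel cross-band input could be assembled from a 256 × 256 RGB face image. The abstract does not specify which cross-band channels CBCCNet uses, so the per-band difference maps here (R−G, G−B, B−R) are an assumption for illustration only; the function name and its parameters are hypothetical.

```python
import numpy as np
from PIL import Image

def six_channel_features(path, size=256):
    """Build a six-channel input from an RGB face image.

    Stacks the three colour bands with three cross-band difference maps.
    The actual cross-band features used by CBCCNet may differ; this is
    only a minimal sketch of the idea.
    """
    # Load, force RGB, and standardize the spatial resolution to 256 x 256.
    img = Image.open(path).convert("RGB").resize((size, size))
    rgb = np.asarray(img, dtype=np.float32) / 255.0   # normalize to [0, 1]

    # Split the colour bands and form simple inter-band discrepancy maps.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cross = np.stack([r - g, g - b, b - r], axis=-1)

    # Concatenate into a (256, 256, 6) array ready for a CNN backbone.
    return np.concatenate([rgb, cross], axis=-1)
```

The resulting array would then be batched and passed to the CNN backbone for training and evaluation as outlined in the abstract.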
