Article type: Research Article
Authors: Jiang, Jianguo [a,b,*] | Li, Boquan [a,b,*] | Wei, Baole [a,b] | Li, Gang [c] | Liu, Chao [a] | Huang, Weiqing [a] | Li, Meimei [a] | Yu, Min [a,**]
Affiliations: [a] Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China. E-mails: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] | [b] School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China | [c] School of Information Technology, Deakin University, 221 Burwood Highway Vic 3125, Australia. E-mail: [email protected]
Correspondence: [**] Corresponding author. E-mail: [email protected].
Note: [*] J. Jiang and B. Li contribute equally to this work.
Abstract: Abuse of face swap techniques poses serious threats to the integrity and authenticity of digital visual media. More alarmingly, fake images or videos created by deep learning technologies, also known as Deepfakes, are realistic, high-quality, and reveal few tampering traces, which has attracted great attention in digital multimedia forensics research. To address the threats posed by Deepfakes, previous work attempted to classify real and fake faces using discriminative visual features, an approach that is subject to various objective conditions such as the angle or posture of a face. In contrast, some research devises deep neural networks to discriminate Deepfakes at the microscopic-level semantics of images, which achieves promising results. Nevertheless, such methods show limited success when encountering unseen Deepfakes created with methods different from those in the training sets. Therefore, we propose a novel Deepfake detection system, named FakeFilter, in which we formulate the challenge of unseen Deepfake detection as a problem of cross-distribution data classification and address it with a domain adaptation strategy. By mapping different distributions of Deepfakes to similar features in a shared space, the detection system achieves comparable performance on both seen and unseen Deepfakes. Further evaluation and comparison results indicate that the challenge has been successfully addressed by FakeFilter.
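The abstract's core idea is to align the feature distributions of seen and unseen Deepfakes so that one classifier generalizes across both. The paper does not give FakeFilter's architecture or loss here, so the following is only a minimal sketch of that kind of distribution alignment, assuming a small CNN feature extractor and an MMD-style alignment term; all names (FeatureExtractor, mmd_loss, train_step) and hyperparameters are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of cross-distribution alignment for Deepfake detection.
# Not the FakeFilter implementation; names and settings are illustrative only.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Small CNN mapping face crops into a shared feature space."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def mmd_loss(source_feat, target_feat):
    """Linear-kernel MMD: distance between mean embeddings of two domains."""
    return (source_feat.mean(dim=0) - target_feat.mean(dim=0)).pow(2).sum()

extractor = FeatureExtractor()
classifier = nn.Linear(128, 2)  # real vs. fake
optimizer = torch.optim.Adam(
    list(extractor.parameters()) + list(classifier.parameters()), lr=1e-4
)
ce = nn.CrossEntropyLoss()

def train_step(seen_imgs, seen_labels, unseen_imgs, lam=0.5):
    """One update: classify seen Deepfakes, align features with unlabeled unseen ones."""
    seen_feat = extractor(seen_imgs)
    unseen_feat = extractor(unseen_imgs)
    loss = ce(classifier(seen_feat), seen_labels) + lam * mmd_loss(seen_feat, unseen_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with random tensors standing in for face crops:
# train_step(torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8,)), torch.randn(8, 3, 64, 64))
```

The alignment term pulls the two domains' feature distributions together, which is one common way to realize the "mapping different distributions into similar features" idea described in the abstract; the paper's actual mechanism may differ.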
Keywords: Digital multimedia forensics, face swap, Deepfake detection, domain adaptation
DOI: 10.3233/JCS-200124
Journal: Journal of Computer Security, vol. 29, no. 4, pp. 403-421, 2021