As with every new security measure that is introduced, criminals are now seeking ways to exploit vulnerabilities in biometric systems. In so-called “presentation attacks,” they attempt to assume a false identity. In unprotected systems, this is often done with printed images, video playbacks or fake rubber masks. Unfortunately, such attacks are becoming more sophisticated, which makes detecting them both increasingly important and increasingly complex.
Because detection systems are confronted with a wide variety of attack types, our research presents several new approaches to improving the generalizability of spoof detection. One of these is a dual-stream convolutional neural network (CNN) in which one stream draws its cues from color space and the other from frequency space, enabling the detection of different and previously unknown attacks.
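To make the dual-stream idea concrete, here is a minimal sketch of such an architecture in PyTorch. It assumes an RGB face crop as the color-space input and a log-magnitude FFT spectrum as the frequency-space input; the layer sizes, the choice of spectrum, and the late-fusion classifier are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn


def small_cnn(in_channels: int) -> nn.Sequential:
    """A small convolutional feature extractor used by both streams."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
        nn.BatchNorm2d(32),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1),
        nn.BatchNorm2d(64),
        nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d(1),  # -> (N, 64, 1, 1)
    )


class DualStreamSpoofDetector(nn.Module):
    """Two CNN streams: one on the RGB image, one on its frequency spectrum."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.color_stream = small_cnn(in_channels=3)   # color-space cues
        self.freq_stream = small_cnn(in_channels=1)    # frequency-space cues
        self.classifier = nn.Linear(64 + 64, num_classes)  # late fusion

    @staticmethod
    def to_frequency(x: torch.Tensor) -> torch.Tensor:
        """Log-magnitude of the 2D FFT of the grayscale image
        (one possible frequency-space representation; an assumption here)."""
        gray = x.mean(dim=1, keepdim=True)  # (N, 1, H, W)
        spectrum = torch.fft.fftshift(torch.fft.fft2(gray), dim=(-2, -1))
        return torch.log1p(spectrum.abs())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        color_feat = self.color_stream(x).flatten(1)                    # (N, 64)
        freq_feat = self.freq_stream(self.to_frequency(x)).flatten(1)   # (N, 64)
        return self.classifier(torch.cat([color_feat, freq_feat], dim=1))


if __name__ == "__main__":
    model = DualStreamSpoofDetector()
    logits = model(torch.randn(4, 3, 224, 224))  # batch of 4 face crops
    print(logits.shape)  # torch.Size([4, 2]) -> bona fide vs. attack scores
```

The intuition behind this kind of design is that color-space cues (skin tone, moiré patterns, color distortions of printed or replayed images) and frequency-space cues (high-frequency artifacts introduced by printing or screen display) are complementary, so fusing both streams can help the detector generalize to attack types it has not seen during training.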