September 19, 2019
Anti-Spoofing System Project
Cameras are everywhere. You probably have one in your pocket right now, on a smartphone. Web cameras are built into tablets and laptops, and we use them for security in banks and grocery stores, in parking lots, and at airports. Facial recognition systems for access control, attendance tracking, or marketing are on the rise, since a face, unlike a standard key, cannot be forgotten or lost.
We use facial recognition when unlocking an iPhone with Face ID or when tagging pictures on Facebook. Microsoft, Amazon, and IBM tools already have facial recognition technology built in. You can pay for chicken nuggets with your face at KFC in China, which launched a ‘smile to pay’ service. Amsterdam Airport Schiphol is testing facial recognition to ease the boarding process, and Caterpillar uses facial recognition software to keep drivers from falling asleep on long hauls.
How a Camera Knows Who Is Who
A facial recognition system works almost like our brain: first we see someone, then we identify their facial features and process them in our head. The technology does the same. The system searches for a face in the image and selects the relevant area using algorithms.
It then determines the similarity of proportions, selects contours in the image and compares them with the contours of known faces, or identifies symmetries using neural networks.
Here, an immediate question arises: is it possible to convince a camera that you are another person? The number of fraudsters trying to trick such systems is growing. When someone presents a false identity to the camera in order to gain unapproved access, it is a spoofing attack. An attacker may hold a printed photo of another person’s face up to the camera, show a prerecorded video of another user on a tablet or smartphone, or even wear a 3D mask of an authorized person’s face.
As more spoofing attempts appear, it becomes important to develop anti-spoofing systems that protect facial recognition. CoreValue’s R&D unit, Coresearch, works closely with facial recognition, delivering this technology to our clients’ projects. We were pleased to invite Arsen Senkivskyy, an Applied Sciences graduate of the UCU, to work with our team on one such project as part of his BA diploma. As a result, he built an anti-spoofing system based on eye-tracking technology.
Our Case: an Anti-Spoofing System
The idea behind the project was to create a system that detects spoofing by analyzing a user’s eye movement. For this, we developed a challenge-response system, trained it on recordings of real users’ pupil movements, and then tried to spoof it with picture-based and video-based attacks.
For the camera, if a user can do what the system has challenged them to do, it is a real user. In our case, the user is required to watch a moving dot on the screen. The dot starts at the centre of the screen and moves toward a randomly chosen side, so that the user cannot present a prerecorded video. As the user follows the dot, the system estimates the direction in which the user’s eyes are moving.
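The challenge-response idea can be sketched in a few lines. This is a minimal illustration, not the project’s actual code: the function names and the use of four cardinal directions are assumptions (the article also mentions a corner-based pattern).

```python
import random

# Candidate directions for the dot, starting from the screen centre.
# (Cardinal directions are an assumption for this sketch.)
DIRECTIONS = ["left", "right", "up", "down"]

def generate_challenge():
    """Pick a random direction, so a prerecorded video cannot anticipate it."""
    return random.choice(DIRECTIONS)

def verify_response(challenge, estimated_gaze_direction):
    """The user passes only if their estimated gaze followed the dot."""
    return challenge == estimated_gaze_direction
```

The randomness is the whole point of the challenge: an attacker replaying a video of eyes moving left fails as soon as the system asks for any other direction.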
We started by collecting a dataset: we asked people to sit in front of a computer and follow moving points on the screen with their eyes. There were two types of moving-point patterns to compare. In the first, the key points were situated in the centre of each side of the screen; in the second, the key points were in the screen corners. You can see this in the picture: the black stars stand for the key points and the arrows show the movement.
We tried three different approaches to find out which works best. For the first, we created a custom neural network, presented it with a set of pictures, and trained it to predict the direction of the pupils’ movement.
For the second approach, we manually calculated the vector of the pupil’s movement on each frame and classified its direction by its angle.
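The angle-based classification of the second approach could look roughly like this. The sector boundaries and function name are illustrative assumptions; the article does not specify the exact thresholds used.

```python
import math

def classify_direction(dx, dy):
    """Classify a pupil displacement vector (dx, dy) into one of four
    directions by its angle. Image coordinates are assumed, i.e. y grows
    downward, so dy is negated to get a conventional angle.
    (A hypothetical sketch; thresholds are not from the project.)"""
    angle = math.degrees(math.atan2(-dy, dx)) % 360  # 0 deg = right, CCW
    if 45 <= angle < 135:
        return "up"
    if 135 <= angle < 225:
        return "left"
    if 225 <= angle < 315:
        return "down"
    return "right"
```

For example, a displacement of (10, 0) between two frames would be classified as "right", and (0, 8) as "down".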
With these two anti-spoofing approaches, if the direction of the user’s eye movement matches the dot’s movement, the user is considered to be alive. The second approach demonstrated the worst performance: the eye area is too small, and the eye movement is too subtle to be measured reliably.
The third approach requires the user to look at not just one but all episodes of the dot moving to the side of the screen. The system then analyzes the variance of the x and y coordinates of the user’s pupil centre while he or she looks at a collinear set of points. If it is below a certain threshold, the user is considered to be alive. This approach has a major flaw: when an attacker presents a picture, which is completely static, the model will classify it as a real person.
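The variance check of the third approach can be sketched as follows. The threshold value and function name are illustrative assumptions; the sketch also makes the article’s flaw visible, since a static photo has near-zero variance everywhere and passes.

```python
from statistics import pvariance

def is_live_by_variance(pupil_xs, pupil_ys, line_axis="horizontal",
                        threshold=2.0):
    """If the on-screen points are collinear along one axis, a live user's
    pupil centre should show low variance on the perpendicular axis.
    (Threshold is illustrative, not the one used in the project.)
    Flaw noted in the article: a static photo has ~zero variance on
    every axis, so it also passes this test."""
    off_axis = pupil_ys if line_axis == "horizontal" else pupil_xs
    return pvariance(off_axis) < threshold
```

A user tracking a horizontal row of points with pupil y-coordinates like [3.0, 3.1, 2.9, 3.0] passes, while wildly scattered y-coordinates fail.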
The first approach is the most promising, as it requires only one episode and has the highest accuracy in detecting the direction of eye movement.
Coresearch, CoreValue’s R&D department, helps our clients make innovation accessible. Our research capabilities, cross-domain proficiency, and advanced innovative thinking allow for a deep understanding of a client’s needs and enable expedited development and improvement of their solutions.