Facebook De-Identification Method Thwarts Facial Recognition Technology

November 9, 2019

Facial recognition technology is becoming increasingly popular among government agencies across the globe. These agencies seek to automate their services and monitor their citizens. Notably, if your picture is available somewhere online, the chances are high that you can be identified in photos and videos from public camera feeds.

The tech giant Facebook has come up with a method that can impede this technology. Three AI researchers at the company have devised face de-identification technology that slightly modifies your face in video content so that facial recognition systems can’t match what they see in the footage with images of you in their databases. According to Facebook, it works with both pre-recorded content and live video.

The face de-identification technology could enable more ethical use of video footage of people for training AI systems, which usually need many examples to learn how to emulate the content they’re fed. By tweaking people’s faces so they cannot be recognized, footage can be used to train these AI systems without risking the test subjects’ privacy.

Some experts believe that this technology might soon become a standard requirement for government agencies and companies that capture footage of people, whether for security or other purposes.

VentureBeat briefly explained how this method works: “…the AI uses an encoder-decoder architecture to generate both a mask and an image. During training, the person’s face is distorted then fed into the network. Then the system generates distorted and undistorted images of a person’s face for output that can be embedded into the video.”
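The final blending step that the learned mask enables can be sketched in a few lines. This is a hypothetical illustration, not Facebook’s actual code: it assumes the network has already produced a generated face and a per-pixel mask in [0, 1], and simply mixes the generated face into the original frame.

```python
import numpy as np

def blend_with_mask(original: np.ndarray, generated: np.ndarray,
                    mask: np.ndarray) -> np.ndarray:
    """Blend a generated face into the original frame using a soft mask.

    Mask values lie in [0, 1]: 1 keeps the generated pixel, 0 keeps the
    original pixel, and intermediate values mix the two. (Illustrative
    sketch; the real system's mask and generator come from a trained
    encoder-decoder network.)
    """
    return mask * generated + (1.0 - mask) * original

# Toy 2x2 single-channel example: black original, white generated face.
original = np.zeros((2, 2))
generated = np.ones((2, 2))
mask = np.array([[1.0, 0.5],
                 [0.0, 1.0]])
print(blend_with_mask(original, generated, mask))
```

Because the mask is soft rather than binary, the modified face can be woven into each video frame without hard seams, which is what makes a per-frame approach like this plausible for live footage.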

VentureBeat also noted that the tech giant has no plans to use the method in any of its own products. Facebook generally uses facial recognition technology to identify your friends in uploaded photos for easier tagging, and to notify you when you appear in someone else’s pictures. To better respect its users’ privacy, the company turned its facial recognition feature off by default last month.

Recent abuses of facial recognition technology around the world have underscored the need for methods that can successfully perform de-identification.

According to Facebook’s research team, “Our contribution is the only one suitable for video, including live video, and presents a quality that far surpasses the literature methods. The approach is both elegant and markedly novel, employing an existing face descriptor concatenated to the embedding space, a learned mask for blending, a new type of perceptual loss for getting the desired effect, among a few other contributions.”

The research paper further states, “Minimally changing the image is important for the method to be video-capable, and is also an important factor in the creation of adversarial examples. Unlike adversarial examples, in our work, this change is measured using low- and mid-level features and not using norms on the pixels themselves. It was recently shown that image perturbations caused by adversarial examples distort mid-level features, which we constrain to remain unchanged.”
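The distinction the paper draws, measuring change in a feature space rather than with pixel norms, can be made concrete with a toy example. This is an illustrative sketch, not the paper’s actual loss: the “mid-level” descriptor here is just a crude block-average, standing in for the learned features the authors constrain.

```python
import numpy as np

def block_features(img: np.ndarray, block: int = 2) -> np.ndarray:
    """Toy mid-level descriptor: means of non-overlapping block x block tiles.

    Stands in for the learned low-/mid-level features used in the paper.
    """
    h, w = img.shape
    return (img[:h - h % block, :w - w % block]
            .reshape(h // block, block, w // block, block)
            .mean(axis=(1, 3)))

def pixel_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Plain pixel-space norm, as used for classic adversarial examples."""
    return float(np.linalg.norm(a - b))

def feature_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Distance measured on the toy mid-level features instead of pixels."""
    return float(np.linalg.norm(block_features(a) - block_features(b)))

# Swapping pixels inside a block changes every pixel, but leaves the
# block means (the "mid-level" features) exactly the same.
a = np.array([[0.0, 1.0],
              [1.0, 0.0]])
b = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(pixel_distance(a, b))    # nonzero: pixels changed
print(feature_distance(a, b))  # zero: feature description unchanged
```

The example shows why the two metrics diverge: a perturbation can be large in pixel terms yet invisible to a feature-based measure, which is the property the researchers exploit by constraining mid-level features to remain unchanged.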