DeepPrivacy Aims to Anonymize People While Retaining Their Facial Expressions


One of the most pressing concerns raised by deepfakes is undoubtedly privacy, and that is exactly what researchers at the Norwegian University of Science and Technology have set out to address.

Their new technique, named DeepPrivacy, makes use of generative adversarial networks (GANs), the underlying technology behind deepfakes, to anonymize subjects while replicating their characteristics.

DeepPrivacy extracts sparse facial keypoints from the subject and feeds them to a generator trained on 1.5 million face images, which synthesizes a new face that keeps the pose and surrounding context of the source image while replacing the original identity.
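To make the idea concrete, here is a minimal sketch in PyTorch of a generator conditioned on the background and on keypoint heatmaps, so the synthesized face follows the original pose. The layer sizes, the 7-keypoint assumption, and the class name are illustrative only; this is not DeepPrivacy's actual architecture.

```python
# Illustrative sketch, NOT DeepPrivacy's real model: a generator conditioned on
# the face-removed background plus sparse keypoint heatmaps.
import torch
import torch.nn as nn

class ConditionalFaceGenerator(nn.Module):
    def __init__(self, num_keypoints: int = 7):
        super().__init__()
        # Encode the masked background together with the keypoint heatmaps.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + num_keypoints, 64, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        # Decode back to an RGB face crop; a real model would be far deeper,
        # with skip connections and progressive growing.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, background: torch.Tensor, keypoint_maps: torch.Tensor) -> torch.Tensor:
        # background: (N, 3, H, W) image with the face region blanked out
        # keypoint_maps: (N, num_keypoints, H, W) heatmaps of detected landmarks
        x = torch.cat([background, keypoint_maps], dim=1)
        return self.decoder(self.encoder(x))

# Example forward pass on a 128x128 crop.
gen = ConditionalFaceGenerator()
fake_face = gen(torch.zeros(1, 3, 128, 128), torch.zeros(1, 7, 128, 128))
print(fake_face.shape)  # torch.Size([1, 3, 128, 128])
```

Because the generator only ever sees the background and the keypoints, never the original face pixels, the anonymization holds by construction; that is the property the researchers highlight.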

To detect facial features accurately, the researchers use Mask R-CNN and the Dual Shot Face Detector (DSFD): DSFD detects the faces present in the image, while Mask R-CNN generates sparse pose (keypoint) information for each face.
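DSFD and the face-keypoint variant of Mask R-CNN are not bundled with common libraries, but the shape of this detection stage can be sketched with torchvision's Keypoint R-CNN (a Mask R-CNN-style detector with a keypoint head) standing in for both roles. This is an illustrative stand-in, not the detectors DeepPrivacy actually uses:

```python
# Stand-in for the detection stage: torchvision's Keypoint R-CNN returns
# bounding boxes plus sparse keypoints, the two pieces of information the
# generator is conditioned on. DeepPrivacy itself pairs DSFD with Mask R-CNN.
import torch
import torchvision

model = torchvision.models.detection.keypointrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)  # placeholder RGB image with values in [0, 1]
with torch.no_grad():
    detections = model([image])[0]

# Each confident detection carries a box and a set of (x, y, visibility)
# keypoints; the keypoints let the generator preserve the original pose.
for box, keypoints, score in zip(
    detections["boxes"], detections["keypoints"], detections["scores"]
):
    if score > 0.8:
        print("detection box:", box.tolist())
        print("keypoints:", keypoints.tolist())
```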

“We present experimental results reflecting the capability of our model to anonymize images while preserving the data distribution, making the data suitable for further training of deep learning models. As far as we know, no other solution has been proposed that guarantees the anonymization of faces while generating realistic images,” the researchers wrote.

Since the project is still highly experimental, you should not expect polished, state-of-the-art results just yet. Judging from the implementation, however, it looks like it could improve significantly over time.

The source code for the research paper is available on GitHub. If you’re interested in running the code yourself, you can do so from here. Instructions for setting up the environment are provided in the repository’s README.md file. You can also run the project in Google Colab from here.

So, what do you think of DeepPrivacy? Let us know your thoughts in the comments.
