New privacy tool ‘Fawkes’ blocks your images from facial recognition

Fawkes was tested against Microsoft Azure Face API, Amazon Rekognition, and Face++, and the researchers report that it defeated all three.

Over the past few years, many positive developments have arisen from the use of Artificial Intelligence (AI). One of these is facial recognition, which law enforcement agencies can use to track criminals and do their jobs more effectively.

However, this naturally has a downside as well: it compromises the privacy of innocent people and carries serious consequences if such programs get into the wrong hands – think racist police officers using Clearview.

This creates a need to counter such programs. Keeping this in mind, the SAND Lab at the University of Chicago has developed a new algorithm named Fawkes that allows users to control how the images they upload to the internet are used, demonstrating the entire model in a research paper.

See: Meet IRpair & Phantom; powerful anti-facial recognition glasses

The algorithm works by subtly altering the pixels of images before they are uploaded to the internet. The changes leave the photos perfectly viewable to the naked eye, but they no longer cooperate with facial recognition models. For example, if I upload my own pictures, anyone on social media could still see that it’s me.

However, the moment someone tries to use those images in a facial recognition network, they will not accurately tell the model what I look like and will instead present a “distorted version” of my features.
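To give a rough sense of how this kind of “cloaking” works (this is not the authors’ actual code), here is a minimal Python sketch: it nudges an image’s pixels so that a feature extractor maps the photo closer to a different identity, while a small per-pixel budget keeps the change invisible to a human viewer. The toy linear feature extractor, the L-infinity budget, and all parameter values are illustrative assumptions; the real Fawkes optimizes against deep face-recognition features under a perceptual (DSSIM) constraint, as described in the paper.

import numpy as np

# Toy stand-in for a deep face-feature extractor: a fixed random linear map.
# (Fawkes uses a deep network; this toy version only illustrates the idea.)
rng = np.random.default_rng(0)
D_PIXELS, D_FEATURES = 32 * 32, 128
W = rng.normal(size=(D_FEATURES, D_PIXELS)) / np.sqrt(D_PIXELS)

def features(x):
    """Map a flattened image (pixel values in [0, 1]) into feature space."""
    return W @ x

def cloak(image, target_features, budget=0.03, steps=200, lr=0.01):
    """Find a tiny pixel perturbation that pulls the image's features toward
    `target_features` while staying inside an L-infinity `budget`.
    (The Fawkes paper bounds a perceptual DSSIM metric rather than raw pixels.)"""
    delta = np.zeros_like(image)
    for _ in range(steps):
        # Gradient of ||features(image + delta) - target_features||^2 w.r.t. delta.
        residual = features(image + delta) - target_features
        grad = 2 * W.T @ residual
        delta -= lr * grad
        # Project back into the imperceptibility budget and a valid pixel range.
        delta = np.clip(delta, -budget, budget)
        delta = np.clip(image + delta, 0.0, 1.0) - image
    return image + delta

# Example: cloak a random "photo" toward the features of a different identity.
my_photo = rng.uniform(0.0, 1.0, size=D_PIXELS)
other_identity = rng.uniform(0.0, 1.0, size=D_PIXELS)
cloaked = cloak(my_photo, features(other_identity))

print("max pixel change:", np.abs(cloaked - my_photo).max())   # bounded by `budget`
print("feature shift:", np.linalg.norm(features(cloaked) - features(my_photo)))

The point of the sketch is the trade-off Fawkes exploits: changes far too small for a person to notice can still move an image a long way in the feature space that a recognition model relies on.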

Clarifying the success rate expected in real-world use, the researchers state in the paper:

We experimentally demonstrate that Fawkes provides 95+% protection against user recognition regardless of how trackers train their models. Even when clean, uncloaked images are “leaked” to the tracker and used for training, Fawkes can still maintain an 80+% protection success rate.

To verify its effectiveness, Fawkes was tested against the market leaders in facial recognition, including Microsoft Azure Face API, Amazon Rekognition, and Face++, and the researchers report that it defeated all of them.

Nonetheless, there are caveats as well, in the words of the team themselves:

Please do remember that this is a research effort first and foremost, and while we are trying hard to produce something useful for privacy-aware Internet users at large, there are likely issues in configuration, usability in the tool itself, and it may not work against all models for all images.

For those of you who may want to try the tool out, it is available to download for free:

Mac Binary – 167 MB

Windows Binary – 124 MB

Linux Binary – 211 MB

Moreover, the source code is hosted on GitHub for everyone to see.

See: Website uses Artificial Intelligence to create utterly realistic human faces

In conclusion, the tool no doubt has a long way to go and is not a one-size-fits-all solution to the assault on privacy by AI. Countermeasures such as manually tracking someone and taking their pictures to train facial recognition models can always be used when the target is high-profile enough. Even so, Fawkes should be helpful for most of us out there.
