Almost everyone has heard of Artificial Intelligence (AI), but relatively few people have a clear idea of what it can actually do. This report looks at AI that can defeat the blurring and pixelation used to hide parts of images.
Believe it or not, a recent study by researchers at the University of Texas and Cornell University has revealed that blurring technology may soon become exploitable, if not obsolete. We may not enjoy the confidentiality that blurring pictures or license plates provides for much longer, because computers can now decode such images using artificial intelligence.
This means pixelated or blurred pictures can be deciphered with readily available software tools that identify the hidden faces or information.
The team of researchers used a range of deep learning tools to correctly identify 71% of the blurred faces and numbers, a figure that rose to 80% when the computer was allowed up to five guesses.
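The "up to five guesses" figure is what machine-learning work usually calls top-5 accuracy: a trial counts as correct if the true identity appears anywhere among the model's five highest-scoring candidates. A minimal sketch of the metric, with invented candidate names and confidence scores purely for illustration:

```python
def top_k_correct(scores, true_label, k=5):
    """Return True if true_label is among the k highest-scoring candidates.

    scores: dict mapping candidate label -> model confidence score.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    return true_label in ranked[:k]

# Hypothetical scores for one obfuscated face over six candidate identities.
scores = {"alice": 0.05, "bob": 0.30, "carol": 0.25,
          "dave": 0.20, "erin": 0.15, "frank": 0.05}

print(top_k_correct(scores, "alice", k=1))  # False: alice is not the single best guess
print(top_k_correct(scores, "alice", k=5))  # True: alice is within the top five guesses
```

A model can therefore look weak under top-1 scoring yet still be a serious privacy threat once a handful of guesses is allowed, which is exactly the jump from 71% to 80% reported here.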
Their algorithm cannot reconstruct the original image, but it can identify what is shown in the obfuscated photograph based on the information it has acquired during training. If even blurred content can be identified with such simple techniques and tools, the day may not be far off when this kind of internet privacy is no longer meaningful.
Various tools exist for decoding images, such as the Max Planck Institute's system that identifies people in pixelated photographs, but for this particular study the team used Torch, an open-source deep learning library. Their reason: the Torch-based approach proved at least 18% more accurate than the Max Planck Institute's, and Torch supplies ready-made templates for neural networks and is freely accessible.
According to paper co-author and Cornell University professor Vitaly Shmatikov, the team used this "off-the-shelf, poor man's approach" precisely to prove that decoding images requires far less effort than is commonly perceived.
Shmatikov states: “Just take a bunch of training data, throw some neural networks on it, throw standard image recognition algorithms on it, and even with this approach…we can obtain pretty good results.”
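The pipeline Shmatikov describes can be sketched in miniature: obfuscate labeled training images, fit a standard classifier to the obfuscated versions, then classify a freshly obfuscated image. Everything below is a toy stand-in, not the authors' method: the 4x4 "faces" are invented patterns, and a nearest-centroid classifier replaces the paper's neural networks; only the shape of the attack is the same.

```python
import random

def pixelate(img, size, block):
    """Replace each block x block tile with its average (mosaic obfuscation)."""
    out = [row[:] for row in img]
    for by in range(0, size, block):
        for bx in range(0, size, block):
            tile = [img[y][x] for y in range(by, by + block)
                              for x in range(bx, bx + block)]
            avg = sum(tile) / len(tile)
            for y in range(by, by + block):
                for x in range(bx, bx + block):
                    out[y][x] = avg
    return out

def flatten(img):
    return [v for row in img for v in row]

def centroid(vectors):
    return [sum(vals) / len(vals) for vals in zip(*vectors)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Two invented "identities", each a 4x4 grayscale pattern, plus noisy variants
# standing in for multiple photographs of the same person.
random.seed(0)
def noisy(img):
    return [[v + random.uniform(-0.1, 0.1) for v in row] for row in img]

face_a = [[1, 1, 0, 0]] * 2 + [[0, 0, 1, 1]] * 2
face_b = [[0, 1, 1, 0]] * 4

# Steps 1-2: obfuscate the training samples, then "train" (one centroid per identity).
train = {name: [flatten(pixelate(noisy(img), 4, 2)) for _ in range(5)]
         for name, img in [("a", face_a), ("b", face_b)]}
model = {name: centroid(samples) for name, samples in train.items()}

# Step 3: classify a fresh obfuscated image of identity "a".
query = flatten(pixelate(noisy(face_a), 4, 2))
guess = min(model, key=lambda name: distance(query, model[name]))
print(guess)  # "a": the obfuscated query still matches its identity
```

The point of the toy is Shmatikov's: because the attacker obfuscates their own training data the same way the victim does, any off-the-shelf classifier learns to match obfuscated images to identities; no reconstruction of the original picture is ever needed.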
During the course of their research, the team defeated three distinct privacy-protection measures. First, they identified faces obscured by YouTube's blur tool; second, they identified numbers and faces pixelated with Photoshop; and third, they exploited the P3 privacy-preserving JPEG photo format. All three are regarded as reliable ways of securely hiding information.
The algorithm is far from complete, yet it has already proven successful. Currently it can only identify faces or objects it has previously been trained on, but an attacker could train it using images scraped from social media platforms.
As Shmatikov noted, “the result of this paper will be that nobody will be able to publish a privacy technology and claim that it’s secure without going through this kind of analysis.”