Psychedelic Stickers Easily Disrupt Google’s Image Recognition AI


Our first-hand experience with AI image recognition tech such as Google Lens has shown that machine learning has come a long way. But the technology is still far from perfect and not as smart as we had hoped, partly because training computers to recognize objects is slow work, and partly because the neural networks involved demand heavy computation. Teaching a machine to see is simply not an easy task.

Bearing this in mind, a group of Google researchers set out to test whether AI image recognition systems can be deceived.

They emerged victorious: thanks to a specially printed psychedelic sticker, the AI system failed to recognize the object (here, a banana) in the sample images.


The creation of these psychedelic stickers, which can dupe image recognition systems, is described in a research paper titled Adversarial Patch, presented at the 31st Conference on Neural Information Processing Systems (NIPS) in December 2017. The paper explains that the researchers trained an adversarial system to generate small, patch-like psychedelic circles of varying shape, color, and size to fool the image classifier.
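To make the idea concrete, here is a minimal sketch of how such a patch could be trained, assuming a pretrained ImageNet classifier and images in the [0, 1] range. The model choice, patch size, target class, and hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical sketch of adversarial-patch training: optimize one patch,
# pasted at random locations over many images, so the classifier outputs
# a chosen target class. Assumes images in [0, 1]; input normalization
# and the paper's full transformation sampling are omitted for brevity.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()
for p in model.parameters():          # only the patch is trainable
    p.requires_grad_(False)

TARGET = 859                          # ImageNet index for "toaster"
patch = torch.rand(1, 3, 50, 50, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images, patch):
    """Paste the patch at a random location in each image of the batch."""
    out = images.clone()
    _, _, ph, pw = patch.shape
    for i in range(out.shape[0]):
        y = torch.randint(0, out.shape[2] - ph + 1, (1,)).item()
        x = torch.randint(0, out.shape[3] - pw + 1, (1,)).item()
        out[i, :, y:y + ph, x:x + pw] = patch[0]
    return out

for images, _ in loader:              # `loader` is an assumed ImageNet DataLoader
    opt.zero_grad()
    logits = model(apply_patch(images, patch.clamp(0, 1)))
    target = torch.full((images.shape[0],), TARGET, dtype=torch.long)
    F.cross_entropy(logits, target).backward()  # pull predictions toward "toaster"
    opt.step()
```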

While the most common method of fooling AI image recognition systems is to subtly alter the image itself with an adversarial perturbation, the researchers at Google instead tricked the system with a standalone psychedelic design.
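For contrast, a minimal sketch of that more common per-image approach, the fast gradient sign method (FGSM, not the technique used in this work), might look like the following; the model and epsilon value are illustrative assumptions.

```python
# Hypothetical FGSM sketch: nudge every pixel of one image slightly in
# the direction that increases the classifier's loss, so the change is
# nearly invisible to a human but the prediction flips.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()

def fgsm(image, true_label, epsilon=0.03):
    """Return a perturbed copy of `image` (shape (1, 3, H, W), values in [0, 1])."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    adv = image + epsilon * image.grad.sign()   # one signed-gradient step
    return adv.clamp(0, 1).detach()
```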

As seen in the video demo below, the system correctly recognizes the banana, and still does so, for the most part, when an ordinary printed image of a toaster is placed next to it. But once the psychedelic sticker is placed next to the banana, the classifier confidently reports a toaster instead:

The team also found that the patch works without overlapping the subject and keeps working regardless of lighting conditions, camera angle, the other objects in view, and even the particular classifier under attack.
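That robustness is typically obtained by sampling random transformations of the patch during training, in the spirit of the paper's expectation-over-transformation approach. A minimal sketch, assuming illustrative rotation and scale ranges, could extend apply_patch from the earlier sketch like so:

```python
# Hypothetical transformation sampling: resize and rotate the patch at
# random before each paste, so the optimized sticker keeps fooling the
# classifier under varied viewing conditions. Ranges are illustrative.
import torch
import torchvision.transforms.functional as TF

def random_transform(patch):
    angle = float(torch.empty(1).uniform_(-45, 45))   # degrees
    scale = float(torch.empty(1).uniform_(0.7, 1.3))
    size = [max(8, int(patch.shape[-1] * scale))] * 2
    return TF.rotate(TF.resize(patch, size), angle)
```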

The researchers go on to explain how the psychedelic designs work:

This attack generates an image-independent patch that is extremely salient to a neural network. This patch can then be placed anywhere within the field of view of the classifier, and causes the classifier to output a targeted class.
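Reusing the names from the training sketch above, a quick hypothetical check of that claim would paste the finished patch into unseen images (test_images here is an assumed held-out batch) and count how often the target class wins:

```python
# Hypothetical evaluation: paste the trained patch into fresh images and
# measure how often the classifier now reports the target class.
with torch.no_grad():
    patched = apply_patch(test_images, patch.clamp(0, 1))
    preds = model(patched).argmax(dim=1)
    print("fooled:", (preds == TARGET).float().mean().item())
```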

While at first sight this looks like a defeat for AI image recognition, the experiment will actually help remove inconsistencies in such systems. Those working in the field now need to account for noisy, adversarial data appearing in subject images, and this finding gives machine learning-powered systems a chance to be hardened against similar deception in the future.

SOURCE: arXiv