Researchers Develop AI Model That Can Fool CAPTCHA With 100% Accuracy

There’s no doubt that CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) puzzles can get super annoying, especially when you’re booking a last-minute plane ticket or simply trying to log into a website. Yes, the ones where you have to pick out stairs, bikes, buses, and crosswalks from a grid of images.

Well, that may soon not be a problem anymore, as a group of researchers at ETH Zurich has developed an AI model capable of solving Google’s reCAPTCHAv2 security puzzles with ease. While that may make your life less annoying, it’s not exactly good news for web security.

An AI Model That Solves 100% of Captchas

The research paper, titled “Breaking reCAPTCHAv2” and published on September 13, sees three researchers (Andreas Plesner, Tobias Vontobel, and Roger Wattenhofer) examine “the efficacy of employing advanced machine learning methods to solve captchas from Google’s reCAPTCHAv2 system.”

As a result, they ended up developing an AI model based on the YOLO (You Only Look Once) image-processing model, which “can solve 100% of the captchas, while previous work only solved 68-71%.” They essentially trained the model to recognize the objects that appear in reCAPTCHAv2 tests. Currently, there are 13 common classes of objects in these security challenges, including bicycles, bridges, cars, buses, chimneys, crosswalks, fire hydrants, motorcycles, mountains, stairs, palm trees, and traffic lights.

Different Google reCAPTCHAv2 examples | Image Courtesy: arxiv.org
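
For the technically curious, the core idea is fairly simple: fine-tune an off-the-shelf image model on those object classes, then ask it which grid tiles contain the requested object. The sketch below illustrates that idea in Python; it assumes the ultralytics YOLOv8 package and a hypothetical fine-tuned checkpoint, and it is not the researchers’ actual code.

```python
# A minimal sketch of the tile-classification idea (not the authors' code).
# Assumes the ultralytics YOLOv8 package and a hypothetical fine-tuned
# checkpoint "recaptcha_cls.pt" trained on the 13 reCAPTCHAv2 object classes.
from ultralytics import YOLO

model = YOLO("recaptcha_cls.pt")  # hypothetical classification weights

def tile_matches(tile_path: str, target_class: str, threshold: float = 0.5) -> bool:
    """Return True if a single grid tile likely contains the requested object."""
    result = model.predict(tile_path, verbose=False)[0]
    probs = result.probs                          # per-class probabilities
    predicted = result.names[probs.top1]          # best-scoring class name
    return predicted == target_class and float(probs.top1conf) >= threshold

# Example: decide which tiles of a 3x3 challenge asking for "bicycle" to click.
tiles = [f"tile_{i}.png" for i in range(9)]       # pre-cropped grid cells
clicks = [i for i, t in enumerate(tiles) if tile_matches(t, "bicycle")]
print("Tiles to click:", clicks)
```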

The research paper notes that several open-source projects have already worked towards cracking Google’s reCAPTCHAv2 using machine-learning techniques. However, none of them has achieved accuracy anywhere near this impressive.

The researchers tested the AI model under a range of conditions: running it with and without a VPN, mimicking human mouse movement, and operating both with and without browser history or cookies, making for quite a comprehensive testing ground. Across all of these conditions, the model hit 100% accuracy, although it still required occasional human intervention.
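
For context, “mimicking human mouse movement” usually means steering the cursor along smooth, slightly irregular curves instead of teleporting it straight to the target. Below is a minimal sketch of one common approach, a quadratic Bezier path with a randomized control point; the exact technique the researchers used may differ, and all names here are illustrative.

```python
# An illustrative sketch of human-like cursor movement along a Bezier curve,
# a common way to avoid the straight-line, instant jumps that give bots away.
# The researchers' exact technique may differ; every name here is hypothetical.
import random

def bezier_path(start, end, steps=50):
    """Quadratic Bezier points from start to end with a randomized control point."""
    (x0, y0), (x2, y2) = start, end
    # Offset the control point so the path bends slightly, like a human hand.
    x1 = (x0 + x2) / 2 + random.uniform(-100, 100)
    y1 = (y0 + y2) / 2 + random.uniform(-100, 100)
    points = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * x1 + t ** 2 * x2
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * y1 + t ** 2 * y2
        points.append((x, y))
    return points

# Example: move from the page centre to a challenge tile in small steps,
# pausing briefly between points (e.g. via a browser-automation mouse API).
for x, y in bezier_path((640, 400), (210, 320)):
    pass  # e.g. page.mouse.move(x, y) followed by a short random sleep
```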

That means the next step is to develop this AI model further so it can work without any human intervention. However…

Good for AI, Bad for Us

While those security challenges may certainly seem pointless and frustrating, there’s a reason they exist. The internet’s malicious bots and crawlers can do serious damage if a threat actor uses them to gain access to sensitive information.

CAPTCHAs help protect the security and integrity of online systems. The most important example is the role CAPTCHA plays in protecting your bank account: CAPTCHA verifications keep bots attempting unauthorized access to your account at bay.

Likewise, when you create a social media account, these security puzzles prevent bots from creating fake accounts, at least to a degree. Not having such safeguards in place would essentially open the floodgates and leave users dangerously exposed.

A Check Point Research report noted a 30% year-on-year rise in global cyber attacks, with organizations facing an average of 1,636 attacks per week. Meanwhile, an eSentire report estimated that global cybercrime will cost the world an unimaginable $9.5 trillion.

High Time for Captchas to Evolve

So, while AI models like this may seem quite alarming, such advancements are necessary for the industry. They force organizations to buckle up and solidify their security measures. Most importantly, ever since Google launched reCAPTCHAv3 back in 2018, there has been no real progress in evolving the security algorithm further.

As the research paper rightly notes,

Continuous progress in AI requires a simultaneous development of digital security measures. Subsequent investigations should prioritize the development of captcha systems capable of adjusting to the complexity of artificial intelligence or explore alternative methods of human verification that can withstand the progress of technology.

Besides, since AI is gradually reaching a point where it can interact and even talk like a human (ChatGPT’s Advanced Voice Mode and Gemini Live, for example), it wouldn’t be much of a stretch for it to mimic human-like behavior well enough to fool reCAPTCHAs altogether. Ultimately, it all comes down to using AI responsibly; instead of looking at these advancements as a threat, we should try to use them to our advantage.

What do you think about the new AI model that can solve CAPTCHAs for you? Drop your thoughts in the comments down below!
