Bing Chat, powered by Microsoft’s artificial intelligence, can be easily tricked into solving CAPTCHA tests, which are designed to prevent bots from accessing certain sites. All it takes is a simple lie and some basic photo manipulation.

Tests designed to be easy for humans to pass but difficult for web crawlers have long been a security feature for all kinds of websites. Over time, CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) types have become more advanced and more difficult to solve.

While humans often struggle to complete modern CAPTCHAs, today’s advanced AI models can solve them with ease. For that reason, the models are trained to refuse such requests, which should prevent them from being used for malicious purposes. This restriction is part of a process known as “alignment”.

Bing Chat is powered by OpenAI’s GPT-4 model and obediently refuses to solve CAPTCHA tests when they are presented to it.

Denis Shiryaev, CEO of the artificial intelligence company neural.love, said he was able to convince Bing Chat to read the text of a CAPTCHA by embedding it in a photo of a locket. He then told the AI that the locket had belonged to his recently deceased grandmother and that he needed help deciphering the inscription inside. The chatbot then solved the CAPTCHA despite being programmed not to.

Source: New Scientist