A hacker named Amadon discovered a way to bypass ChatGPT’s safety guidelines, tricking it into providing bomb-making instructions by creating a fictional scenario where the bot’s restrictions didn’t apply.

This technique, known as “jailbreaking,” led ChatGPT to produce detailed instructions for making explosive devices. An explosives expert who reviewed the output confirmed that the information could be used to build a functional bomb.

Amadon reported the issue to OpenAI, but the company’s bug bounty program responded that safety concerns of this kind require broader research rather than being treated as discrete bugs that can be individually fixed.

Source: TechCrunch