OpenAI has shut down its AI text classifier, a tool meant to distinguish AI-generated from human-written text, citing its low accuracy. The tool was discontinued as of July 20.

“We are working to incorporate feedback and are currently researching more effective techniques for determining the provenance of text,” the company noted.

OpenAI does, however, plan to develop and roll out new mechanisms that let users determine whether audio or visual content is AI-generated, though it has not yet said what those mechanisms might look like.

OpenAI acknowledged that the classifier was never very good at identifying AI-generated text and warned that it could produce false positives, that is, human-written text incorrectly labeled as AI-generated.

After OpenAI's ChatGPT became one of the fastest-growing applications of all time, many sectors raised the alarm about AI-generated text and art. Teachers in particular feared that students would stop learning and let ChatGPT write their homework; New York schools went so far as to ban access to ChatGPT on school grounds.

Misinformation spread by AI is another concern: studies have shown that AI-generated text, such as tweets, can be more persuasive than text written by humans. Governments have not yet figured out how to regulate AI and are leaving that task to individual groups and organizations; for now, no one seems to have the answers.

Source: The Verge