OpenAI discontinues its AI writing detector due to “low rate of accuracy” [Ars Technica]

An AI-generated image of a slot machine in a desert. (credit: Midjourney)

On Thursday, OpenAI quietly pulled its AI Classifier, an experimental tool designed to detect AI-written text. The decommissioning, first noticed by Decrypt, occurred with no major fanfare and was announced through a small note added to OpenAI’s official AI Classifier webpage:

As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.

Released on January 31 amid clamor from educators about students potentially using ChatGPT to write essays and schoolwork, OpenAI's AI Classifier always felt like a performative Band-Aid on a deep wound. From the beginning, OpenAI admitted that its AI Classifier was not "fully reliable," correctly identifying only 26 percent of AI-written text as "likely AI-written" and incorrectly labeling human-written text as AI-written 9 percent of the time.
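To see why those numbers doomed the tool, consider a quick back-of-the-envelope calculation. The scenario below is hypothetical (the essay counts and the 50/50 split are assumptions, not from OpenAI), but the detection rates are the ones OpenAI published:

```python
# Hypothetical classroom: 200 essays, half AI-written, half human-written.
# Only the two rates below come from OpenAI's published figures.
ai_essays = 100
human_essays = 100

sensitivity = 0.26          # 26% of AI-written text correctly flagged
false_positive_rate = 0.09  # 9% of human-written text wrongly flagged

true_flags = ai_essays * sensitivity               # AI essays actually caught
false_flags = human_essays * false_positive_rate   # students falsely accused

# Of everything the tool flags, how much is really AI-written?
precision = true_flags / (true_flags + false_flags)
print(f"Flagged essays that are really AI-written: {precision:.0%}")
print(f"AI-written essays that slip through: {ai_essays - true_flags:.0f}")
```

In this sketch, roughly one in four flagged essays belongs to an innocent student, while nearly three-quarters of the AI-written essays pass undetected, which is the practical meaning of "low rate of accuracy."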

As we’ve pointed out on Ars, AI writing detectors such as OpenAI’s AI Classifier, Turnitin, and GPTZero simply aren’t accurate enough to produce trustworthy results. The methodology behind them is speculative and unproven, yet the tools are routinely used to falsely accuse students of cheating.