OpenAI, the creator of GPT-4, has taken down links to its AI Classifier tool, the detection tool widely used by universities to determine whether a student had used ChatGPT to write essays, following thousands of appeals from students who were accused of doing so and had final grades withheld pending investigation. Faced with a growing number of threatened class actions against both the universities and OpenAI, OpenAI has reversed the claim made at the launch of AI Classifier that it could “distinguish between human and machine written text”, although even at the time OpenAI acknowledged that the tool was far from perfect. The use of AI Classifier gave rise to hundreds of thousands of claims concerning plagiarism, academic integrity, and the generation of misinformation via generative AI.
OpenAI claimed that AI Classifier correctly identified only 26% of AI-written text as “likely AI-written,” while incorrectly labelling human-written text as AI-generated 9% of the time. Today’s announcement admits that the Classifier was a failure, although its removal was very low key, probably as an anti-litigation ploy.
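Those two figures alone show why the tool was unsuited to disciplinary use. As a rough illustration (the 20% base rate of AI-written essays below is an assumption chosen for the example, not an OpenAI figure), applying Bayes’ theorem to OpenAI’s published rates suggests that fewer than half of the essays the tool flagged would actually have been AI-written:

    # Illustrative sketch only: Bayes' theorem applied to OpenAI's published rates.
    # The 20% base rate of AI-written essays is an assumed figure for the example.

    sensitivity = 0.26          # P(flagged | AI-written), per OpenAI
    false_positive_rate = 0.09  # P(flagged | human-written), per OpenAI
    base_rate = 0.20            # assumed share of essays actually AI-written

    p_flagged = base_rate * sensitivity + (1 - base_rate) * false_positive_rate
    p_ai_given_flag = (base_rate * sensitivity) / p_flagged

    print(f"P(essay flagged) = {p_flagged:.3f}")               # ~0.124
    print(f"P(AI-written | flagged) = {p_ai_given_flag:.3f}")  # ~0.419

On those assumptions, a flagged essay is more likely to be human-written than AI-written: a weak basis indeed for withholding a student’s grades.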
OpenAI’s announcement reads: “As of July 20th, 2023, the AI Classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.”
The move will no doubt require all of the universities that have sanctioned students on the basis of the AI Classifier tool to reverse their decisions, unless the student has admitted using AI, and could give rise to many millions of dollars in compensation claims against those universities.
As Nick Lockett of ADL Solicitors identified months ago, AI Classifier was free to use and came without any warranty as to accuracy: although OpenAI’s press statements created certain expectations, the terms of use excluded liability because the system was in beta. This episode shows that, in the rush for publicity, fame and the $$$$$$ available through unverified statements about the accuracy and acceptability of AI, developers are not properly identifying the risks of AI usage, and it should serve as a warning for the future.
Paul Christiano, former head of language model alignment on OpenAI’s safety team, recently warned that a “full-blown A.I. takeover scenario” is a “distinct possibility” and that it is no longer a negligible risk “that human- or superhuman-level A.I. could take control of humanity and even annihilate it … there is a very decent chance advanced A.I. could spell potentially world-ending calamity in the near future.”

