Did ChatGPT write this? Here’s how to tell
The AI wars are heating up. In late 2022, OpenAI’s ChatGPT made headlines for showing us what a new kind of search engine could look like. ChatGPT (the GPT stands for “Generative Pre-trained Transformer”) is a chatbot, one that can process queries and spit out relevant information to answer questions about historical facts, recipes, car dimensions and lots more. As a bonus, ChatGPT lets you word questions in plain English, so you’re not forced to write queries like “how to stop dog pooping everywhere reddit.” The result is, essentially, a search box that you can message back and forth with. It almost makes Google search look a little primitive. Microsoft, the maker of Bing and OpenAI’s biggest investor, is okay with this.
ChatGPT, now running on the newer GPT-4 model, provides thorough answers: it can even write your code, write your cover letter and pass your law exam. It also provides thoroughly wrong answers sometimes. It’s worrying how confidently ChatGPT presents inaccurate information. That hasn’t stopped newsrooms from rethinking how many writers they hire, or professors from coming out against the chatbot. (Though not all professors. Some embrace the change.)
The excitement around artificial intelligence is anything but artificial. At least for some. College professors and job recruiters are less than excited about having to discern human words from chatbot chatter. Industry experts are less than enthused about a potential wave of misinformation, signing an open letter that warns of AI’s potential to “flood our information channels with propaganda and untruth.” Those who have signed say “such decisions must not be delegated to unelected tech leaders.” Issues like this are exactly what Mozilla seeks to address with the Responsible Computing Challenge, which pushes higher education programs to emphasize tech’s political and societal impact, and with Mozilla.ai, a startup with the mission of making it easy to create AI that’s open source and ethical.
As we enter this brave new world where even a friend’s Snapchat message could be AI-written, you might want to know a bit more about chatbots’ capabilities and limitations. Can you spot a paragraph written by AI? Can you tell if your coworker is actually responding to you and not ChatGPT? Do you know how to spot misinformation within a chatbot’s answers? (ChatGPT-infused Bing definitely still gets facts wrong at times.) It’s not always possible to know if an AI wrote some copy, but sometimes you can detect language written by ChatGPT and other bots by using a detector tool and watching for awkward language. Read on to learn how.
How to detect ChatGPT text yourself
You can detect ChatGPT-written text using online tools like OpenAI’s AI Text Classifier. The tool comes from OpenAI itself, the company that made ChatGPT. It’s worth noting that the tool isn’t perfect. OpenAI says the classifier needs at least 1,000 characters (roughly 150 to 250 words) before it can sniff out AI-generated content, so something like an AI-generated text message may fly under its radar. Also, even when it gets enough text, it isn’t always 100% accurate at telling AI-written language from human-written language. AI-made text that has been edited by a human can also fool the tool.
(Update: As of July 2023, OpenAI has discontinued the AI classifier it used to detect AI-generated text and, as of early 2024, the company has taken the original tool offline. The company says it is working on new, more effective ways of detecting AI-generated text, as well as AI-generated audio and video.)
OpenAI’s tool may not be perfect, but there are other offerings in the ChatGPT text detection world. The Medium blog Geek Culture lists other options made by folks at Princeton and Stanford. If it’s critical to know whether text was written by a bot or a human, testing it on multiple tools might help, as the rough sketch below illustrates. ChatGPT is changing quickly, so your mileage may vary.
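To make the “try multiple tools” advice concrete, here is a minimal sketch of that workflow in Python. The detector functions are hypothetical placeholders (the real detectors mentioned above are web tools, and none of their APIs appear in this article), so treat this as an illustration of comparing verdicts, not as working detection code.

```python
from typing import Callable, Dict

# Hypothetical placeholder detectors. These stubs just return made-up scores
# between 0.0 (likely human) and 1.0 (likely AI); swap in real tools yourself.
def detector_a(text: str) -> float:
    return 0.72

def detector_b(text: str) -> float:
    return 0.35

def compare_detectors(text: str, detectors: Dict[str, Callable[[str], float]]) -> None:
    """Run the same passage through every detector and print each verdict.

    Disagreement between tools is itself useful information: it is a reminder
    that no single score should be treated as proof either way.
    """
    for name, detect in detectors.items():
        score = detect(text)
        verdict = "leans AI" if score >= 0.5 else "leans human"
        print(f"{name}: {score:.2f} ({verdict})")

compare_detectors(
    "Paste the paragraph you want to check here.",
    {"detector_a": detector_a, "detector_b": detector_b},
)
```

If the tools disagree, that is a signal in itself: don’t treat any one verdict as the final word.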
Detecting ChatGPT text: The caveats
It’s important to emphasize that no method of detecting AI-written text is foolproof, and that includes the tools available today. Jesse McCrosky is a data scientist with the Mozilla Foundation who warns of AI text detection tools’ limitations. “Detector tools will always be imperfect, which makes them nearly useless for most applications,” says McCrosky. “One cannot accuse a student of using AI to write their essay based on the output of a detector tool that you know has a 10% chance of giving a false positive.”
According to McCrosky, a truly reliable AI detector may never exist, because it will always be possible for software to write “undetectable” text, or to generate text with the specific intent of evading these sorts of detectors. And then there’s the fact that the AI tools available to us are always improving. “There can be some sense of an ‘arms race’ between ChatGPT text detectors and detector-evaders, but there will never be a situation in which detectors can be trusted,” says McCrosky.
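To see why that 10% false positive rate matters so much, here is a back-of-the-envelope calculation. The class size, the share of AI-written essays and the detector’s catch rate are assumptions made up for this illustration; only the 10% false positive figure comes from the quote above.

```python
# Worked example: why a 10% false positive rate makes accusations unreliable.
# All numbers except the false positive rate are illustrative assumptions.

essays = 200                # essays submitted (assumed)
ai_written = 20             # essays actually written with a chatbot (assumed)
catch_rate = 0.90           # detector flags 90% of AI-written essays (assumed)
false_positive_rate = 0.10  # detector wrongly flags 10% of human-written essays (from the quote)

flagged_ai = ai_written * catch_rate                         # 18 correctly flagged
flagged_human = (essays - ai_written) * false_positive_rate  # 18 students falsely accused

precision = flagged_ai / (flagged_ai + flagged_human)
print(f"Essays flagged as AI-written: {flagged_ai + flagged_human:.0f}")
print(f"Chance a flagged essay was really AI-written: {precision:.0%}")  # about 50%
```

Under those assumptions, roughly half of the flagged essays would belong to students who did nothing wrong, which is exactly McCrosky’s point.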
How to spot misinformation within ChatGPT
It’s no secret that ChatGPT can spread (and has spread) misinformation and disinformation. Microsoft may be using tools like those from NewsGuard to limit the misleading responses its AI gives, but the issue is still cause for concern. The Poynter Institute has our favorite tips to spot misinformation within ChatGPT: 1) check for patterns and inconsistencies, 2) look for signs of human error and 3) check the context. If a ChatGPT answer repeats something multiple times, has weird errors that a person wouldn’t make or says something that doesn’t make sense in the context of what you’re reading, you might be reading misleading content. Check the source links at the bottom of responses when they’re included, and make sure you do your own research outside of ChatGPT too. Treat it as a starting point, not the final word.
ChatGPT is fun, but watch out
ChatGPT offers an interesting glimpse into a chatbot-answer-filled world, but it also acts as a warning about the downsides. With great smarts comes great responsibility. As Bing and ChatGPT (and Sydney?) learn how to be better chatbots, we as users will have to keep using ChatGPT detection tools to verify that the words we’re seeing are human-made and that the facts we’re sharing are indeed factual.
Written by: Xavier Harding
Edited by: Ashley Boyd, Audrey Hingle, Carys Afoko, Innocent Nwani
SEO Insight: Aslam Shaffraz
For further reading on AI, check these out from the Mozilla Foundation: