What Are the Limitations of ChatGPT?

ChatGPT is a popular chatbot released by OpenAI in late 2022. Chatbots, or computer programs that simulate human interactions via artificial intelligence (AI) and natural language processing (NLP), can help answer many academic questions.

While using ChatGPT for your studies can be really useful, particularly for help with exam preparation, homework assignments, or academic writing, it is not without its limitations. It’s essential to keep in mind that AI language models like ChatGPT are still developing technologies and are far from perfect. Current limitations include:

Note
Universities and other institutions are still developing their stances on how ChatGPT and similar tools may be used. Always follow your institution’s guidelines over any suggestions you read online. Check out our guide to current university policies on AI writing for more information.

You can also learn more about how to use AI tools responsibly on our AI writing resources page.


ChatGPT limitation 1: Incorrect answers

Because ChatGPT is a constantly evolving language model, it will inevitably make mistakes. It’s critical to double-check its output, as it has been known to make grammatical, mathematical, factual, and reasoning errors (such as logical fallacies).

It’s not always reliable for answering complicated questions about specialist topics like grammar or mathematics, so it’s best to keep these types of questions basic. Double-check the answers it gives to any more specialized queries against credible sources.

Perhaps more concerningly, the chatbot sometimes has difficulty acknowledging that it doesn’t know something and instead fabricates a plausible-sounding answer. In this way, it prioritizes providing what it perceives as a more “complete” answer over factual correctness.

Some sources have highlighted several instances where ChatGPT cited nonexistent legal provisions it had invented rather than admit that it didn’t know the answer. This is especially common in domains where the chatbot lacks expertise, such as medicine or law, or any field that requires specialized knowledge beyond a general understanding of language.


ChatGPT limitation 2: Biased answers

ChatGPT, like all language models, is at risk of inherent biases, and there are valid concerns that widespread usage of AI tools can perpetuate cultural, racial, and gender stigma. This is due to a few factors:

  • How the initial training datasets were designed
  • Who designed them
  • How well the model “learns” over time

If biased inputs determine the pool of knowledge the chatbot draws on, biased outputs are likely to result, particularly in how it responds to certain topics and in the language it uses. While this is a challenge faced by nearly every AI tool, bias in technology at large remains a significant issue for the future.

ChatGPT limitation 3: Lack of human insight

While ChatGPT is quite adept at generating coherent responses to specific prompts or questions, it ultimately is not human. As such, it can only mimic human behavior, not experience it itself. This has a variety of implications:

  • It does not always understand the full context of a topic, which can lead to nonsensical or overly literal responses.
  • It does not have emotional intelligence and does not recognize or respond to emotional cues like sarcasm, irony, or humor.
  • It does not always recognize idioms, regionalisms, or slang. Instead, it may take a phrase like “raining cats and dogs” literally.
  • It does not have a physical presence and cannot see, hear, or interact with the world like humans do. As a result, its understanding of the world comes entirely from textual sources rather than direct experience.
  • It answers questions very robotically, making it easy to see that its outputs are machine-generated and often flow from a template.
  • It takes questions at face value and does not necessarily understand subtext. In other words, it cannot “read between the lines” or take sides. While a bias for neutrality is often a good thing, some questions require you to choose a side.
  • It does not have real-world experiences or commonsense knowledge and cannot understand and respond to situations that require this kind of knowledge.
  • It can summarize and explain a topic but cannot offer a unique insight. Humans need knowledge to create, but lived experiences and subjective opinions also are crucial to this process—ChatGPT cannot provide these.

Note
Passing off AI-generated text as your own work is generally considered plagiarism (or at least academic dishonesty) and may result in an automatic fail and other negative consequences. An AI detector may be used to detect this offense.

We strongly recommend against using AI tools as a substitute for your own writing.

ChatGPT limitation 4: Overly long or wordy answers

ChatGPT’s training datasets encourage it to cover a topic from many different angles, answering questions in every way it can conceive of.

While this is positive in some ways—it explains complicated topics very thoroughly—there are certainly topics where the best answer is the most direct one, or even a “yes” or “no.” This tendency to over-explain can make ChatGPT’s answers overly formal, redundant, and very lengthy.

You can use Scribbr’s free text summarizer and Scribbr’s free paraphrasing tool to create more concise and coherent texts.



Frequently asked questions about ChatGPT

Is ChatGPT a credible source?

No, ChatGPT is not a credible source of factual information and can’t be cited for this purpose in academic writing. While it tries to provide accurate answers, it often gets things wrong because its responses are based on patterns, not facts and data.

Specifically, the CRAAP test for evaluating sources includes five criteria: currency, relevance, authority, accuracy, and purpose. ChatGPT fails to meet at least three of them:

  • Currency: The dataset that ChatGPT was trained on only extends to 2021, making it outdated.
  • Authority: It’s just a language model and is not considered a trustworthy source of factual information.
  • Accuracy: It bases its responses on patterns rather than evidence and is unable to cite its sources.

So you shouldn’t cite ChatGPT as a trustworthy source for a factual claim. You might still cite ChatGPT for other reasons—for example, if you’re writing a paper about AI language models, ChatGPT responses are a relevant primary source.

Where does ChatGPT get its information from?

ChatGPT is an AI language model that was trained on a large body of text from a variety of sources (e.g., Wikipedia, books, news articles, scientific journals). The dataset only went up to 2021, meaning that it lacks information on more recent events.

It’s also important to understand that ChatGPT doesn’t access a database of facts to answer your questions. Instead, its responses are based on patterns that it saw in the training data.

So ChatGPT is not always trustworthy. It can usually answer general knowledge questions accurately, but it can easily give misleading answers on more specialist topics.

Another consequence of this way of generating responses is that ChatGPT usually can’t cite its sources accurately. It doesn’t really know what source it’s basing any specific claim on. It’s best to check any information you get from it against a credible source.

Is ChatGPT biased?

ChatGPT can sometimes reproduce biases from its training data, since it draws on the text it has “seen” to create plausible responses to your prompts.

For example, users have shown that it sometimes makes sexist assumptions such as that a doctor mentioned in a prompt must be a man rather than a woman. Some have also pointed out political bias in terms of which political figures the tool is willing to write positively or negatively about and which requests it refuses.

The tool is unlikely to be consistently biased toward a particular perspective or against a particular group. Rather, its responses are based on its training data and on the way you phrase your ChatGPT prompts. It’s sensitive to phrasing, so asking it the same question in different ways will result in quite different answers.

Can I create citations using ChatGPT?

No, ChatGPT is not designed for creating citations. You can ask it to generate them, but it tends to make up sources that don’t exist or present information in the wrong format. It also cannot add citations to direct quotes in your text.

Instead, use a tool designed for this purpose, like the Scribbr Citation Generator.

You can, however, use ChatGPT for assignments in other ways, such as for inspiration, feedback, and general writing advice.

How long will ChatGPT be free?

It’s not clear whether ChatGPT will stop being available for free in the future—and if so, when. The tool was originally released in November 2022 as a “research preview.” It was released for free so that the model could be tested on a very large user base.

The framing of the tool as a “preview” suggests that it may not be available for free in the long run, but so far, no plans have been announced to end free access to the tool.

A premium version, ChatGPT Plus, is available for $20 a month and provides access to features like GPT-4, a more advanced version of the model. It may be that this is the only way OpenAI (the publisher of ChatGPT) plans to monetize it and that the basic version will remain free. Or it may be that the high costs of running the tool’s servers lead them to end the free version in the future. We don’t know yet.

Cite this Scribbr article


George, T. (2023, November 16). What Are the Limitations of ChatGPT?. Scribbr. Retrieved January 8, 2024, from https://www.scribbr.com/ai-tools/chatgpt-limitations/

Tegan George

Tegan is an American based in Amsterdam, with master’s degrees in political science and education administration. While she is definitely a political scientist at heart, her experience working at universities led to a passion for making social science topics more approachable and exciting to students.
