Answered By Library Staff
Last Updated: Apr 09, 2024

Sometimes a source really is too good to be true! 

Depending on the generative AI, it might be telling the user what it thinks they want to hear. As of June 2023, ChatGPT and other generative AIs do make up citations that don't exist. They might seem like they could be real (the author might even be a real person, the publisher might exist, the journal or periodical might focus on publishing articles on that topic), but the overall source or citation is not real. This is because many AIs do not retrieve real sources; instead, they string together likely word combinations learned from the patterns in the text their engineers trained them on. Many AIs are also not connected to the internet, so they cannot provide links to up-to-date sources.

Here's an example from Google Bard, a generative AI. All of these book recommendations (citations) are made up. The first book does exist, but it's written by a different author. A quick Google search helped us verify what was real:

[Image: Google Bard's answer with made-up book recommendations]

Just like when using Wikipedia for research, we need to use this tool wisely. We recommend the SIFT Method, which involves lateral reading, for evaluating a citation from an AI. If you can't find any mention of the exact source using a search engine, that should be a red flag. Made-up sources have real-world implications and consequences if they are not verified. Take this news story, for example:

[Image: snapshot of the news story]

We also recommend that faculty take measures to verify student sources. For example, College & Research Libraries recommends requiring students to submit full-text copies of cited sources.

It's better to use AI like ChatGPT for tasks like these:

  • Brainstorming for keywords or phrasings to use in your research or writing 
  • Gleaning ideas about your topic that you may not have thought of
  • Interpreting or summarizing concepts or other works you might be struggling to understand on your own