Research Guides: Artificial Intelligence (AI) and Information Literacy: What does AI get wrong? (2024)

Sometimes an AI will confidently return an incorrect answer. This could be a factual error or, as in the example below, inadvertently omitted information. Vanuatu and Vatican City are both real countries, but they are not the only countries that start with the letter V.

[Screenshot: ChatGPT's incomplete list of countries that start with the letter V]

Sometimes, rather than simply being wrong, an AI will invent information that does not exist. Some people call this a “hallucination,” or, when the invented information is a citation, a “ghost citation.”

[Screenshot: ChatGPT's fabricated list of sources on The Great Gatsby]

These are trickier to catch, because these inaccuracies often contain a mix of real and fake information. In the screenshot above, none of the listed sources on The Great Gatsby exist. The authors are all real people, and the collections are all real books, but none of the articles listed here are actually real.

When ChatGPT gives a URL for a source, it often makes up a fake URL, or uses a real URL that leads to something completely different. It's key to double-check the answers AI gives you against a human-created source. You can find out how to fact-check AI text in the sections and video at the bottom of this page.
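One small part of that double-checking can be automated: testing whether AI-supplied links even load. The sketch below is a minimal Python example (the function name and sample usage are illustrative, not from this guide). Note the limits: a dead link does not prove a citation is fake, and a live link does not prove the page supports the claim, so this only narrows down what you must read yourself.

```python
import urllib.request
import urllib.error

def url_resolves(url, timeout=5):
    """Return True if the URL responds with a success status, False otherwise."""
    try:
        # Use a HEAD request so we don't download the whole page.
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "citation-check"}
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, ValueError):
        # Dead link, unknown host, or malformed URL: treat as unverified.
        return False

# Example: a malformed URL an AI might invent is flagged as unverified.
print(url_resolves("not-a-real-url"))  # False
```

Even when a link passes this check, you still need to open it and confirm the page actually says what the AI claims it does.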

Currently, if you ask an AI to cite its sources, the results it gives you are very unlikely to be where it actually pulled the information from. In fact, neither the AI nor its programmers can say precisely where in its enormous training dataset any given piece of information comes from.

As of summer 2023, even an AI that provides real footnotes is not showing where its information actually comes from, just an assortment of webpages and articles roughly related to the topic of the prompt. If prompted again, the AI will provide the exact same answer but footnote different sources.

[Screenshot: first response to the prompt, with footnoted sources]

[Screenshot: second response to the same prompt, restricted to peer-reviewed sources, with different footnotes]

For example, the two screenshots above are responses to the same prompt. In the second screenshot, the user specified to use only peer-reviewed sources. When you compare the two, you can see that the AI cites different sources for word-for-word identical sentences. This means that these footnotes are not where the AI sourced its information. (Also note that the sources on the right are all either not peer-reviewed or not relevant. Plus, artsy.net, history.com, and certainly theprouditalian.com are not reliable enough for you to source from in your assignments.)

This matters because an important part of determining a human author’s credibility is seeing what sources they draw on for their argument. You can go to these sources to fact-check the information they provide, and you can look at their sources as a whole to get insight into the author’s process, potentially revealing a flawed or biased way of information-gathering.

You should fact-check AI outputs the way you would a text that provides no sources, like some online articles or social media posts: determine its credibility by looking to outside, human-created sources (see lateral reading on the next page).

AI can accidentally ignore instructions or interpret a prompt in a way you weren’t expecting. A minor example of this is ChatGPT returning a 5-paragraph response when it was prompted to give a 3-paragraph response, or ignoring a direction to include citations throughout a piece of writing. In more major ways, though, it can make interpretations that you might not catch. If you’re not too familiar with the topic you’re asking an AI-based tool about, you might not even realize that it’s interpreting your prompt inaccurately.

The way you ask the question can also skew the response you get. Any assumptions you make in your prompt will likely be fed back to you by the AI.

For instance, when ChatGPT was prompted:

“Write a 5 paragraph essay on the role of elephants in the University of Maryland's sports culture. Be sure to only include factual information. Provide a list of sources at the end and cite throughout to support your claims.”

It returned an answer full of false information about elephants being a symbol of UMD sports alongside Testudo, making up some elephant-related traditions and falsely claiming that elephants helped build U.S. railroads during the Civil War. It generated a list of non-existent news articles and fake website links supporting both of these claims.

[Screenshot: ChatGPT's essay falsely describing elephants in UMD sports culture, with fabricated citations]

By contrast, when ChatGPT was prompted:

“Does UMD's sports culture involve elephants? Give a detailed answer explaining your reasoning. Be sure to only include factual information. Provide a list of sources at the end and cite throughout to support your claims.”

It returned a correct answer with information about our real mascot, Testudo the terrapin.

[Screenshot: ChatGPT's answer correctly identifying Testudo the terrapin as UMD's mascot]

However, both of the sources it provided were dead links: either out-of-date pages on the UMD website or real pages with a muddled URL.

ChatGPT interpreted the first prompt as “taking it as a given that UMD’s sports culture involves elephants, write an answer justifying this.” However, with the way the second prompt was phrased, the AI was free to answer the question based on its training data, and returned the correct answer.

Depending on how we phrased the question, ChatGPT either reinforced a mistake we made in the prompt or corrected that same mistake. Paying attention to your prompt phrasing can make a key difference!

You can read both conversations in full here:

Fact-checking AI

Now that you know some common errors that AI text generators make, how do we go about fact-checking AI outputs? Click the "next" button below to learn about fact-checking using lateral reading.
