Computers in Libraries
Vol. 44 No. 4 — May 2024

VOICES OF THE SEARCHERS

The New Information Literacy
by Mary Ellen Bates

More than 60 countries are holding national elections in 2024. The availability of generative AI (GenAI) has introduced an entirely new level of risk from malicious actors to election infrastructure. While librarians and search professionals have always emphasized the importance of information literacy, today’s information landscape requires a new approach to determining the validity of online information.

I recently read Verified: How to Think Straight, Get Duped Less, and Make Better Decisions About What to Believe Online by Mike Caulfield and Sam Wineburg (University of Chicago Press, 2023). What caught my attention was the authors’ comment that even digital natives have been taught a media literacy approach from the 1970s. In the past, we could evaluate a web source by examining its URL and seeing whether the content had been updated recently, was free of spelling and grammatical mistakes, and was frequently cited. Today, those signals can all be faked: a dot-org domain doesn’t necessarily indicate a nonprofit organization, a bot can create well-written daily content, and diploma mills and predatory publishers can lend authority to anyone. We can no longer gauge a website’s reliability by its appearance and content.

Instead, info pros need to take on—and teach—the skills of professional fact-checkers, who focus on “lateral reading” to judge the truth of an assertion. In essence, fact-checkers use the web to verify the web. Instead of examining a website to determine whether it is trustworthy, fact-checkers look for context. What are others saying about this source? Who cites this source? What are the original sources for this site’s claims? In a time when GenAI can produce credible-sounding reports and moderately realistic fake images, the practice of ascertaining reliability by the test of “Yeah, that looks right” is dangerously misguided. GenAI tools are designed to “look right”; they are only plausibility generators, after all.
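One way to make lateral reading concrete for users is to show it as a handful of searches about a source rather than on it. The Python sketch below builds such queries using common search-engine syntax (quoted phrases, the -site: exclusion operator); the domain and claim are invented examples, not ones Caulfield and Wineburg discuss.

def lateral_reading_queries(domain: str, claim: str) -> list[str]:
    # Ask the rest of the web about the source, excluding the source itself.
    return [
        # What are others saying about this source?
        f'"{domain}" -site:{domain}',
        # What is its reputation among reviewers and fact-checkers?
        f'"{domain}" ("fact check" OR criticism OR funding) -site:{domain}',
        # Who else reports the claim, and what is the original source?
        f'"{claim}" -site:{domain}',
    ]

# Invented example: paste each query into a search engine.
for query in lateral_reading_queries("example.org", "soda taxes reduce consumption"):
    print(query)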

The rise in AI-generated content is particularly challenging for searchers, who are accustomed to evaluating sources on the fly, keeping at least 15 browser tabs open with promising leads, and bouncing between a professional bibliographic database and a search engine’s chatbot. On our path to the answer, we wind up following dead ends while also monitoring our peripheral vision for useful outliers. The very nature of a well-conducted search is that it is neutral and free of preconceptions or assumptions, characteristics that a good searcher cultivates by keeping a less-filtered attitude toward what information emerges during the search process. This open-mindedness, however, gets us in trouble if we forget how easy it is to fake credibility.

When I give presentations on super searcher tips, I often try to address the question of when to end a search. Back when I relied on professional databases of relatively authoritative bibliographic content, I believed that once I found several mentions of the same study, I had probably found the authoritative source and I could stop searching. Now, in an environment in which bad actors can simply flood the infoscape with false or misleading content, finding repeated mentions of a particular “authoritative” report simply means that we have to conduct further research to determine its trustworthiness and reputation among experts.

To effectively advocate for information literacy in 2024, we searchers have to become more proficient with the chatbots our clients are using; we only have credibility when we can back up our claims. To teach users not to trust the polished appearance of a chatbot like Google’s Gemini or Microsoft’s Copilot, we have to know how to test the authority of an apparently well-researched answer.

While many general-purpose GenAI tools are reluctant to cite their sources for a response when directly asked, there are workarounds. And these cited “sources” offer tangible evidence of the unverifiability of many chatbots’ responses. For example, I recently asked Google’s Gemini, “Are taxes on soda effective? Show me sources that back up each claim.” Of the five sources cited, three were entirely hallucinated, one had the correct title but an incorrect citation, and one citation was accurate. While the chatbot’s response may, in fact, reflect a consensus of the sources that address this topic, we need to be able to show our users why its response cannot substitute for an answer. Professional searchers must develop enough skill in prompt engineering that we can teach users to probe more, to go beyond the “Yeah, that looks right” test of reliability. We can use these conversations to point users to GenAI tools that are trained on peer-reviewed or otherwise authoritative sources.
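One way to demonstrate this kind of probing is to check each cited title against an independent index instead of trusting how polished it looks. The Python sketch below queries Crossref’s public REST API (api.crossref.org) for the nearest bibliographic match and flags citations whose best match barely resembles what the chatbot cited; the sample citations are invented stand-ins, not the ones Gemini gave me, and a fuzzy match is only a lead that a human still has to confirm.

import difflib
import requests

def best_crossref_match(citation: str) -> tuple[str, float]:
    # Crossref always returns its nearest matches, so the useful question
    # is how closely the top hit resembles the cited title.
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items or not items[0].get("title"):
        return "", 0.0
    title = items[0]["title"][0]
    score = difflib.SequenceMatcher(None, citation.lower(), title.lower()).ratio()
    return title, score

citations = [
    "The fiscal and health effects of soda taxes",  # invented example
    "Sugar-sweetened beverage taxation outcomes",   # invented example
]
for c in citations:
    title, score = best_crossref_match(c)
    verdict = "plausible match" if score > 0.6 else "possible hallucination"
    print(f"{c!r} -> {title!r} ({score:.2f}): {verdict}")

Even a citation that checks out only tells us the source exists, not that it supports the chatbot’s claim; that last step is still lateral reading.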

Now more than ever, information literacy is a critical skill, and info pros are called to adapt to the new info landscape.

Mary Ellen Bates
(mbates@BatesInfo.com, Reluctant-Entrepreneur.com) fact-checks annoyingly at parties.

Comments? Email Marydee Ojala (marydee@xmission.com), editor, Online Searcher