Fake news and information credibility became hot topics in late 2016. However, even before the U.S. presidential election cycle, librarians at the Yale Center for Science and Social Science Information (CSSSI) started to adapt Google workshops to address the changing online credibility environment. We have taught workshops on effective Google searching since 2012, developing the series because we observed that many researchers in the sciences and social sciences “satisficed” their information and research inquiries using Google and Google Scholar.
The workshops meet students, faculty, and staff at their points of need and acknowledge that Google and Google Scholar are part of their research toolkits. In addition to 30-minute workshops targeted at undergraduates, we also teach 60- or 90-minute sessions on Google Scholar and My Citations, Google Images, Google News, and other Google products. Our intention is to give participants sophisticated strategies and models for searching that will enhance the efficiency, relevancy, and quality of their search engine-based research. We presented information about these sessions in a poster at SLA Annual 2016 and in an Information Outlook article [1] that described our rationale for creating them.
In this article, we describe a workshop for participants in the Young Global Scholars program, a Yale University summer program for outstanding high school students from around the world. The immersive, intellectually rigorous classes and events center on topics such as environmental or political science. Program instructors build library sessions into the schedule, and librarians and archivists develop workshops for these motivated high schoolers.
In June 2016, several CSSSI librarians created a 1-hour session that redefined our introductory Google workshop. Called Wayfinding the Web in the Age of Google, it introduced students to strategies for academic and personal research using Google and challenged their assumptions about the credibility and perceived neutrality of online information. The session’s active learning opportunities and sample searches would be relevant both for students’ research and their daily lives.
Why Teach Students Wayfinding?
The Wayfinding the Web workshop was inspired by a lecture and discussion with Safiya U. Noble, an assistant professor in the department of information studies at the UCLA Graduate School of Education and Information Studies. In April 2016, the Yale University Library’s Standing Committee on Professional Awareness (SCOPA) brought Noble to campus to discuss her research on information environments and the way they intersect with social justice and (in)equity.
Some participants in the SCOPA discussion were familiar with Noble’s work from a Bitch magazine article she wrote on the ways in which Google results reinforce gender, racial, and other stereotypes. The article shows how searches for “black girls” often return pornography, while searches for female athletes often return sexualized “listicles” [2, 3]. During the event at Yale, Noble discussed bias, credibility, and how algorithms reinforce social norms because they are written by humans who have biases of their own. She also stressed the need to train librarians in social justice [4].
Recent developments in the sciences also informed our process. In a PLOS ONE article on content volatility in scientific topic pages on Wikipedia, Adam M. Wilson and Gene E. Likens found that most of Wikipedia is stable, except for topics that are politically controversial. They compared pages on fundamental physics concepts to pages on contentious topics such as global warming and acid rain [5]. Essentially, if a topic is being actively debated by politicians and citizens, its Wikipedia page is less reliable for fact-based research.
CRITICAL PEDAGOGY
These real-world examples of how a search algorithm or a particular search result can reinforce biases struck us as powerful points of entry into a discussion with students about how they find and evaluate online information. In our previous Google workshops, we had remarked upon concepts of authority, bias, and relevancy, but we decided to take a new direction in Wayfinding the Web. Using critical pedagogy, we could encourage students to engage with search engines, algorithms, and results in a more substantive way.
As defined by Nicole Pagowsky and Kelly McElroy, critical pedagogy is “the theory and practice (or praxis) of inclusive and reflective teaching in order to broaden students’ understanding of power structures within the education system and in society” [6]. Critical pedagogy is an appropriate match for the critiques of information structures offered by Noble and others.
Thus, Wayfinding the Web merges critical pedagogy with our local expertise in teaching Google and Google Scholar in conjunction with library databases. By discussing how Google works and how our searches can be shaped, constrained, and even warped by its underlying, human-created search algorithm, we could offer students an opportunity to encounter a familiar search engine in a new light. Our hope was that students would better understand and evaluate how information is retrieved, presented, and incorporated into (or rejected from) academic research and daily life.
THE WAYFINDING THE WEB SESSION
Each Wayfinding the Web session lasted 1 hour, and we taught the class four times within a 2-day period. In the session, we compared and contrasted the library’s web presence and born-digital search engines. Although it does not align perfectly with the ACRL Framework for Information Literacy for Higher Education, we used a threshold concept of our own, developed in the course of our Google series:
Library information platforms and online search engines are fundamentally different: Comparing them is like comparing apples to pomegranates. We want students to recognize essential differences between these tools in order to understand when, why, and how they can most effectively use them. The varied economic, political, and design contexts of these information resources make it crucial that searchers wayfind actively as informed participants.
Examples of Algorithmic Bias
Each session began by showing students a June 10, 2016, article and video by Jessica Guynn at USA Today regarding Google search results for "three black teenagers" (usatoday.com/story/tech/news/2016/06/09/google-image-search-three-black-teenagers-three-white-teenagers/85648838). The news story described the different image search results displayed for three black teenagers versus three white teenagers. Searches for black teenagers retrieved police mugshots, and searches for white teenagers retrieved results for “smiling young people” [7]. We asked the students the following questions:
- Does anything about the article/video surprise you?
- Do you agree with the author’s conclusions?
- Is Google a neutral resource?
We also showed students a quotation from the USA Today article: “At Google, seven out of 10 employees are men. Most employees are white (60%) and Asian (31%). Latinos made up just 3% of the work force and African Americans just 2%—a far cry from reflecting the racial and ethnic diversity of its users in the U.S. and around the world” [7].
After 5 minutes of discussion, we shared a second example: a Guardian article by Leigh Alexander reporting that an image search for unprofessional hair “yielded image results mainly of black women with natural hair,” while searches for professional hair usually showed white women [8]. According to Alexander, “Often the hairstyles themselves were not vastly different—only the hair type and the wearer’s skin” [8].
We then showed students an Equal Justice Initiative map, which used data from a PLOS ONE article to visualize geographic patterns of racial bias in search behavior (eji.org/news/google-searches-reveal-racial-bias-in-united-states). Areas with high rates of racist Google searches were associated with an 8.2% increase in mortality among young African Americans [9, 10]. The article and map suggest that searches in many parts of the country reflect overt racism, whereas the problems described in the Guardian and USA Today articles are evidence of subconscious bias in algorithms.
With the discussion and activity, we wanted to surface and challenge students’ assumptions about searching for online information in the ubiquitous Google search engine.
Breaking the Ice with Google and Library Databases
The next step was to help students actively engage with the algorithm. Because the workshop involved both Google and library databases, we provided two examples of Boolean searching:
Google:
"climate change" OR "global warming" toads -frogs intitle:"extinction" site:gov
(Many) library databases:
("climate change" OR "global warming") AND toads NOT frogs AND ti=extinct*
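For readers who like to tinker, the sketch below (a Python illustration of our own, not something we showed in the workshop) holds both query strings and assembles a Google search URL from the first. The ti= field code is one representative convention; field codes and truncation symbols vary by database vendor.

from urllib.parse import urlencode

# Google-style syntax: implicit AND, "-" for exclusion, and the intitle:
# and site: operators. This is the workshop's Google example verbatim.
google_query = ('"climate change" OR "global warming" '
                'toads -frogs intitle:"extinction" site:gov')

# Library-database style: explicit AND/NOT, parentheses, a title field
# code (ti=), and "*" truncation. Treat ti= as a representative
# convention rather than universal syntax; check each vendor's help pages.
database_query = ('("climate change" OR "global warming") '
                  'AND toads NOT frogs AND ti=extinct*')

# The Google string can be dropped straight into a search URL:
print("https://www.google.com/search?" + urlencode({"q": google_query}))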
We displayed an iceberg image as a visual metaphor for how search engines operate: they can see only a fraction of the internet. It is also important to talk with students about problems that arise even when content is visible to crawlers. For example, Scientific American has two domain names. The Yale University Library subscribes to the institutional version, which uses nature.com; the individual-subscription version, scientificamerican.com, sits at the top of search results. Because Google does not prominently display the nature.com results, students at Yale frequently encounter paywalls when doing Google searches for Scientific American content.
In Google Scholar, searchers frequently click on a result and land on a publisher website; because aggregator subscriptions are often invisible to crawlers, students may not realize that the library has access. The iceberg image reinforces the concept that no resource searches everything. Our explanations revealed the messiness of information and explored issues related to information as a commodity. We also discussed problems in Google Scholar, specifically predatory or scam publishers that look like real information resources. This discussion centered on four questions:
- What is peer review?
- How do you know when a scientific article is published by a scientist and checked for academic rigor?
- How often do you fact-check using Google?
- What do you typically do when you are evaluating sources in Google?
We demonstrated differences between Google, Google Scholar, and library resources using the search phrase lower voting age, a topic that would interest teenagers. In Google, we showed students how to search U.S. government websites using lower voting age site:gov and then how to broaden results by combining site restrictions with the OR operator: lower voting age site:gov OR site:edu OR site:org.
Students also learned about search filters, especially the filter-by-date feature. Limiting searches to recently published pieces is effective for academic and personal use, as you can repeat searches every few weeks or months and see fresh results. In Google Scholar, we compared lower voting age with the phrase "voting age" to illustrate how to use quotation marks in searches.
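Because the value of date limits lies in rerunning the same search later for fresh results, a saved, repeatable URL can help. The following sketch is our own illustration, not part of the session; it relies on Google's informal tbs=qdr: URL parameter, which is undocumented and could change (qdr:m meant "past month" at the time of writing), and it targets plain Google rather than Google Scholar.

from urllib.parse import urlencode

def recent_google_search_url(query, recency="m"):
    # recency follows the informal tbs=qdr: convention: 'd' (day),
    # 'w' (week), 'm' (month), or 'y' (year). Undocumented; may change.
    return ("https://www.google.com/search?"
            + urlencode({"q": query, "tbs": "qdr:" + recency}))

# Variations on the workshop's example search, without and with a quoted phrase:
print(recent_google_search_url('lower voting age site:gov'))
print(recent_google_search_url('lower "voting age" site:gov OR site:edu OR site:org'))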
LIBRARY WEB PRESENCE
Following demonstrations in search engines, we transitioned to navigating the Yale University Library’s web presence. A key component was a slide that showed logos and images of the types of materials we have in our catalog. We hoped that students would understand the specific challenges involved in ensuring digital discoverability for a collection that includes online and analog resources—we have nearly 11 million records in our catalog and countless electronic resources.
In several of the Wayfinding the Web sessions, we swapped our lower voting age search for Brexit to show students how to use Google and library resources to locate information. Brexit was a hot news topic in June 2016, and it definitely captured the participants’ interest. We began with a Google search, using Brexit as a keyword, and then moved to Academic Search Premier and Worldwide Political Science Abstracts to compare results. We navigated through several iterations of Brexit searches so students would see wayfinding in action.
Unsurprisingly, the starkest differences between Google results and library database results were related to the keywords and concepts we used. While Brexit was appropriate for Google, terms such as (Britain OR United Kingdom) European Union critique were more fruitful in scholarly databases. We used these very divergent result sets to highlight the need to match search terms to the resource used, as well as strategies for honing a library database search with subject terms and other metadata.
We compared the facets and metadata available in both database interfaces with what Google offers, arguing that the value added by professional staff, who review records and ensure quality metadata, makes answering hard questions far easier despite the complexity of the user interfaces.
ELEMENTS OF TRUST
The Wayfinding the Web workshop was eye-opening for students, and for us as instructors. We teach Google and Google Scholar because we know that searchers often make them their initial points of inquiry, but this was the first time since we began teaching Google that we talked with attendees about common search assumptions. The questions we asked about their understanding of search algorithms gave us feedback that we can use to refine future instruction sessions for students.
We learned that these students, many of whom have grown up with Google, trust Google uncritically for their academic and personal search needs. When asked whether Google is a “neutral resource,” students replied that it was, or that it merely “reflected society.” When we heard this in the first Wayfinding session, we thought it was a fluke, but students echoed this view in each of the four sessions. This discovery was initially disheartening, but it gave us a clearer picture of students’ perspectives on search tools, search strategies, and information authority and quality. Neither the actions, biases, and agency of individual programmers nor Google’s status as a corporate entity is at the forefront of students’ minds when they search.
A key moment in the workshop came when we realized that we needed multiple examples of search engine bias to help everyone understand algorithmic issues. Although students were troubled by the examples of “three black teenagers” or “unprofessional hair” search results, many did not believe that these results had anything to do with decisions made by programmers.
ONLINE HEALTH INFORMATION
Some students were more engaged by an impromptu example about online health information, which often treats men’s health and safety concerns as the norm. We were aware that smartphone assistants had been criticized by health professionals for the inadequacy of Cortana, Siri, and Google Now at identifying people in crisis and properly referring them to the help they need. According to a JAMA Internal Medicine study, Siri understood suicidal ideation and physical injury/health crises, but not rape or domestic abuse; only Cortana referred users to a rape crisis center when prompted [11].
Drawing on this study, but couching our example in less potentially triggering terms, we explained that women and men have different heart attack symptoms. If only men create algorithms, the likelihood that they would know to implement a gender-aware algorithm for people searching for heart attack symptoms is quite low. This example, paired with the Guynn article’s profile of the gender, race, and ethnicity of Google’s workforce, seemed to resonate with students: it brought a degree of humanness to the algorithm and suggested that search algorithm bias can have mortality consequences.
WHERE TO SEARCH
In our comparison of Google/Google Scholar and library databases, particularly using the extremely current Brexit search, we demonstrated that choosing where to search means deciding whether you need a shallow view of an uncurated mélange or a deep dive into scholarly content. Our earlier discussions about the Google search algorithm set the stage for talking about differences in interface design priorities and underlying algorithm structure. Library databases are also made by people, and using these tools still requires vigilance.
We did not raise these issues to assert that Google is “evil” and library databases unblemished. Algorithms are neither evil nor neutral; they are an essential part of the information technology that underpins all our online experiences, from Twitter to Facebook to Google to library databases and discovery systems. Instead, we wanted to underscore the need for vigilant critical thinking when seeking information.
Through a critical pedagogy lens, we sought to give students more than a checklist for evaluating what they read online. The CRAP (currency, reliability, authority, purpose) model of website evaluation has its benefits, but it fails to address how and why we retrieve a particular set of results through a search engine or library database. While everyday searches such as “New Haven best pizza” are expected to contain multiple (and sometimes angry) perspectives, the majority of users turn to Google for evidence-based answers to their research questions, even when it doesn’t perform those searches well.
As information experts, we wanted to give students a conceptual decision tree that would help them find quality results in Google and library products. It is easier to search effectively and efficiently when you know how a search tool works and the nature of its limitations.
Find Your Own Way
In an ideal environment, all information seekers consider currency, volatility, virality, and the nature of their information need. They decide on the best information resource for the problem at hand. They recognize that information systems are flawed and should be met with a healthy skepticism. Ideal users are also empowered to use library websites and can make sense of the many complexities at play when it comes to the diversity of content represented in those digital-analog hybrid systems.
For those who teach Google Scholar and library resources as complements, we would like to offer the following takeaways:
- Using current events improves the relevancy of a session. Keep up with the news until the time of your workshop and be willing to adapt on the fly.
- Provide multiple examples. It does take time, but each student has a different experience of Google and the web.
- Iterate. Iterate. Iterate. Our workshops have built on one another and changed dynamically alongside Google. Offering these sessions multiple times over several years has prepared us for Google’s last-minute interface changes.
- Don’t be afraid to shake up students’ assumptions about search. We found that critical pedagogy techniques were more effective and immediate in holding students’ attention and conveying the concepts we’ve tried to teach all along.
Kayleigh Bohémier (kayleigh.bohemier@yale.edu) is science research support librarian, Melanie Maksin (melanie.maksin@yale.edu) is director of research support and outreach programs, and Gwyneth Crowley (gwyneth.crowley@yale.edu) is social science research support librarian at Yale University.
Bibliography
1. Bohémier, Kayleigh Ayn, and Melanie Maksin. “Providing Google Workshops to Science and Social Sciences Students,” Information Outlook, Sept/Oct 2016, pp. 12–14 (sla.org).
2. Noble, Safiya Umoja. “Missed Connections: What Search Engines Say About Women,” Bitch Magazine, Issue 54, Spring 2012 (bitchmedia.org).
3. Noble, Safiya Umoja. “Google Search: Hyper-Visibility as a Means of Rendering Black Women and Girls Invisible,” InVisible Culture: An Electronic Journal for Visual Culture, Issue 19, October 2013 (ivc.lib.rochester.edu/google-search-hyper-visibility-as-a-means-of-rendering-black-women-and-girls-invisible).
4. Cooke, Nicole A., Miriam E. Sweeney, and Safiya Umoja Noble. “Social Justice as Topic and Tool: An Attempt to Transform an LIS Curriculum and Culture,” Library Quarterly, Vol. 86, No. 1, January 2016, pp. 107–124; doi:10.1086/684147.
5. Wilson, Adam M. and Gene E. Likens. “Content Volatility of Scientific Topics in Wikipedia: A Cautionary Tale.” PLOS ONE, Aug. 14, 2015 (journals.plos.org/plosone/article?id=10.1371/journal.pone.0134454).
6. Pagowsky, Nicole, and Kelly McElroy. “Introduction,” in Critical Library Pedagogy Handbook, ed. Nicole Pagowsky and Kelly McElroy (Chicago: Association of College and Research Libraries, 2016) pp. xvii–xxi.
7. Guynn, Jessica. “‘Three Black Teenagers’ Google Search Sparks Outrage,” USA Today, June 10, 2016 (usatoday.com/story/tech/news/2016/06/09/google-image-search-three-black-teenagers-three-white-teenagers/85648838).
8. Alexander, Leigh. “Do Google’s ‘Unprofessional Hair’ Results Show It Is Racist?” The Guardian, April 8, 2016 (theguardian.com/technology/2016/apr/08/does-google-unprofessional-hair-results-prove-algorithms-racist-).
9. Chae, David H., Sean Clouston, Mark L. Hatzenbuehler, Michael R. Kramer, et al. “Association Between an Internet-Based Measure of Area Racism and Black Mortality,” PLOS ONE, April 24, 2015 (journals.plos.org/plosone/article?id=10.1371/journal.pone.0122963).
10. Equal Justice Initiative. “Google Searches Reveal Racial Bias in the United States,” March 14, 2016 (eji.org/news/google-searches-reveal-racial-bias-in-united-states).
11. Miner, Adam S., Arnold Milstein, Stephen Schueller, Roshini Hegde, et al. “Smartphone-Based Conversational Agents and Responses to Questions About Mental Health, Interpersonal Violence, and Physical Health,” JAMA Internal Medicine, Vol. 176, No. 5, May 2016, pp. 619–625 (jamanetwork.com/journals/jamainternalmedicine/fullarticle/2500043).