VOICES OF THE SEARCHERS
Information Search and Rescue
by Mary Ellen Bates
On a rideshare trip recently, I noticed that the 20-something driver had a standard-transmission car. I complimented him on his ability to use a stick shift; it is a point of pride that I can still drive one too. In an era of electric vehicles with single-speed transmissions, I fear that this will soon become an arcane skill passed down only from the elders to a select few.
I am seeing a similar phenomenon taking place in the world of search. An article from The Washington Post back in February (washingtonpost.com/technology/2023/02/15/search-tips) breathlessly tells readers, “Bing and Google are bringing chatty AI-powered search results to the masses soon, but you can get better results today with these tricks.” The tricks suggested include enclosing a phrase in quotes, going past the first page of overly SEO’d search results, and—yikes—including the word reddit in your search to get “recommendations from real people” (insert exploding-head emoji).
Reading this reminded me that we search professionals sometimes forget how much more proficient we are at search than many of our patrons or clients. Of course, comparing our search skills to those of non-info pros is like comparing my goofy puppy, who trips over his own feet, with a highly trained search and rescue dog. My training goals for Mr. Pickles will be for him to sit on command and stay off the darn couch. A search and rescue dog has spent 500-plus hours in training and can handle highly complex situations with confidence.
Likewise, most people are accustomed to having a minimally challenging experience when they are searching. They ask Siri for comparisons of AirPods and Pixel Buds, or whether Tyrannosaurus rex had lips (answer: probably). They ask Google Bard or ChatGPT to explain dark matter to them. These kinds of queries don’t require any planning or preparation; these searchers don’t need to consider what the best search terms would be or whether to limit by date or geographic region. They don’t expect to scroll through 30 or 40 search results, scanning the URLs for ideas on what websites to explore in more detail. Rather, they expect to get the answer they need as a zero-click result, without any additional effort on their part.
Info pros, however, serve as informational search and rescue dogs. We have years of training and practice to understand how information is structured, how search engines work, and how to find the most authoritative information to answer a question. We are thinking about the perils and promise of searchbots and other generative AI tools, not just laughing at ChatGPT’s ability to write sonnets about psychedelics. We know how to hack search engines—focusing our search with advanced search filters, using site: to drill deeper into a web resource, and using non-tracking search engines when we need less-filtered results. We aren’t satisfied with the first answer we find; we watch for important outliers; and we help our clients understand and organize the information we find.
I was talking recently with a STEM librarian who has developed an innovative semantic-search tool that enables her users to search the full text of peer-reviewed articles from PubMed. She acknowledged that she will never convince her users to go beyond the familiar search tools they first encountered at college when these tools often get them “good enough” search results. Instead, her approach is to enhance their familiar tools so that her users benefit from her deeper understanding of the information landscape without needing to modify their behavior.
She acknowledged that eventually she will need to surface her search enhancement projects, particularly once she expands into other text and data mining initiatives and needs financial support from her user communities. But she also knows that trying to build user dissatisfaction with the results of their (relatively) simple searches is pointless. As she develops more enhancements and tools, she will be able to demonstrate the benefits by a direct comparison to a plain-vanilla PubMed search. As she noted, “Once I start showing them knowledge graphs that allow them to see relationships between different diseases that they would not have recognized on their own, I know I’ll convert them.”
Just as I hope that my friend who trains service dogs isn’t going to judge Mr. Pickles by his rudimentary obedience skills, we search pros need to keep in mind that our users have sufficient expertise for good-enough queries. Rather than expect them to see and appreciate the advanced search techniques, value-added information sources, and information management tools we bring, we should focus our efforts on enticing our users to move outside their comfort zone by offering them results they can’t get from their usual sources.