Methodology
My research assistant (Aris Bailey) and I tested 19 subjects in four user groups: public library patrons, public library staffers, academic librarians, and middle schoolers. Other than the students, all subjects reported previous experience with MeL.
The first thing we asked the users was what they thought the search box would deliver. Specifically, what did they think the search box was searching and what kinds of results did they think they would see? The top responses were books, articles, databases, magazines, and the difficult-to-satisfy expectation of “everything.”
To assess information-seeking skills, we developed and pretested 10 tasks for adults and 10 for students, followed by a post-task interview with each subject. We constructed the tasks to appeal to a wide audience, including two tasks based on our top search terms in Google Analytics (“Chilton’s” and “bullying”). Tasks for the middle schoolers were adjusted to use culturally relevant examples.
User Tasks
The tasks explored a variety of ways that users could seek information on MeL:
- Find a full-text article on a variety of subjects; examples included a consumer health query with a given search term (“glaucoma”) built into the question and more complex research that required users to develop their own search string
- Use a limiter in Advanced Search
- Use facets, or limiters, on the search results screen to narrow results
- Request materials in MeLCat
The interviews covered topics such as the following:
- Whether users would have persevered with MeL had they been working outside of the testing environment
- Their understanding of the Advanced Search functionality, including specific terms on that page
- Their opinions about the look and feel of the results page
- What would make the search results set more useful for them
- Whether the icons assisted users in determining format
- Their opinions about whether interfiling database and catalog results helped them locate resources and tell where each result came from
Results
At the end of the 40- to 50-minute test period, we asked each subject whether the search experience had met their expectations. Only six out of 19 said yes. Respondents reported that the results page was confusing and that it was difficult to find what they wanted. With MeLCat, they reported that the many individual entries for different formats of a book (electronic, print, audio, etc.) made it confusing to locate and request a particular item.
Overall, the response was mixed. More than 60% of the subjects said that their faith in the MeL search box was “Good—The MeL search box usually gives me a result I’m satisfied with.” At the same time, only 25% of the respondents said they would have persisted with the MeL discovery service if working on the tasks independently; the rest said they would have gone to their favorite search engine or to a particular database interface.
Advanced Search
Advanced Search presented a range of difficulties. Beyond a common, well-articulated belief that it would narrow a search, subjects often misunderstood how it operated. Subjects were asked to limit their search in a particular way: to a specific journal, within a named database, or to a format such as “a book on CD.” For these tasks, they were more likely to use Advanced Search than to refine by facet, but because of the design of the Advanced Search page, those attempts were often unsuccessful. The page includes a Keyword in Title field, which subjects often used to search for, or within, a journal; however, that field applies only to the catalog, not to the databases, so searches using this strategy failed. When asked to find content in a particular database, subjects frequently searched for the name of that database, such as General OneFile, along with their search term. Unfortunately, that approach did not search within the named database, as users expected; no field on the Advanced Search page allowed users to limit a search to a particular database.
Using Facets
Tasks designed to test the usefulness of facets produced inconclusive results. During tasks that specifically asked users to search and then limit their results, participants went to the Advanced Search screen or reformulated their search terms more often than they used the facets. When we asked at the end of testing why they hadn’t used the facets, subjects often said they hadn’t noticed them.
Patron-Initiated Requesting Through MeLCat
Locating and requesting MeLCat titles through Encore Duet proved challenging for every user group, including subjects who reported that they regularly borrowed materials with it. Stumbling blocks found during testing included differentiating among multiple bibliographic records for the same title, distinguishing among formats, and locating and requesting an available copy.
What Stood Out About the Public Library Patrons?
The testing experience with public library patrons yielded disparate results.
One self-described amateur genealogist (age 60-plus), who reported she’d used the MeL discovery service once or twice, constructed nearly perfect search strings for all 10 tasks. A man in his 30s, who reported that he worked in computer science and used the MeL discovery search more than once a month, relied almost exclusively on Advanced Search, although he completed only one task with it successfully.
All of the public library patrons had used MeLCat prior to testing, but only one was able to request MeLCat materials successfully during testing. That was disappointing; MeLCat is intended to be a patron-initiated service. Staff members at the library where these patrons were recruited reported that they are often called upon to assist patrons with MeLCat.
What Stood Out About the Public Library Staffers?
Three library staff members and one degreed librarian from a rural library system, Lapeer District Library, were tested. All had at least some college experience, and one of the non-librarians held an M.A. in another subject. Despite their familiarity with MeLCat and their generally favorable impressions of it, they encountered challenges that sometimes prevented a successful request or made the information-seeking tasks difficult to complete.
What Stood Out About the Academic Librarians?
Academic librarians from the Lansing Community College (LCC) Library were recruited as testers. All were self-described expert internet users, and each stated that he or she used both the discovery service and MeLCat more than once a month. However, they were less successful with the MeLCat tasks (finding and requesting specific titles) than either the public library staffers or the middle schoolers, who had never used MeL.
As a group, they had the most effective search strategies. Because of their familiarity with LCC’s own discovery service (ProQuest’s Summon), they were the most adept at using the discovery interface’s various features. Despite that, they often reported that they would prefer to go directly to a database’s native interface rather than use the Encore Duet discovery service.
What Stood Out About Middle Schoolers?
Middle schoolers from Immaculate Heart of Mary-St. Casimir in Lansing, Mich., made up our student population. Of all the subjects, they displayed the highest degree of respect for MeL as a library. They reported that their assignments required them to provide a bibliography to their teachers and that a library website was “dependable.” When asked whether they would have left MeL for their favorite search engine if searching on their own, several said they would have used Google in addition to, but not instead of, MeL. Overall, their post-task impressions were more favorable than those of the adults.
Recommendations and Subsequent Actions for Encore Duet
We evaluated the study results to see how we could improve the MeL search experience for users. It’s important to note that this study was not intended to be an evaluation of Encore Duet as a product; much of what we tested involved configuration options under our control and how their implementation influenced the user experience. In the chart, the major findings are separated into options we had control over and options we didn’t. Most important, the chart shows how we are addressing each major area of concern.
The Circle of User Experience
Usability work is ongoing. Once all of these changes are implemented, we will retest to see whether we have met our goal of hosting a discovery service that better meets users’ expectations. And we will never stop evolving.