Vol. 33 No. 5 — June 2013
THE OPINION PIECE
Upping Our Game: Unlocking Real Value in Libraries
by Stephen Abram

[This is a guest column by an industry leader about tech topics of importance to libraries. Authors write here only at the invitation of the editors. In this edition, we hear from speaker, blogger, and industry consultant Stephen Abram. —Ed.]

"Houston, we have a problem."

They say that the first step in any 12-step program is admitting that there is a problem in the first place. So, here goes: Our sector—comprising libraries, education, vendors, and publishers—has a problem. This problem isn’t a surprise. It’s sort of a dirty secret.

Stated simply, the statistics we use to track usage in our databases and digital products don’t tell us what we need to know, which is damaging the progress of quality digital content and libraries.

At a session during the NFAIS Annual Conference (nfais.org) in Philadelphia earlier this year, I did a presentation on this issue and called for sector-wide cooperation to address it. Let’s all get into a therapy group and look at what we count.

What Are We Counting and Sharing?

How do we evaluate digital usage? How do we make comparisons between databases and digital portals? Here, sadly, are the major things we invest time in when evaluating our efforts to build a digital experience:

  • Titles
  • Clicks
  • Downloads
  • Sessions
  • Session length

We even have many detailed and excruciating standards for trying to make these statistics comparable, such as COUNTER (Counting Online Usage of Networked Electronic Resources) and SUSHI (Standardized Usage Statistics Harvesting Initiative).
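
To make the problem concrete, here is a minimal sketch, in Python, of the kind of raw counting these standards formalize. It assumes a hypothetical COUNTER-style CSV export (the file name and the column headings Title and Full_Text_Requests are invented for illustration, not any vendor's actual report layout) and simply totals requests per title; notice how little such totals say on their own.

    import csv
    from collections import defaultdict

    def total_requests(report_path):
        """Sum full-text requests per title from a COUNTER-style CSV.

        Assumes hypothetical columns 'Title' and 'Full_Text_Requests';
        real COUNTER reports vary by release and by vendor.
        """
        totals = defaultdict(int)
        with open(report_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                totals[row["Title"]] += int(row["Full_Text_Requests"])
        return dict(totals)

    if __name__ == "__main__":
        # Rank titles by raw usage: volume, not impact.
        for title, count in sorted(total_requests("usage_report.csv").items(),
                                   key=lambda item: item[1], reverse=True):
            print(f"{count:8d}  {title}")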

We generally invest a lot of time in the collection and management of raw statistics. What does this tell us? Not much. It might lead us to make decisions about what to cut or what to keep, but since it tells us little about whether our users are happy or whether we or they are accomplishing their goals, we’re none the wiser. Our focus on these numbers can also drive the providers that serve libraries to push for volume instead of impact. That’s likely not good.

What Should We Measure?

Are there other measurements that are different from (and more powerful than) traditional statistics? If we can clearly answer the real questions of impact on our institutional mission and goals—or on the value of the end-user experience in learning, creativity, invention, decision-making quality, or research—I believe that we will be on a better path.

These are the questions we should be asking and answering:

  • Was there improved satisfaction from the perspective of users and influencers such as scientists, professors, lecturers, and teachers?
  • Did learning happen? Did the performance on tests or projects improve?
  • Was there an impact on research or strategic outcomes for the community or institution?
  • Did the patient live, improve, survive, thrive?
  • Did the information improve the quality of decisions?

It’s time to think again about a sector-wide initiative that is collaborative and invests in studies of impact: what we’re doing, whether we’re making a difference, and how much of a role our digital initiatives play.

Dynamic Tension

I am not advocating that we abandon the collection of usage statistics. I am advocating for a sector-wide approach and sharing of impact and user experience studies that increase our understanding of the impact of our collections on the mandates of our institutions (such as R&D labs, universities and colleges, K–12 schools, communities, and targeted end users in general). We clearly need to rebalance our collection of quantitative usage information with qualitative information that talks to, and proves impact in, a digital world. We need to get off the hits train and start to understand that volume is not the prime directive. Activity doesn’t tell us what we need to know about impact, and statistics are not measurements. Transactions are not enough for us to understand transformations.

Additionally, in this world of Big Data, we need to start mining our data for knowledge—and not just information. That knowledge can allow us to make decisions on organization, presentation, alignment, and more. This can be done without compromising confidentiality and privacy (unlike in the commercially oriented consumer social site and search engine space).
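
As one illustration (my own sketch, not a method proposed here), usage logs can be aggregated for decision making while protecting individuals: user identifiers are replaced with salted one-way hashes before analysis, and any group smaller than a minimum size is suppressed from reports. The field names, salt, and threshold below are all assumptions for the example.

    import hashlib
    from collections import Counter

    MIN_GROUP_SIZE = 5  # suppress groups smaller than this before reporting

    def anonymize(user_id, salt="replace-with-a-local-secret"):
        """Replace a raw user ID with a salted one-way hash before analysis."""
        return hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()[:12]

    def distinct_users_by_department(session_log):
        """Count distinct (anonymized) users per department.

        `session_log` is a list of dicts with hypothetical keys
        'user_id' and 'department'; departments with fewer than
        MIN_GROUP_SIZE users are dropped so no one can be singled out.
        """
        seen = {(s["department"], anonymize(s["user_id"])) for s in session_log}
        counts = Counter(dept for dept, _ in seen)
        return {dept: n for dept, n in counts.items() if n >= MIN_GROUP_SIZE}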

We must start understanding usage better longitudinally. We must learn how usage in a digital context gives us the opportunity to understand the transformational nature of change in our key environments—especially those that are undergoing rapid, disruptive, and transformational change (such as is happening apace in education and research). While there are good academic studies of impact, they rarely seem to live past the bubble and change the digital experience enough. Many vendors do research the user experience, but they keep the results internal to preserve a competitive advantage. They don’t inform the community of their findings, which restricts learning, comment, and debate.

The Downside of Statistical Models

When a market, such as library e-resource purchasing, relies too heavily on raw statistics to judge the worth of a database subscription, it can drive vendor and publisher behaviors that do not help the ultimate end-user experience. For example:

  • If the renewal depends on annual increases in raw statistics, it’s a simple matter to adjust the end-user interface in little ways to increase the clicks, hits, or downloads. This, of course, makes the end-user experience longer and less efficient, but it meets the renewal goals of the vendor. I know of one vendor who was able to reduce, by half, the time and clicks leading to the right information; that vendor was roundly pilloried by subscribers for messing up their management statistics regardless of the user-experience and satisfaction improvements.
  • If the renewal is going to be based on, or aided by, macro-measures such as the number of journal titles, it’s a simple matter to get on the size train, increasing title counts at any cost—regardless of quality or appropriateness for the database. Pulling in selected content from tangential titles, while counting each title as a contributor, potentially dilutes the database and pollutes the end-user experience. It worsens the problem of swimming in an ocean of information, false drops, and undifferentiated content that may not align with need, learning style, readability, etc.
  • Lastly, there can be an emphasis on overall subscription cost changes rather than return on investment. One recent research article points out that cost per article used has declined greatly while library collections have grown exponentially and the scholarly research output has tripled. This can focus the conversation on costs to the detriment of alignment with institutional or community strategies. (See The Scholarly Kitchen’s post, "Have Journal Prices Really Increased Much in the Digital Age?" at bit.ly/11b3hP2.) According to the author, Kent Anderson, "Price increases have been caused by more science, more papers, and more journals, not by price increases in licenses. In fact, per-journal prices seem to have peaked around 2000, and steadily declined from there. …"

What do we do when our buyers are asking for usage data that does not properly or fully align with their long-term, strategic goals? What if prices of the predominant journal form have actually been falling? What if we’ve been measuring the wrong things, or we’re measuring insufficiently? And what if the growth in expenses is not the result of price increases but a result of the growth in science?

Another critical issue is the difference between a librarian’s criteria for evaluating electronic information services and end-user preferences and satisfaction factors. Librarians don’t necessarily test for things that align with end-user behaviors. Librarians fail to test with end users, and their evaluations are, basically, driven by a different mindset about how search should work. Combine this with the problems associated with lightly monitored or unmonitored trials—which often use inappropriate search test samples or poor sampling techniques—and the net result is that we wind up with little of true value for informed decision making. It also doesn’t help that vendors too rarely share their internal, academic, and independently contracted research about end-user satisfaction with their library contacts.

Therefore, I am calling for effort to be put into adding much greater dimension and color to our evaluation of the digital user experience and the impact of these important strategies.

Numbers Versus Measures

Libraries, publishers, and vendors must partner to develop and pursue research studies on the role of digital content on a much greater scale. Everyone must aggregate and share the studies they’ve conducted around learning and research. From the provider point of view, this should not be an issue of competition and competitive advantage. It’s too vital for everyone’s, indeed society’s, success. We need all boats to float higher.

Some areas that are worthy of further study include the following:

  • Usability versus user experience
  • End-user search behavior versus librarians’ habits
  • Known item retrieval (favorite test) versus immersion research
  • Results lists and display options, tiles
  • The role of native search, linked data, and discovery interfaces
  • Visual versus text-based display
  • The role of nontextual information in search experiences (video, audio, graphs, charts)
  • Scrolling versus pagination
  • Workflow alignment, citation management, tracking, etc.
  • Devices, browsers, and agnosticism
  • Search and experience satisfaction
  • Did the user use the information or experience to change his knowledge, decision, etc.?
  • The individual research experience versus the impact on groups in classrooms, labs, e-courses, LibGuides, training materials, etc.

Digital Analytics: What Do We Need to Know?

  • How do library databases compare with other web experiences and expectations?
  • Who are our core virtual users? Who matters most?
  • Is learning, discovery, or decision making being improved?
  • What are user expectations for satisfaction? What are influencers’ expectations (e.g., professors, lecturers, and management)?
  • How does a library search compare to a consumer search such as Google? How is it better?
  • How do people find and connect with library virtual services?
  • What should we fix and in what priority order?
  • Are end users being successful from their point of view? Are they happy? Will they come back? Will they tell a friend?

Who Needs Digital Analytics?

One challenge is that libraries have a very complex decision-making matrix. A large number of people can influence the decision to use digital information at a fairly granular level. While collection migration decisions are being made rapidly (along with decisions to align with learning management systems, to build targeted websites, and to create community experience portals), the measurement and evaluation systems to inform these decisions are lagging behind.

Who are the audiences for digital analytics? All of the following are decision makers:

  • Librarians (in several roles: management, reference, acquisitions, systems, learning management systems, etc.)
  • Institutional information technology and systems professionals
  • Elearning professionals and developers
  • Web design professionals
  • Library management team and chief librarians
  • City or university administration, provosts, and funders

In some respects, the knowledge and insights necessary to make informed decisions are buried in data and tables. While the satisfaction and impact measures tend to be focused on end users, it might be wise to develop a few visual representations of the data that allow decision makers to better understand, or take away, insights from the data.
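
As a sketch of what such a visual might look like, the snippet below pairs raw usage with a satisfaction score for a few invented databases; the figures and labels are placeholders, and the point is only that one picture combining volume with a quality signal is easier for a decision maker to absorb than two tables.

    import matplotlib.pyplot as plt

    # Invented example data: raw sessions versus a survey satisfaction score.
    databases = ["Database A", "Database B", "Database C", "Database D"]
    sessions = [42000, 18500, 9800, 27300]   # raw annual volume
    satisfaction = [3.1, 4.4, 4.0, 2.7]      # mean survey score out of 5

    fig, ax_volume = plt.subplots(figsize=(7, 4))
    ax_volume.bar(databases, sessions, color="steelblue")
    ax_volume.set_ylabel("Sessions per year")

    ax_score = ax_volume.twinx()             # second axis for the score
    ax_score.plot(databases, satisfaction, "o-", color="darkorange")
    ax_score.set_ylim(1, 5)
    ax_score.set_ylabel("Satisfaction (1-5)")

    ax_volume.set_title("Usage volume vs. user satisfaction (illustrative data)")
    fig.tight_layout()
    plt.show()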

Key Questions

That leaves me to conclude with these key questions:

Should our sector collaborate across vendors, libraries, and publishers and invest in the development and promotion of a suite of end-user impact and value-measurement tools and studies that actually inform and communicate the value in our initiatives, experiences, and products? Should vendors share their studies more openly?

Are we satisfied with the current situation, and, if so, should the vendors deliver the raw statistics that customers are currently asking for, letting the customer perform the analyses independently?

When I asked these questions at the NFAIS conference, a clear majority of the audience voted to invest in collaboration and sharing. A small, but statistically significant, group suggested that it was better to continue on the path of providing only what the subscribers were demanding.

I think that libraries in particular are challenged with communicating our value and impact. I believe we are at a tipping point in ensuring that we thrive and adapt to the changes in our digitally enhanced, knowledge-based economy. I fall on the side of better measurements and will continue to point out on my blog, Stephen’s Lighthouse, any value and impact studies that I discover. I hope to see more! Until then, I am reminded of this old African proverb:

Until lions learn to write their own story, the story will always be from the perspective of the hunter, not the hunted.


Stephen Abram, M.L.S., is a strategy, marketing, and direction planning consultant with Dysart & Jones Associates. He is a past president of the Special Libraries Association, the Ontario Library Association, and the Canadian Library Association. He has worked in leadership roles in libraries and some of the major library vendors. He is the author of ALA’s Out Front With Stephen Abram and the Stephen’s Lighthouse blog. Stephen would love to hear from you at stephen.abram@gmail.com.