SEARCHER'S VOICE
To the Ozone and Beyond
by Barbara Quint
Editor, Searcher Magazine
In the course of Jill Grogg and Beth Ashmore’s research for next month’s article on the experiences of the libraries involved in Google Book Search, a story emerged. Of course, it’s apocryphal. All the best stories are. If you don’t believe me, buy (or borrow) a copy of the Oxford Book of Anecdotes. (Actually, there is a whole series of anecdote collections in the Oxford reference line — literary, political, legal, etc.) Even when anecdotes carry clear credits, the doubt always lingers as to whether anyone (even literary lions or political pundits) could come up with such perfectly timed, perfectly worded responses to unanticipated opportunities for wit and wisdom.
The apocryphal tale involving Google Book Search came from one of the most recent libraries to join the program. Apparently the librarians at this research library — all excited about the imminent arrival of Google’s digitizing forces — prepared for the occasion by analyzing their collection for the most critical content. When it came to digitization, staff members were not amateurs. (As the article shows, almost all of Google’s library partners have a history of digitizing portions of their collections.) Step one in every previous digitization project had been to identify and rank key material.
So when the Google team arrived, the librarians were ready with their lists. The story goes that the Googlers listened very politely, glanced at the lists handed to them, and then asked where the 8 million books were. They wanted to get started right away. As a veteran executive of a library digitization vendor told me back when Google announced the launch of the Google Print service, “Google has just made every library digitization project in the world chump change.”
Oddly enough, however, the librarians were absolutely right to do what they did. In fact — and not oddly — Google Book Search has begun to lean toward their approach. As it speeds up its formation of library partnerships (adding a network of Catalonian libraries in Spain and the University of Texas at Austin just in the month since I received the final draft of the Grogg/Ashmore article), Google has begun targeting portions of collections, aiming at areas that will not overlap with what it has already gathered or at partners whose collections complement those already in the pipeline.
It’s not just Google out there. Digitization seems to have gotten a boost in general. The National Archive Publishing Company [http://www.napubco.com], which spun off from ProQuest Information and Learning before ProQuest I&L was sold to Cambridge Information Group, has announced a program to digitize hundreds of thousands of reports (“40 million pages”) entered into ERIC, the federal education information service. That program will even take on the burden of chasing down copyright holders to solicit permission to open the documents to electronic delivery.
Of course, digitization of entire documents is only one way to complete the transfer of existing knowledge to the World Wide Web. User-generated content, the so-called “wisdom of the crowd” phenomenon, continues to work its wiki magic and continues to raise wikied controversy. As we went to press, Microsoft was in trouble with Wikipedians because it had offered to pay a well-known blogger to “correct” technical errors in Wikipedia entries. In defense of its action, Microsoft claimed that it had tried to get the corrections entered in a more forthright manner, but no one would listen. So Microsoft thought that a third party with a good reputation might supply a more acceptable, objective eye. The company also asserted that it would not have seen the submissions before they were sent. But apparently Microsoft is just too well-bred to expect someone to spend a lot of valuable time helping the company out without any remuneration. Clearly, with such a pale perception of amateur sports, Microsoft should avoid trying to get any company basketball teams into the NCAA, much less the Olympics. And, it appears, Wikipedia has a procedure in place that would have handled the problem, namely a “white paper” option that could have had Microsoft’s revisions linked to the core Wikipedia article.
The point remains. The hole in the ozone layer is not the only hole on Planet Earth we should be watching. Holes in the layer of information building on the World Wide Web also need plugging, but before you can plug them, you have to find them. So when is OCLC going to supply a master catalog of all Google Book Search holdings as part of Open WorldCat? When will library associations launch a coordinated program to urge and assist their members to identify unique content in their collections? When are the commercial aggregators/archivers of publisher content going to recognize and commit to their responsibility to include the digital-only content from those publishers and their publications?
When will information professionals unite to ensure that the Web contains the best and most complete information and that users everywhere can at least identify the right source and find some way to access it? That way may even involve money or, again, it may not. Clearly, the open Web and open access and free anything will always attract users. But, again, before we can plug the gaps in the willingness of users to exert themselves by reaching for a credit card or driving to a nearby library or even twiddling their fingers to switch to a licensed database, we must know where the holes are. And users must know that their information professionals are on the job plugging those holes.
bq
Barbara Quint’s e-mail address is bquint@mindspring.com.