TUESDAY,
MAY 16
9:00 a.m. - 10:00 a.m.
OPENING
PLENARY SESSIONS
9:00 a.m. - 9:30 a.m.
WELCOME
Tom Hogan, Information
Today, Inc.
9:30 a.m. - 10:00 a.m.
Coping with New Technology
Chair: Martha E.
Williams, University of Illinois, Urbana-Champaign
This is the 21st National Online Meeting (NOM), coinciding with the beginning of the 21st century. As we start the new millennium, we face ever-increasing amounts of new information technology (IT) for electronic publishing, database publishing, and search and retrieval of resources available on the Internet as well as through traditional online services. How are publishers, enterprises, and individual users coping? What can they do to simplify their selection of hardware, software, and information resources? Browsers, filters, indexes, and automated functions of a wide variety of types are now available, and far more will be developed in this century. Keeping up has long been a problem, but the pace of change may become exponential.
Speakers at the 21st NOM
will address the problems and solutions from the perspectives of information
professional users, end users, researchers, and product developers.
Highlights of the Online Database Industry
and the Internet
Martha E. Williams,
University of Illinois, Urbana-Champaign
10:00 a.m. - 10:45 a.m.
KEYNOTE
SPEECH
Hot Topics in Internet Law: How Do
They Impact Publishers and Users of Information?
David Mirchin, SilverPlatter
This talk will review the
most current developments in Internet law and how they affect the businesses
that produce information and users who access the information. David Mirchin
will cover the following topics:
1. Is spamming legal? Recent
cases & laws
2. Employee gripe sites:
Free speech or invasion of a company’s property?
3. Clickwrap licenses:
update on cases and the Uniform Computer Information Transactions Act
4. Protection of factual
databases: European and U.S. update
5. Deep-linking and framing
Depending on news breaks between now and May 16, David's list will be expanded to cover what's hot at the time of the conference. This will be a very up-to-date overview
of Internet issues, cases, and decisions.
10:45 a.m. - 11:15 a.m.
Coffee Break - A Chance to Visit the
Exhibits
Track
A • SEARCHING AND SEARCH ENGINES
11:15 a.m. - 12:15 p.m.
A1 • Search
Engines: Reliability, Quality, Comparative Characteristics
Chair: Ev Brenner,
Brenner & Associates
52 Pickup: Characteristics for Search
Engine Selection for Health Information Questions
Patricia F. Anderson
and Nancy Allee, University of Michigan
Health information needs tend to be critical: the patient or caregiver often needs information urgently, and the quality and appropriateness of the information provided can dramatically affect the health care consumer's quality of life.
Is there a way to consistently find quality answers for these types of
questions? A qualitative comparison of standardized search results for
26 general search engines and 26 health information search engines yielded
the evidence base for developing a decision tree to assist with the selection
of an appropriate search engine for specific health information questions.
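To make the idea concrete, here is a minimal sketch of how such a decision tree might be encoded; the criteria, branches, and engine categories below are hypothetical stand-ins, not the tree actually derived in the paper.

```python
# A toy decision tree for choosing a search engine category for a health
# question. The criteria and categories are illustrative assumptions only.

def pick_engine(question_type: str, audience: str) -> str:
    """Suggest a search engine category for a health question (illustrative)."""
    if question_type == "clinical":
        # Clinical questions benefit from curated, health-specific indexes.
        return "health-specific engine (e.g., a medical index)"
    if question_type == "consumer":
        if audience == "caregiver":
            return "consumer-health engine with plain-language content"
        return "general engine restricted to reputable health sites"
    return "general search engine"

print(pick_engine("clinical", "professional"))
```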
The Reliability of Internet Search
Engines: Fluctuations in Document Accessibility
Wouter Mettrop and Paul
Nieuwenhuysen, CWI and Vrije Universiteit Brussel
The result set of documents that an Internet search engine returns in response to a query changes constantly over time. Broadly speaking, alterations in this set are correct if they reflect alterations in the (Web) reality, as documents are added or removed; if not, they are incorrect. Incorrect changes include not only incorrect removals of documents from the set of indexed documents, or incorrect (late) additions to this set; they can also arise when an engine that has indexed a document does not, from then on, always succeed in retrieving it. Our investigations show that most engines suffer from this "incorrect variable behavior," in the sense that unexpected and annoying fluctuations exist in the result sets of documents, which means that documents cannot be retrieved reliably. The results of our investigations will be presented in this paper.
The authors investigated 13 Internet search engines: AltaVista, EuroFerret, Excite, HotBot, InfoSeek, Lycos, MSN, NorthernLight, Snap, and WebCrawler, plus three Dutch engines: Ilse, Search.nl, and Vindex.
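As an illustration of the underlying methodology, the sketch below re-runs one query at intervals and records how often a known indexed document actually comes back; the search function is a stub standing in for a live engine interface, and the URLs are invented.

```python
# Measuring "incorrect variable behavior": repeated identical queries should
# always retrieve a document the engine is known to have indexed.

import random

def search(engine: str, query: str) -> set[str]:
    # Stub standing in for a live engine query; returns a set of result URLs.
    results = {"http://example.org/doc1", "http://example.org/doc2"}
    return results if random.random() > 0.2 else results - {"http://example.org/doc1"}

def presence_rate(engine: str, query: str, url: str, samples: int = 50) -> float:
    """Fraction of repeated identical queries in which `url` is retrieved."""
    hits = sum(url in search(engine, query) for _ in range(samples))
    return hits / samples

# A rate well below 1.0 for a document known to be indexed signals the
# fluctuation the authors describe.
print(presence_rate("AltaVista", "digital libraries", "http://example.org/doc1"))
```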
12:15 p.m. - 1:45 p.m.
Lunch Break - A Chance to Visit the
Exhibits
1:45 p.m. - 2:45 p.m.
A2 • Research
Toward Improved Systems, Services, & Resources
Chair: Ev Brenner,
Brenner & Associates
Information Security & Sharing
Elizabeth D. Liddy,
Syracuse University
The role of information
specialists has broadened in recent years to include responsibility for
the security of the intellectual property of the organization. Working
together with network and system administrators, information specialists
are asked to ensure that valuable internal information is not innocently
or maliciously sent out via electronic messages to recipients who should
not be in receipt of sensitive information. This presentation will cover
pros and cons of current approaches, and discuss a new NLP-based technique
that minimizes the drawbacks of the others. This technique goes beyond
current ‘dirty word list’ approaches and instead uses semantic models of
an organization’s business policies to better determine which electronic
documents are acceptable for release and which are not. The new approach
uses advanced Natural Language Processing techniques to extract vital facts
and conceptual relationships from the organization’s business rules or
corporate policies in order to construct semantic models of these policies.
Then, the 'meaning content' of outgoing documents is compared against these models, known as semantic releasability models. Based on similarity to the models of acceptable vs. unacceptable messages, the documents are either released or diverted for human review. As a result of the increased sophistication of the approach, only documents that really should not be permitted to pass through the organization's firewall are halted, while appropriate sharing of information increases.
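A minimal sketch of the release/divert decision, assuming the policy models are reduced to simple term vectors compared by cosine similarity; the system described above builds far richer NLP-based semantic models, which this toy comparison only gestures at.

```python
# Route an outgoing document by similarity to "releasable" vs. "sensitive"
# policy models. Models here are bag-of-words vectors: an assumption made
# for illustration, not the NLP representation the talk describes.

import math
from collections import Counter

def vector(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

RELEASABLE = vector("public press release product announcement marketing")
SENSITIVE = vector("confidential merger acquisition internal salary strategy")

def route(document: str) -> str:
    """Release a document, or divert it for human review, by model similarity."""
    if cosine(vector(document), SENSITIVE) > cosine(vector(document), RELEASABLE):
        return "divert for human review"
    return "release"

print(route("Press release: announcing our new product"))
print(route("Confidential: internal merger strategy"))
```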
The Evolving Psychology of Online Use:
From Computerphobia to Internet Addiction
Brian Quinn, Texas
Tech University
In the brief space of thirty
years, information technology has undergone a remarkable transformation
in the minds of end users. This study traces the evolution of end user
attitudes toward online use, from the time when computers were first introduced
and the subsequent rise of technophobia and technostress, to the emergence
of the Web and more recent psychological adaptations, including obsessive-compulsive disorders, addiction, and surrogate companionship. The reasons for this transformation, both technological and human, are investigated.
A detailed analysis of what makes online use so psychologically engaging
is provided. The psychological potential of information technology, it
will be shown, has been largely ignored by the media and is still largely
untapped. The paper concludes with an in-depth look at some of the practical
ways professionals are combining this changing end user psychology with
emerging technology to bring about positive psychological adaptations and
outcomes in the health care field.
3:00 p.m. - 4:00 p.m.
A3 • Quality:
Metrics, Searching, & Practices
Chair: Peggy Fischer,
Management Decisions
Quality Metrics: How to Ensure Quality
Taxonomies
Claude Vogel and
Joshua Powers, Semio Corporation
With the ever-growing amount of information, directory and taxonomy building has become a very hot topic. Very different solutions claim to solve this problem, from topic search to neural or conceptual networks. In all cases, a central issue is the quality of the resulting hierarchy. Frequent updates, multiple user needs, and the sheer growth of the information mass are all adverse factors working against quality.
This presentation will propose a set of quality metrics and demonstrate
their applicability.
Quality Patent Searching
Cynthia Kehoe, University
of Illinois at Urbana-Champaign
A major challenge of patent
database quality control is the maintenance of the changing classification
scheme. Subject access to the patent literature is primarily through classification
codes, which undergo continual revision. Classes and subclasses are added
and dropped, as new technologies are developed and older ones modified,
and these changes are retroactive. An entire class can be dropped. The
original printed patents are not reissued, but database records are updated.
Searching a database from the U.S. Patent and Trademark Office (USPTO)
by the old classification will no longer retrieve any records for dropped
classes. Most database vendors have instituted schemes of regular replacement
of old patent records with the updated ones provided by the USPTO. It has
been found that the database records do not accurately reflect changes
in the classification scheme. Classification searches in the LEXIS patent
database can yield quite different results than in the USPTO databases.
Searches of classes that no longer exist in the USPTO files still retrieve
records in the LEXIS database. A second quality issue is presented by the
creation of patent image databases on the Web. The implications for quality
subject searches are examined.
4:15 p.m. - 5:15 p.m.
A4 • Searching
the Web by End Users and Professionals
Chair: Peggy Fischer,
Management Decisions
End User Database Searching On the
Internet: An Analysis of the State of Wisconsin's BadgerLink Service
Dietmar Wolfram and
Hong Xie, University of Wisconsin-Milwaukee
Wisconsin’s BadgerLink
service, which became available in 1998, provides access to a range of
databases from EBSCO and ProQuest to qualified institutions and individuals
in Wisconsin. Both EBSCO and ProQuest provide access to the full text or
abstract and citation information of more than 5,000 periodicals. Previously
only available to information professionals with online access, the Web-based
BadgerLink service has now made these databases available to hundreds of
libraries and countless end users. The recent availability of usage statistics
of the service presents a valuable opportunity to evaluate how it was being
used. The authors analyzed six months of usage data, covering the period
January through June. Data analyzed included databases accessed, periodical
and monograph titles selected, document formats viewed, and institutional
affiliation. The authors found that searchers did not just limit themselves
to default databases, but were selective about the databases searched.
The most frequently accessed titles reflected a broad range of information
seeking, with academic information or current events being most popular.
Implications of the findings for the provision of end user Internet-based
information services are discussed.
The Changing Landscape of Business
Research:
How Internet-Based Services Are Empowering
End Users and Focusing Information Professionals
Michael Gallagher,
Powerize.com
The Internet has dramatically
changed demands for information. Everyone from corporate personnel to corporate librarians is on a never-ending hunt for information that is credible and easily obtained. Traditional portals used by the general public often don't produce
the results necessary. Information professionals don’t often have the time
or budget to launch full research initiatives to keep up with staff demands.
Bridging the gap between demand and supply are new information portals
that provide professional level information to all members of an organization
— sans the hefty price tag and complicated search algorithms. Such services
are enabling information professionals and their organizations to keep
up with demand, and provide satisfactory results that are highly relevant
and extremely budget-friendly. This paper will address what this means for business research. Specifically, it will address how Internet-based
information services are empowering end-users and focusing information
professionals.
Searching the Web:
Users, Tasks and the Web: Their Impact on Information-Seeking Behavior
Kyung-Sun Kim, University
of Missouri-Columbia
This study seeks to investigate
how users’ cognitive style and online database search experience affect
their information-seeking behavior when they search for information on
the Web. Two dimensions of information-seeking behavior are included for
investigation: search performance and navigational tool usage. Search performance
is measured by the time spent and by the number of nodes visited for the
completion of a task. Navigational tool usage is gauged by the number of
times a search/navigational tool (e.g. embedded-link, back, etc.) is chosen.
Forty-eight undergraduate students participated in this study. Based on
their cognitive style (field-dependent vs. field-independent) and online
database search experience (experienced vs. novice), the participants were
evenly divided into four groups. In a lab session, each participant was
asked to search for information on the Web in order to complete tasks given.
All screen displays and user inputs were recorded in real time, and the
data were analyzed using ANOVA. Findings suggest that users’ online experience
and cognitive style interact to influence the search performance. Among
those with little or no online database experience (novices), the field-independent
individuals outperformed the field-dependents by spending less time and
visiting fewer nodes to complete a task. However, the difference created
by the cognitive style was almost erased in those individuals with considerable
online database experience (experienced searchers). A similar interaction
effect was observed in tool usage. Among the novices, the field-dependents
tended to use embedded-links and the home button more frequently than the
field-independents, whereas these differences likewise disappeared among the
experienced searchers.
Track
B • WORKING ONLINE: INFORMATION STRATEGIES IN AN ONLINE WORLD
Track Leader, Organizer,
and Chair: Jane Dysart, Dysart & Jones Associates
With the current infrastructure
of Web and Net technology, the world has become a much smaller place. People
work from various locations, continents away from their team and support
groups. How do information professionals deal with this environment? This
track focuses on working virtually, creating products and services for
the desktop wherever it may be, providing 24/7 service to remote users,
and more.
11:15 a.m. - 12:15 p.m.
B1 • Working
Online With Customers and Colleagues: Making the Invisible Visible
Working Online With Customers and Colleagues:
Making the Invisible Visible
Rebecca Jones, Dysart
& Jones Associates
Working virtually, or "at a distance" from colleagues, clients and, yes, management, presents both tremendous opportunities and daunting challenges. Online content and communication capabilities allow us to provide services globally, at any time, and to collaborate with team members we may never have met. This can be both innovative and isolating: out of sight can be out of mind. This paper examines the issues involved in, and the competencies necessary to thrive in, these new working environments.
12:15 p.m. - 1:45 p.m.
Lunch Break - A Chance to Visit the
Exhibits
1:45 p.m. - 2:45 p.m.
B2 • Working
Online: Case Studies
Richard Hulser, IBM
This session focuses on case studies reflecting the issues and experiences of information professionals who are thriving in new working environments. Discussions range from dealing with technology and telecommunications, to ensuring that senior management understands your value even if they can't see you, to balancing politics and team spirit from a distance.
3:00 p.m. - 4:00 p.m.
B3 • Working
Online: Case Studies
Marsha L. Fulton,
Arthur Andersen AskNetwork
Jim Hensinger, Bibliographical
Center for Research
Carol Myles, SilverPlatter
Our speakers describe the products and services they have developed and currently deliver to global communities. They cover the challenges and lessons learned from those activities, including understanding client needs, licensing, working with intranets and extranets, testing, training, and more.
4:15 p.m. - 5:15 p.m.
B4 • Working
Online: Case Studies (cont.)
Marsha L. Fulton,
Arthur Andersen AskNetwork
Barbara Herzog, SilverPlatter
Track
C • RESOURCES AND SEARCHING ON THE WEB
11:15 a.m. - 12:15 p.m.
C1 • What
Your Mother and Publisher Never Told You About Databases
Péter Jacsó,
University of Hawaii
Fee-based databases are
expected to provide accurate, timely and comprehensive information, reliable
and predictable coverage of core journals, and consistent indexing. The
gut reaction to searching fee-based databases is that you get what you
are promised and pay for. As the examples will demonstrate this is not
always the case. Respected information sources often fall short of expectations
and of the promises made in their promotional materials.
12:15 p.m. - 1:45 p.m.
Lunch Break - A Chance to Visit the
Exhibits
1:45 p.m. - 2:45 p.m.
C2 • Most
Bang for the Buck Databases
Péter Jacsó,
University of Hawaii
There are databases on the Web that charge either a transactional fee or a flat-rate subscription fee for high-quality information resources, making them affordable for individuals and for corporate users on a shoestring budget. They often provide far more value per dollar than their expensive competitors in the traditional world of information services.
3:00 p.m. - 4:00 p.m.
C3 • Grey Literature and the Web of Innovation: Discovering Information Resources
Chair: Judy Luther,
Informed Strategies
Grey literature was redefined at the 1997 Luxembourg Convention: it is that which is produced on all levels of government, academics, business, and industry, in print and electronic formats, but not controlled by commercial publishers. Over the past 30 years, since the term came into use, grey literature has steadily moved into the mainstream of the information society. In the last decade of the 20th century it came to dominate the information supply side, and well before the end of the first decade of the 21st century it will dominate the information demand side as well. The term had to be redefined not only because grey literature had overcome the problems it originally faced but, more importantly, because in doing so it came to challenge commercial publishers, who had done everything to question its value. Because the producers of grey literature are also its users, it was only a matter of time before academic institutions, business and industry, and government and international organizations realized that what they were producing in print was not a commercially viable product, even though its content was of equal value to commercial publications. With technological developments, especially the World Wide Web, grey literature is now also seen by commercial publishers as both knowledge-rich and market-ready.
The Grey Link in the Information Supply
and Demand Chain
by Dominic Farace,
Grey Literature Network Service (GreyNet); presented by Eileen Breen, MCB
University Press
In the field of grey literature, perhaps even more than in other fields, the Internet brings a deep change in the typology of information sources and in their circulation, production, and use. Some typical features of grey literature, such as quick and informal information exchange, particularly appreciated by businesses, are amplified by the potential of the Net. Since the rise of the Internet and the development of electronic access to information, an increasing mass of documents has become available to a growing number of users. In this context, we can no longer speak of the limited dissemination of non-commercial, hard-to-access information and documentation. The speakers in this session will help you to understand and discover the wealth of grey literature on the Web.
Corporate Research: Accessing Grey
Literature through Traditional Sources
Helen Barsky Atkins
and Helen Szigeti, ISI
It has long been the function
of secondary, or abstracting and indexing (A&I), services to provide
users with ready means to identify and access primary documents. A&I
services traditionally provide this access to a limited number of primary
document types; many services index only journal articles. As the various
types of grey literature (which in the past have been difficult to find
for both users and A&I services) are becoming more easily accessible through
the World Wide Web, they are becoming integrated more and more into existing
A&I services. Secondary services are expanding their products’ content
beyond descriptions of journal articles and books to descriptions of, and
links to, new types of documents. In this paper we will provide examples
of how this trend — expanding the types of documents that are indexed and/or
linked to — serves the needs of the corporate research community.
4:15 p.m. - 5:15 p.m.
C4 • Grey
Literature in the Government and Public Sector
Grey Literature in the Government:
Mixing Up Black & White
Bonnie Carroll &
Bonnie Klein, Information International Associates, Inc.
Today’s information technologies
have fundamentally changed the nature of communication and are dramatically
impacting the publication processes that have supported governmental functions.
This includes scholarship (research and development), operations, and intelligence.
In particular, the Internet has transformed the traditional life cycle
of “documents” to the extent that the concept of a document itself has
been a point of debate. In this regard, the definition of grey literature becomes a useful concept to help inform the discussion. We will explore the nature of grey literature in the context of a government structure
which has been a strong advocate of an electronic society. Scientific and
technical information will be used as a case study.
Grey Literature in Academics and Other
Knowledge Rich Communities
Julia Gelfand, University
of California, Irvine
Universities and academic
research environments have been hotbeds of creativity in all disciplines
they support. In addition, the proliferation of multidisciplinary activity
and new emerging subdisciplines has spawned innovative practices of scholarly
communication suggesting a variety of creative information products and
partnerships between academic institutions, government and private industry.
Many of these resources and their content have had, and still have, enormous value to diverse populations, yet they did not make it into traditional publishing pipelines. Options for grey literature have multiplied with electronic distribution and reliance on the Internet. This paper will explore how scholarly
communication channels and new information technologies now enhance the
attraction and value of grey literature from origins in academic communities
and have positioned grey literature as knowledge in a more mainstream path
for access, delivery, manipulation, use and archival retention.
WEDNESDAY, MAY 17
8:15 a.m. - 8:45 a.m.
SPECIAL
BREAKFAST PRESENTATION
e-GADS!
Ron Dunn, President,
Thomson Learning
The e-craze rolls on, and
as we’re constantly bombarded with stories about sky-high Internet stock
prices, megamergers, and overnight gazillionaires, it’s hard to keep those
visions of sugarplums from dancing in our heads.
But to paraphrase an old
TV commercial, “Where’s the e-beef?” How much is the real world going to
change because of the Internet, and how fast? How should we prepare to
cope with all the threats and opportunities the new century will bring?
Ron Dunn, whose views on
all things “e-” have been variously characterized as rational, skeptical,
irreverent, and even antediluvian (but never, to his knowledge, visionary),
will explore these and other current issues in this special breakfast session.
9:00 a.m. - 9:45 a.m.
OPENING
PLENARY SESSION
Chair: Martha E.
Williams, University of Illinois, Urbana-Champaign
Information Visualization
Ben Shneiderman,
University of Maryland
Human perceptual skills
are remarkable, but largely underutilized by current graphical user interfaces.
The next generation of animated GUIs, information search, and visual data
mining tools can provide users with remarkable capabilities if designers
follow the Visual Information-Seeking Mantra: overview first, zoom
and filter, then details-on-demand. But this is only a starting point in
the path to understanding the rich set of information visualizations that
have been proposed. Two other landmarks are:
Direct manipulation:
visual representation of the objects and actions of interest and rapid,
incremental, and reversible operations
Dynamic queries: user-controlled query widgets, such as sliders and buttons, that update the result set within 100 msec. These ideas appear in the early HomeFinder and FilmFinder prototypes, NASA environmental data libraries, WestGroup legal information, LifeLines (for medical records and personal histories), Spotfire (a commercial multidimensional visualization tool), and PhotoFinder for personal photo libraries.
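For readers who want the flavor of a dynamic query, here is a minimal sketch in the spirit of the HomeFinder prototype: each widget movement re-filters the data, and the filter must be fast enough to stay within the roughly 100 msec budget. The listings and field names are invented for illustration.

```python
# A dynamic query over a toy home-listings table: every slider change
# re-runs the filter and yields a fresh result set for immediate redisplay.

homes = [
    {"price": 120_000, "bedrooms": 2},
    {"price": 250_000, "bedrooms": 4},
    {"price": 180_000, "bedrooms": 3},
]

def dynamic_query(max_price: int, min_bedrooms: int) -> list[dict]:
    """Re-run on every slider movement; returns the new result set."""
    return [h for h in homes
            if h["price"] <= max_price and h["bedrooms"] >= min_bedrooms]

# Simulating two slider positions; a GUI would redraw the display each time.
print(dynamic_query(max_price=200_000, min_bedrooms=2))
print(dynamic_query(max_price=300_000, min_bedrooms=4))
```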
Information visualizations
can be categorized into 7 datatypes (1-, 2-, 3-dimensional data, temporal
and multi-dimensional data, and tree and network data). This taxonomy and
its application can guide researchers and developers.
Track
D • NEW SEARCH AND RETRIEVAL TECHNOLOGY FOR THE 21ST CENTURY
Co-Track Leaders and
Co-Chairs: Stephen Arnold, Arnold Information Technologies, and Susan
E. Feldman, International Data Corporation
10:00 a.m. - 11:00 a.m.
D1 • New
Directions & Dilemmas
Retrieval Dilemmas: Barriers and Challenges
to Information Retrieval Systems
Susan E. Feldman, International
Data Corporation
Search engines are fairly
competent at matching queries and documents. Now, where do we go from here?
In particular, how will we deal with the special problems of searching
very large mixed collections of multimedia, text and relational database
materials? How do we make systems usable and understandable? Can we help
searchers ask for what they really want by improving their queries? How
do we measure how effective a retrieval system is? And how can we make
today’s systems adapt to changing information needs and changes in subject
terminology? This introduction to the New Technologies Track will raise
the questions that other speakers will answer during the day.
Spiraling Into Control: New and Improved
Searching
Matt Koll, AOL Fellow,
Founder, Personal Library Software
Matthew Koll will discuss
trends in the features and functionality of search services. He will talk
about functional advances — why some work and why others do not, how they
relate to each other, how they relate to research in information retrieval,
and how they relate to knowledge of how users interact with systems. He
will explore trade-offs in terms of scale vs. responsiveness, comprehensiveness
vs. precision, functionality vs. ease-of-use, and novelty vs. authority.
He will attempt to predict where this field is headed.
Visualization
Ben Shneiderman, University
of Maryland
11:15 a.m. - 12:15 p.m.
D2 • Pushing
the Retrieval Boundaries
The Eighth Text Retrieval Conference
(TREC 8)
Ellen M. Voorhees and
Donna Harman, National Institute of Standards & Technology
NIST and DARPA have sponsored
the Text Retrieval Conferences (TRECs) since 1992, providing a cross-system
evaluation forum for search engines that attracted 66 participating groups
from 16 countries in 1999. The basic task is to search large amounts of
text (around 2 gigabytes) and produce ranked lists of documents that answer
a specific set of test questions. The more recent TRECs have included additional
tasks called “tracks” that extend this paradigm to evaluating related areas.
This emphasis on individual experiments evaluated within a common setting
has successfully advanced the state-of-the-art: retrieval performance has
doubled since TREC began. TREC-8 (1999) included 7 tracks, which together represented the bulk of the experiments performed. Tracks
continuing from previous years investigated cross-language retrieval, retrieval
of spoken documents (news broadcasts), automatic construction of text filters,
the effect of query variability on search effectiveness, and cross-system
evaluation of interactive systems. In addition, two new tracks were introduced.
The Question Answering track was the first large-scale evaluation of systems
that retrieve answers, as opposed to documents, in response to a question.
The Web track investigated how Web searching differs from other search
tasks.
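To illustrate the kind of cross-system scoring TREC makes possible, here is a minimal sketch of average precision over a ranked result list, one common retrieval-effectiveness measure; the run and the relevance judgments below are toy data, not TREC data.

```python
# Score one system's ranked output for one topic against a set of documents
# judged relevant by assessors.

def average_precision(ranked: list[str], relevant: set[str]) -> float:
    """Mean of precision values at each rank where a relevant document appears."""
    hits, precisions = 0, []
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

run = ["d3", "d7", "d1", "d9", "d2"]      # system's ranked output for one topic
qrels = {"d3", "d1", "d5"}                # assessor-judged relevant documents
print(average_precision(run, qrels))      # about 0.556 for this toy example
```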
Text Mining & Beyond
Elizabeth D. Liddy,
Syracuse University
Recent advances in Natural
Language Processing-based information access and analytic technologies
can be coupled with clarifying visualization techniques to produce systems
that facilitate advanced text-mining capabilities — a specialized form
of data mining. The utility of Text Mining is pervasive in organizations
where much of the knowledge needed to better manage, market, and sustain
an organization resides in unstructured textual documents — either internal or external to the organization. The same advanced NLP technology can
be applied to all such sources to automatically produce rich, structured
knowledge bases containing vital information for planning and decision-making.
Along with details of her most recent NLP research, which is producing applications for a range of organization types, Dr. Liddy will describe extensions of Text Mining to information extraction and summarization, and will show semantically based tools for viewing, organizing, and retrieving text from a range of complex document types for use in relevant knowledge management applications.
12:15 p.m. - 1:45 p.m.
Lunch Break - A Chance to Visit the
Exhibits
1:45 p.m. - 2:45 p.m.
D3 • New
Products Overview: Agents & More
New Search-and-Retrieval Services Technologies
Stephen Arnold, Arnold
Information Technologies
This talk offers a rapid review of the newest search-and-retrieval services: their technologies, packaging, and business models. Web-centric search-and-retrieval services are undergoing a sea change. The shift away from lists of sites and robot-created indices will raise the stakes in the already high-stakes game of content access. Case analyses of new and innovative search-and-retrieval services illuminate the technical, business, and market issues, giving the attendee practical insight into these new directions.
The session will explore
the embedding of intelligence within the user’s browsing environment. This
will be illustrated by discussions of new search and retrieval technologies.
Among the topics that will be explored in the talk will be:
- Metacrawlers. The deduplication and relevance ranking tools found in Copernic 2000 and Bull's Eye reduce the time required to run searches across multiple engines (see the sketch after this section).
- For-Fee Content with Value-Added Features. The addition of technical enhancements to free and for-fee Internet search engines. The services that will be reviewed are Northern Light and Powerize.
- Agent-Enhanced Search-and-Retrieval. The focus will be on the use of the Internet Explorer 5.x software development kit as a host for search and retrieval.
This talk gives the attendee insight into the new directions in which search-and-retrieval is moving at a rapid rate, and the types of developments that can be used to respond to market demand for better search-and-retrieval as user sophistication increases.
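As a rough illustration of the metacrawler idea mentioned above, the sketch below merges result lists from several engines and deduplicates them by normalized URL; the per-engine results are hard-coded stand-ins for live queries, and the normalization rules are deliberately simplified.

```python
# Merge and deduplicate results from several engines, interleaving by rank.

from urllib.parse import urlparse

def normalize(url: str) -> str:
    """Canonical form for deduplication: host plus path, no scheme or trailing slash."""
    p = urlparse(url.lower())
    return p.netloc.removeprefix("www.") + p.path.rstrip("/")

def merge(result_lists: list[list[str]]) -> list[str]:
    """Interleave engine results, keeping the first occurrence of each URL."""
    seen, merged = set(), []
    for rank in range(max(len(r) for r in result_lists)):
        for results in result_lists:
            if rank < len(results) and normalize(results[rank]) not in seen:
                seen.add(normalize(results[rank]))
                merged.append(results[rank])
    return merged

engine_a = ["http://www.example.com/a", "http://example.com/b"]
engine_b = ["http://example.com/a/", "http://example.com/c"]
print(merge([engine_a, engine_b]))  # engine_b's duplicate of /a is dropped
```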
Machine Learning for Text Categorization:
Background and Characteristics
David Lewis, AT&T
Labs Research
Text categorization is
the automated assignment of documents to controlled vocabulary categories,
or similarly meaningful classes. Interest in text categorization has exploded
recently, with new applications such as email filtering, Web directories,
and virtual databases. Current research focuses on machine learning
methods that produce text categorization rules automatically from examples
of categorized documents. We will review advances in these techniques
and discuss how ready they are for operational use.
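A minimal sketch of the general approach, learning categorization rules from labeled examples, using a naive Bayes model over words; this is one standard technique of the kind the talk surveys, not a description of any particular system, and the training examples are invented.

```python
# Learn per-category word statistics from labeled documents, then assign
# a new document to the category with the highest posterior score.

import math
from collections import Counter, defaultdict

def train(examples: list[tuple[str, str]]):
    """examples: (document text, category label) pairs."""
    word_counts = defaultdict(Counter)   # per-category word frequencies
    cat_counts = Counter()               # per-category document counts
    for text, cat in examples:
        cat_counts[cat] += 1
        word_counts[cat].update(text.lower().split())
    return word_counts, cat_counts

def classify(text, word_counts, cat_counts):
    vocab = {w for c in word_counts.values() for w in c}
    total = sum(cat_counts.values())
    def score(cat):
        # Log prior plus log likelihood with add-one smoothing.
        s = math.log(cat_counts[cat] / total)
        denom = sum(word_counts[cat].values()) + len(vocab)
        for w in text.lower().split():
            s += math.log((word_counts[cat][w] + 1) / denom)
        return s
    return max(cat_counts, key=score)

examples = [
    ("cheap pills buy now", "spam"),
    ("meeting agenda attached", "work"),
    ("buy cheap watches now", "spam"),
    ("quarterly agenda and minutes", "work"),
]
wc, cc = train(examples)
print(classify("buy pills now", wc, cc))   # -> spam
```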
3:00 p.m. - 4:00 p.m.
D4 • Natural
Language Processing (NLP)
Getting the User to Ask the Right Question
and Receive the Right Answer:
A Cognitive and Linguistic Science
Approach to Searching the Internet
Jeff Stibel, Simpli.com
Why is it so difficult
to search the Internet? It is difficult because people and computers store
and retrieve information in fundamentally different ways. By utilizing
the power of cognitive science and linguistics, search engines can correct
the communication process and create an optimal environment for retrieving
relevant information. Jeff Stibel, chairman of Simpli.com, will discuss
search and information retrieval based on these principles and offer new
suggestions. Simpli.com has developed a proprietary technology called SimpliFind™
to dramatically improve search and eCommerce infrastructure on the Internet.
Using NLP to Find High Quality Information
on the Internet
Ilia Kaufman, KCSL,
Inc.
Evolving Intelligent Agents for Dynamic
Information Retrieval
Edmund Yu, MNIS-TextWise
Labs
Productive use of online
resources is hampered by the very thing that makes them attractive: the
huge glut of information. An excessive amount of time is required to locate
useful data, and the dynamic and transient nature of many online data repositories
means that much information is lost, overlooked or quickly outdated. Information
seekers require an individualized, autonomous agent-based system that can
learn about a user’s specific interest in a particular topic, then efficiently
scour diverse resources unattended, looking for relevant information to
return to the user for inspection. We approach this agent learning task
from two different levels, the local level of individual learning agents,
and the global level of inter-agent operations. We ensure that each agent
can be optimized from local knowledge, while the globally monitoring evolutionary/genetic
algorithm acts as a driving force to evolve the agents collectively based
on the global pooled knowledge. The end goal of this learning scheme is
to produce a new generation of agents that benefit from the learning experiences
of individual ‘parent’ agents and the collective learning experiences of
all previous generations. We will describe our evolutionary, neuro-genetic
approach to creating and controlling intelligent agents, and a prototypical
Web-based agent system that we constructed using this approach.
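A minimal sketch of the global evolutionary step, assuming each agent reduces to a vector of topic-term weights, scored by a stubbed fitness function and bred by crossover and mutation; the actual neuro-genetic system described above is far richer, and every parameter here is illustrative.

```python
# Evolve a population of agents (term-weight vectors) toward higher fitness.

import random

TERMS = ["online", "retrieval", "agent", "genetic"]

def fitness(weights: list[float]) -> float:
    # Stub: a real system would score the relevance of documents the agent found.
    target = [0.9, 0.7, 0.3, 0.1]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def breed(a: list[float], b: list[float]) -> list[float]:
    """One-point crossover followed by small random mutation."""
    cut = random.randrange(1, len(a))
    child = a[:cut] + b[cut:]
    return [min(1.0, max(0.0, w + random.gauss(0, 0.05))) for w in child]

population = [[random.random() for _ in TERMS] for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]   # pooled knowledge of the best agents
    population = parents + [breed(random.choice(parents), random.choice(parents))
                            for _ in range(15)]

print(dict(zip(TERMS, (round(w, 2) for w in max(population, key=fitness)))))
```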
4:15 p.m. - 5:15 p.m.
D5 • Criteria
for Selecting Information Products and Software
How to Choose Information Products
and Software
Susan E. Feldman,
International Data Corporation
We are on the very edge
of what is practical and available when we design Web-based information
access systems. Can we find affordable software that supports the most
advanced ideas in digital libraries and corporate intranets? Will it survive
the shifting demands and changes of the Internet? Find-It Illinois, a
project of the Illinois State Library, hired Datasearch to develop criteria
for selecting software that would search the Web sites of every state agency,
a state-wide library OPAC, and digitized collections being created by the
State Library. Since the ISL is the archive of record for state documents,
grabbing and storing all significant changes to every agency document was
of particular concern. This presentation discusses some of the criteria
that were developed for the project. A round table discussion which follows
the presentation invites the day’s speakers as well as the audience to
brainstorm about how to choose information software. How do you select the right software for a state-of-the-art digital library? What are the criteria to use, and can anyone meet them? We discuss the criteria developed for the Illinois State Library's Web-based state information system, including criteria for search engines, change monitoring, and Web-based OPAC access.
The project required Web access to all state documents, digitized collections,
and an OPAC covering all public, school, and some university holdings.
Changes in all state documents needed to be monitored, and a dependable
archive built for historical and legal purposes.
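To illustrate the change-monitoring requirement, here is a minimal sketch that hashes each fetched document and archives a new version only when the hash differs from the last one stored; the fetch is stubbed out, and the URL and storage layout are invented for illustration.

```python
# Archive-of-record style change detection: keep every distinct version of
# each monitored document, keyed by content hash.

import hashlib

archive: dict[str, list[tuple[str, bytes]]] = {}   # url -> [(digest, content), ...]

def fetch(url: str) -> bytes:
    return b"agency report, revision 2"            # stand-in for an HTTP fetch

def monitor(url: str) -> bool:
    """Store a new archived version iff the document changed; return True if stored."""
    content = fetch(url)
    digest = hashlib.sha256(content).hexdigest()
    versions = archive.setdefault(url, [])
    if versions and versions[-1][0] == digest:
        return False                               # unchanged since last crawl
    versions.append((digest, content))
    return True

print(monitor("http://state.il.us/agency/report"))  # True: first capture
print(monitor("http://state.il.us/agency/report"))  # False: no change
```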
Round Table Discussion: Choosing Information
Software
Steve Arnold and Susan
E. Feldman will discuss with Technology Track speakers the best ways of
selecting information software.
Track
E • WALL STREET ONLINE
Track Leader, Organizer,
and Chair: Jane I. Dysart, Dysart & Jones Associates
Instantaneous, global information
on Wall Street and other financial corridors around the world is a given
in the new millennium. Net and Web technologies provide organizations with
key tools to make this happen and to enhance their e-businesses. This track
focuses on online financial information, intranets, desktop strategies,
and case studies of successful information services on the street.
10:00 a.m. - 11:00 a.m.
E1 • Content
& Intranets
Simon Bradstock,
Factiva
Richard Rowe, RoweCom
Gary Mueller, Internet
Securities Inc.
Leading providers of strategic
business information for corporate intranets share their views of distributing
critical information throughout Wall Street and other organizations. Each
provider gives a thumbnail sketch of their content in action and a brief
case study aimed at providing strategies for information distribution and
ideas to apply to an organization.
11:15 a.m. - 12:15 p.m.
E2 • Optimizing
Desktop Access: Enterprise Strategies & Impacts
Joel W. Bland, William
Blair & Co.
Additional speakers
to be announced
Web technologies are revolutionizing the way business is done on the Street. These case studies focus on libraries on the way to becoming one-stop shops for critical content, providing fast, reliable access to key content for hundreds of users across many different cities. The speakers discuss the challenges, experiences, and lessons learned in providing access, guidance toward effective use of the content, and more.
12:15 p.m. - 1:45 p.m.
Lunch Break - A Chance to Visit the
Exhibits
1:45 p.m. - 2:45 p.m.
E3 • Optimizing
Online Opportunities: New Roles
Craig W. Wingrove,
Bear, Stearns & Co. Inc.
Additional speakers
to be announced
This session presents a
look at the roles that information professionals on the Street are not
only taking on but excelling in! From Internet and intranet librarians,
Web trainers and publishers, and content negotiators to information architects
and taxonomists, information professionals have key roles on Wall Street.
Hear their success stories and learn from their experiences.
3:00 p.m. - 4:00 p.m.
E4 • Financial
Vortals
Lawrence Sterne,
Wall Street Research Net
Representative from
Primark
“Portals and e-commerce,
while certainly presenting new opportunities for code and content, will
also accelerate the transformation of what were once proprietary products
to commodity items,” Primark’s Chairman Joseph Kasputys said. “In the quest
to gain subscribers and customers, portals and e-merchants will strive
to make their sites, and the services offered therein, as complete as possible.
So, an advertising supported portal may throw in an attractive free software
tool, just as an e-commerce site for investors will include some free financial
databases as an inducement to generate revenue-producing brokerage transactions.”
Vertical portals, or vortals, are the latest hot sites on the Web. This
session presents several vortals in the financial services arena.
4:15 p.m. - 5:15 p.m.
E5 • Information
Industry Picks for 2000
Jed Laird, Chris
Balcius Laird, and Steve Case, Laird Squared, LLC
Additional speakers
to be announced
Speakers from leading investment
banks and venture capital firms, specializing in the information and information
technology industries, share their picks and forecasts of winners in business
information. Hear their ideas and make your own plans for the millennium.
Track
F • THE LAW, THE NET, AND COMPETITIVE INTELLIGENCE
10:00 a.m. - 11:00 a.m.
F1 • The
Law and the Net: Full Text, Copyright, and Rights Management
Chair: Cynthia Sidlo,
LEXIS-NEXIS
Changing the Times: The Tasini
Decision and Why Future Full-Text Sources May Cost More or Not Be Complete
Shelly Warwick, Queens
College, Flushing
In October of 1999 the
Second Circuit Court of Appeals in Tasini vs. The New York Times reversed
a 1997 district court decision that had held that The New York Times and
the other defendants had the right to include the work of freelance writers
in the full-text database it sold to vendors, even if the contract with
the freelance writer did not specifically grant the rights for such further
publication. Some possible impacts of the appeals court decision on information
sources include a rise in cost of information resources based on publishers
paying additional fees to freelancers for additional rights; a rise in
the cost of information based on individuals having to pay per-article
fees for the right to access the work of a freelancer within a database;
the transition of full-text resources to “partial-text” resources if the
work of freelancers is not included; and the loss of the viewpoints of
freelancers who will not grant downstream rights. The National Writers Union (NWU), which supported the freelancers in their suit, has created the Publication Rights Clearinghouse to license the work of writers and is encouraging writers to fight for their copyrights and additional fees.
The New Law and Technology of Copyright
Glen Secor, Yankee
Rights Management, Inc.
Without effective rights
management, digital content coupled with digital delivery mechanisms will
only partially exploit the underlying value of copyrighted works. Effective
digital distribution of copyrighted works depends upon fully automated,
real-time rights management. How do you handle digital rights clearance
and rights protection? The very latest in digital rights management and
digital distribution technology and standards will be addressed, including
real-world advice on how to implement and utilize Digital Rights Management
Systems (DRMs). This paper also addresses the business case for digital
rights management, a technology overview, business models, copyright meta-data
standards, the new DRM tools and applications of DRMs in the online world.
A New Model for Publishing on the Internet
Mike O’Donnell, icopyright.com
Content creators and publishers
want to make money publishing online. They also want to have more control
over how their content is being used. The RIP Model (Reprints and Interactive
Permissions) provides a new source of revenue by taking full advantage
of the breadth, depth and diversity of the online audience according to
their needs for specific pieces of content and how they want to use it.
The model recognizes that
the product is not a newspaper, a magazine or a CD; it is the article,
the photograph, the graphic, the lyrics, the score, the frame and the other
“parts” that people really want and are willing to pay for. Each part over
the long run will provide a return greater than that of the publication
in which it was originally published. Therein lies the secret to a new
business model that works equally well for creators of content, publishers
of content and consumers of content in the digital age.
11:15 a.m. - 12:15 p.m.
F2 • High
Quality Free Web Alternatives for Information Professionals
Péter Jacsó,
University of Hawaii
A growing number of free Web databases can substitute, some of the time, for some of the most widely used fee-based resources long relied on by information professionals. These include abstracting/indexing databases, table-of-contents services, review sources, and even full-text issues of journals for current awareness.
12:15 p.m. - 1:45 p.m.
Lunch Break - A Chance to Visit the
Exhibits
1:45 p.m. - 2:45 p.m.
F3 • Competitive
Intelligence: Accuracy, Tools & Techniques for Small and Large Firms
Chair: Michael Gruenberg,
Carroll Publishing
Small Business Intelligence: People
Make It Happen
Jerry Miller, Simmons
College
A manager of a small business
needs to ask some critical questions: What is my business now? Who are
my actual competitors? What products and services are they offering now?
How can I grow my share of the market? These are challenging questions.
To answer them, learn from managers of successful small businesses that
use competitive intelligence extensively. This presentation will provide
an overview of Miller’s findings, including examples from these firms.
Recent statistics from the
Small Business Administration illustrate the rapid growth in this sector
within the United States. Small businesses (those with 500 or fewer employees)
number over 24 million; dominate the engineering, management services,
amusement and recreation industries; accounted for 2.4 million new jobs
in 1998; and employ a larger proportion of younger, older, female and part-time
workers than large- and medium-sized businesses. Today, these managers
are seeking a clear understanding of the intelligence function and how
they should conduct it appropriately. However, little is available, because
most studies have focused on Fortune 500 firms. In response, researchers
are now studying the topic.
Miller’s ongoing survey
includes firms that prominent business leaders, articles in the business
press, and small business-related Web sites identify as operating successfully.
To be included in Miller’s study, firms must exhibit the following attributes:
employ 500 or fewer people, respond quickly to changes in their marketplace,
and have been operating for at least five years.
Digging for Data? Make Sure to Use
the Right Shovel: How to Stay Ahead of the Competition With Online Intelligence
Gidi Cohen, Vigil
Technologies
Gidi Cohen will discuss
online intelligence in terms of the new tools and techniques out there
for keeping informed of important developments in your industry. Whether
you are an entrepreneur monitoring the competition, a manager keeping tabs
on your industry, or a sales exec personalizing your next pitch, you can
benefit by using the Internet. But who has the time to effectively manage
the enormous amount of information available on the Web? How can I use
sophisticated delivery tools that are available to reduce information overload
for myself and my employees in order to gain the greatest possible competitive
edge? Is online intelligence about more than just search engines?
How has the growth of the
Internet prompted entrepreneurs in the areas of competitive intelligence,
customer relationship management, sales, and marketing to come together
and develop sophisticated solutions for online information gathering?
Effective Competitive Intelligence
Morris Blatt, OnTrac
Solutions
In today's increasingly fast-paced, high-technology world, project deadlines are becoming shorter, and data gatherers, as well as competitive intelligence and competitive analysis professionals, are working in an atmosphere of time compression and may not, or cannot, make the time to verify or validate data accuracy. The session includes:
1. The definition of a competitive intelligence process, from efficient data gathering through effective strategic decision making, including ways to improve the relationships and interactions between data gatherers and competitive intelligence personnel
2. Definitions and examples of the differences between accuracy and precision
3. Examples of the impact of those differences on data gathering and competitive intelligence processes
4. The identification of numerous sources of global CI data
5. The identification of pitfalls encountered in global financial and non-financial data
6. Recommendations to resolve those pitfalls while gathering and processing competitive information
7. Examples of how implementation of these steps will enhance data gathering, competitive intelligence, competitive analysis, and strategic planning processes
3:00 p.m. - 4:00 p.m.
F4 • Virtuality
in Providing Information
Chair: Jerry Miller,
Simmons College
The Virtual Internet Service Provider
(ISP)
Ronald Lipof, ZipLink,
Inc.
Today’s consumers have
grown to expect a lot from their vendors. They now expect to get information
quickly and anonymously, and this means the Internet. But what can businesses
do to further differentiate themselves on the Internet after creating a
new Web site? Wholesale Internet connectivity providers offer new options.
These organizations maintain their own dedicated national communications
infrastructures to enable new opportunities on the Internet to organizations
and businesses of all kinds. Many wholesale Internet connectivity providers
also offer enhanced services such as billing, e-mail, and news services.
Partnering with one of these organizations enables the service provider
to become a virtual ISP, and to offer their subscribers low-cost Internet
access.
The advantages of outsourcing the ISP function are many. Through outsourcing, organizations can gain
access to a national network and enhanced services without incurring the
cost of installing, maintaining, and constantly upgrading equipment.
Information Integration Technology:
The Gateway to New Revenue
Jim Berardone, Time
Today’s information services
are built upon a centralized data model. Content is acquired from multiple
sources, transformed into a normal form, indexed and loaded into a central
database. As a result, content aggregation is costly and the currency of
the information suffers. Information Integration Technology provides a
new model for aggregating information. This new technology is engineered
for today’s network of distributed sources. It features dynamic creation
of virtual databases from multiple heterogeneous sources, remote and local,
in real-time. Users benefit from having a single interface to multiple
distributed information sources. Information aggregators benefit because
information services can be created in real-time without loading data.
This presentation will discuss
and demonstrate the technology, its applications, and the benefits to information
aggregators, and highlight the providers that offer this new technology.
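A minimal sketch of the virtual-database idea: the query is fanned out to heterogeneous sources and their differently shaped records are normalized into one result set at query time, with no central load step. The sources here are in-memory stubs standing in for remote services, and all field names are invented.

```python
# Federate a query across two "remote" sources with different record layouts
# and map both into one common schema at query time.

def news_source(query: str) -> list[dict]:
    rows = [{"headline": "Merger announced", "outlet": "Wire"}]
    return [r for r in rows if query in r["headline"].lower()]

def filings_source(query: str) -> list[dict]:
    rows = [{"title": "Merger filing 10-K", "agency": "SEC"}]
    return [r for r in rows if query in r["title"].lower()]

def virtual_query(query: str) -> list[dict]:
    """Fan the query out and normalize each source's layout into one schema."""
    results = []
    for r in news_source(query):
        results.append({"title": r["headline"], "source": r["outlet"]})
    for r in filings_source(query):
        results.append({"title": r["title"], "source": r["agency"]})
    return results

print(virtual_query("merger"))
```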
4:15 p.m. - 5:15 p.m.
F5 • Sharing
Resources
Chair: Cynthia Sidlo,
LEXIS-NEXIS
Project DL: A Digital Library Resource
Website
Thomas R. Kochtanek,
Karen K. Hein, Carol Lee-Roark, Tulsi Regmi, Juraruk Cholharn and Heidi
Currie, University of Missouri-Columbia
The purpose of Project
DL is to provide an integrated resource where diverse information sources
on the topic of Digital Libraries (DLs) may be brought together to be used
as a learning tool to explore research and development of Digital Libraries.
The focus of this site is on accessing Digital Library collections as well
as information resources related to the study of Digital Libraries. As
such, the current Web site (www.coe.missouri.edu/~is334/projectDL) is segmented into three distinct but integrated
sections: DL collections, DL resources, and DL Web Sites. The first section
of the Web site portrays URLs identified and submitted by graduate students
in the School of Information Science & Learning Technologies at the
University of Missouri-Columbia. Students generated these unique resources
as part of their class discussions revolving around topics in the area
of Library Information Systems. The URLs presented in this first section
are organized in three different schemes: An alphabetical listing of all
of the DL URLs submitted to date, a listing of URLs arranged by subject
area, and DL research and development projects participating in the Digital
Libraries Initiative—Phases 1 and 2.
The second section of the
Web site represents electronically-available information resources related
to issues of Digital Library Research and Development. This section includes:
Sites reporting DL R&D (including a few DL project overviews), Web
sites provided by researchers engaged in DL issues, bibliographic resources
leading to additional information resources (both print and electronic),
print resources and electronic journals providing broader coverage of information
technology including DL R&D, and conferences and proceedings addressing
DL R&D.
E-Mail This Story to a Friend: A Study
of Sharing Tools on Newspaper Websites
Sanda Erdelez and
Kevin Rioux, University of Texas at Austin
Many newspaper Websites
include buttons or links that are labeled with the text “Email this Story
to a Friend” or similar. These tools allow users to conveniently forward
news and feature stories of interest to friends, family and colleagues
via email. Forwarding behavior on the Web is a phenomenon that has attracted
the attention of marketers who are interested in extending and fine-tuning
their Web-based advertising campaigns. As their industry is heavily dependent
on advertising, newspaper managers would also benefit from increased knowledge
about “forwarding buttons.” Although forwarding buttons are increasingly
popular on newspaper Websites, a systematic study of their design, placement
and functions has not yet been undertaken. The proposed paper will report
on new study findings that are intended to fill this research void.
The authors have selected
from a list of the top 50 daily newspapers in the United States a sample
of 10-15 newspaper Websites that include forwarding buttons. Websites are
categorized in terms of the location and design features of the forwarding
button, functional features of the button (e.g., including multiple recipient
options, the ability to include a personal note, feedback from mailer)
and recipient issues (e.g., whether the recipient receives text or a URL
link, the existence of a privacy disclosure, the presence of advertising
included in the forwarded message, etc.). The paper concludes with
a discussion of perceived benefits and drawbacks of forwarding buttons
for senders, recipients, and content providers.
How to Harness Technology for Better
End-User Services
Gwyneth H. Crowley,
Texas A&M University
It is a prevalent misconception that people can get any information they want on the WWW; in fact, the Web is a mish-mash of information. Users must be very discerning. Technology,
though, does provide an excellent way to offer information. The challenge
is in providing and organizing good information. In striving to meet the
patrons’ demand for these electronic resources, Texas A&M General Libraries
is participating in several technological collaborative projects. Like
other universities and colleges, they can’t go it alone to provide electronic
resources due to the enormous expense. They spend 1.2 million dollars a year on databases and e-journals, and this represents only a small portion of the resources that they offer. It is hoped that by sharing the cost of providing electronic
products cooperatively they can increase electronic access. The projects
to be discussed are TEXSHARE, The Big 12 Plus Initiative, Web of Science
E! DIS Service, and BioOne. These ventures entail database sharing, document
delivery and a full-text publication project.
THURSDAY, MAY 18
9:00 a.m. - 9:45 a.m.
OPENING
PLENARY SESSION
Electronic Publishing of an Online
Emergency Reference Text and Other Multi-Author Online Medical References
Scott Plantz, eMedicine.com,
Inc.
In Nepal, a pediatrician
determines the differential diagnosis for a sick child with fever and abdominal
pain. In Boston, a Harvard medical student performs background research
for a case study with the most up-to-date literature. In Albuquerque, a
woman suffering from migraines reads about the side effects of the medication
she is taking. Although scattered across the globe, these three individuals can simultaneously take advantage of the most current, accurate medical information available, located at www.emedicine.com and developed during the past year by Scott Plantz, M.D., and Jonathan Adler, M.D. The site offers
free, high quality, medical information on such topics as emergency medicine,
pediatrics, internal medicine, surgery, dermatology, neurology and ophthalmology.
The online references also integrate photos, x-rays, video and audio with
the text to allow for a new dimension that a printed text cannot offer.
But what sets this site
apart from other medical information databases is the cutting-edge nature
of the information available. Since the doctors who author these texts
are able to log on to the site and update their chapters any time, day or
night, emedicine delivers the most contemporary medical information in
an accessible medium with no cost to users.
Since almost forty percent
of the searches done by the general public concern health care, and the
number one concern about these inquiries is the questionable quality
of the information found, emedicine has found a way to educate both doctors
and the general public. Updates are done through the Group Publishing System,
also known as GPS. This technology allows authors continuous access to
their works. GPS was designed and developed by Plantz and Adler in conjunction
with the software development team of Jeff and Joanne Berezin.
Track
G • SEARCH ENGINES, IR, AND THE WEB
10:00 a.m. - 11:00 a.m.
G1 • Search
Engines: Multiple Languages and Multiple Strategies
Chair: David Raitt,
The Electronic Library
An Exploration of Search Strategies
and Search Success in Digital Libraries vs. OPACS
Luisa Sabin-Kildiss,
Engineering Information, Inc.
Research in the area of
digital libraries has primarily focused on the development of techniques
for building and providing access to these new and expanding digital collections.
Less attention has been directed to questions about the uses and usability
of digital libraries, or about users’ experiences with this new library
form. This paper addresses a gap in our knowledge about digital libraries
and their users by asking how users of the traditional library will adapt
their pre-existing searching strategies and mental model of the library
itself to the new digital environment, and how this might affect search
success. To date, no published studies have focused on how searching for
information in a digital library compares to searching in a traditional
library, and what conceptual shifts the user of a digital library will
need to make in order to effectively access information within it.
An empirical investigation
was conducted to address the following questions: (1) What mental models
of the traditional library OPAC or card catalog do users bring to digital
libraries? (2) In what ways do these pre-existing mental models affect
searching behaviors in the digital library environment? (3) How successful
are novice users of the digital library at retrieving relevant information,
and what factors predict success or failure? Results of the study will
be presented.
How Are You Going to Find It: What
Librarians Don't Know, Think They Know, Want to Know, and Should Know About
Web Search Engines
Dennis Brunning,
Arizona State University
Current estimates put the
indexable Web at 320 million hypertext-formatted pages. Of these pages,
only one-third are indexed by available search engines. Within this indexed
portion, scope, coverage, method, and performance vary widely, creating huge
problems in finding pages relevant to the queries librarians typically handle.
This paper focuses on dilemmas, solutions, strategies, and questions librarians
face in using Web search engines. We will address the following questions:
- Which search engines do reference librarians rely upon in daily practice, and why?
- How do reference librarians keep up-to-date on search engine technology and interfaces?
- How well do reference librarians understand the scope of search engine coverage?
- How do reference librarians go about confirming their understanding of the scope, coverage, searching interface, and performance of search engines?
- What design, database scope and coverage, and performance features would reference librarians wish to see incorporated into search engines?
Knowledge gained in our study
can be used to advance search engine design, performance, and standards.
It will underscore and elaborate how reference librarians cope with this
ever-evolving powerful technology.
Multilingual Search Engines on the
Internet: Country Distribution and Language Capability
Shaoyi He, Long Island
University
The continuing evolution
of the Internet has brought many new features to Web search engines, one of
which is the ability to search Websites in different languages. Among the hundreds
of Web search engines that have been developed in various parts of the
world, many of them have search capabilities in different languages, e.g.,
Arabic, Chinese, French, German, Italian, Japanese, Russian, and Spanish.
Despite many studies on Web search engines, little attention has been paid
to multilingual search engines, and two questions remain unanswered: (1)
Which countries have developed the largest number of multilingual search engines?
(2) Which multilingual search engines support the largest number of
languages for Web searching? To answer these questions, this
paper discusses a survey of multilingual search engines, covering their distribution
across countries and their language capabilities. Shaoyi He suggests some future
research directions for studying multilingual search engines on the Internet.
11:15 a.m. - 12:15 p.m.
G2 • Online
Search Systems: Quality, Features and Functions
Chair: Brian Quinn,
Texas Tech University
Establishing and Maintaining Trust
in Online Systems
Claire McInerney,
University of Oklahoma
Quality may be the most
important consideration in the responsible creation and maintenance of
information. If high quality standards are missing from an organization's
commitment to information collection, organization, and dissemination,
the entire information and knowledge system may have serious flaws affecting
how information is valued and used by clients and organizational members.
The literature of the social
sciences and humanities helps us to understand trust and how it breaks down
in organizations. As organizations become international players in a global
economy, and individuals find themselves working in virtual offices and
on virtual teams, trust becomes even more critical to work and collaboration.
This session will examine
trust factors in communication and information provenance and apply those
factors to online systems. Research on trust in electronic systems
remains sparse. The presenter will report
on some of her own research involving trust and online information. The
research will be summarized, and used as a bridge to the practical matter
of creating trust in systems design and implementation. Particular emphasis
will be given to Web sites and Web designs that foster trust and credibility.
Online Interface Comparison: Features
and Functionalities
Hong (Iris) Xie,
University of Wisconsin-Milwaukee and
Colleen Cool, Queens
College, CUNY
This paper reports results
of a study that was conducted to investigate user preferences for a variety
of Web and non-Web interfaces to online databases. In particular, the focus
was on identifying aspects of system features and interface functionalities
that are preferable among online database users. Twenty-eight graduate
students participated in the study. Each student performed similar searching
tasks using multiple online systems with different interface conditions.
Participants were asked to evaluate each of the Web and non-Web interfaces
for usability, effectiveness and overall preference. Results of the study
indicate that some functions of Web interfaces outperform those of non-Web
interfaces, but they are not universally preferred. This
study identifies specific system features and interface conditions that
are highly preferable and usable, along with a discussion of particular
weaknesses in poorly designed systems. Suggestions for improvements in
the design of existing online systems and interfaces are discussed.
12:15 p.m. - 1:45 p.m.
Lunch Break - A Chance to Visit the
Exhibits
1:45 p.m. - 2:45 p.m.
G3 • Search
Systems
Chair: Mary Berger,
Ei
Internet vs. Traditional Online: The
Changing Face of Government Information Access
Teresa McGervey,
National Technical Information Service
As new search engines become
available on the Internet and continue to make vast quantities of data
and information accessible to the novice searcher, the traditional online
search services, such as DIALOG, STN, and OCLC, have moved their primary
public interface to the Internet. In addition, many database creators are
now providing (in varying degrees and formats) direct access to their databases
via the Internet. How have these “changes in venue” affected searching
for government information? How have these changes affected the ways Federal
agencies disseminate their information to the public as well as the perception
of how the Federal government should disseminate information? This paper
looks at some of the differences between databases (as well as database
providers) presented through traditional online services versus the
Internet, giving special consideration to efforts by Federal agencies to
disseminate information.
Read It To Me!
Frank L. Walker and
George R. Thoma, National Library of Medicine
New technology and software
such as Ariel and DocView have made it possible for libraries to distribute
information to patrons over the Internet. Ariel, a product of the Research
Libraries Group, converts paper-based documents to monochrome bitmapped
images and delivers them over the Internet; the National Library of Medicine’s
DocView enables library patrons to receive, display, and manage documents
received from Ariel systems. Even so, some patrons still find it difficult to use
library document information. For example, the formats of files received
through the Internet may not be usable on the recipient’s computer. Disadvantaged
groups such as the blind, visually impaired, and physically handicapped
often have difficulties in reading either printed or electronic library
information. Consequently, the Web site called the DocMorph Server was
developed. The DocMorph Server provides speech synthesis to let users convert
scanned images of the printed word into the spoken word. From any place
on the Internet, a user can upload scanned images of a book, for example,
and have DocMorph return a Web page that reads the material out loud on
the user’s computer.
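The general pipeline behind this idea (recognize the text in a page image, then synthesize speech from it) can be sketched compactly. The Python example below is an illustration only, not NLM's DocMorph code; it assumes the third-party pytesseract and pyttsx3 packages, plus the Tesseract OCR engine, are installed.

```python
# A minimal sketch of the scanned-image-to-speech idea: OCR the page image,
# then synthesize speech from the recognized text. Illustration only, not
# NLM's implementation; assumes pytesseract, pyttsx3, and Tesseract exist
# on the local machine.
from PIL import Image
import pytesseract
import pyttsx3

def read_page_aloud(image_path: str) -> str:
    """OCR a scanned page and speak the recognized text."""
    text = pytesseract.image_to_string(Image.open(image_path))
    engine = pyttsx3.init()   # local text-to-speech engine
    engine.say(text)
    engine.runAndWait()       # blocks until speech finishes
    return text

if __name__ == "__main__":
    read_page_aloud("scanned_page.png")  # hypothetical input file
```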
3:00 p.m. - 4:00 p.m.
G4 • Improving
Information Retrieval: Influential Factors and Effectiveness of Data Fusion
Chair: Colleen Cool,
Queens College of the City University of New York
The Factors Influencing the Evolution
of an Information System and Their Impacts: From the Intranet to Advanced
Information Systems
Shin-jeng Lin, Rutgers
University
Information systems evolve
through the following phases: invention, development, innovation, transfer,
growth, competition, and consolidation (Hughes, 1987). Yet, scholars from
the information science discipline tend to focus on the development phase
only, studying user’s information seeking behavior and improving system
information retrieval algorithms. Neglected is the context in which information
systems are developed and used, not to mention the factors contributing
to the evolution of information systems. Consequently, the need for
the information science discipline to collaborate with other disciplines,
such as computer science and management information systems, is less apparent.
The practical applications of studies of information science are limited,
and so are their impacts.
Shin-jeng Lin identifies
the potential factors that are likely to influence the evolution of information
systems and creates a conceptual model that describes the interactions
among those factors.
Predictive Models for the Effectiveness
of Data Fusion in Information Retrieval
Kwong Bor Ng, Queens
College, Flushing, NY
Effective automation of
the information retrieval (IR) task has long been an active area of research,
leading to sophisticated retrieval models. With many IR schemes available,
researchers have begun to investigate the benefits of combining results
of different IR schemes to improve performance, a process called data fusion
(DF). The idea of applying data fusion method in IR has been discussed
and explored. Empirically, data fusion in IR works well. Many data fusion
experiments have been done and have shown positive results. However, this
scattered empirical success still lacks a full theoretical foundation.
In particular, there seems no way to predict, a priori, when data fusion
will be effective.
This research builds on
several small pilot investigations on data fusion conducted by the author
(all published). The author analyzes hundreds of thousands of cases of data
fusion generated from the Text Retrieval Conferences (TRECs) and
tries to create predictive models for deciding when two or more IR schemes
can be effectively used in data fusion.
4:15 p.m. - 5:15 p.m.
G5 • Web
User Communities
Chair: Michael Gruenberg,
Carroll Publishing
Web Information Communities, Gatekeepers,
Gurus, and Users, Defining New Relationships
Tula Giannini, Pratt
Institute
The Web has changed the
user’s relationship to information in unforeseen ways. Traditionally, users
have connected with information through published sources and collections,
and publishers have served as the user’s link to resources (books, journals,
etc.). Today, with Internet access, organizations, associations, and societies
have assumed a central role as the user’s information gateway, communicating
directly with users in a Web environment where publications are but one facet
of a wider range of information and communication options. This study examines
the impact of this shift on traditional information delivery systems, including
libraries, and tests user perceptions about information quality and authority
in this new venue. Results of this study are discussed in terms of their
implications for libraries and users.
A Marriage Made in Cyber Heaven: International
Business Research on the World Wide Web
Jeanie M. Welch,
University of North Carolina, Charlotte
The World Wide Web is an
ideal medium for research in the fields of international business, foreign
trade, and international economic conditions. It is a challenge for researchers
to find and exploit this information efficiently. This paper discusses
researching these topics, using sources of information that are available
free via the Web. These sources include Web sites maintained by international
agencies, government agencies, commercial publishers, and banks. Many foreign
Web sites provide bilingual or English-language versions of their Web pages,
making them very useful to American researchers and students. Several scholarly
institutions have also compiled meta pages—Web sites that contain hundreds
of hot links to these sources, arranged in subject categories—that are
useful starting points in Web-based international business and economic
research.
The paper will discuss types of Web sources of
international business and economic information and offer criteria for
evaluating international business and economic Web sites.
The Library-Use Survey Meets the 21st
Century:
New Methods for Evaluating Patron
Needs and Electronic Resources in a Technological Library Environment
Philip M. Hurdle
and Julia E. Schult, Elmira College
Using traditional research
tools and new techniques such as terminal and Web-use monitoring software,
the project forms a comprehensive picture of current use and patron needs
at a small-college library. The goal of the project was to use new technological
solutions to relate World Wide Web and research-database use patterns to
user satisfaction. The study examines patrons’ responses to a satisfaction
survey with a detailed account of their use of electronic resources in
an attempt to increase user satisfaction and to evaluate the efficacy of
future expenditures on a seemingly never-ending need for new technology.
The practical difficulties of processing huge amounts of computer-generated
statistical data and the ethical issues of using monitoring software on
public computers are also considered.
Track
H • DIGITAL LIBRARIES AND WEB-BASED EDUCATION
10:00 a.m. - 11:00 a.m.
H1 • Digital
Library Users: Where Do They Come From?
Chair: John Hearty,
OCLC
If We Build It, Will They Come?
Anne Prestamo, Oklahoma
State University
In recognition of the information
needs of Oklahoma State University’s distance learning students, as well
as the increasing demands for remote access to library resources by all
OSU constituents, the library created a new unit. The Digital Library Services
unit facilitates access to electronic information, print materials, and
library services to ensure that the information needs of OSU students,
faculty, and staff are met, regardless of their location. Its goals are
to: (1) ensure that off-campus faculty and students have equal access to
library materials and services, including research tools, print materials,
electronic resources, course reserves, and interlibrary loan services;
(2) provide reference services by telephone, Web forms, and e-mail; (3)
design bibliographic instruction programs to enable students and faculty
to effectively use research tools and library services available to them;
(4) work proactively with faculty to meet their students’ information needs
through appropriate linkages from course materials to library services,
and to integrate bibliographic instruction into course materials; and (5)
develop and implement policies and procedures to address the information
needs of off-campus students and faculty.
Library Services to Distance Learners
Across the Pacific: Experiences and Challenges
Wen-Hua Ren, Rutgers
University
As American higher education
institutions explore and develop international markets by offering educational
programs in foreign countries, academic libraries face challenges of serving
international distance users. The Graduate School of Management at Rutgers
University has its International Executive MBA (IEMBA) program in Beijing
and Singapore, administered by the school’s International Program Center.
Though the courses are taught on site, library services are provided remotely.
To identify the students’ needs for library service and resources, a survey
was conducted. Based on the survey responses, Rutgers University Libraries
(RUL) has established distance service for the IEMBA program in Beijing
and Singapore. The students are provided with remote access to the library
online catalog, index databases and other electronic resources from the
site countries. Furthermore, a library resources and services Web page
has been created, tailored to the characteristics of the program. In addition
to providing services from the United States, RUL also collects and integrates
Internet information about the site countries into its resources for the
program. Arrangements with libraries in the site countries enable students
to utilize local library resources and assistance.
Digital Libraries: Their Usage From
the End User Point of View
Mounir A. Khalil,
City College of New York
Raja Jayatilleke,
College of Staten Island of CUNY
There have been many published
papers, articles, and even books about the Digital Library (also denoted
Electronic Library or Virtual Library in various contexts). These address
the definition, description, components, and usefulness of the Digital
Library, but little has been researched or written about its usage
by end users, or about whether it meets their expectations and satisfies
their information needs, in contrast with traditional methods of
access. A questionnaire has been developed to survey the attitudes and
behavior of end users, measure their understanding of the definition
and nature of the Digital Library, and gauge their expectations for meeting
their information needs in any discipline and at any location worldwide.
“Globalization” is a common theme in the literature of library and information
science, but little attention has been paid to end users’ satisfaction
with the Digital Library as it is actually used
throughout the world. Preliminary findings will be presented.
11:15 a.m. - 12:15 p.m.
H2 • Digital
Libraries: Telecommunications, Typology and Management
Chair: John Hearty,
OCLC
A Typology of Digital Libraries and
Their User Communities
Colleen Cool, Queens
College of the City University of New York
Digital libraries are capturing
the attention of many in the online community, yet there seems to be no
single definition of what the digital library is or should be. It appears
as if the term “digital library” is used as an umbrella concept by many
who refer to quite different entities, some of which bear little or no
resemblance to libraries at all. In a recently published article, Christine
Borgman (1999) argues that there are two “competing visions” about the
purpose of digital libraries: that held by the research community
and that held by academic librarians. According to Borgman, researchers
are content-driven, while librarians are institution- or service-oriented.
While this characterization is a useful first step, many questions
remain about the conceptual boundaries surrounding digital libraries, and
perhaps more importantly, about their uses, usability and effectiveness
among the various communities they are designed to serve. This paper examines
the vast range of projects, initiatives, and services that carry the label
“digital library” and then presents a typology of existing digital libraries,
along with their goals, objectives, and intended user communities.
A Project Management Approach to Digital
Library Development
Robert Downs, Montclair
State University
This paper describes how
organizations can apply a project management approach to digital library
development. Employing a project management approach can assist various
types of organizations in effectively and efficiently implementing a digital
library. Applying this approach to digital library development includes
obtaining top management support to create a team that integrates project
management processes during each of the phases of the development effort.
These processes include planning and managing time, costs, human resources,
stakeholder communications, risks, quality, procurement, and scope of the
digital library project. Using project management techniques, team leaders
and members must plan, schedule, budget, and control digital library development
efforts. In applying these techniques, the digital library project team
should strive to create a document and knowledge management information
system that supports the online learning and research behavior of both
expert and novice researchers using computer-based learning environments.
Telecommunications Alternatives in
Accessing Image Intensive Digital Libraries
Harry Kibirige, Queens
College of the City University of New York
Digital libraries are mushrooming
in the information arena with various types of content. Perhaps the most
fascinating are those that are either solely or predominantly image based.
The improvement in computer and telecommunications technology has enabled
information professionals to include digital image intensive collections
in their offering either internally within the organization or as links
to external sites. The need to access such collections is not only vital
to basic research, but also invaluable to human communication in the digital
age. The adage “a picture is worth a thousand words” has never been
more pertinent than when applied to digital resources on the
Web. The Internet, particularly the Web, has made it possible to access
such collections and has in fact accelerated the creation of remotely accessible
image-based digital collections. But unlike text-based information access,
image-intensive digital libraries are fraught with downloading bottlenecks.
Careful design of the sending and receiving information systems is needed.
Various alternatives have been used to alleviate the bandwidth bottleneck,
which include cable modems, frame relay, ISDN, Asymmetric Digital Subscriber
Line (ADSL), satellites, and high-end modems. This paper will summarize
the latest developments of these technologies and how they can be used
by various types of information professionals and end users in accessing
digital libraries.
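As a rough illustration of why these alternatives matter, the short calculation below estimates how long a single 5 MB scanned image takes to transfer over each link type. The rates are nominal, assumed figures typical of these technologies, not measurements from the paper.

```python
# Worked example of the bandwidth bottleneck: time to fetch a 5 MB image
# over several link types, using nominal (assumed) downstream rates.
IMAGE_MB = 5.0
RATES_KBPS = {           # nominal downstream rates, not measured figures
    "56k modem": 56,
    "ISDN (2B)": 128,
    "ADSL": 1_500,
    "Cable modem": 10_000,
}

for link, kbps in RATES_KBPS.items():
    seconds = IMAGE_MB * 8 * 1024 / kbps  # MB -> kilobits, then divide by rate
    print(f"{link:12s} {seconds:8.1f} s")
# A roughly 12-minute modem download shrinks to a few seconds on cable.
```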
12:15 p.m. - 1:45 p.m.
Lunch Break - A Chance to Visit the
Exhibits
1:45 p.m. - 2:45 p.m.
H3 • Web
Security and Web Pages for Libraries
Chair: Cynthia Sidlo,
LEXIS-NEXIS
Do-It-Yourself: A Special Library’s
Approach to Creating Dynamic Web Pages Using Commercial Off-the-Shelf Applications
Gerald Steeman, NASA
Langley and Christopher Connell, Institute for Defense Analyses
Librarians of small libraries
may feel that dynamic Web pages are out of their reach, financially and
technically. Yet we are reminded in the library and Web design literature
that our static home pages are a thing of the past. This paper describes
step-by-step how librarians at the Institute for Defense Analyses (IDA)
library developed a database-driven, dynamic intranet site using commercial
off-the-shelf applications. Administrative issues include gaining managerial
support, surveying the library user group to evaluate interest and needs,
and committing time and resources to populating the database and training
staff in FrontPage and Web-to-database design. Technical issues will cover
Access database fundamentals, lessons learned in the Web-to-database process
(including redesigning tables and queries to accommodate the Web interface,
understanding Access 97 query language vs. Structured Query Language (SQL),
and setting up Data Source Names (DSNs) in Microsoft FrontPage). This
paper will also offer tips on hacking Microsoft FrontPage template Active
Server Pages (ASP) scripting to create desired results. Additionally, a
how-to annotated resource list will close out the paper.
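The paper's stack is Microsoft Access, FrontPage, and ASP; as a language-neutral sketch of the same database-driven page pattern, the Python example below (using the standard library's sqlite3 in place of Access/ASP, with a hypothetical resources table) shows how a page can be generated from a database query rather than maintained as static HTML.

```python
# Sketch of the dynamic-page pattern the paper describes: pull rows from a
# database and render them as HTML, so the page updates whenever the table
# does. sqlite3 and the "resources" table stand in for the IDA library's
# actual Access/ASP setup.
import sqlite3

def render_resource_list(db_path="library.db"):
    """Build an HTML list of library resources straight from the database."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT title, url FROM resources ORDER BY title"
    ).fetchall()
    conn.close()
    items = "\n".join(f'<li><a href="{url}">{title}</a></li>'
                      for title, url in rows)
    return f"<ul>\n{items}\n</ul>"
```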
An Introduction to Web Security in
Academic Libraries
Tammie Alzona and
Yolanda Hollingsworth, University at Albany, State University of New York
Libraries are now faced
with an increasing need for additional control over their computer networks.
As users become more computer savvy, securing networks becomes more difficult
and time consuming for academic library professionals. After examining
the types of Web security issues and problems in academic libraries, our
findings reveal a pattern of solutions that will offer enlightenment as
well as some relief. Encryption and firewalls as technical solutions will
be assessed, while products such as Kiosk and Fortress will be examined.
Data charts are provided as resource guides that define terminology and bridge
concepts.
3:00 p.m. - 4:00 p.m.
H4 • Education:
Distance/Electronic
Chair: Mary Berger,
Engineering Information
The Next Wave of Integration in Electronic
Information:
The Integration of Electronic Journals,
Full-Text Periodical Databases and Web Content into Curriculum and Decision
Support Models
Donald Doak, EBSCO
Publishing
Information Resource Managers,
including librarians at public, university, school, medical and corporate
libraries, face the challenges of determining what information will add
value when integrated with their current collections and how to go about
combining information resources of varying formats. This presentation will
address how internal and external electronic information is managed and
used, as well as the different integration needs of various types of libraries.
In particular, topics to be discussed include: integration of electronic journals
and other data types into full-text aggregated databases; methods of access;
authentication; statistics; MARC record information; integration
with the World Wide Web; editorializing information on the Web; linking
to databases of differing formats; integration of dissimilar data types;
obtaining complete and thorough search results; adding value to collections
with Web integration and linking; and establishing gateways to other paths
of inquiry. In addition, this presentation will address the integration
of electronic content into curriculum through the use of customized text
books and course-packs. Searchers retrieve information to use or incorporate
into curriculum or decision support models. How do we move from information
retrieval to curriculum integration, and should we?
Distance Education in Virtual Classrooms:
The Model and the Assessment
Alexius Smith Macklin,
Leslie Reynolds and Sheila R. Curl, Purdue University
Brent Mai, Vanderbilt
University
As distance education moves
more classrooms into a virtual world, students and faculty engage in a
learning environment where on-demand instruction, hands-on training, and
immediate access to information are available at any given moment. This
evolution of intellectual exchange, however timely and convenient, makes
a case for establishing and implementing high standards of excellence in
information literacy across the curriculum. At Purdue University, members
of the libraries’ faculty received a statewide grant in the spring of 1998
to develop a required, one-credit, distance education course designed to
teach information strategies to undergraduate students in the School of
Technology. Because of the high demand for this course, the libraries’
faculty continue to make use of emerging technologies to reach students
at the main campus, as well as those registered across the state. To assess
the effectiveness of a distance program versus the traditional classroom,
a comparison study was conducted.
Distance Learning for Library and Information
Science: Building Distributed Asynchronous Learning Networks
Thomas R. Kochtanek
and Karen K. Hein, University of Missouri-Columbia
The introduction of Web-based
course instruction into an existing degree program offers the opportunity
to re-examine models that support learning and the transfer of knowledge
among students enrolled in such course offerings. By removing the barriers
of time and place, instructors can set about to create and sustain student
learning communities using interactive communication support tools grounded
in asynchronous learning models. The instructor’s role moves to that of
a facilitator who seeks to stimulate student-student, student-group, and
student-instructor interactions in the pursuit of opportunities that lead
to improved learning and knowledge base construction.
A Web-based distributed
course in “Library Information Systems” supported by asynchronous communications
tools was offered by the University of Missouri School of Information Science
and Learning Technologies beginning with the Fall of 1998 and again in
Winter semester 1999. Sixty students were enrolled in these two courses.
In each of the two course offerings students were presented with project-based
learning opportunities. These group projects were the focus of semester-long
team efforts for each of the two courses. There were five projects each
semester, with about 5-7 members in each project group. Communications,
both synchronous (chat) and asynchronous, were supported by FirstClass,
a proprietary client-server communication tool. Thomas Kochtanek
and Karen Hein qualitatively document how asynchronous communications support
increased student learning and collaborative opportunities that are representative
of those professional team problem-solving tasks student learners will
likely engage in upon graduation.
4:15 p.m. - 5:15 p.m.
H5 • Instruction:
Web-Based and Computer-Based
Chair: Mike Koenig,
Long Island University
Adopting Principles of Good Practice
for Web-Based Instruction
Thomas Walker, University
of Southern Mississippi
Educational institutions
and other organizations using Web-based instruction should be concerned
about the quality of instruction. Guidelines for the development and
offering of such instruction have been created by several institutions
and consortia, and can be applied to existing or new courses. They can also
be used to create assessment tools for evaluating such courses or programs.
This session reviews some groups of principles and suggests a measurement
instrument.
The CREATE Network Project and its
Aftermath:
An Effort to Improve Computer-Based
Information Access in Tennessee’s Historically Black Colleges and Universities
Fletcher Moon, Brown-Daniel
Library, Tennessee State University
The project’s overall objective
was to assist libraries in the participating institutions in incorporating
and/or improving computer information retrieval technology, with the desired
goal of creating a level of Computer Equity of Access in these Tennessee
Educational institutions (CREATE) through a cooperative network. Tennessee
State University served as the lead institution for the project, and was
awarded a three-year grant from the Fund for the Improvement of Postsecondary
Education (FIPSE), U.S. Department of Education, to support this activity
between 1991 and 1994. The paper will present an overview of the institutions
and their libraries, with emphasis on the status of information technology
prior to the CREATE Network project, the effectiveness and impact of the
project during its funding cycle, and further developments at each institution
in the five years since the grant.
Track
I • WEB RESOURCES
10:00 a.m. - 11:00 a.m.
I1 • Web-Based
Information Sources for Consumers and Professionals
(This session is sponsored
by Online Information Review)
Chair: Mike Koenig,
Long Island University
Health and Medical Internet Resources
for Consumers and Providers
Patricia Anderson,
University of Michigan
Patients, patient advocates,
and other consumers of health care information account for some of the
fastest-growing segments of Internet use. There is a growing community
of health information providers committed to providing reliable, quality
information. Resources for health care providers tend to be harder to find,
and the commercial presence is quite strong. This presentation will cover
Internet-based resources for health reference, drug information, clinical
guidelines, employment, health in the news, and more for both health consumers
and providers.
Ready Reference on the Internet
Linda Smith &
Sarai Lastra, University of Illinois at Urbana-Champaign
Many libraries are building
Web sites that include “virtual reference collections,” often divided into
categories of materials found in their print ready reference collections
(e.g., almanacs, biographical sources, directories). This paper will explore
the strengths and weaknesses of Web-based resources for answering ready
reference questions when compared to print resources typically found in
library reference collections.
Environmental and Chemical Sources
on the Internet: Availability and Evaluation Approach
Kristina Voigt and
Gerhard Welzl, GFS-National Research Center for Environment and Health
and
Joachim Benz, University
of Kassel
In a constantly expanding
world of environmental and chemical information resources on the Internet,
effective means of discovering them become more and more important.
This paper presents a strategy for handling the variety of data sources in
order to find the information required. Important tools for finding environmental
and chemical information are the so-called directories: context-specific
listings of relevant URLs. Directories are compiled intellectually; hence
they are rather small in size and make no claim to completeness.
Examples of such directories will be given in the final paper.
The next step in the hierarchy
of finding information on the Internet in a special field of interest is
the so-called catalogs, metadatabases, or metainformation systems. These
are databases that index Internet resources according to special subjects.
The main emphasis will be put on DAIN, the Metadatabase of Internet Resources
for Environmental Chemicals, which is set up and maintained by the two
authors [URL: http://dino.wiz.uni-kassel.de/dain].
DAIN comprised 568 documents in December 1999. The structure of this metadatabase
will be given, its search interface explained, and the user statistics
presented.
11:15 a.m. - 12:15 p.m.
I2 • High
Quality Free Web Databases for Ready Reference
Péter Jacsó,
University of Hawaii
There are hundreds of free
Web databases that offer responses to short, factual questions or provide
directional assistance. Many of them sport user interfaces and search capabilities
that surpass those of the fee-based services. The top-notch and free encyclopedias,
dictionaries, fact books, and directories are discussed and illustrated.
12:15 p.m. - 1:45 p.m.
Lunch Break - A Chance to Visit the
Exhibits
1:45 p.m. - 2:45 p.m.
I3 • Evaluation
of Systems and Databases
Chair: Kristina Voigt,
GFS-National Research Center for Environment and Health
Evaluating the Journal Base of Databases
Using the Impact Factor of the ISI Journal Citation Reports
Péter Jacsó,
University of Hawaii
Databases in science and
technology, arts and humanities differ widely in which journals of the target
discipline they cover and in the depth and retrospectivity of their
coverage. The appropriateness of the journal base depends on the (occasionally
subjective) preferences of the user community, but there are also objective
criteria that can be used to evaluate the appropriateness of the scope
of journals and other serial publications (annual reviews, conference proceedings)
covered by a database. The Institute for Scientific Information (ISI) has
been monitoring several thousand journals to determine, among other things,
their importance in the discipline. The Journal Citation Reports (JCR)
has historical data for science and social science periodicals that provide
important measures every year about those sources. Although there is no
perfect consensus about the core journals of a discipline or about the algorithm
for calculating the impact factor, these measures are widely accepted and can be
used as a benchmark for evaluating the journal base of many databases in
the sciences and social sciences.
A New Way of Evaluating IR Systems
Performance: Median Measure
Howard Greisdorf,
University of North Texas and Amanda Spink, Pennsylvania State University
In this paper we examine
new approaches to evaluating IR systems performance and propose a new evaluation
measure called Median Measure. This research builds on previous work by
Greisdorf and Spink on partial relevance. Our study of relevance judgments
of 36 end-users shows that: (1) the distribution of end-users’ relevance
judgments is bi-modal (from not relevant to highly relevant) no matter
what the scale used, and (2) the median of a relevance frequency distribution
correlates with the number of relevant and partially relevant items retrieved.
The median data point corresponds to the start of partially relevant items
in the distribution. The paper will discuss the implications of the “Median
Measure” for end-users and the evaluation of IR systems.
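As a minimal illustration of the measure's mechanics, the sketch below computes the median of a frequency distribution of end-user relevance judgments; the four-point scale and the judgment values are hypothetical, chosen only to show the computation.

```python
# Minimal sketch of the "Median Measure" idea: take end-users' relevance
# judgments for a result set and find the median of the distribution.
# The 0-3 scale and the judgments below are hypothetical.
from statistics import median

# Judgments on a 0-3 scale: 0 = not relevant ... 3 = highly relevant.
judgments = [0, 0, 0, 1, 1, 2, 3, 3, 3, 3]

m = median(judgments)
relevant_or_partial = sum(1 for j in judgments if j >= 1)
print(f"median judgment     = {m}")    # per the paper, this point correlates
print(f"relevant or partial = {relevant_or_partial}")  # with this count
```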
3:00 p.m. - 4:00 p.m.
I4 • Electronic
Resources in Academic Libraries: ILL/Public Work Stations/Journals
Chair: David Raitt,
Electronic Library
What Journals, If Any, Should Still
Be Printed?
David Goodman, Princeton
University
Contrary to expectation,
most academic libraries that have adopted electronic journals still receive
the print format as well. Indeed, the only journals ordinarily received
solely in electronic format are those published exclusively in that format,
or those received in publishers’ packages. Some resistance is due to libraries’
conservative attitude to archiving and distrust of the commercial stability
of electronic publishing enterprises. But users, even in the most technological
disciplines, often insist that their working patterns require paper. This
study suggests that for reading specific known articles the electronic
format alone, either directly or by way of paper prints, is always optimum,
while for scanning the literature both electronic and paper are required.
The data show that a minority of the titles used are actually read or scanned
in unbound format, and are thus presumably suited to electronic availability
only. It is predicted that the availability of detailed use data
for electronic titles will confirm this result and facilitate the comparison
with other libraries.
Evaluating the Use of Public PC Workstations
at the Arizona State University Libraries
Scott Herrington
and Philip Konomos, Arizona State University
From the moment the ASU
Libraries migrated from dumb terminals to PC workstations for access to
electronic resources, there was great concern that these workstations would
be used “inappropriately.” Whether students should be allowed to check
their email from the workstations was debated, as was the need to restrict
access to the Internet. The Information Technology division at the University
was concerned with how the Library would provide accountability for anything
that happened at a public workstation. After much discussion, it was decided
that the library PC workstations would provide unrestricted access to the
Internet. Telnet access was limited to library-related resources requiring
telnet, in an effort to keep students from doing computing assignments
and personal email on these workstations.
After casually observing
patrons’ use of the workstations for several months, the Library Systems
department decided to take a more empirical approach to evaluating how
workstations were being used. This presentation will describe the data
collection techniques, the results of data analysis, and how the results
of data analysis are being used to better manage the PC workstations in
the library.
4:15 p.m. - 5:15 p.m.
I5 • Web
Aids and Needs: Classification of Portals/Common Language for Web Sites
Chair: Brian Quinn,
Texas Tech University
Divided by a Common Language: A Look
at University Library Web Sites in English Speaking Countries
Julie Still, Rutgers
University
Any American traveling
in Britain is repeatedly reminded that, although a common language is used,
there are many differences. Library practices in English speaking countries
also vary. Even something as simple as a DIALOG search is constructed differently,
with clear-cut cultural patterns. Would it not be likely that cultural
differences would appear in library Web sites as well? While community
college and research university library Web sites in the U.S. have some
differences, they are often constructed along similar lines. Are these
structures universal or cultural? This study will compare research university
Web sites in four English speaking countries: Australia, Canada, the U.K.,
and the U.S. Concrete data on whether cultural practices extend into cyberspace
and how they are manifested will be presented.
International Indexing Classification
Activities of Internet Portals
Manuel Prestamo,
Oklahoma City Community College
The exchange of information
via electronic means propels us at an ever-increasing pace toward the rapidly
evolving realities of a world economy. At the same time, statistics show
the amount of space given to international news in an “average American
paper” declined from 10.2 percent in 1971 to a low of 2 percent at the
present time. However, Nua Ltd. estimates that there are 201 million people
using the Internet this year and there are indications that the number
will continue to increase dramatically. Web communities that are language
driven, and perhaps culturally related, are obviously developing throughout
the Web. With more and more international users flowing into the Internet,
it is important to realize the number of options available to facilitate
communications with these rapidly emerging communities around the globe.
This presentation will focus on the variety of international components
present in Yahoo!, AltaVista, HotBot, Lycos, and others.