Computers in Libraries
Vol. 44 No. 7 — September 2024

FEATURE

Embracing Conversational AI Agents: The Agentic Future of Libraries
by Yrjo Lappalainen and Nikesh Narayanan

Zayed University Library's conversational AI agent Aisha
Generative AI (gen AI) has already shaken up our world. Libraries are no exception. At Zayed University Library (ZU Library) in Dubai, we decided to join the game by creating a conversational AI agent named Aisha to supplement our services. What started as a small experiment by a team with limited AI experience quickly turned into an epic journey revealing the incredible potential of gen AI for libraries. This article describes our experiences, the technical aspects of Aisha’s development, the challenges we faced, and the endless possibilities of conversational AI for libraries.

AI agents are a step up from traditional chatbots that follow predefined scripts. They use large language models (LLMs) to make decisions and perform tasks with the help of various tools and external services, thus greatly extending the functionality, reliability, and usefulness of LLMs. Agents that can process and retrieve information from multiple sources are incredibly valuable in places like libraries, where having access to a wide range of accurate information is essential.

Before OpenAI launched ChatGPT in November 2022, our AI experience was limited. ZU Library was part of a university-wide IBM Watson chatbot project, but when ChatGPT was released, Watson suddenly felt outdated due to its rule-based nature. ChatGPT’s advanced natural language processing and generative capabilities blew everyone’s minds, even surprising many AI researchers.

For us, the real game changer was the ChatGPT API. Suddenly, we could access a highly sophisticated, pretrained model (and the computing power) through the API without needing specialized hardware or additional training. Seeing the potential, we immediately jumped in. By February 2023, we had developed our first prototype of Aisha. In June 2023, we published an article detailing our progress (“Aisha: A Custom AI Library Chatbot Using the ChatGPT API,” Journal of Web Librarianship, v. 17, n. 3, 2023, pp. 37–58; doi.org/10.1080/19322909.2023.2221477).

Introducing Aisha: Our Conversational AI Agent

Aisha, meaning “lively” or “she who lives” in Arabic, perfectly captures our vision of a dynamic and engaging library assistant. We envisioned Aisha as a tool to improve accessibility and as a way to offer around-the-clock support and personalized assistance to our diverse userbase. Developed completely in-house with Python, Aisha uses the OpenAI API for text generation and image analysis.
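To give a sense of the basic building block, the following is a minimal sketch of a chat completion request with the OpenAI Python SDK. The model name and prompts are illustrative assumptions, not Aisha’s actual configuration.

# Minimal sketch: one chat completion call with the OpenAI Python SDK.
# The model name and prompts are illustrative, not Aisha's real setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable OpenAI model works here
    messages=[
        {"role": "system", "content": "You are a helpful university library assistant."},
        {"role": "user", "content": "What are the library's opening hours?"},
    ],
)
print(response.choices[0].message.content)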

Aisha has been integrated into the ZU Library webpage alongside our LibChat service (zu.ac.ae/main/en/library/index) and is available 24/7, ensuring that help is always on hand.

Here are some of Aisha’s key features:

  • Retrieval-augmented generation: Aisha answers questions using retrieval-augmented generation (RAG) over a custom knowledgebase, which makes her answers more accurate and less prone to errors.
  • Voice capabilities: Aisha can perform speech-to-text and text-to-speech conversions, making the interaction more natural and accessible, particularly for users who prefer voice communication.
  • Multilingual support: Aisha can communicate in Arabic and other languages supported by the LLM, ensuring that users can interact with Aisha in their preferred language.
  • Multimodal interaction: Aisha adds a visual dimension, allowing users to upload images or documents for analysis and enabling her to generate images in response. This makes the interaction more versatile and informative.
  • Access to external tools: Aisha can retrieve information from various sources, including Google, Wikipedia, the ZU library catalog, the ZU institutional repository, and Google Scholar. We have even given her other tools, such as a calculator for handling tough math questions!
  • Virtual persona: Aisha’s AI-generated identity, including images of her in fun settings, makes her more personable and relatable. This playful approach serves to highlight the technology’s potential and educates users about authenticity in the era of gen AI.

With these features, Aisha has already become much more than just a “chatbot”—she’s a dynamic, multilingual, and highly capable AI assistant, ready to help users with a wide range of tasks.

Technical Implementation

Developing Aisha involved integrating several key technologies and frameworks:

LangChain framework (langchain.com): LangChain facilitates the development of LLM applications, providing features such as document loaders, vector embeddings, memory for conversational context, and support for multiple LLM providers. Although similar functionality could be built without LangChain, this framework makes it much easier.

Chroma vector database (trychroma.com): This open-source database stores custom data in vector format, enabling efficient retrieval and processing. Vector databases are crucial building blocks for RAG applications, allowing the LLM to read and utilize custom data effectively.
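As an illustration of how these pieces fit together, here is a small RAG sketch using LangChain and Chroma. The package names reflect current LangChain releases, and the sample document, chunk sizes, and model are assumptions for demonstration only, not Aisha’s actual pipeline.

# Sketch of a small RAG pipeline: embed curated library content into Chroma,
# retrieve relevant chunks, and let the LLM answer from that context only.
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_chroma import Chroma
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_core.documents import Document

# 1. Chunk and embed the knowledgebase (a single toy document here).
docs = [Document(page_content="The library is open 8 a.m.-10 p.m. on weekdays.")]
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
vectordb = Chroma.from_documents(splitter.split_documents(docs),
                                 OpenAIEmbeddings(),
                                 persist_directory="./kb")

# 2. Retrieve the most relevant chunks and ground the answer in them.
question = "When is the library open?"
context = "\n".join(d.page_content for d in vectordb.similarity_search(question, k=3))
answer = ChatOpenAI(model="gpt-4o").invoke(
    f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)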

Streamlit platform (streamlit.io): We used Streamlit to create and deploy Aisha’s user interface, ensuring an accessible and interactive experience for users.
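For reference, a chat interface of this kind can be put together with Streamlit’s chat widgets in a few lines. The sketch below stubs out the agent call; the assistant reply is a placeholder, not Aisha’s actual backend.

# Minimal Streamlit chat UI sketch; the assistant reply is stubbed out.
import streamlit as st

st.title("Library AI assistant")

if "history" not in st.session_state:
    st.session_state.history = []

# Replay the conversation so far.
for role, text in st.session_state.history:
    st.chat_message(role).write(text)

if prompt := st.chat_input("Ask the library assistant..."):
    st.session_state.history.append(("user", prompt))
    st.chat_message("user").write(prompt)
    reply = "(the LLM/agent would be called here)"  # placeholder
    st.session_state.history.append(("assistant", reply))
    st.chat_message("assistant").write(reply)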

Stable Diffusion (stability.ai/stable-image): This tool was used to generate the images that are crucial to Aisha’s identity, making her presence more relatable and engaging.

Aisha’s knowledgebase is a curated collection of information from our library website, LibGuides, and LibAnswers.

Beyond generating text, LLMs can be used for reasoning and acting, simulating human thought processes and executing actions based on natural language queries.

The introduction of function calling by OpenAI in June 2023 made this even easier. Function calling allows the LLM to convert natural language queries into function calls, enabling integration with other systems through their APIs. This lets the agent make decisions and use external tools such as Google and Wikipedia. For example, when asked to search the library catalog for books about AI published between 2020 and 2022, Aisha can autonomously execute a function call to perform the search:

Input: "Search the library catalog for books about AI published between 2020 and 2022"

Output: primo_search(
    query="AI",
    document_type="books",
    year_from="2020",
    year_to="2022")

Aisha can thus independently select the right tool for each task and handle everything, from simple questions to complex interactions that involve multiple data sources and functionalities. This capability transforms her from a regular chatbot into a versatile AI agent.
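To illustrate how such a tool might be exposed to the model, the sketch below declares a catalog-search function in the OpenAI tools format. The primo_search name and parameters mirror the example above but are hypothetical; the real catalog API is not shown here.

# Hypothetical tool declaration for OpenAI function calling. The model can
# choose to "call" primo_search and return structured arguments as JSON.
from openai import OpenAI

tools = [{
    "type": "function",
    "function": {
        "name": "primo_search",
        "description": "Search the library catalog for items.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search keywords"},
                "document_type": {"type": "string", "description": "e.g., books, articles"},
                "year_from": {"type": "string"},
                "year_to": {"type": "string"},
            },
            "required": ["query"],
        },
    },
}]

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Search the library catalog for books about AI published between 2020 and 2022"}],
    tools=tools,
)
# If the model decided to use the tool, the structured arguments are here.
print(response.choices[0].message.tool_calls)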

Building on the idea of reasoning and acting, we also added a feature in which Aisha shares her “thought process” after each response. This helps users understand how Aisha makes decisions and also serves as a diagnostic tool. Although everything is still based on probabilities (no, she doesn’t really think like humans do!), it adds a human-like touch to her interactions. This transparency helps users build trust in Aisha’s responses, making each interaction more informative and educational.

Example thoughts: The question was about citation management. I used the Knowledge tool to find information about the available resources for citation management at Zayed University Library. The search provided details about ProQuest Research Companion, RefWorks, and an online Citation guide, along with links to access these resources.
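One way to surface this kind of “thought process” is to have the agent framework return its intermediate steps and display them alongside the answer. The sketch below uses LangChain’s tool-calling agent with a placeholder knowledgebase tool; it is an assumption about how this could be wired up, not Aisha’s exact implementation.

# Sketch: expose an agent's intermediate steps ("thoughts") with LangChain.
# The knowledge() tool is a placeholder, not the real ZU knowledgebase.
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def knowledge(query: str) -> str:
    """Look up the library knowledgebase (placeholder implementation)."""
    return "RefWorks and an online Citation guide are available for citation management."

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful library assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(ChatOpenAI(model="gpt-4o"), [knowledge], prompt)
executor = AgentExecutor(agent=agent, tools=[knowledge], return_intermediate_steps=True)

result = executor.invoke({"input": "How do I manage citations?"})
print(result["output"])
for action, observation in result["intermediate_steps"]:
    print(f"Thought: used {action.tool} with {action.tool_input} -> {observation}")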

Challenges and Limitations

Despite Aisha’s impressive capabilities, we have encountered several challenges and limitations during our journey:

Hallucinations: As is familiar to everyone by now, LLMs often generate plausible but incorrect information, a phenomenon known as “hallucination.” We tackled this issue by refining our knowledgebase and Aisha’s instructions. Although hallucinations are difficult to eliminate completely, we have been able to reduce them significantly by instructing Aisha to rely on factual information from her knowledgebase and other credible sources.
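The following shows the kind of grounding instruction we mean; it is a simplified, hypothetical prompt, not Aisha’s actual system prompt.

# Illustrative grounding instruction (hypothetical, not Aisha's real prompt).
SYSTEM_PROMPT = (
    "You are a university library assistant. Answer only from the retrieved "
    "context and your tools' results. If the answer is not there, say you do "
    "not know and suggest contacting library staff. Cite the guide or page "
    "you relied on."
)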

Source data quality: Inaccurate or outdated source data naturally degrades response quality, which is why ensuring that the source data is accurate and up-to-date is crucial.

API limits: While LLM API limits (for example, the number of tokens or API calls) have increased since the launch of ChatGPT, they still pose constraints. Some LLMs have higher token limits, but they can also be costly. We continue to monitor and optimize our API usage, ensuring cost-efficient operation.
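One simple way to monitor usage is to count tokens before sending a request, for example with the tiktoken library. The encoding name below is an assumption for GPT-4o-family models; other models use different encodings.

# Sketch: estimate prompt size with tiktoken before calling the API.
import tiktoken

encoding = tiktoken.get_encoding("o200k_base")  # assumption: GPT-4o-family encoding

def count_tokens(text: str) -> int:
    """Number of tokens the model will see for this text."""
    return len(encoding.encode(text))

prompt = "Search the library catalog for books about AI published between 2020 and 2022"
print(count_tokens(prompt))  # helps estimate cost and trim oversized context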

Memory constraints: We started out using Streamlit Community Cloud, which has limited memory capacity. We have since migrated to a private server with more robust resources to accommodate multiple users. This upgrade has significantly improved Aisha’s performance and stability, allowing her to handle more complex tasks and a larger userbase.

Ongoing Research and Development

Our current research involves comparing different LLMs in the context of conversational AI agents. Using the LangChain framework, we can easily switch between models like GPT-3.5, GPT-4o, Google Gemini, Anthropic Claude, Mistral, Mixtral, and Llama 3, assessing their performance in generating text, reasoning, and acting. Our study combines human evaluation and LLM-assisted evaluation (yes, LLMs can also be used to evaluate LLMs!), focusing on response quality, adherence to instructions, and agent performance. Preliminary observations indicate that while all models can reason and act, some are more stubborn about following instructions. Curiously, we have also encountered a new issue: hallucinated tool use, in which the LLM imagines using a tool (such as the knowledgebase or Wikipedia) without actually doing so—new times, new challenges!
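In recent LangChain releases, swapping models for these comparison runs can look roughly like the sketch below; the specific model identifiers are assumptions, and each provider’s integration package (for example, langchain-openai or langchain-anthropic) must be installed separately.

# Sketch: run the same question against different LLMs via LangChain.
from langchain.chat_models import init_chat_model

models = {
    "gpt-4o": init_chat_model("gpt-4o", model_provider="openai"),
    "claude-3-5-sonnet": init_chat_model("claude-3-5-sonnet-20240620",
                                         model_provider="anthropic"),
}

question = "Which tools would you use to find recent AI books in the catalog?"
for name, model in models.items():
    print(name, "->", model.invoke(question).content[:200])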

To ensure Aisha meets the needs and expectations of our users, we conducted a pilot survey among 15 library staff members. The results were overwhelmingly positive, with a strong satisfaction rating of 4.07 out of 5. Respondents highly valued Aisha’s potential and current capabilities for tasks such as Google search, library catalog search, and image analysis. They also provided valuable suggestions for improvements, including technical enhancements for handling documents and images, more elaborate responses with examples and links to sources, and regular updates to ensure accuracy. Encouraged by these results, we plan next to conduct a comprehensive survey among students, faculty, and other library users to gather diverse perspectives and further refine Aisha’s functionalities.

Our future research will focus on further enhancing Aisha’s capabilities and exploring new applications of conversational AI in libraries. This includes integrating Aisha with new LLMs, expanding her range of services, and exploring new user engagement strategies. Additionally, we are interested in studying the long-term impact of AI on library services, user behavior, and information literacy.

Development Ideas

We have numerous ideas for further developing Aisha:

Animated avatar: Creating an animated avatar for Aisha to enhance her lively persona

Long-term memory: Implementing long-term memory to retain user preferences and continue conversations over multiple sessions while incorporating appropriate privacy measures

Expanded library services: Connecting Aisha to additional library services, such as room reservations and renewals, to perform more complex tasks

Communication modes: Introducing WhatsApp or email modes for more versatile interactions, allowing users to engage with her through their preferred communication channels

Personalized support: Developing scenarios such as course tutor, opponent, or interviewer to offer tailored support for individual courses and activities

Self-improvement capabilities: Enabling Aisha to analyze her own responses and suggest improvements, leveraging the LLM’s capabilities for self-assessment

The Impact of Conversational AI on Library Services

Aisha has already demonstrated significant benefits in her role at Zayed University Library. Students and faculty can now receive assistance at any time, without waiting for human staff availability. This 24/7 accessibility is particularly valuable for distance learners and those with tight schedules. Moreover, Aisha’s multilingual capabilities have the potential to make our library services more inclusive.

One of the most exciting aspects of Aisha’s development is her potential to engage users in new and innovative ways. For example, her image recognition and generation capabilities allow users to interact with visual content, making the library experience more dynamic. Whether it’s analyzing historical documents, identifying objects in images, or generating visual aids for presentations, Aisha adds a new dimension to how users engage with information.

Furthermore, the ability to explain her reasoning and thought process behind responses supports information literacy. Users can learn how Aisha arrives at her answers, which enhances their understanding of the process. This educational aspect is crucial in developing critical thinking skills and fostering a deeper appreciation of how information is sourced and validated.

Despite the numerous advantages, integrating AI into library services also comes with challenges and ethical considerations. One significant issue is data privacy. As Aisha’s capabilities expand, especially with features such as long-term memory, it’s essential to implement robust privacy measures to protect user information. Ensuring that data is handled securely and transparently will be crucial to maintaining user trust. Currently, Aisha is not dealing with any personal data, but this will likely change in the future.

Another challenge is managing AI bias. Aisha’s responses are influenced by several factors, such as the LLM she relies on, her specific instructions, and the tools she uses. To ensure her outputs reflect diverse perspectives and remain free from biases, we continuously monitor Aisha’s outputs and revise her instructions as necessary.

CONCLUSION

The journey of developing Aisha has been both challenging and rewarding, showcasing the immense potential of conversational AI in libraries. Aisha hasn’t just improved access to library resources; she’s become a beloved and engaging part of our library community. Her ability to simulate human interaction and to reason and act autonomously is simply mind-blowing. The fact that she can comment on her own instructions, give us development ideas, or even develop herself (if we let her!) is like a sci-fi dream come true!

The success of Aisha at ZU Library illustrates the broader potential of conversational AI agents in libraries worldwide. As these technologies become more accessible and sophisticated, they can play a crucial role in modernizing library services. Conversational AI agents can serve as knowledgeable assistants, helping users navigate vast amounts of information, access resources efficiently, and engage with library content in new and innovative ways. AI agents can also support librarians by handling routine queries, freeing up staff to focus on more complex tasks and personalized user support. This synergy between human expertise and AI efficiency can enhance the overall quality of library services.

As we continue to refine Aisha and explore new possibilities, we remain committed to sharing our experiences and collaborating with others in the field. The future of library services lies in embracing innovative technologies such as conversational AI agents, which can enhance user engagement, improve accessibility, and support the evolving needs of library patrons.

We invite library professionals and institutions to join us on this exciting journey. In fact, all librarians possess essential skills for the field of AI. Optimizing prompts and effectively organizing underlying information are at the heart of building conversational AI agents and other AI applications. Not to mention that librarians are the ones who teach information literacy, a critical component in understanding and managing AI technologies. Together, we can harness the power of AI to create more dynamic, inclusive, and efficient library services. If you have any questions or ideas or are interested in collaborating, please reach out to us. And you can talk to Aisha any time—she’s available at aisha.zu.ac.ae or zu.libguides.com/ai/aisha. Let’s shape the future of library services together!

Yrjo Lappalainen (yrjo.lappalainen@zu.ac.ae) is data services librarian, Zayed University Library & Learning Commons. Nikesh Narayanan (nikesh.narayanan@zu.ac.ae) is assistant professor and IT librarian, Zayed University Library & Learning Commons.

Comments? Email the editor, Marydee Ojala (marydee@xmission.com).