FEATURE
DEFINING AI: A Lexicon for Librarians and Their Patrons
by Laura Warner
Libraries have long served as pillars of guidance, offering their customers and communities reliable information and a space for open discourse. Our profession has been pivotal through many information revolutions. From the invention of the printing press to the rise of the information superhighway, we have been there, evolving alongside societal information needs to navigate users through the changing tools and available resources. Right now, we are on the precipice of unparalleled transformation. AI will change how we learn, work, live, and interact with one another, as well as our sense of self and understanding of the world. At the same time, there are concerns about AI: There is a lack of unified and clear leadership on AI regulation and usage, ambiguity surrounds privacy and copyright issues, and there are worries about misinformation.
While each citizen is responsible for understanding AI, I believe it’s at the core of librarianship to be a knowledgeable resource for our clients and communities, regardless of whether you’re a CIO at a private corporation, a reference librarian at a university, or a library staffer delivering programs at a public library. Even though rapid changes in society often draw our professional attention in many different directions, understanding and preparing for the impact of AI are crucial to our communities and organizations. Rather than fearing AI or treating it as a background concern, we should work to understand it and develop our own set of ethics, guardrails, and guidelines for its use. Librarians can act as AI guides for their communities by understanding AI, developing a framework and policies around ethical AI usage, and providing programming and education on its usage.
This article is an introduction to the basic terms used to define and discuss AI. I will highlight key terms and provide examples and context for each. Developing a working vocabulary is one of the first steps in better understanding AI, starting to use it responsibly and ethically, and creating our own regulations around its role in our lives.
High-Level Distinctions
AI can be broken down into three types: narrow AI, generative AI (gen AI), and super AI.
Narrow AI
Narrow AI, sometimes dubbed “weak AI,” is AI designed to perform one specific task. This AI has one job and follows a set of instructions to do it. Tools powered by narrow AI include the spelling and grammar checks in our word processors, email, and other spaces. Autocorrect in our text messages is narrow AI. If you have a smartphone, use email, or subscribe to a streaming service, AI has been working in the background for years: Autocorrect, email filters, voice-to-text, and Netflix recommendations are all products of narrow AI. If you ask Siri or Alexa to play a song or tell you the weather, that is narrow AI too.
Developers program and power these tools. They are sophisticated, but they still require some supervision. For example, have you ever been expecting an email that got sent directly to your junk folder? That’s an error made by narrow AI. It is also important to note that despite narrow AI’s efficiency, its scope is limited. Narrow AI does not understand or operate outside of its specific tasks and cannot adapt or generalize. For example, a chatbot on a retail webpage is designed to answer frequently asked questions about the business, such as store hours, return policies, or product information. If you ask it a question that is too nuanced or outside of its scope (“I’m planning a ski trip to Aspen—what outerwear do you recommend?”), it will not be able to provide a tailored response. That is where a human, or a gen AI model, would be better suited.
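For readers who want to see just how rigid that scripting is, here is a toy sketch in Python. The store details and the store_faq_bot function are invented for illustration; anything outside the bot’s scripted keywords falls straight through to a canned apology, which is exactly the limitation described above.

```python
# A toy, rules-only FAQ bot: it matches keywords and returns canned answers.
# The store name, hours, and policies are invented for illustration.

FAQ_ANSWERS = {
    "hours": "We are open 9 a.m. to 6 p.m., Monday through Saturday.",
    "return": "Unworn items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def store_faq_bot(question: str) -> str:
    """Answer only the questions the developers anticipated."""
    text = question.lower()
    for keyword, answer in FAQ_ANSWERS.items():
        if keyword in text:
            return answer
    # Anything outside the script gets the same canned fallback.
    return "Sorry, I can only help with store hours, returns, and shipping."

print(store_faq_bot("What are your hours on Saturday?"))
print(store_faq_bot("I'm planning a ski trip to Aspen--what outerwear do you recommend?"))
```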
Gen AI
Gen AI is intended to generate new content on its own. Widely used examples of these models include ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google. These models have advanced rapidly since OpenAI’s release of ChatGPT in November 2022. They can write text and code, tell stories, compose songs, and create images. Beyond creating, gen AI can also solve mathematical problems, generate itineraries, brainstorm ideas with you, provide feedback, and hold a complete conversation. When used ethically and responsibly, gen AI can augment work and eliminate repetitive tasks. Some examples of how you can experiment with a gen AI model in libraries include having the model help you draft newsletters and informational emails or generate summaries of books and articles.
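If you want to experiment with that newsletter idea programmatically, the sketch below shows one possible approach. It assumes the openai Python package is installed and an OPENAI_API_KEY environment variable is set; the model name is illustrative, and the same idea works through the chat interface with no code at all.

```python
# A minimal sketch of asking a gen AI model to draft a library newsletter blurb.
# Assumes the openai package is installed and OPENAI_API_KEY is set in the
# environment; the model name below is illustrative and may differ.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful library communications assistant."},
        {"role": "user", "content": "Draft a two-sentence newsletter blurb announcing a new seed library."},
    ],
)

print(response.choices[0].message.content)
```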
Hallucinations
One caveat is that despite gen AI models’ impressive abilities, they are still prone to hallucinations—basically making up an answer and stating it as truth. They still need human oversight, and, as we know in our business, it is always important to fact-check.
Super AI
Super AI, a potential future state, may come up in the AI conversation. It refers to a hypothetical, advanced AI with capabilities far surpassing human intelligence. This concept remains largely theoretical but is still worth mentioning and monitoring while we discuss the future of AI.
Nitty-Gritty Aspects
In addition to knowing the higher-level terms, the following underlying concepts are important to understand.
Pattern Recognition
Pattern recognition is another term that’s commonly used in the AI lexicon. It’s the process of teaching a computer to recognize similarities or repeated elements in data, one of the techniques that helps computers mimic the human ability to recognize and process information. To understand how this learning works, think about a very young child learning about animals. If they are shown numerous images of cows and numerous images of horses, then, over time, they recognize the difference between the two. Pattern recognition in AI is used, for example, to distinguish legitimate email from spam or harmful email. It is a foundational process in machine learning and deep learning (discussed later), helping AI “learn” through repeated exposure to data.
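Here is a toy illustration of that spam-versus-legitimate example, assuming the scikit-learn library is installed; the handful of made-up messages stands in for the very large datasets real filters learn from.

```python
# A toy pattern-recognition sketch: the model learns which words tend to
# appear in spam versus legitimate mail. Assumes scikit-learn is installed;
# the handful of made-up messages stands in for real training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Win a free prize now, click here",
    "Limited offer, claim your free gift",
    "Meeting moved to 3 p.m. tomorrow",
    "Here are the minutes from today's board meeting",
]
labels = ["spam", "spam", "legitimate", "legitimate"]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)   # turn words into counts
model = MultinomialNB().fit(features, labels)   # learn word patterns per label

new_message = ["Claim your free prize today"]
print(model.predict(vectorizer.transform(new_message)))  # -> ['spam']
```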
Algorithms
An algorithm is a series of step-by-step instructions that tell a computer how to solve a problem or perform an action. Algorithms are also used as procedures for handling data and are essential to how automated systems work. An algorithm could be used for something simple, such as sorting and categorizing information, or it could be used for more advanced outputs, such as pushing content to your social media feed. With AI, algorithms form the foundation of various models and systems, guiding each step in processing data or recognizing patterns.
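As a concrete example of “step-by-step instructions,” here is a short Python sketch that sorts book titles alphabetically. Python’s built-in sorted() would do the same job in one line; spelling the steps out shows what an algorithm actually looks like.

```python
# A small algorithm spelled out step by step: sort book titles alphabetically
# using selection sort. (Python's built-in sorted() would do the same job;
# the point here is to show the explicit instructions a computer follows.)

def sort_titles(titles):
    titles = list(titles)                     # step 1: copy the input
    for i in range(len(titles)):              # step 2: for each position...
        smallest = i
        for j in range(i + 1, len(titles)):   # step 3: find the earliest title left
            if titles[j] < titles[smallest]:
                smallest = j
        titles[i], titles[smallest] = titles[smallest], titles[i]  # step 4: swap it in
    return titles

print(sort_titles(["Middlemarch", "Beloved", "Dune", "Austerlitz"]))
# -> ['Austerlitz', 'Beloved', 'Dune', 'Middlemarch']
```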
Machine Learning
Machine learning is a subset of AI in which machines learn from large amounts of data rather than from explicitly programmed instructions. It happens when an algorithm is trained on large datasets, allowing it to recognize patterns and make assumptions and predictions. Training data teaches the machine how to respond to similar data in the future, much the way you assume that if the weather has been getting progressively colder, tomorrow will be cold too.
The most common learning processes are supervised and unsupervised learning. In supervised learning, the machine learns through data that is labeled, just like flash cards, and a human is in the loop of the learning process. In unsupervised learning, the machine is just fed loads of information and allowed to process it as it pleases. From there, it discovers patterns without the labels attached. Machine learning is being used across many industries. In healthcare, it can help medical professionals detect the probability of a patient developing a disease. In retail, machine learning powers Amazon’s recommendation algorithm. Banks and financial institutions also use machine learning to detect and prevent fraud.
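The difference between the two approaches is easier to see side by side. Below is a minimal sketch, assuming scikit-learn is installed; the temperature readings and labels are toy data invented for illustration.

```python
# Supervised vs. unsupervised learning in miniature, assuming scikit-learn.
# The numbers are toy data: each point is an average daily temperature.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

# Supervised: every example comes with a label, like a flash card.
temps = [[30], [35], [40], [70], [75], [80]]
labels = ["cold", "cold", "cold", "warm", "warm", "warm"]
classifier = KNeighborsClassifier(n_neighbors=3).fit(temps, labels)
print(classifier.predict([[33]]))   # -> ['cold']

# Unsupervised: the same numbers with no labels; the model finds groups itself.
clusterer = KMeans(n_clusters=2, n_init=10, random_state=0).fit(temps)
print(clusterer.labels_)            # two clusters, e.g. [0 0 0 1 1 1]
```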
Deep Learning
Deep learning is a subset of machine learning that mimics the way the human brain processes information. It uses layered neural networks to identify complex patterns, making it well-suited to advanced tasks, such as powering self-driving cars or interpreting medical images.
Neural Networks
Neural networks, also known as artificial neural networks (ANNs), are a method for teaching computers how to process information. They are a subset of machine learning and use layers of interconnected nodes, loosely inspired by the neurons in the human brain, to help machines recognize patterns and learn through connections between datapoints.
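To show what “layers of interconnected nodes” means in practice, here is a hand-wired two-layer network in Python using numpy. In a real neural network, the weights below would be learned from data; here they are set by hand purely to show signals passing through connected layers.

```python
# A hand-wired two-layer neural network that computes XOR, using only numpy.
# In a real network these weights are *learned* from data; here they are set
# by hand simply to show how signals pass through layers of connected units.
import numpy as np

def step(x):
    """A very simple activation: fire (1) if the input crosses a threshold."""
    return (x > 0).astype(int)

# Layer 1: two hidden neurons (roughly an OR detector and an AND detector).
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])

# Layer 2: one output neuron that combines the hidden neurons' signals.
W2 = np.array([[1.0], [-1.0]])
b2 = np.array([-0.5])

def network(inputs):
    hidden = step(inputs @ W1 + b1)   # first layer of connections
    return step(hidden @ W2 + b2)     # second layer produces the answer

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(network(inputs).ravel())        # -> [0 1 1 0], the XOR pattern
```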
Large Language Models
Large language models (LLMs) learn language from enormous amounts of text. Through repeated exposure, they pick up how letters form words, how words form phrases and sentences, and, finally, the context needed to predict what comes next. So, if you gave an LLM “New York” and asked it to fill in the blank that follows, it would most likely answer “city.” LLMs, like the ones behind ChatGPT, can generate human-like text, rewrite and summarize content, and even converse.
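You can try that fill-in-the-blank behavior yourself with a small, openly available language model. This sketch assumes the Hugging Face transformers package is installed (the first run downloads the model weights); it uses a masked-language model, a close cousin of the next-word predictors behind chatbots.

```python
# Fill-in-the-blank with a small pretrained language model, assuming the
# Hugging Face transformers package is installed (pip install transformers).
from transformers import pipeline

fill_blank = pipeline("fill-mask", model="distilbert-base-uncased")

for guess in fill_blank("I grew up in New York [MASK].", top_k=3):
    print(guess["token_str"], round(guess["score"], 3))
# Typical top guesses include "city" and "state" -- the model predicts the
# missing word from the patterns it learned in its training text.
```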
Natural Language Processing
Natural language processing (NLP) is a type of AI that enables computers to understand spoken and written human language. It powers features such as text and speech recognition.
Natural Language Generation
Natural language generation is the branch of NLP that turns structured and unstructured data into natural language. It’s the “writing” aspect of NLP, in which computers use data to generate content and information in a readable format.
Generative Pre-Trained Transformer
Generative pre-trained transformers (GPTs) may sound familiar (think ChatGPT), but what does that actually mean? Breaking it down, generative means the model can produce new content based on the information it was trained on. Pre-trained means it was first trained on an extensive general dataset before being fine-tuned for more specific tasks. Transformers are a specific type of AI architecture that processes an entire sequence of information at once rather than sifting through it piece by piece. Models built on this design, such as ChatGPT, use it to generate coherent responses, answer questions, and engage in conversation by rapidly analyzing language patterns.
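To see a small generative pre-trained transformer in action, the sketch below uses GPT-2, an early, openly available GPT, through the Hugging Face transformers package (assumed to be installed; the first run downloads the model). Its output is much rougher than ChatGPT’s, but the mechanics of predicting a continuation from a prompt are the same.

```python
# Text generation with GPT-2, a small, openly available generative pre-trained
# transformer. Assumes the transformers package is installed; the first run
# downloads the model. Outputs vary from run to run.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The public library of the future will"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```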
AI Bias
It is also important to point out the issue of ethics and bias in AI, as it is imperative to understand AI’s social implications. Bias in AI refers to unfair or skewed outputs resulting from biased training data. Machines are not biased on their own; they pick up bias from the humans who train them and from the information on the internet they learn from.
Conclusions
Whatever else is happening around us, AI will continue to advance and change our lives. By grasping these fundamental AI terms, we can guide our communities in an informed way. Understanding AI tools and integrating them responsibly and ethically into our lives and work will sharpen our grasp of AI’s abilities, limitations, risks, and nuances, better equipping us as information professionals to inform others about the tools that are shaping their lives and work. Furthermore, we can use this knowledge to become advocates for the ethical and unbiased development of AI models in the future. This introduction marks the beginning of a journey toward using AI thoughtfully, keeping libraries and information professionals at the forefront of innovation and ethical practice.