FEATURE
The Modern Prometheus: How AI Is Pushing the Limits of Human Knowledge
by Chad Mairn and Shelbey Rosengarten
More than 200 years ago, a young woman imagined a story about a scientist who created a monstrosity. The story of Victor Frankenstein is familiar to many of us, although our recollections often default to the green, square-headed creature of Hollywood’s design. In the novel, the creature has no green skin, no squared-off skull, no mother, no father, no name. The novel’s title comes from the scientist who created this hulking superhuman—and then, in a fit of shock and terror when it lurched to life, abandoned it.
Mary Shelley’s novel, Frankenstein; or, The Modern Prometheus, is considered the origin of science fiction. In the gap between the emergence of an innovation and the threat it comes to pose, the novel embodies the kinds of conflicts we have come to expect from the genre. Since then, we have spawned a world of fascinating inventions that can massively improve our lives—or wipe them out. One of the most recent is AI.
During class discussions, students wonder whether Victor Frankenstein’s biggest mistake is running from his invention and trying to pretend it doesn’t exist. His poor judgment destroys his life—a cautionary tale, if ever there was one. In light of that narrative arc, it’s probably best to look at what AI is, understand its capacities and limitations, consider some of its best uses for education and information seeking, and imagine what its integration into our lives might bring.
AI Technologies
Machine learning (ML) applications use algorithms, which resemble cooking recipes, to learn and predict behaviors derived from large datasets that are used to train the applications for specific purposes.
Computer vision (CV), a subset of ML, learns from multiple images in a dataset to ultimately give computers the ability to see and classify objects.
Natural language processing (NLP) resembles CV in that it gives computers the ability to understand human language by learning from these datasets.
Third-generation generative pre-trained transformer (GPT-3.5) is currently the language model that powers ChatGPT; it was trained on a dataset of trillions of words, from which it learned to generate text.
AI and related technologies are evolving fast. Learn more at shelrose13.wixsite.com/my-site.
What Is AI?
AI is quickly moving from a science-fiction concept to a reality in which machines have the capability to perform tasks commonly associated with humans. AI is an umbrella term that includes various technologies such as machine learning (ML), computer vision (CV), and natural language processing (NLP), as defined in the sidebar on the right.
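To make the ML idea concrete, here is a minimal sketch in Python; the tiny dataset of daylight hours and temperatures is invented for illustration, and real systems learn from far larger datasets with far more sophisticated models than this nearest-neighbor toy.

# A minimal machine learning sketch: "learn" from labeled examples, then
# predict a label for something new. The data is invented for illustration.

# Training examples: (hours of daylight, average temperature in F) -> season.
training_data = [
    ((9, 35), "winter"), ((10, 40), "winter"),
    ((14, 75), "summer"), ((15, 80), "summer"),
]

def predict(features):
    """Nearest-neighbor prediction: borrow the label of the most similar example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest_features, label = min(training_data, key=lambda item: distance(item[0], features))
    return label

print(predict((13, 70)))  # prints "summer"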
When combined, these processes can be quite powerful. Google Lens, for example, uses CV and NLP coupled with the power of Google’s search engine to see, understand, and augment the real world. For instance, a user can scan a restaurant menu in a foreign language with their smartphone’s camera, and the CV function will recognize the text while NLP translates the menu into their native tongue in real time. Although ML technologies do most of the behind-the-scenes work, developers must spend time configuring the algorithm, and they also need human feedback to make the application more reliable.
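For a rough sense of how such a pipeline fits together, here is a simplified Python sketch. The pytesseract library handles the seeing (optical character recognition), while translate_text is a hypothetical stand-in for an NLP translation model; this is not how Google Lens is actually built, and menu_photo.jpg is an invented file name.

# A simplified CV + NLP pipeline in the spirit of the menu example above.
from PIL import Image      # pip install pillow
import pytesseract         # pip install pytesseract (requires the Tesseract OCR engine)

def translate_text(text, target_language="en"):
    # Hypothetical stand-in: a production system would call a trained
    # translation model or service here.
    return f"[{target_language} translation of: {text}]"

def read_and_translate_menu(image_path):
    menu_image = Image.open(image_path)                       # the photo of the menu
    extracted_text = pytesseract.image_to_string(menu_image)  # CV: image -> text
    return translate_text(extracted_text)                     # NLP: text -> translation

print(read_and_translate_menu("menu_photo.jpg"))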
Science fiction has constructed literary visions in which an all-powerful AI develops a soul and subsequently works to end all of humanity, just as Victor Frankenstein’s creation destroys his life. Our imaginations can sometimes get the better of us, but the truth is usually both more mundane and more complex. AI is commonly divided into four categories. Reactive machines have no memory, so they cannot use past experiences to help with decision making, but they can be programmed to find a strategy to win a game such as chess. Limited memory applications can draw on past experiences; this is what autonomous cars rely on while driving. The next two are the spaces in which science fiction thrives. Theory-of-mind AI, still to come, would develop social intelligence and the capability to understand emotions and other human expressions. And lastly, self-aware AI would be sentient and know that it exists.
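To see what a reactive machine looks like in practice, here is a small Python sketch using tic-tac-toe rather than chess for brevity: the agent searches the board in front of it exhaustively and keeps no memory of past games.

# A reactive machine: a tic-tac-toe agent that chooses its move by searching
# the current board only, with no memory of previous games.

def winner(board):
    """Return 'X', 'O', or None for a 3x3 board stored as a 9-item list."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board, player):
    """Minimax search: return (score, move), where score is +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                           # draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, chosen = -2, None
    for m in moves:
        board[m] = player
        score = -best_move(board, opponent)[0]   # opponent's gain is our loss
        board[m] = None
        if score > best_score:
            best_score, chosen = score, m
    return best_score, chosen

board = ['X', 'O', 'X',
         None, 'O', None,
         None, None, None]
print(best_move(board, 'X'))   # the agent's best reply to this board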
Where does the seemingly ubiquitous ChatGPT fall into all of this? Developed by OpenAI, the popular chatbot saturated the headlines in late 2022; within 5 days, it had 1 million users, a milestone that took Instagram 2 months to reach. At the time of this writing, ChatGPT is considered a limited memory application and relies on human prompts and feedback to work well. A language model essentially learns from text stored in large datasets and then, using ML, predicts the next valid word, and the next, to generate original text. The keyword “original” here has been the focus of much debate that will not be resolved in this article. The third-generation generative pre-trained transformer (GPT-3.5) is currently the language model that powers ChatGPT; it was trained on a dataset of trillions of words, from which it learned to generate text.
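The predict-the-next-word loop can be illustrated with a toy model. The Python sketch below is nothing like GPT internally (real language models use neural networks trained on enormous datasets), but it shows the basic idea of learning which words tend to follow which and then generating text from those patterns.

# A toy next-word predictor: count which word follows which in a tiny corpus,
# then generate text by repeatedly choosing a plausible next word.
import random
from collections import defaultdict

corpus = ("the creature looked at the scientist and the scientist "
          "looked away from the creature").split()

follows = defaultdict(list)                 # word -> words seen after it
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break                           # no known continuation
        word = random.choice(candidates)    # predict a plausible next word
        output.append(word)
    return " ".join(output)

print(generate("the"))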
Imagine in the future that an AI chatbot gets stumped by a challenging problem or question, but instead of providing flawed information, it has been programmed to ask a human for help and clarification. ChatGPT, in a way, resembles reference librarianship, in which librarians provide authoritative sources to help solve a problem or answer a question without judgment. In February 2023, Springshare introduced LibAnswers Chatbot, which will use an API to connect the chatbot to a library’s FAQ database. The chatbot will be able to automatically gather patron information, answer common questions about the library, and route requests to a human operator if it cannot answer them itself (Sundell-Thomas). AI is being used in many industries (e.g., streaming media services, financial institutions, medical research, education, the military), and it will undoubtedly continue to generate controversy as it evolves.
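As a thought experiment only (this is not Springshare’s API, and the questions and answers are invented), the routing idea might look something like the Python sketch below: answer from the FAQ when a close match exists, and hand everything else to a human.

# A hypothetical FAQ chatbot: match the question against known FAQs,
# otherwise route it to a human operator.
import difflib

faq = {
    "what are the library hours": "We are open 8 a.m. to 9 p.m., Monday through Friday.",
    "how do i renew a book": "Sign in to your library account and choose Renew.",
    "how do i reserve a study room": "Use the room booking page on our website.",
}

def answer(question, threshold=0.6):
    """Return the closest FAQ answer, or escalate if nothing matches well enough."""
    cleaned = question.lower().strip("?!. ")
    match = difflib.get_close_matches(cleaned, faq.keys(), n=1, cutoff=threshold)
    if match:
        return faq[match[0]]
    return "I'm not sure. Routing your question to a librarian now."

print(answer("What are the library hours?"))
print(answer("Can you recommend a good science-fiction novel?"))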
AI Applications
Although ChatGPT is getting much attention these days, generative AI models have existed for many years. Generative AI does precisely what one would think: It generates content such as text, imagery, and audio from large datasets. In fact, many of us have been asking these AI applications (such as Apple’s Siri or Amazon’s Alexa) to tell us a joke, to turn lights on and off, to find a recipe, to provide basic information on a topic, and more.
In 2005, a few MIT researchers created SCIgen, a Perl program that generated random computer science research papers, complete with charts, graphs, and citations. It has been estimated that there are more than 30,000 peer-reviewed journals publishing close to 2 million articles per year. This has led to significant scholarly growth in many disciplines, but it has also incentivized some publishers to “spam researchers with weekly calls for papers” that, in many cases, are not peer-reviewed before they are accepted (Conner-Simons). The researchers used SCIgen to write a fake paper titled “Rooter: A Methodology for the Typical Unification of Access Points and Redundancy,” and it was accepted to the World Multiconference on Systemics, Cybernetics and Informatics.
AI image generators such as DALL·E and Midjourney are also getting a lot of attention in the media these days. When a user enters a prompt, the application produces its “art” by starting from random noise and removing it, step by step, guided by the prompt’s keywords, until an image appears. Right now, these types of AI applications can’t do much without human engagement and reflective feedback. The word “art” appears in quotation marks because there is a growing argument that imagery created by a generative AI application should not be considered art. Again, this issue will not be solved in this article; in fact, it will take years of humans using these tools to truly understand the consequences. One thing is certain: These applications will be integrated into most technologies soon. In February 2023, Microsoft started to use the technology behind ChatGPT to power its search engine, Bing, and it initiated an AI gold rush among the big technology companies. These generative AI applications may come under much scrutiny in higher education; some factions may want to eliminate them without truly understanding their potential use in scholarship. But similar to what happened to Victor Frankenstein as he ran away from his creation, denying AI’s existence may cause more problems. As a result, AI literacy should be included in the curriculum to prepare students for a future in which mundane tasks may be done by AI.
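Conceptually, the denoising loop looks something like the toy Python sketch below. The three helper functions are simplified stand-ins for the trained neural networks inside a real diffusion model, so the script runs but produces an array of numbers rather than an actual picture.

# A conceptual sketch of the denoising loop behind diffusion-style image
# generators. The helpers are toy stand-ins, not real trained models.
import numpy as np

rng = np.random.default_rng(0)

def encode_prompt(prompt):
    # Toy stand-in: a real system maps the prompt's keywords to a learned embedding.
    return rng.normal(size=8)

def predict_noise(image, step, guidance):
    # Toy stand-in: a real system uses a trained network conditioned on the prompt.
    return image * 0.1

def remove_noise(image, predicted_noise, step):
    # Subtract the predicted noise, gradually revealing structure.
    return image - predicted_noise

def generate_image(prompt, steps=50, size=(64, 64, 3)):
    guidance = encode_prompt(prompt)   # the prompt steers every denoising step
    image = rng.normal(size=size)      # start from pure random noise
    for step in reversed(range(steps)):
        predicted_noise = predict_noise(image, step, guidance)
        image = remove_noise(image, predicted_noise, step)
    return image

picture = generate_image("a castle at dusk, oil painting")
print(picture.shape)   # (64, 64, 3): the dimensions of the generated "image"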
Typical Concerns About AI
Since it exploded on the scene in late 2022, ChatGPT has been the subject of many think pieces. One might say that it has generated headlines without actually generating them itself, and the tenor of these articles ranges from sunny optimism to fear and despair. At the core of this is often a singularity-oriented sentiment that resonates in the question, “Will AI writing models replace me?”
While AI writer bots may be novel, this reaction is nothing new. It’s still worth pondering, and we can look at analogues from the past that involve splashy technology. Cameras didn’t replace painting, but pushed artists to experiment with new techniques and modes of self-expression. Calculators didn’t replace our need to master basic computation. In fact, their use pushed the teaching of math in new directions, and being adept at computation is still a necessary skill in work and in life outside of work. Centuries ago, the printing press eventually led to widespread literacy and publication, and, more recently, the invention of e-readers has not made print books any less popular. If anything, the introduction of ebooks has caused us to re-evaluate what we love about books, how to design them, and when we prefer to read them—often as a break from too much screen time. Many musicians feared the synthesizer when it first came out, thinking that it would replace them; instead, it provided yet another composition tool (i.e., another instrument) for writing and performing music. Without synthesizers and other music technology, there would be no innovative electronic bands, such as Kraftwerk, New Order, and Depeche Mode.
The same is true with computer coding. In the early days, programmers had to physically build logic with wires, and it took forever. Fast-forward many decades, and programmers rarely build anything from scratch. Instead, they use tools such as GitHub Copilot, which acts as an assistant to help speed up the coding process and, ultimately, make life better for people. ChatGPT can help write code as well and has recommended modules, functions, and other code that programmers may not have known about originally. Therefore, the AI, in a sense, is teaching the human. Visit tinyurl.com/chatgpt-pacman to see a video with an explanation of ChatGPT coding a PAC-MAN game using Python.
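As an invented before-and-after (not taken from the video), here is the kind of suggestion such an assistant might make: pointing a programmer to a standard-library tool they did not know existed.

# Hand-rolled version a programmer might write first:
def word_counts_manual(words):
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

# Version an AI assistant might suggest instead, using collections.Counter:
from collections import Counter

def word_counts_suggested(words):
    return Counter(words)

words = "it is alive it is alive".split()
print(word_counts_manual(words))      # {'it': 2, 'is': 2, 'alive': 2}
print(word_counts_suggested(words))   # Counter({'it': 2, 'is': 2, 'alive': 2})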
All of this is not without caveats. Generative AI applications such as ChatGPT may appear to take on voices, personalities, and lives of their own. It’s startling to see a chatbot respond as though a person were behind the screen, casually regurgitating our own language back at us. As previously mentioned, these AI applications are large language models (LLMs) trained on datasets from all across the internet. Our ability to self-publish has made it easier to spawn and spread all kinds of misinformation and, of course, terrible prose. In addition, other AI models have already shown bias. ChatGPT is a responsive tool that doesn’t initiate the conversation. Just as with other digital tools, the input determines the output.
AI writing models are changing quickly. They will continue to become more sophisticated and versatile, and our engagement with them will determine how they evolve. They are being used across a variety of sectors. That is why it’s important not to run from our own creations or deny their relevance.
Potential Benefits
Can we learn about ourselves from using AI? What can it show us that we don’t know, individually or collectively? We will find out by digging in. Several generative AI platforms are free and just require signing up for an account. Others require a subscription, which may become more common as more platforms emerge; for now, there’s no reason not to log in and play around. Asking questions of ChatGPT may lead to better results than a Google search, which returns so many ads and sponsored results that its central usefulness is undermined.
In addition, what it generates may show us more than what we set out to find. Asking ChatGPT to transmit information is different from just Googling an answer and getting more than a million results. Its response, often a few paragraphs long, may not necessarily be accurate, and AI writer bots have been known to make up answers. Therefore, prompting ChatGPT may cause us, in turn, to cross-check facts. Users will also need to think carefully about how they ask or phrase their queries. This may lead us to put more thought into our own writing and communication. The same goes for using image-generation AI applications such as Midjourney and DALL·E. Highly specific prompts lead to better image generation. Once again, the quality of the results depends on the user.
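As a small, invented illustration of the difference specificity makes, compare these two prompts for an image generator; the wording is ours, not drawn from any particular tool’s documentation.

# Two invented prompts for the same request. The specific version gives an
# image generator far more to work with than the vague one.
vague_prompt = "a library"

specific_prompt = (
    "a sunlit public library reading room, wide-angle view, oak shelves, "
    "green banker's lamps, two students at a long wooden table, watercolor style"
)

for prompt in (vague_prompt, specific_prompt):
    print(f"{len(prompt.split()):2d} words: {prompt}")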
Some educators are adamantly against AI writing models, and others are cautious but curious; this is consistent with other disruptions. However, plenty of instructors are embracing AI as an opportunity to revisit the relevance of our content and assignments, as well as what we want students to learn and apply. What’s necessary? What may be discarded? That said, it requires a lot of effort for professionals who are already stretched rather thin. Small moves are the wisest. One of the best ways to wade into using ChatGPT may be to generate examples or summaries and then discuss or critique them in class.
Since ChatGPT and other easy-to-use generative AI applications have come into focus, many people’s fears about AI have grown exponentially. As Victor Frankenstein says, “Nothing is so painful to the human mind as a great and sudden change.” This fear of change resonates because many people are concerned about the rapid development of AI and the potential consequences that it may bring. Society should be cautious not to let generative AI veer into monstrous territory. As professionals, we need to continue to encourage others to seek knowledge and information, regardless of how it is created, and to think and reflect critically on what we find.