Vol. 44 No. 8 — October 2024
FEATURE

An Evaluation of Cutting-Edge AI Research Tools Using the REACT Framework
by Susan Gardner Archambault and José J. Rincón


These tools are revolutionizing how researchers conduct comprehensive literature reviews.
As AI continues to revolutionize academic research, librarians must stay informed about the latest AI-powered tools and their potential applications. This article explores a range of cutting-edge AI research tools, evaluating their key features, benefits, and drawbacks using the REACT framework (Relevancy, Ease of Use, Assessing DEIA [diversity, equity, inclusion, and accessibility], Currency, Transparency & Accuracy). We focus on two categories of tools: citation-based literature mapping tools and text-extraction tools for literature reviews. The citation mapping tools are Litmaps, Connected Papers, and ResearchRabbit, which help researchers discover and visualize related academic literature. The text-extraction tools—Elicit, scite, and Consensus—assist in finding, summarizing, and analyzing relevant papers.

Background

To evaluate these six tools, we tested the paid subscription versions across a range of topics in the social sciences, sciences, and humanities. Information was gathered from multiple sources to gain a comprehensive understanding of each tool’s capabilities and limitations. These sources included official tool websites and documentation (see the Resources section), product FAQs, customer support interactions, and Aaron Tay’s blog on librarianship. The REACT framework offers a systematic method for assessing these AI tools, and each criterion is rated on a scale from 1 to 4, with higher scores indicating better performance:

  • Relevancy—How well the tool meets specific research needs and goals, from poor relevancy with unrelated results (1) to highly relevant results consistently matching the research topic (4)
  • Ease of use—Usability and user experience, ranging from complex and challenging to navigate (1) to an intuitive interface with minimal learning curve (4)
  • Assessing DEIA—Commitment to DEIA, from low-commitment with minimal accessibility features and high bias (1) to exceptional commitment with full accessibility, multilingual support, and equitable access for all users (4)
  • Currency—Use of the latest AI advancements and datasets, from rarely updated (1) to real-time updates reflecting the latest research (4)
  • Transparency & Accuracy—Clarity and precision of the tool’s processes and outputs, from no transparency and low accuracy (1) to high transparency with clear explanations of decision-making processes and consistently accurate information (4)

To ensure a balanced assessment, the final REACT score for each tool represents the average of our individual scores. For the complete REACT Framework, see libguides.lmu.edu/GAIL24/REACTFramework. By understanding these tools and their implications, librarians can effectively support researchers and contribute to the evolving landscape of academic research. The first section of this article will focus on citation-based literature mapping tools, while the second will focus on text-extraction tools for literature reviews. We will provide detailed analyses of each tool, followed by a discussion of ethical considerations and a conclusion summarizing key findings.
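
As a worked illustration of this averaging, the short Python sketch below computes per-criterion and overall scores from two reviewers' ratings. The scores shown are invented for the example; any spreadsheet would do the same arithmetic.

    # Minimal sketch: averaging two reviewers' REACT ratings (1-4 scale).
    # Criteria names come from the framework; the sample scores are made up.
    CRITERIA = ["Relevancy", "Ease of Use", "Assessing DEIA", "Currency",
                "Transparency & Accuracy"]

    reviewer_a = {"Relevancy": 3, "Ease of Use": 2, "Assessing DEIA": 3,
                  "Currency": 3, "Transparency & Accuracy": 4}
    reviewer_b = {"Relevancy": 4, "Ease of Use": 3, "Assessing DEIA": 2,
                  "Currency": 3, "Transparency & Accuracy": 3}

    per_criterion = {c: (reviewer_a[c] + reviewer_b[c]) / 2 for c in CRITERIA}
    overall = sum(per_criterion.values()) / len(per_criterion)

    for criterion, score in per_criterion.items():
        print(f"{criterion}: {score:.1f}")
    print(f"Overall REACT score: {overall:.2f}")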

Citation-Based Literature Mapping

Recent advancements in open scholarly metadata and academic knowledge graphs have led to the development of citation-based literature mapping tools. These AI-powered platforms are changing how researchers discover, explore, and visualize academic literature. A knowledge graph in this context represents academic information as a network of interconnected entities (e.g., papers, authors, concepts) and their relationships, enabling sophisticated querying and analysis.

These tools leverage vast databases of academic citations and metadata, typically relying on large, open scholarly databases and services such as OpenAlex (a free, open source index of scholarly works for the scientific community), Semantic Scholar (an AI-powered search engine for academic papers using machine learning to identify connections between works), and Crossref (a service that provides DOIs for academic content, enabling persistent links to research outputs). Common features of these tools include web-based interfaces connected to open citation databases and knowledge graphs, starting points based on user-selected seed papers, citation-based algorithms for recommending relevant papers, interactive visualizations of connections between papers, and iterative processes for refining and expanding literature maps.
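
To make the data layer concrete, the sketch below shows one way a script might pull references and citing works for a seed paper from the public OpenAlex API. The endpoint, the "referenced_works" field, and the "cites:" filter are as documented at the time of writing; treat them as assumptions and check the current API documentation before relying on them.

    # Sketch: fetching citation links for a seed paper from OpenAlex.
    # Endpoint and field names are assumptions based on the public docs.
    import requests

    SEED = "https://api.openalex.org/works/doi:10.1038/nature14539"  # example DOI

    work = requests.get(SEED, timeout=30).json()
    backward = work.get("referenced_works", [])   # papers the seed paper cites

    openalex_id = work["id"].rsplit("/", 1)[-1]   # e.g., "W..." identifier
    resp = requests.get(
        "https://api.openalex.org/works",
        params={"filter": f"cites:{openalex_id}", "per-page": 25},
        timeout=30,
    )
    forward = [w["id"] for w in resp.json().get("results", [])]  # papers citing the seed

    print(f"{len(backward)} references and {len(forward)} citing works retrieved")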

By using these data sources and algorithms, the tools can offer a more interconnected view of a field of study. They can help identify influential works, emerging trends, and unexpected connections that might not be apparent through traditional search methods. In this section, we’ll examine three citation-based literature mapping tools: Litmaps, Connected Papers, and ResearchRabbit. Each offers distinct features to help researchers navigate academic literature, uncover relevant works, and gain deeper insights into their fields.

Litmaps

Litmaps is a visual literature discovery tool that enables researchers to explore and uncover relevant articles through citation connections. Powered by data from OpenAlex, Semantic Scholar, and Crossref, Litmaps provides access to more than 270 million academic papers, focusing on items with a DOI. Litmaps explores connections up to two degrees of separation in both forward (papers that cite a seed paper) and backward (papers cited by a seed paper) directions. By repeating this process for newly discovered papers, it uncovers both directly and indirectly related works. This approach helps researchers identify seminal works, emerging trends, and key influencers while tracing the evolution of ideas within a research area.
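
The two-degree expansion described above can be pictured as a short breadth-first walk over a citation graph. The sketch below uses a toy in-memory graph, not Litmaps' actual implementation, and simply collects everything within two hops of a seed paper in both directions.

    # Toy sketch of two-degree citation expansion (not Litmaps' real algorithm).
    from collections import deque

    # references[p] = papers p cites (backward); citations[p] = papers citing p (forward)
    references = {"seed": ["A", "B"], "A": ["C"], "B": [], "C": [], "D": [], "E": []}
    citations  = {"seed": ["D"], "A": [], "B": ["E"], "C": [], "D": [], "E": []}

    def expand(seed, max_hops=2):
        """Return all papers reachable from the seed within max_hops hops,
        following both citation directions."""
        seen, queue = {seed}, deque([(seed, 0)])
        while queue:
            paper, hops = queue.popleft()
            if hops == max_hops:
                continue
            for neighbor in references.get(paper, []) + citations.get(paper, []):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append((neighbor, hops + 1))
        return seen - {seed}

    print(expand("seed"))   # {'A', 'B', 'C', 'D', 'E'}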

A key feature of Litmaps is its three search algorithms: top shared citations and references, common authorship patterns to help identify research teams and collaborative networks, and semantic similarity search by title and abstract using natural language processing (NLP). Other features include customizable workspaces, a color-coded tagging system, integration with reference management tools such as Zotero, export options for maps and citations, monitoring of new papers in a research area, and the option to create a paid team account.

See Table 1 for a summary of how Litmaps scored on the REACT framework. In terms of relevancy, Litmaps generally provides good results, although users may occasionally encounter off-topic suggestions. Like all of the tools in this category, its visual approach benefits those who prefer spatial representations of information. However, users should be prepared to invest time in learning the tool’s various features to make the most of its capabilities; the interface may be challenging for novice users.

Regarding DEIA, Litmaps offers Google Translate integration for more than 130 languages. However, its visual nature may challenge users with visual impairments. The freemium pricing structure includes a free version with basic features and two maps capped at 100 articles per map, while paid options (Education Pro at $8–$12.50 a month and Commercial Pro at $40–$50 a month) offer more advanced features and unlimited maps and articles. This tiered structure may create equity barriers for some users.

Litmaps stays relatively current with new findings, allowing users to filter results by date, although there may be a slight lag in the availability of the most recent articles. In terms of transparency and accuracy, Litmaps provides a clear privacy policy that outlines data collection, usage, sharing practices, and data rights, and it offers responsive customer support.

Connected Papers

Connected Papers is a visual exploration tool for academic literature that generates a force-directed graph to cluster related papers. Powered by the Semantic Scholar corpus, it analyzes approximately 50,000 papers and selects the few dozen with the strongest connections to the seed paper, creating intuitive visualizations to help researchers discover relevant works and understand their field’s structure. Connected Papers uses a similarity metric that combines co-citations (where two documents are cited together by other documents) and bibliographic coupling (where two works reference a common third work in their bibliographies). This approach allows users to identify both seminal works (through the Prior Works view) and recent developments (via the Derivative Works view) related to the seed paper. By visualizing these connections, researchers can discover papers that are conceptually related, even if they don’t directly cite each other.
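
The similarity measure described above can be approximated with two simple counts. The sketch below is an illustrative approximation, not Connected Papers' actual metric or weighting: it counts shared references (bibliographic coupling) and shared citing works (co-citation) between two papers and combines them.

    # Illustrative approximation of co-citation + bibliographic coupling similarity.
    # refs_of[p] is the set of works paper p cites; citers_of[p] is the set of
    # works that cite p. The data and the equal weighting are assumptions.
    refs_of = {
        "paper_x": {"r1", "r2", "r3"},
        "paper_y": {"r2", "r3", "r4"},
    }
    citers_of = {
        "paper_x": {"c1", "c2"},
        "paper_y": {"c2", "c3"},
    }

    def similarity(a, b):
        coupling = len(refs_of[a] & refs_of[b])         # shared references
        co_citation = len(citers_of[a] & citers_of[b])  # cited together by the same works
        return coupling + co_citation                   # toy combination; real weights differ

    print(similarity("paper_x", "paper_y"))  # 2 shared refs + 1 shared citer = 3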

Key features of Connected Papers include force-directed graph visualization of paper relationships with line length showing the degree of connectedness; a similarity metric; Prior Works and Derivative Works views; a list view with similarity percentages; mobile app support; filtering options by publication year, keyword, and OA availability; a multi-origin graph option for refining searches; and integration with reference management tools such as Zotero. See Table 2 for a summary of how Connected Papers scored on the REACT framework. In terms of relevancy, Connected Papers generally provides highly relevant results, effectively finding conceptually related papers that might not be apparent through traditional keyword searches or citation tracing. The tool’s simple and intuitive interface contributes to its ease of use, making it accessible even to novice researchers. However, this simplicity also means that it offers fewer customization options compared to some of its competitors.

Regarding DEIA, Connected Papers is currently limited to English, which could restrict its inclusivity for non-English-speaking researchers. Its primarily visual interface may present challenges for users with visual impairments, with subtle color contrasts that may not be ideal for users who are color-blind. The tool offers a free version with all features and five graphs per month, with paid options for unlimited access ($6 a month for academic and $20 a month for business accounts). It is the only tool in this category that works on a mobile device.

Connected Papers demonstrates good currency, regularly updating its database and allowing for date filtering, although there may be a slight lag in incorporating the very latest research. In terms of transparency and accuracy, it provides clear information about its methodology, data sources, and data-handling practices.

ResearchRabbit

ResearchRabbit is a comprehensive research platform that discovers and visualizes relevant literature from hundreds of millions of academic articles based on user-created collections of papers. Powered by a combination of data sources, including OpenAlex and Semantic Scholar, it employs custom recommendation engines and borrows search algorithms from PubMed for medical topics or Semantic Scholar for other subjects. The platform is designed to support the workflow of unstructured searching while providing a path back to the original seed publication, mimicking the rabbit hole of research. The endlessly expanding panel setup allows for exploration in a single interface without getting lost.

One key feature is tailored paper suggestions based on the user’s collection, including recommendations for earlier work (influential papers that preceded and influenced current research), later work (recent papers that cite or build upon the user’s collection), and similar work (papers covering related topics or using similar methodologies). Additional features include Rabbit Radar email alerts for newly published relevant papers; collaboration features, including shared collections; integration with reference management tools such as Zotero; discovery of linked content mentioning your papers (websites, Wikipedia, patents); organization of collections into categories; export options; and the ability to add notes to papers in your collections.

See Table 3 for a summary of how ResearchRabbit scored on the REACT framework. In terms of relevancy, ResearchRabbit generally provides highly relevant results across various disciplines, effectively identifying papers closely related to the user’s research interests. However, it may struggle with very recent or niche topics, as is common with many citation-based systems. The platform’s ease of use is somewhat mixed. While it offers a workflow that aligns well with how experienced researchers conduct literature reviews, the tool has a steeper learning curve compared to simpler alternatives. The multitude of features can be overwhelming for new users, but they provide valuable flexibility for those who invest time in learning the system.

Regarding DEIA, ResearchRabbit currently only supports English, which may limit its inclusivity for non-English-speaking researchers. Its visual nature may pose challenges for users with visual impairments, especially due to its use of blue and green coloring, which can be problematic for users who are color-blind. However, ResearchRabbit demonstrates a strong commitment to equity by being “free forever for researchers” and is the only tool that supports collaboration at no cost.

ResearchRabbit provides regular updates, staying relatively current with new findings, although it may lag on emerging topics. In terms of transparency and accuracy, ResearchRabbit offers a detailed privacy policy and some insight into its data sources and recommendation algorithms, but the specific workings of the platform remain somewhat opaque.

Text-Extraction Tools for Literature Reviews

Recent advancements in NLP and machine learning have led to the development of AI text-extraction tools for literature reviews. These tools are revolutionizing how researchers conduct comprehensive literature reviews by automating the extraction of relevant information from large datasets of academic texts. AI text-extraction tools can quickly identify key themes, extract pertinent data, and synthesize findings from multiple sources. In the context of literature reviews, they can extract citations, datasets, summaries, abstracts, research findings, methodologies, and other critical elements from academic papers. While this automated process can save significant time, it’s important to note that these tools are not infallible and may sometimes produce inaccurate results. Therefore, human oversight and verification remain crucial to ensure the quality and accuracy of the review.

These tools typically rely on advanced NLP algorithms and machine learning models to process and analyze academic texts. Common features include automated extraction of key information from academic papers, summarization of research findings and methodologies, identification of common themes and conflicting viewpoints, integration with academic databases and search engines, and customizable extraction parameters to suit different research needs.
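
As a very rough illustration of what structured extraction looks like in practice, the sketch below pulls a few fields of interest out of an abstract with simple pattern matching. The real tools rely on NLP and large language models rather than regular expressions, and the field names and patterns here are arbitrary.

    # Toy illustration of structured extraction from an abstract.
    # Real tools use NLP/LLMs; this keyword/regex approach is only a stand-in.
    import re

    abstract = ("We surveyed 212 undergraduate students using a randomized "
                "controlled design. Results indicate that library instruction "
                "improved citation accuracy by 18%.")

    extraction = {"sample_size": None, "methodology": None, "key_finding": None}

    m = re.search(r"(\d+)\s+(?:\w+\s+)?(?:participants|students|patients)", abstract)
    if m:
        extraction["sample_size"] = int(m.group(1))
    if "randomized controlled" in abstract.lower():
        extraction["methodology"] = "randomized controlled design"
    m = re.search(r"Results indicate that (.+?\.)", abstract)
    if m:
        extraction["key_finding"] = m.group(1)

    print(extraction)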

By using these tools, researchers can focus on higher-level analysis and synthesis, ultimately producing more robust and insightful literature reviews. In this section, we’ll examine three text-extraction tools for literature reviews: Elicit, scite, and Consensus. Each offers distinct features to help researchers find, summarize, and analyze relevant papers, potentially boosting productivity and the quality of research outcomes.

Elicit

Elicit, developed by Ought, is an AI research assistant that transforms interactions with academic literature. Powered by the Semantic Scholar corpus and using machine learning models such as GPT, Elicit helps find relevant papers, summarize findings, and extract key information. A standout feature is its ability to prioritize and present the most relevant literature based on a user’s specific research questions and interests. Users can locate papers, extract data from PDFs, and generate concept lists while receiving detailed source information, including SCImago journal rankings, citation counts, and DOI links (Kung 2023). The Unpaywall plugin offers open access to PDFs, enhancing research accessibility.

Key features include filtering papers using 30-plus predefined criteria, such as methodologies and limitations; organizing research with “notebooks” to add steps, log every query, and collaborate efficiently; citation exploration, including a trail search feature for exploring references forward and backward; refining searches by publication period and modifying prompts; combining results from multiple queries into a single table for systematic analysis; citation exploration with the “Show more like these” feature; integration with reference management tools; automatic saving of searches for future use; interactive chat engagement with papers; text extraction using high-accuracy mode; and support for specific study types, such as reviews and randomized controlled trials.

See Table 4 for a summary of how Elicit scored on the REACT framework. Elicit excels in providing relevant insights for literature reviews across various fields but may lack comprehensive coverage in some specialized areas. Its user-friendly interface offers guided workflows, customization options, and integration with other research tools, but there may be a slight learning curve for some users.

Regarding DEIA, Elicit offers some language support and user-friendly features but lacks comprehensive accessibility options. Its tiered pricing model includes Basic (Free), which provides limited features, with four-paper summaries and 10 PDF extractions monthly; Plus ($12 a month) with eight-paper summaries, 25 PDF extractions monthly, and additional features; and Pro ($49 a month) with 100 PDF extractions monthly and advanced features for systematic reviews.

While the free tier provides access to core features, the paid tiers offer more comprehensive capabilities, potentially impacting equitable access for some users. Elicit remains current, with frequent software updates, regular database synchronization, and algorithm enhancements. However, it provides limited transparency into its internal processes, making it difficult for users to understand how results are generated and ranked. While Elicit generally offers accurate results, users should verify findings by reading the original articles or consulting additional sources to ensure reliability. This practice helps mitigate potential errors in the AI’s extraction and interpretation process.

scite

scite is a smart citation index that provides context and classification for scientific citations. It categorizes citations as supporting, contrasting, or merely mentioning the cited work, allowing users to see how papers have been cited within their original context. The platform contains more than 1.2 billion classified citation statements from 187 million full-text articles across various disciplines.

Using the GROBID machine learning tool and deep learning models, scite extracts and classifies citation statements (Nicholson et al. 2021). This enables researchers to understand an article’s reception and search for specific facts, methods, datasets, and claims within citation contexts. scite sources its data through indexing agreements with academic publishers such as Wiley and Cambridge University Press, as well as open sources such as Unpaywall, PubMed, and OA journals.
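
To give a feel for the classification task, the toy heuristic below sorts a citation statement into supporting, contrasting, or mentioning. scite's actual classifier is a deep learning model trained on full-text citation contexts, not keyword rules like these; the cue lists are invented for illustration.

    # Toy heuristic for citation-statement classification (not scite's model).
    SUPPORTING = ("consistent with", "confirms", "in agreement with", "replicates")
    CONTRASTING = ("in contrast to", "contradicts", "fails to replicate", "unlike")

    def classify(statement: str) -> str:
        text = statement.lower()
        if any(cue in text for cue in SUPPORTING):
            return "supporting"
        if any(cue in text for cue in CONTRASTING):
            return "contrasting"
        return "mentioning"   # default: the citation merely mentions the work

    print(classify("Our results are consistent with Smith et al. (2020)."))    # supporting
    print(classify("In contrast to Smith et al. (2020), we found no effect.")) # contrasting
    print(classify("Smith et al. (2020) studied a related population."))       # mentioning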

Key scite features include smart citations showing context and classification; search filters for classification, section, and year; user feedback on classification accuracy; scite reference check for screening manuscript references; Chrome extension and integration with reference management tools; custom dashboards for tracking paper groups; an AI research assistant powered by GPT-3.5 and a proprietary database; and a citation visualization map. See Table 5 for a summary of how scite scored on the REACT framework. scite delivers moderately relevant results, with its main strength lying in providing citation context. Regarding ease of use, scite requires some familiarity with citation concepts and research terminology. Its interface may present a learning curve for novice users, but the AI assistant enhances usability through plain-language queries.

Regarding DEIA, scite is partially conformant with Web Content Accessibility Guidelines (WCAG) 2.1 level AA (a mid-level industry standard for web accessibility) and can process requests in multiple languages, covering topics across science and the humanities. While it offers free resources such as the Chrome extension and a Zotero plugin, most features require a paid subscription. After a 7-day free trial, individual accounts cost $12–$20 a month, with team and institutional accounts also available.

scite demonstrates good currency, with results that include very recent publications. It provides detailed explanations of its citation-extraction and -classification process, although it doesn’t fully explain the AI’s decision making for individual classifications. The platform shows good accuracy, particularly for the most common “mentioning” class, and allows users to flag incorrect classifications for review. Also, it has access to more full text than some of its competitors, so it relies less on metadata alone.

Consensus

Consensus leverages Semantic Scholar’s extensive database of more than 200 million papers, providing high-quality summaries and insights across various scientific domains. By integrating datasets such as CORE and SciScore and using custom language models, Consensus delivers precise and relevant information. Its unique Consensus Meter measures agreement among studies, making it effective for summarizing literature and analyzing research cohesion. Consensus Copilot enhances research by guiding AI searches and creating interactive content. It assists users with complex topics, generating drafts and providing tailored insights through structured assistance and real-time updates.
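
The idea behind the Consensus Meter can be illustrated with a simple tally: each relevant study is labeled as answering the research question yes, no, or possibly, and the shares are reported. The sketch below only illustrates that aggregation; the labels and counts are invented, and Consensus uses its own models and thresholds.

    # Illustrative tally in the spirit of the Consensus Meter (not its actual method).
    from collections import Counter

    # One label per retrieved study answering the research question.
    study_labels = ["yes", "yes", "possibly", "no", "yes", "possibly", "yes"]

    counts = Counter(study_labels)
    total = len(study_labels)

    for label in ("yes", "possibly", "no"):
        share = 100 * counts.get(label, 0) / total
        print(f"{label:>8}: {share:.0f}% of studies")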

Key features of Consensus include filter application for high-quality research from top journals and highly cited papers, guided and summary modes (you can switch between Copilot and Synthesis for refined searches and concise summaries), study details access (e.g., population, sample size, methods, outcomes), citation generation in multiple formats, integration with reference management tools, detailed data export for easy analysis, efficient saving and organization of papers and searches, agreement levels showing consensus and controversy levels among studies, query-accuracy improvement through keyword context understanding, and visual interpretation with charts and graphs for research trends.

See Table 6 for a summary of how Consensus scored on the REACT framework. It generally provides relevant results, drawing from Semantic Scholar’s database and integrating datasets such as CORE and SciScore for precise information. Its fine-tuned language models and Consensus Meter offer quality summaries and measure study agreement, although consistent relevance across queries could be improved.

The user-friendly interface and features such as Consensus Copilot make navigating complex research tasks easier. Although familiarity with AI and research concepts is helpful, the tool’s structured guidance and real-time updates simplify the process for both novice and experienced researchers. The intelligent design enhances the overall user experience, although refining the learning curve could further improve usability.

Regarding DEIA, Consensus shows a strong commitment in its features. The tool supports researchers with disabilities through compatibility with assistive technologies, offers content in multiple languages, and maintains minimal bias by drawing on a diverse range of authors. Many features are available for free, although advanced functionalities require payment.

In the Currency category, Consensus demonstrates its commitment to providing up-to-date research through regular updates and database integrations. However, variability in update frequency from source databases can occasionally affect its ability to deliver the most recent information in all fields. In terms of transparency and accuracy, Consensus provides moderate insight into its data processing and decision making through features such as Consensus Meter, which shows study agreement. While some aspects of its algorithms remain unclear and occasional minor errors may occur, these do not significantly affect the overall quality and reliability of its results.

Ethical Considerations

As AI tools become prevalent in academic research, ethical implications beyond the REACT framework require attention:

Digital divide—Freemium models may exacerbate inequalities. Librarians should advocate for institutional subscriptions to ensure equitable access.

Critical thinking and overreliance—AI summaries may reduce deep reading and limit serendipitous discoveries. Librarians can integrate AI literacy into research methods courses, emphasizing AI as an aid, not a replacement for human judgment.

Stability—Many AI tools are from startups, raising concerns about long-term data preservation. Librarians need to regularly assess and update resources to include reliable AI tools and best practices.

Environmental impact—AI systems have a significant environmental footprint (Crawford 2021). Librarians can promote awareness and encourage the use of energy-efficient AI tools.

Data privacy and algorithmic bias—Consider how tools handle sensitive data and potentially perpetuate biases. Librarians should offer workshops on responsible AI use and encourage diverse sources.

Accuracy and transparency—AI misinterpretation could propagate misinformation, and opaque algorithms may affect research reproducibility. Librarians should emphasize verifying AI-generated content and advocate for transparent AI systems. They can highlight limitations in retrieving current articles or accessing paywalled full texts.

Ethical use and intellectual property—There is a need to address proper attribution and citation practices for AI-generated content. Librarians can raise awareness of the implications on academic integrity and originality in research.

Librarians play a crucial role in developing AI literacy among researchers. By addressing these considerations, they can ensure that AI research tools enhance, rather than compromise, academic research integrity, maximizing benefits while mitigating risks.

Conclusion

The AI research tools evaluated in this article offer diverse capabilities to enhance academic research processes. This article provides a comparative overview of these tools, highlighting their key features, benefits, and drawbacks (see Table 7). As these technologies continue to evolve, researchers and librarians should remain informed about their potential applications and ethical considerations. By leveraging these tools judiciously, the academic community can improve research efficiency while maintaining the integrity and quality of scholarly work.

Table 1: REACT Framework evaluation for Litmaps
Table 2: REACT Framework evaluation for Connected Papers
Table 3: REACT Framework evaluation for ResearchRabbit
Table 4: REACT Framework evaluation for Elicit
Table 5: REACT Framework evaluation for scite
Table 6: REACT Framework evaluation for Consensus
Table 7: Overall comparison of AI research tools for literature reviews

Resources

Archambault, S.G., and Rincón, J. (2024). “Accelerating Academic Research With AI—GAIL Conference ’24.” libguides.lmu.edu/GAIL24.

Connected Papers. (2024). Connected Papers (July 31 version) [AI research tool]. connectedpapers.com.

Consensus NLP. (2024). Consensus (July 31 version) [AI research tool]. consensus.app/search.

Crawford, K. (2021). Atlas of AI. Yale University Press.

Kung, J.Y. (2023). “Elicit (product review).” Journal of the Canadian Health Libraries Association, 44(1), 15–18.

Litmaps. (2024). Litmaps (July 31 version) [AI research tool]. litmaps.com.

Nicholson, J.M., Mordaunt, M., Lopez, P., Uppala, A., Rosati, D., Rodrigues, N.P., Grabitz, P., and Rife, S.C. (2021). “scite: A Smart Citation Index That Displays the Context of Citations and Classifies Their Intent Using Deep Learning.” Quantitative Science Studies, 2(3), 882–898.

Ought. (2024). Elicit (July 31 version) [AI research tool]. elicit.com.

ResearchRabbit. (2024). ResearchRabbit (July 31 version) [AI research tool]. researchrabbit.ai.

scite. (2024). scite (July 31 version) [AI research tool]. scite.ai.

Tay, A. (2024). Aaron Tay’s Musings About Librarianship. musingsaboutlibrarianship.blogspot.com.


Susan Gardner Archambault (susan.archambault@lmu.edu) is the head of the reference and instruction department at Loyola Marymount University in Los Angeles, Calif., as well as a guest faculty member at the University of Washington’s iSchool. Her recent research focuses on algorithmic literacy and AI literacy as subsets of information literacy, aiming to empower students to critically evaluate algorithmic systems and their societal impacts.

José J. Rincón (jose.rincon@lmu.edu) is the reference and instruction librarian for business at Loyola Marymount University in Los Angeles, Calif. He holds an M.L.S. and an M.B.A. and serves as the liaison librarian for the College of Business Administration. Outside of his professional work, Rincón has a keen interest in exploring innovative AI technologies.
