
Digital Scholarship

This guide provides a jumping-off point for key digital scholarship concepts, as well as pointing to resources both on and off campus for your digital scholarship endeavors.

What is A.I.?

As artificial intelligence rapidly becomes more integrated into our lives, it is imperative to have a clear, informed understanding of the technology before diving in. Therefore, it is important to define what A.I. is, or better yet, what constitutes A.I.

Venn diagram showing the relationship between artificial intelligence (the umbrella term) and its subsets: machine learning, neural networks, and generative A.I.

The Original Benny C, CC BY-SA 4.0, via Wikimedia Commons

Artificial Intelligence

Artificial intelligence, broadly, can be defined as a system that performs tasks simulating human cognition (problem solving, decision making, etc.). This is accomplished by providing the system with data it can train on, learn from, and use to make decisions. These systems receive feedback either from human intervention (supervised learning), from the system itself (unsupervised learning), or from a mix of the two (semi-supervised learning).
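
To make these learning paradigms concrete, here is a minimal, purely illustrative Python sketch using scikit-learn (a library chosen for illustration, not one covered in this guide); the toy data and model choices are assumptions made only for the example.

```python
# Purely illustrative sketch: supervised vs. unsupervised learning with scikit-learn.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Toy feature vectors (made-up numbers purely for illustration).
emails = [[0, 1], [1, 0], [1, 1], [0, 0]]

# Supervised learning: a human supplies the correct labels as feedback.
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (hypothetical labels)
classifier = LogisticRegression().fit(emails, labels)
print(classifier.predict([[1, 1]]))  # predicts a label it was taught to recognize

# Unsupervised learning: no labels; the system groups the data on its own.
clusterer = KMeans(n_clusters=2, n_init=10).fit(emails)
print(clusterer.labels_)  # cluster assignments discovered from the data alone
```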

As indicated by the graphic above, there are multiple classifications of artificial intelligence. In our current moment, the generative form of A.I. (such as chatbots and text-to-image models) is arguably the focal point of the A.I. conversation. However, there are other kinds of A.I., such as predictive A.I., which forecasts outcomes based on patterns in data.

Predictive A.I., and artificial intelligence more generally, have been part of our technical lives since long before the unveiling of ChatGPT in 2022. If you have used a search engine, you have engaged with artificial intelligence. If you have had posts recommended to you on social media, you have engaged with artificial intelligence (the recommendation systems commonly known as "algorithms"). Being literate in the many ways we have, knowingly or otherwise, engaged with artificial intelligence will be incredibly useful in thinking through ways to integrate A.I. tools into your work.
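
As a deliberately simplified illustration of the predictive flavor of A.I. described above, the hedged Python sketch below fits a basic regression model to made-up historical numbers and forecasts the next value. Production search and recommendation systems are far more sophisticated, but they share the underlying idea of learning a pattern from data and extrapolating from it; the data and model choice here are assumptions for the sake of the example.

```python
# Illustrative sketch of predictive A.I.: learn a pattern from past data, then forecast.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical history: monthly checkouts of a library item (made-up numbers).
months = np.arange(1, 7).reshape(-1, 1)          # months 1 through 6 as the input feature
checkouts = np.array([12, 15, 14, 18, 21, 24])   # observed values for those months

model = LinearRegression().fit(months, checkouts)
forecast = model.predict([[7]])                  # extrapolate the learned trend to month 7
print(f"Forecast for month 7: {forecast[0]:.1f} checkouts")
```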

A.I. Assessment Guide

The age of artificial intelligence is advancing more rapidly than many of us can keep pace with. There’s growing pressure to simply "embrace A.I." before the world moves on without us. This assessment guide is designed to prompt reflection and critical thinking when engaging with A.I. tools. Below, you’ll find a ROBOT test to assess the legitimacy of the tool you are engaging with, as well as a collection of frameworks to help you: (a) evaluate whether using A.I. tools is appropriate in a given context, and (b) assess your own literacy and readiness to work with them. Understanding both the capabilities and limitations of A.I., as well as the broader implications of its use, is essential for making informed and responsible decisions.

 

Some overarching questions to consider include:

  • Do I have enough information about the provider of this technology to make an informed choice on the impact of my use?
  • Do I have an understanding of what that impact (environmental, social, etc.) looks like?
  • Am I using A.I. to automate, ideate, offload, etc.? How would A.I. help me in my work?
  • Do I understand how this technology works? Do I know what kind of information system this technology relies on?

ROBOT Test

This test was developed by Hervieux & Wheatley to help users assess the legitimacy of the A.I. tools they are using. This is a key first step before thinking through whether or not A.I. use is appropriate. Asterisks indicate additions intended to explain or expand upon the questions included in the original assessment.

Reliability 

  • How reliable is the information available about the AI technology? 
  • *Does the company/organization have open documentation on how the models were developed?  
  • *Do they share potential pitfalls and problems with the model? 
  • If it’s not produced by the party responsible for the AI, what are the author’s credentials? Bias? 
  • If it is produced by the party responsible for the AI, how much information are they making available?  
  • Is information only partially available due to trade secrets? 
  • How biased is the information that they produce? 

Objective 

  • What is the goal or objective of the use of AI? 
    • *To automate? 
    • *To ideate? 
    • *To offload? 
  • What is the goal of sharing information about it?
    • To inform? 
    • To convince? 
    • To find financial support? 
  • *Are there any other stakeholders that might be impacted by the use of AI? 
  • *Is the use of A.I. the most appropriate tool for the task? Are there other technologies that might be better suited for the task? 

Bias 

  • What could create bias in the AI technology? 
    • *Bias in AI systems can range from conflicts of interest to discrimination. Check out this A.I. LibGuide for more information on bias.
  • Are there ethical issues associated with this? 
    • *Be clear on what your ethical frameworks are. Ethical issues are all but inherent to artificial intelligence at this point, and they are often interrelated. Understanding your key values will be most beneficial when deciding whether or not to use an A.I. tool. You can find more information on some of the ethical issues with A.I. on this page of the A.I. LibGuide.
  • Are bias or ethical issues acknowledged? 
    • By the source of information?
    • By the party responsible for the AI? 
    • By its users? 

Owner 

  • Who is the owner or developer of the AI technology? 
    • *Who provides the resources, and who develops the infrastructure? 
  • Who is responsible for it? 
    • Is it a private company? 
    • The government? 
    • A think tank or research group? 
  • Who has access to it? 
  • Who can use it? 
  • *Are there any conflicts of interest the owner might have with their datasets, your usage of the tool, etc.? 

Type 

  • Which subtype of AI is it? 
    • *LLMs? 
    • *Algorithms? 
    • *Machine learning? 
  • Is the technology theoretical or applied? 
    • Applied = Narrow/Weak AI (think chatbots, algorithms); Theoretical = General/Strong AI or Super AI (think the stuff of Project 2027)
  • What kind of information system does it rely on? 
    • *The collection of computing machinery that creates the tool
    • *Can the model be run locally, so that you have control over where it runs, or is it run on a third-party server, where that party has control over the data the model is trained on? (See the sketch after this list.)
  • Does it rely on human intervention?  
  • *Are we supplying the model with the correct answers, or is the model arriving at them autonomously? (Supervised vs. unsupervised learning)
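
To ground the local-versus-hosted question above, here is a hedged Python sketch that runs a small open model entirely on your own machine using the Hugging Face transformers library; the model name is only an example of a small, openly available model, and sending the same prompt to a commercial chatbot would instead route your text through the provider's servers.

```python
# Hedged sketch: running a small open model locally with the transformers library.
# "distilgpt2" is just an example of a small, openly available model.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # downloaded once, then runs locally
result = generator("Artificial intelligence in libraries", max_new_tokens=30)
print(result[0]["generated_text"])

# Because inference happens on your machine, the prompt never leaves your computer;
# a hosted chatbot, by contrast, processes your prompt on the provider's servers.
```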

A.I. Tool Assessment Frameworks

  1. SECURE Framework 
    1. The S.E.C.U.R.E. framework outlines risks associated with A.I. use for university staff, as well as key questions to ask to assess whether A.I. use is appropriate.  
  2. Evaluating A.I. Tools – The Curious Educator’s Guide to A.I. 
    1. This page offers two rubrics, one basic and one comprehensive, for evaluating how appropriate the use of A.I. tools is in a pedagogical context.
  3. Design of an Ethical Framework for Artificial Intelligence in Cultural Heritage 
    1. For those in the cultural heritage sector, artificial intelligence offers many opportunities to automate menial work, but also risks repeating historical harms done to the communities the sector serves. This paper offers a domain-specific approach to analyzing the ethical issues surrounding A.I. use in cultural heritage.
    2. S. Pansoni, S. Tiribelli, M. Paolanti, E. Frontoni and B. Giovanola, "Design of an Ethical Framework for Artificial Intelligence in Cultural Heritage," 2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS), West Lafayette, IN, USA, 2023, pp. 1-5, doi: 10.1109/ETHICS57328.2023. 
  4. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness 
    1. While designed specifically for a medical context, these 20 questions can be useful in assessing whether the use of A.I. tools (particularly if you are conducting research that requires IRB approval) is appropriate in a research context. 
    2. Note: there have been two corrections submitted for this article. The corrections include a misspelled author name and an incorrect summary box in the typeset PDF of the article.

    3. Vollmer S, Mateen B A, Bohner G, Király F J, Ghani R, Jonsson P, et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ 2020;368:l6927. doi:10.1136/bmj.l6927

A.I. Literacy Assessment Frameworks

  1. AI Literacy Framework: Perspectives from Instruction Librarians and Current Information Literacy Tools 
    1. ACRL Choice white paper for teaching A.I. literacy as part of library instruction 
  2. MAILS – Meta AI Literacy Scale 
    1. Peer-reviewed AI literacy questionnaire based on competency models and meta-competencies. The questionnaire is featured in the appendix of this article. 
  3. University of Adelaide Library’s Artificial Intelligence Literacy Framework 
    1. A framework divided into four easy-to-digest dimensions that outline specific competencies for supporting education: Recognise and Understand; Use and Apply; Evaluate and Critique; Reflect and Respect.

Generative AI

Generative AI can be used to automate tasks and generate content for your research. For more comprehensive information on A.I., including ethical considerations and tool analysis, please check out our A.I. LibGuide. Featuring these tools is not an endorsement of A.I., nor does this page fully cover the breadth of nuanced considerations required when engaging with tools like these.

 

Large Language Models (LLMs)

ChatGPT

A chatbot-style large language model developed by OpenAI.

Claude

A chatbot-style large language model developed by Anthropic.

HuggingFace

For the more technically curious, HuggingFace is a repository of open-source models, LLMs and otherwise, that can be downloaded and run locally on your machine.
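
For example, the hedged sketch below uses the huggingface_hub Python library to download the files of a small open model to your machine; the model name is only an example, and a runtime such as transformers is still needed to actually load and run the model.

```python
# Hedged sketch: downloading an open model from the Hugging Face Hub for local use.
# "distilgpt2" is only an example of a small, openly licensed model.
from huggingface_hub import snapshot_download

local_path = snapshot_download(repo_id="distilgpt2")  # fetches the model files into a local cache
print(f"Model files stored at: {local_path}")

# From here, a library such as transformers can load the model from disk,
# so inference runs on your own hardware rather than on a third-party server.
```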