As artificial intelligence rapidly becomes more integrated into our lives, it is imperative to have a clear, informed understanding of the technology before diving in. Therefore, it is important to define what A.I. is, or better yet, what constitutes A.I.

[Figure: classifications of artificial intelligence. Image: The Original Benny C, CC BY-SA 4.0, via Wikimedia Commons]
Artificial intelligence, broadly, can be defined as a system that performs tasks simulating human cognition (problem solving, decision making, etc.). This is accomplished by providing data on which the system can train, learn, and base its decisions. These systems receive feedback either from human-provided labels (supervised learning), from patterns the system finds on its own (unsupervised learning), or from a mix of the two (semi-supervised learning).
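To make that distinction concrete, here is a minimal sketch using the scikit-learn library. The tiny dataset and its values are invented purely for illustration; they are not drawn from any particular tool or study.

```python
# A minimal sketch of supervised vs. unsupervised learning with scikit-learn.
# The toy dataset below is invented purely for illustration.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Each row is an example described by two numeric features.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]

# Supervised learning: a human supplies the "right answers" (labels),
# and the model learns to predict them for new examples.
y = [0, 0, 1, 1]  # human-provided labels
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[0.15, 0.15]]))  # predicts a label for unseen data

# Unsupervised learning: no labels are given; the model finds structure
# (here, two clusters) in the data on its own.
clusterer = KMeans(n_clusters=2, n_init=10).fit(X)
print(clusterer.labels_)  # cluster assignments discovered by the model
```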
As indicated by the graphic above, there are multiple classifications of artificial intelligence. In our current moment, the generative form of A.I. (such as chatbots and text-to-image models) is arguably the focal point of the A.I. conversation. However, there are other kinds of A.I., such as predictive A.I., which forecasts outcomes based on patterns in data.
Predictive A.I., and artificial intelligence more generally, has been a part of our technical lives since well before the unveiling of ChatGPT in 2022. If you have used a search engine, you have engaged with artificial intelligence. If you have had posts recommended to you on social media, you have engaged with artificial intelligence (these recommendation systems are commonly referred to as "algorithms"). Being literate in the many ways that we have, knowingly or otherwise, engaged with artificial intelligence will be incredibly useful in thinking through ways to integrate A.I. tools into your work.
The age of artificial intelligence is advancing more rapidly than many of us can keep pace with. There’s a growing pressure to simply "embrace A.I." before it feels like the world moves on without us. This assessment guide is designed to prompt reflection and critical thinking when engaging with A.I. tools. Below, you’ll find a ROBOT test to assess the legitimacy of the tool you are engaging with, as well as a collection of frameworks to help you: (a) evaluate whether using A.I. tools is appropriate in a given context, and (b) assess your own literacy and readiness to work with them. Understanding both the capabilities and limitations of A.I.—as well as the broader implications of its use—is essential for making informed and responsible decisions.
Some overarching questions to consider include:
This test was developed by Hervieux & Wheatley to help users assess the legitimacy of the A.I. tools they are using. This is a key first step before thinking through whether or not A.I. use is appropriate. Asterisks indicate additions intended to clarify or expand upon some of the questions included in the assessment.
Note: two corrections have been issued for this article, addressing a misspelled author name and an incorrect summary box in the typeset PDF.
Vollmer S, Mateen BA, Bohner G, Király FJ, Ghani R, Jonsson P, et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ 2020;368:l6927. doi:10.1136/bmj.l6927
Generative A.I. can be used to automate tasks and generate content for your research. For more comprehensive information on A.I., including ethical considerations and tool analysis, please check out our A.I. LibGuide. Featuring these tools is not an endorsement of A.I., nor does this list fully cover the breadth of nuanced considerations required when engaging with tools like these.
Chatbot-style large language model.
Chatbot-style large language model.
For the more technically curious, HuggingFace is a repository of open-source models, LLMs or otherwise, that can be hosted locally on your machine.
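As a rough illustration, the sketch below uses the Hugging Face transformers library to download an open-source model and run it locally. The model name ("gpt2") is only an example of a small, freely available model; any model you choose should be vetted with the same care described in the assessment above.

```python
# A minimal sketch of running an open-source model locally with the
# Hugging Face transformers library (pip install transformers torch).
# "gpt2" is used here only as a small, freely available example model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```

The first run downloads the model weights to a local cache; subsequent runs use the cached copy, so the model itself executes entirely on your machine.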