Reference and User Services Association, a division of the American Library Association
Business Guides

Artificial Intelligence (AI) for Business Librarians

Ethical Considerations

When using or teaching about generative AI, users should be aware of the broader ethical considerations and conversations around artificial intelligence. These include, but are not limited to, how learning models are built, the biases they may contain, and their climate impacts.

We encourage you to research these areas further using peer-reviewed and journalistic resources, consider these ethical implications, and formulate your own perspective as an informed user of the artificial intelligence ecosystem.


Copyright

As AI models are developed, they ingest astonishingly large quantities of data and information. For example, one of Meta's generative AI models was trained on over one billion publicly available images from Facebook and Instagram. This mass collection of data raises copyright concerns, as artists find their original content and other copyrighted creative works ingested without their knowledge or consent. Tools such as Have I Been Trained? have emerged to help users identify whether their work has been used to train AI models.

Because of this mass aggregation and training process, copyrighted and commercialized works are at risk of unauthorized reproduction. Although generative AI can create impressive imagery, it can also produce images that likely violate commercial licenses. Whether this occurs through a flaw in the model or a temporary error, it deepens the ethical questions one may have about using these technologies.



Bias & Socio-Economic Inequity

Generative AI has the potential to enhance the workplace, assist with interdisciplinary learning, and streamline workflows, among many other beneficial uses. In each of these, however, artificial intelligence and its deployment in real-world contexts can also exacerbate existing inequalities. Other technologies, such as facial recognition systems, have demonstrated racial discrimination and bias. One should therefore consider that similar issues of discrimination and bias may emerge in the development of AI models and their corresponding outputs.

Preliminary research has been conducted around these topics, and users should ultimately keep in mind how their actions and engagement in the artificial intelligence ecosystem may, or may not, contribute to bridging or widening socio-economic divides.

Bias is present within artificial intelligence because these technologies are human-made. Whether conscious or unconscious, bias can enter developmental stages such as algorithm design and data aggregation, and it appears again when humans interpret AI-generated outputs. Across all of these stages, bias can compound upon itself, leading a user to a flawed understanding, assessment, and application of generative AI. These technologies are not neutral, which reinforces the need to develop robust information and AI literacy practices.


Climate Change

The challenges and threats stemming from our planet's warming climate are frequent topics of discussion for many of us. Within the context of artificial intelligence, we again see a dichotomy of potential benefits and drawbacks. AI's immense and rapid processing power could help identify solutions, develop predictive models, optimize systems, and aid the implementation of tangible ideas. However, developing and sustaining generative AI and other AI tools requires significant energy. This carries its own set of implications, which researchers have begun to explore, such as in "Energy and Policy Considerations for Deep Learning in NLP" by Emma Strubell et al.

Misuse of AI

The proliferation of misinformation and disinformation is a challenge for society and for educators. At times, it feels increasingly difficult to distinguish AI-generated content from reality. Researchers have developed platforms to test users' confidence and ability in assessing the difference, such as the Kellogg School of Management's Deepfake experiment. Initial findings from this work were published in an article in PNAS in 2022.

The Brookings Institution highlights how AI can be leveraged with malicious intent to rapidly create and spread misinformation, causing disruptive effects on targeted groups of people. As generative AI becomes increasingly capable, as seen in OpenAI's examples from Sora, the potential for stronger disruptive effects likely grows. Accordingly, there are ongoing conversations about, and recommendations for, AI governance policies from the Brookings Institution, MIT, the European AI Office, and others.

Plagiarism and AI

Most, if not all, institutions of higher education have an academic integrity policy that includes a definition of plagiarism. Librarians are accustomed to educating students about plagiarism: what it is, why it matters, and how to avoid it. But with the rise of ChatGPT and other generative AI tools, what constitutes plagiarism is up for debate (Anders, 2023; Dehouche, 2021; Perkins, 2023).

With the release of ChatGPT, administrators and faculty have been scrambling to decide whether students' use of generative AI to complete academic course requirements is tantamount to plagiarism. The concerns center on the copying and pasting of AI-generated text into assignments and on students presenting AI-generated concepts as their own instead of using their own thinking, problem solving, or creativity (Halaweh, 2023). These concerns are not limited to text created by ChatGPT; they extend to Claude, Perplexity AI, Microsoft Copilot, Gemini, Elicit, Consensus, Semantic Scholar, OpenAlex, Assistant by Scite, Keenious, and Inciteful. Generative AI tools have become ubiquitous, and detection software has had limited success in identifying text generated by LLMs (Elkhatat et al., 2023).

Resources for librarians related to plagiarism and AI include:

Lo, L. S. (2023). An initial interpretation of the U.S. Department of Education's AI report: Implications and recommendations for academic libraries. The Journal of Academic Librarianship, 49(5), 102761. https://doi.org/10.1016/j.acalib.2023.102761

This article provides an analysis of the U.S. Department of Education's report on Artificial Intelligence (AI) and its implications for academic libraries. It emphasizes the need for libraries to promote AI literacy, involve librarians in AI implementation, develop guidelines for AI use, prepare for AI issues, and collaborate with other stakeholders. The article concludes with a call to action for academic libraries to take a proactive approach to AI, ensuring its effective, ethical, and responsible use in library services and operations. This analysis serves as a roadmap for academic libraries navigating the evolving landscape of AI in education.

Anders, B. A. (2023). Is using ChatGPT cheating, plagiarism, both, neither, or forward thinking? Patterns, 4(3), 100694. https://doi.org/10.1016/j.patter.2023.100694

Anders discusses some of the key issues and concerns regarding the ethics and usage of AI. The author defines what constitutes AI literacy, including the importance of critical thinking, the method used to create a result, the sources used to create it, and the biases that might exist within the system.

James, A., & Hampton Filgo, E. (2023). Where does ChatGPT fit into the Framework for Information Literacy? The possibilities and problems of AI in library instruction. College & Research Libraries News, 84(9), 334. https://doi.org/10.5860/crln.84.9.334 https://search.proquest.com/docview/2878097053

The authors discuss ways the Framework can address AI tools. Regarding plagiarism, they discuss how librarians can address the Information Has Value frame in the context of AI.

On April 25, 2024, the Association of Research Libraries released its Research Libraries Guiding Principles for Artificial Intelligence.

References:

Anders, B. A. (2023). Is using ChatGPT cheating, plagiarism, both, neither, or forward thinking? Patterns, 4(3), 100694. https://doi.org/10.1016/j.patter.2023.100694

Dehouche, N. (2021). Plagiarism in the age of massive generative pre-trained transformers (GPT-3). Ethics in Science and Environmental Politics, 21, 17-23. https://doi.org/10.3354/esep00195

Elkhatat, A. M., Elsaid, K., & Almeer, S. (2023). Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. International Journal for Educational Integrity, 19(1), Article 17. https://doi.org/10.1007/s40979-023-00140-5

Halaweh, M. (2023). ChatGPT in education: Strategies for responsible implementation. Contemporary Educational Technology, 15(2), ep421. https://doi.org/10.30935/cedtech/13036

Perkins, M. (2023). Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching & Learning Practice, 20(2). https://doi.org/10.53761/1.20.02.07

Content on this site is licensed under a Creative Commons Attribution 4.0 International license.
©2000-2021 BRASS Education Committee.

BRASS acknowledges Springshare's generous support in hosting the BRASS Business Guides.