According to Reuters Legal, in August 2025 Perplexity AI failed to convince a judge to dismiss a lawsuit over its alleged misuse of articles to train its AI...
Caveat: In general, there are many ethical issues that key stakeholders need to consider when using any new technology. For librarians, our focus is on developing a basic understanding of AI; recognizing ethical dilemmas, and the ways tools and technologies can conflict with our core values; and adopting responsible strategies for dealing with questions from users. One goal for library and information professionals is to develop AI literacy skills: to help users understand and address the ethical challenges that arise when using generative AI in university teaching, learning, and searching, and to promote responsible, ethical use (and even non-use). See the AI refusal statements by Violet Fox, for example.
The aim of this entry is to explore the ethical, legal, institutional and strategic concerns librarians have about using AI in searching. This includes, in the case of biomedicine and health-related searching, the importance of upholding scientific integrity in view of artificial intelligence (AI) tools and their lack of transparency. I'm still grappling with the complexities of this topic, and lack academic preparation in computer science; however, I can share many of the concerns expressed by other librarians in some of the published research.
Therefore, I discuss ethical concerns generally and then pivot to the impact of AI-powered search tools on knowledge synthesis, and of AI on our licensing requirements, bibliographic databases, and support of researchers. As this literature review is growing and the issues are complex, I may take a while to process all the information, so please be patient. This is an important topic, and there is considerable analysis involved. I'm happy to debate the issues with anyone, and to find ways to collaborate.
Note: Any discussion about AI geared towards librarians should start with a look at the ethical, legal, institutional and strategic concerns many librarians have about AI. Talk to your colleagues / librarian about your concerns to make informed decisions. For a broader discussion of the issues, see this Wikipedia entry: Ethics of artificial intelligence.
Sections
Ethical AI talking points for academic librarians: value alignment, bias and fairness, legal challenges, repeatability and reproducibility
Summaries of N=27 papers re: AI ethics in libraries or authored by librarians;
Library guides re: AI Ethics and libraries;
Librarians: toward a philosophy of information and AI ethics paradigm, quoting Luciano Floridi.
Broader ontological, epistemological and philosophical discussions and papers 2025
Section I: Ethical AI talking points for academic librarians
The idea of value alignment is one consideration: ensuring AI systems accord with core human and library values and operate in ways that reflect diverse social values and ethical principles. See Violet Fox's well-argued "A Librarian Against AI", https://violetbfox.info/against-ai/
The value alignment problem seems particularly relevant for librarians – not simply because of hypothetical threats from AI to our work, but because of the conflicts we are confronted with about AI vis-à-vis the core values of our profession, i.e., copyright, diversity, equity, intellectual property and climate protection.
AI systems are developed and deployed by Silicon Valley actors who wield massive power in society, making use of others' intellectual property.
Many in AI operate under the idea of the ‘scaling hypothesis’ that increasing computational power and model size leads to better performance. This conflicts with librarians’ values re: climate and social justice.
Lacroix's book introduces a parallel idea: as AI systems grow in scale, the risks associated with value misalignment increase. Understanding this is essential for ensuring that AI serves the public interest and humanity, rather than reinforcing harmful political and economic incentives.
Most existing books on AI alignment are written for a general audience – the most well-known works being Stuart Russell’s ‘Human Compatible’ and Brian Christian’s ‘The Alignment Problem’. Many academic discussions of alignment focus on speculative artificial general intelligence (AGI).
Academic libraries are committed to equity, diversity and inclusion (EDI), and work to ensure any tools we use do not reinforce systemic biases or discrimination; this is a major problem in generative AI (see: Generative AI - What is it?) due to AI companies' policies and practices (i.e., the data in large language models);
Emerging research reveals GenAI models perpetuate a range of biases, leading to unfair / discriminatory outcomes for marginalized groups;
Relevant Resource: The American Library Association (ALA) emphasizes equity of access and intellectual freedom in the ALA Library Bill of Rights. Our values are what ground us in our work, and they are important values to uphold in libraries.
Privacy and Data Security:
ChatGPT interactions raise concerns for librarians re: privacy and data protection; not just for libraries but for their user communities;
Academic librarians must ensure compliance with national data protection laws, ethical standards, and guidelines to safeguard user information when using GenAI;
Relevant Resource: The International Federation of Library Associations and Institutions (IFLA) has published guidelines on Privacy in the Library Environment. These values are vital to uphold in academic libraries.
Transparency and Accountability:
GenAI systems operate as "black boxes," making it difficult for librarians to assess sources or to understand how decisions and responses are generated by these systems;
Academic libraries must ensure transparency in how AI tools are created and provide accountability mechanisms for errors or misuse.
Relevant Resource: The European Union’s Ethics Guidelines for Trustworthy AI highlight transparency and accountability as key principles. These are important concepts in providing reliable library services in academic libraries.
Intellectual Freedom and Censorship:
GenAI-generated content may restrict access to information by design or through monetization, and may be set up to promote certain viewpoints over others;
Academic libraries must balance GenAI innovation and urgency with a commitment to intellectual freedom and neutrality in sources of information;
Relevant Resource: ALA’s Intellectual Freedom Manual provides guidance on maintaining neutrality and avoiding censorship. These are important values to uphold in academic libraries.
Copyright and Intellectual Property:
GenAI-generated content raises legal questions about the underlying training data and models, as well as copyright ownership and infringement;
Libraries must navigate the many legal implications of using these tools to generate or disseminate content; artists, creators and authors have a right to retain their intellectual property in the AI era;
Relevant Law: The Berne Convention for the Protection of Literary and Artistic Works and the Digital Millennium Copyright Act (DMCA) in the U.S. address these copyright issues. However, whether the crawling of websites and other sources of information falls under fair use / fair dealing is still being sorted out by the courts. These are important values to uphold in academic libraries.
Data Protection and Privacy Laws:
Academic libraries must comply with data protection regulations when using AI tools that process user data (a minimal data-minimization sketch follows the list of laws below). These are important values to uphold in academic libraries.
Relevant Laws:
General Data Protection Regulation (GDPR) in the European Union.
California Consumer Privacy Act (CCPA) in the U.S.
Personal Information Protection and Electronic Documents Act (PIPEDA) in Canada.
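The data protection obligations above are, in practice, often met through data minimization before anything is sent to or stored by an AI tool. Below is a minimal sketch in Python (my own hypothetical example, not from any cited guideline or vendor API) of pseudonymizing a patron identifier and scrubbing obvious personal details from a query before an AI-assisted search interaction is logged; it is illustrative only and not legal or compliance advice.

```python
import hashlib
import re

# Hypothetical per-institution salt; in practice this would be stored securely, not in code.
SALT = "library-local-secret"

def pseudonymize_user(user_id: str) -> str:
    """Replace a patron identifier with a salted, one-way hash before logging."""
    return hashlib.sha256((SALT + user_id).encode("utf-8")).hexdigest()[:12]

def scrub_query(query: str) -> str:
    """Remove obvious personal details (email addresses, long digit strings) from a query."""
    query = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email removed]", query)
    query = re.sub(r"\b\d{6,}\b", "[number removed]", query)
    return query

# Example: what a library might log locally instead of raw patron data.
log_entry = {
    "user": pseudonymize_user("patron-00123"),
    "query": scrub_query("diabetes trials for jane.doe@example.com, student ID 12345678"),
}
print(log_entry)
```

Whether such measures satisfy GDPR, CCPA, or PIPEDA in a given context is a question for institutional privacy officers; the point of the sketch is simply that user data can be reduced before it ever reaches an AI service or a log file.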
Liability for Misinformation:
At times, AI tools will generate inaccurate or misleading information, potentially exposing libraries to legal liability and resulting in poor scholarship.
Libraries can establish clear disclaimers and guidelines for the use of AI-generated content.
Relevant Resource: IFLA’s Statement on Libraries and Artificial Intelligence discusses the need for clear policies on AI use.
Accessibility and Compliance:
Libraries must ensure that AI tools are accessible to all users, including those with disabilities, to comply with accessibility laws.
Relevant Laws:
Americans with Disabilities Act (ADA) in the U.S.
Web Content Accessibility Guidelines (WCAG) internationally.
4) Repeatability & reproducibility:
Draft
Challenges arise with regard to reproducibility because AI tools can return different outputs for the same inputs, which may hinder transparency and replicability even when methods are fully documented.
Ensuring reproducibility requires open-source models and clear reporting of AI methods and processes (a minimal reporting sketch follows this list).
The evidence is emerging, but validation is needed through studies comparing AI-led and human-led syntheses.
By prioritizing transparency, AI can strengthen the reliability of scientific knowledge synthesis, but we are a long way off.
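As one concrete illustration of "clear reporting of AI methods", the sketch below (Python, with hypothetical field names of my own choosing) records the parameters a librarian might document for an AI-assisted search: the tool and model identifiers, the prompt, the settings that affect output variability, and the date. Logging these does not make a generative tool deterministic, but it does make the method reportable and re-runnable.

```python
import json
from datetime import datetime, timezone

# Hypothetical record of one AI-assisted search, suitable for a search methods appendix.
search_report = {
    "date": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    "tool": "ExampleAI Search",           # assumed tool name, for illustration only
    "model": "example-model-2024-06",     # model identifier as reported by the vendor
    "prompt": "Find randomized trials on exercise therapy for chronic low back pain, 2015-2024.",
    "settings": {"temperature": 0.0, "top_p": 1.0},  # settings that influence output variability
    "databases_searched": ["MEDLINE", "Embase"],
    "records_returned": 142,
    "screened_by_human": True,
}

# Writing the record to disk keeps a durable trace of how the search was run.
with open("ai_search_report.json", "w", encoding="utf-8") as f:
    json.dump(search_report, f, indent=2)
```

A record like this can be pasted into a protocol or PRISMA-style search appendix so that another searcher can at least attempt to reproduce the process, even if the tool's outputs vary.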
5) Stolen labour:
Draft
AI raises ethical concerns about exploitation and inequity of development and deployment with regards to the labour force worldwide.
"Stolen labour" refers to uncompensated or undercompensated human efforts behind AI e.g., data annotation, content moderation, and training dataset creation.
Low-paid workers in developing nations perform repetitive tasks such as labeling images or transcribing audio to train AI models, under poor working conditions and for minimal wages (sometimes less than $2 per hour, as reported in 2023). These workers are not acknowledged or equitably rewarded, raising questions about exploitation in the global AI industry.
AI can "steal" labour by automating jobs without fair transition plans, displacing workers in industries like manufacturing, customer service, or creative fields. While AI boosts efficiency, profits concentrate among tech giants, leaving workers with little in return; ethical AI requires transparent labour practices, fair wages, and upskilling to mitigate job losses.
Addressing "stolen labour" demands prioritizing human dignity over cost-cutting, ensuring that the benefits of AI are shared equitably rather than built on the backs of underpaid or displaced workers.
Section II: Papers re: AI ethics in libraries or by librarians
The following section lists, alphabetically, n=27 papers discussing the ethics of librarians’ use of artificial intelligence in delivery of library services.
This study investigates the ethical dimensions of AI-driven cataloguing and classification systems, exemplified by ChatGPT. It examines privacy, accountability, transparency, and bias; it’s a comprehensive literature review of AI applications in libraries, emphasizing ethics and assessing existing guidelines from ALA and IFLA. The study identifies gaps in current ethical frameworks and emphasizes the need for guidelines. Privacy concerns, biases in AI outputs, and challenges related to user trust and transparency are highlighted. The impact of AI on job satisfaction for librarians is discussed. The study contributes to the broader discussion on ethical AI by addressing AI-driven cataloguing and classification. It underscores the importance of aligning AI use with ethical standards, proposing strategies for mitigating biases and fostering user trust.
A systematic review of AI in academic libraries. Comprehensively examines technical and social implications through a sociotechnical systems framework. The research addresses professional concerns, operational challenges, and ethical considerations in library services, providing substantive analysis of technological integration in higher education library settings. A critical next step would be to contextually assess academic library readiness for AI and analytics from a sociotechnical standpoint.
3) Bradley F. Representation of libraries in artificial intelligence regulations and implications for ethics and practice. Journal of the Australian Library and Information Association. 2022 Jul 3;71(3):189-200. https://www.tandfonline.com/doi/abs/10.1080/24750158.2022.2101911
“As AI policies and regulations emerge, more is learned about bias in machine learning data, surveillance risks of smart cities and facial recognition, and automated decision-making by government, among other applications of AI and machine learning. This paper introduces AI regulatory developments and engagement by libraries, concerns around ethics, privacy, and data protection. While AI applications are emerging in libraries, some mature examples can be identified in research literature searching, language tools for textual analysis, and access to collection data. The paper presents a summary of how library activities are represented in national AI plans and how libraries have engaged with other aspects of AI regulation including development of ethical frameworks. Based on the sector's expertise in related regulatory issues including copyright and data protection, the paper suggests further opportunities to contribute to the future of ethical, trustworthy, and transparent AI.”
4) Bridges LM, McElroy K, Welhouse Z. Generative artificial intelligence: 8 critical questions for libraries. Journal of Library Administration. 2024 Jan 2;64(1):66-79.
“...provides an overview of generative AI and large language models; librarians pose eight critical questions that libraries should ask when exploring this technology. We argue libraries have a unique role in facilitating informed, responsible use of generative AI, as well as safeguarding the values of access, privacy, and intellectual freedom. AI should never be presented as a replacement for library workers, and we must be vigilant to ensure that AI is not used to make decisions that should be made by people. Automation bias, the tendency to trust decisions made by automation more than those made without it, can let us concede crucial spaces for discussion, debate, and decision-making. We can take a cautionary tale from the Iowa school superintendent who asked ChatGPT about books that should be removed from schools due to sexual content (Schmidt, 2023). These are only a few of the many ethical quandaries AI raises for libraries. Centering our values is a way to ensure we move forward responsibly.”
5) Bubinger H, Dinneen JD. “What could go wrong?”: An evaluation of ethical foresight analysis as a tool to identify problems of AI in libraries. The Journal of Academic Librarianship. 2024 Sep 1;50(5):102943. Dean's note: one of the more interesting papers, using an example of in-house automated indexing and the problems with bias. https://www.sciencedirect.com/science/article/pii/S0099133324001046
Ethical concerns, such as bias and discrimination (Strasser & Niedermayer, 2021), privacy and safety (Kazim & Koshiyama, 2021, pp. 8–9), explainability and transparency, and accountability (Kroll, 2020) are key to AI discourses in libraries. Intersectional concerns arise in training data such as stolen labour, lack of consent, global equity issues and environmental impact (Bender, Gebru, McMillan-Major, & Shmitchell, 2021). Approaches have been developed to encourage ethical AI and audit applications in libraries. We applied Ethical Foresight Analysis as an approach to identify ethical risks for (semi-)automated subject indexing in a large research library. Specifically, to identify risks we conducted a two-round ethical Delphi study where experts on AI development, library practices, and AI ethics sought consensus. Experts' post-test reflections were collected to inform an evaluation of the approach's feasibility. Ethical risks of AI indexing were identified, such as discrimination and under-representation (e.g. varied historical contexts and gaps left by unindexed items). We identified drawbacks: (1) it is time-consuming and prohibitive for many libraries, and (2) the identified risks mainly concerned AI and its training data rather than the subtle, application-specific, and human-centred issues in ethical foresight analysis. Libraries should model ethical AI through careful planning, alternative development and auditing.
“The widespread use of artificial intelligence (AI) has revealed numerous ethical issues from data and design to deployment. In response, countless broad principles and guidelines for ethical AI have been published, and following those, specific approaches have been proposed for how to encourage ethical outcomes of AI. Meanwhile, library and information services are seeing an increase in the use of AI-powered and machine learning-powered information systems, but no practical guidance currently exists for libraries to plan for, evaluate, or audit the ethics of intended or deployed AI. We report on several approaches for promoting ethical AI adapted …for AI-powered information services and in different stages of the software lifecycle.”
This paper provides definitions of AI, analyzes umbrella technologies that make up AI, and types of use by area of library operation, reflecting on implications for libraries, including equality, diversity and inclusion perspectives. For librarians interested in AI from a strategic rather than a technical perspective. Five types of cases are identified, each with its drivers and barriers, and skill demands. They are applications in library back-end processes, in library services, through the creation of communities of data scientists, in data and AI literacy and in user management. Each of the different applications has its own drivers and barriers. It is hard to anticipate the impact on professional work but as the information environment becomes more complex it is likely that librarians will continue to have an important role, especially given AI’s dependence on data. However, there could be some negative impacts on equality, diversity and inclusion if AI skills are not spread widely.
8) Cox A. The ethics of AI for information professionals: Eight scenarios. Journal of the Australian Library and Information Association. 2022 Jul 3;71(3):201-14.
Information professionals need to navigate ethical issues of AI because they are likely to use AI in delivering services as well as contributing to the process of adoption of AI in their organizations. Professional ethical codes are too high level to offer precise or complete guidance. The purpose of this paper is to review the relevant literature and describe eight ethics scenarios of AI which have been developed specifically for information professionals. The paper considers how AI might be defined and presents applications relevant to the information profession. It summarizes the key ethical issues raised by AI in general both inherent to the technology and arising from the nature of the AI industry. It considers existing studies that have discussed aspects of the ethical issues specifically for information professionals. It describes a set of eight ethics scenarios that have been developed and shared in an open form to promote their reuse.
9) Cox A. How artificial intelligence might change academic library work: Applying the competencies literature and theory of the professions. Journal of the Association for Information Science and Technology. 2023 Mar;74(3):367-80.https://asistdl.onlinelibrary.wiley.com/doi/abs/10.1002/asi.24635
“...AI is actively resisted as incompatible with the culture of the sector for ethical reasons. There has been a huge amount of controversy about the ethics of AI (Jobin et al., 2019). Much mirrors the debate in the area of library analytics where librarians have found many objections on ethical grounds to the exploitation of data about users (Jones, 2019). AI is based on data, so many of the same issues apply. The ethical issues are less glaring in knowledge discovery, but biases in algorithms and in collections do pose distinct ethical challenges to the uptake of AI (Cordell, 2020; Padilla, 2019).“
"We believe that the intellectual, ethical, and institutional downsides to using this technology are so substantial that normalizing its integration into pedagogy poses risks that far outweigh whatever benefits one might associate with it. In fact, we would argue that thus far the only benefits to using AI in art historical research have been to demonstrate how poorly equipped it is to conduct research in the historical humanities." See also Art History and Ten Axioms: environmental, ethical, institutional, etc. https://journals.ub.uni-heidelberg.de/index.php/dah/article/view/90400/89769
“...This paper presents findings from a broad, national survey and a workshop focusing on the challenges and opportunities the advancement of AI poses for PhD candidates, seen from the perspective of library staff working with research support in a number of research libraries in Norway. The paper looks into how research libraries could adapt to the development, addresses the roles of various stakeholders and proposes measures regarding the support of PhD candidates in the responsible use of AI-based tools. Based on insights from the survey and the workshop, the paper also shows what is lacking in the libraries' research support services concerning the understanding and utilisation of AI-based tools. The study reveals a degree of uncertainty among librarians about their role in the AI academic nexus. For the development of competences of teaching staff in academic libraries, the paper recommends to integrate AI-related topics into existing educational resources and to create arenas for sharing experiences and knowledge with relevant partners both within and outside the university.”
12) Hodonu-Wusu JO. The rise of artificial intelligence in libraries: the ethical and equitable methodologies, and prospects for empowering library users. AI and Ethics. 2024 Feb 19:1-1. https://link.springer.com/article/10.1007/s43681-024-00432-7
“...Artificial intelligence (AI) is one of the most promising technologies …and can help libraries automate processes, provide personalized services, and improve user experiences. However, with great power comes great responsibility, and AI is no exception. Libraries have an ethical and equitable promise to their users, and AI must be deployed in a way that upholds these promises. This study explores the ethical and equitable use of AI in libraries, how it can empower users, and what librarians need to consider when implementing AI. The result of the reviewed articles showed that 1499 out of 170,262 papers have been identified as describing AI in libraries through ethical and equitable methodologies and prospects for empowering library users. The future studies can focus on other professional terms, such as Trustworthy AI, Fairness in AI, Explainable AI, and Human-in-the-loop, and how this can impact libraries, and other professionals.”
13) Ikwuanusi UF, Adepoju PA, Odionu CS. Advancing ethical AI practices to solve data privacy issues in library systems. International Journal of Multidisciplinary Research Updates. 2023;6(1):033-44.
This study investigates the role of ethical AI practices in addressing data privacy issues, ensuring trust, transparency, and compliance with global privacy standards. Ethical AI emphasizes principles such as user consent, data ownership, and the minimization of bias, which are essential for safeguarding privacy in library systems. Privacy-preserving AI techniques, including differential privacy and federated learning, offer robust solutions by anonymizing data and enabling decentralized processing. Additionally, encryption, secure storage methods, and real-time monitoring systems enhance data security while mitigating risks of unauthorized access. This paper highlights the importance of explainable AI (XAI) in fostering user trust by ensuring transparency in how AI systems process and utilize data. Ethical frameworks tailored for libraries emphasize stakeholder involvement, accountability, and adherence to global privacy regulations such as the GDPR and CCPA. Case studies of libraries implementing ethical AI demonstrate the feasibility and benefits of these practices, including improved user confidence and compliance with legal standards. However, challenges such as balancing personalization with privacy, addressing resource constraints, and overcoming resistance to change are explored. Recommendations include fostering global collaborations, advancing open-source ethical AI tools, and conducting regular audits to uphold privacy standards. By advancing ethical AI practices, libraries can build secure, user-centric ecosystems that protect data privacy while leveraging AI’s transformative potential. This research underscores the necessity of proactive measures to ensure libraries remain trusted guardians of information in the digital age.
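To make the "differential privacy" technique mentioned in the abstract above slightly more concrete, here is a minimal sketch of my own (not taken from the paper) showing how Laplace noise can be added to an aggregate count of chatbot questions so that a published statistic does not reveal whether any single patron's interaction is included; epsilon is the privacy budget, and smaller values mean more noise and stronger privacy.

```python
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Return a differentially private version of a count.

    A count changes by at most 1 when one person's record is added or removed
    (sensitivity = 1), so Laplace noise with scale 1/epsilon is sufficient.
    """
    scale = 1.0 / epsilon
    # A Laplace sample can be drawn as the difference of two exponential samples.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

# Example: publish a noisy monthly count of AI chatbot questions instead of the exact figure.
exact_count = 483
print(round(dp_count(exact_count, epsilon=0.5)))
```

Federated learning, also named in the abstract, is a complementary idea: the model is trained where the data lives (for example, on institutional servers) and only model updates, not raw patron data, are shared.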
14) Johnson A. Generative AI, UK Copyright and Open Licences: considerations for UK HEI copyright advice services. F1000Research. 2024 Feb 22;13:134. https://pmc.ncbi.nlm.nih.gov/articles/PMC11109589/
UK higher education institutions and library copyright advice services will see an increase in questions around the use of AI. Staff working in library services are not lawyers and are not able to offer legal advice to academic researchers. They must look at the issues raised, consider how to advise in analogous situations involving the use of copyright material, and offer an opinion to researchers. While the legal questions remain to be answered definitively, copyright librarians can still offer advice on both open licences and the use of copyright material under permitted exceptions. We look here at how library services can address questions on copyright and open licences for generative AI for researchers in UK HEIs. Section 5 addresses the question: does the AI attribute the works used to train it?
Recent advances in AI have raised concerns about the consequences of the uncontrolled development of technology for society and humans. Information and knowledge professionals working in research libraries are in professions that have globally applied ethical codes that serve as self-regulatory ethical norms. New AI technologies in libraries’ operations cause confusion among librarians and challenge ethics. In this paper, we examine these challenges and present a qualitative study that reveals the ethical considerations that research librarians face when they approach new AI technologies. As there are no established AI ethics norms for research librarians, we compared the international code of conduct for libraries against the European AI guidelines to identify relevant themes. We analyzed data from two Scandinavian workshops for librarians. Our findings highlight the central role of research libraries in making AI-powered research ethical. Our study indicates a need to update international codes of conduct for libraries for AI by including aspects of AI agency and the interests of future generations. This helps librarians better orient themselves and their patrons towards a trustworthy and existentially sustainable future with AI systems.
Key quotes: “...librarians should also consider independently acting algorithms as new users of library services” … and “Another observation is that librarians must acquire new skills and competencies to cope with ethical issues when using AI-powered tools and providing AI-enhanced services. These may include the copyright of the output of a service, possible hidden biases in training data for algorithms, and an understanding of where training data originates”.
16) Kennedy ML. What do artificial intelligence (AI) and ethics of AI mean in the context of research libraries. Research Library Issues. 2019;299(299):3-13. https://publications.arl.org/18nm1db/
Published in 2019, this exploration of AI and ethics is one of the earliest to provide sufficient background for research libraries. Much of the discussion is broad but raises issues such as what kind of information society we want to live in, and what we can do to ensure that our collective future is grounded in a humanistic ethics of information.
17) Lund BD, Wang T. Chatting about ChatGPT: how may AI and GPT impact academia and libraries? Library Hi Tech News. 2023 May 16;40(3):26-9. Available from: http://repository.ifla.org/handle/123456789/2622
“...Ethical considerations need to be taken into account, such as privacy and bias … how to use this technology responsibly and ethically, and how we, as professionals, can work with this technology to improve our work, rather than to abuse it or allow it to abuse us in the race to create new scholarly knowledge and educate future professionals.”
18) Mabona A, Van Greunen D, Kevin K. Integration of Artificial Intelligence (AI) in Academic Libraries: A Systematic Literature Review. In2024 IST-Africa Conference (IST-Africa) 2024 May 20 (pp. 1-9). IEEE. https://ieeexplore.ieee.org/document/10569288
Most organizations have integrated AI, but critical elements are often not considered, resulting in poor user adoption, unethical use and unanticipated costs. This paper found that traditional library operations are now largely automated using chatbots (in the form of generative AI) and humanoid robots, with concern that policies to guide the ethical use of these tools are non-existent. Recommendations are provided, with key factors to consider as a guide when integrating AI in libraries. Future work, such as developing a design model of AI integration in academic libraries, is recommended, and its applicability should be evaluated.
18) Mannheimer S, Bond N, Young SW, Kettler HS, Marcus A, Slipher SK, Clark JA, Shorish Y, Rossmann D, Sheehey B. Responsible AI practice in libraries and archives: a review of the literature. Information Technology and Libraries. 2024 Sep 23;43(3). https://ital.corejournals.org/index.php/ital/article/view/17245
Also: Mannheimer et al “Values Toolkit Re: AI” Viewfinder is a participatory toolkit designed to facilitate ethical reflection about AI in libraries and archives from different stakeholder perspectives. "We hope that practitioners will use Viewfinder to reflect upon complex AI issues and build a more responsible, people-centered AI landscape in libraries and archives." Developed by librarians and technology ethicists at Montana State University, University of Montana, James Madison University, and Iowa State University, based on research conducted between 2022 and 2025. Our work was made possible in part by funding from the Institute of Museum and Library Services. https://www.lib.montana.edu/responsible-ai/ OR https://osf.io/yue9s
This paper examines AI projects implemented in library and archives settings, asking the following research questions: RQ1: How is artificial intelligence being used in libraries and archives practice? RQ2: What ethical concerns are being identified and addressed during AI implementation in libraries and archives? The results show that AI implementation is growing in libraries and archives and that practitioners are using AI for increasingly varied purposes. AI implementation was most common in large, academic libraries. Materials used usually involved digitized and born digital text and images, though materials ranged to include web archives, electronic theses and dissertations (ETDs), and maps. AI was most often used for metadata extraction and reference and research services. Just over half mentioned ethics or values related issues in their discussions of AI implementation and only one-third of all resources discussed ethical issues beyond technical issues of accuracy and human-in-the-loop. We expect subsequent discussions of relevant ethics and values to follow suit, particularly growing in the areas of cost considerations, transparency, reliability, policy and guidelines, bias, social justice, user communities, privacy, consent, accessibility, and access. As AI comes into more common usage, it will benefit the library and archives professions to not only consider ethics when implementing local projects, but to publicly discuss these ethical considerations in shared documentation and publications.
19) Michalak, Russell. From Ethics to Execution: The Role of Academic Librarians in Artificial Intelligence (AI) Policy-Making at Colleges and Universities. Journal of Library Administration. 2023;63:7, 928-938. https://doi.org/10.1080/01930826.2023.2262367
“...This paper highlights the importance of involving academic librarians in writing ethical AI policies. The Academic Librarian Framework for Ethical AI Policy Development (ALF Framework) is introduced, recognizing librarians’ unique skills and expertise. The paper discusses the benefits of their involvement, including expertise in information ethics and privacy, practical experience with AI tools, and collaborations. It addresses challenges, such as limited awareness, institutional resistance, resource constraints, and evolving AI technologies. By actively involving librarians, institutions can develop comprehensive, ethical AI policies that prioritize social responsibility and respect for human rights.”
20) Mishra S. Ethical Implications of Artificial Intelligence and Machine Learning in Libraries and Information Centres: A Frameworks, Challenges, and Best Practices. Library Philosophy & Practice. 2023 Jan 1.
“...The use of artificial intelligence (AI) and machine learning (ML) is increasingly prevalent in libraries and information centres. These technologies pose significant ethical challenges and risks, including bias and discrimination, privacy and security, automation and job displacement, and lack of human interaction in service delivery. This paper provides an overview of the key ethical frameworks and principles relevant to the use of AI and ML in libraries and information centres, and analyzes how these frameworks can be applied. It discusses the potential benefits and risks, provides best practices and strategies. Finally, the paper highlights the implications of these findings for libraries and information centres, and recommends future research and practice. Overall, this paper underscores the importance of taking a proactive and ethical approach to AI and ML in libraries and information centres to ensure technologies are used to align with their mission and values, and serves the best interests of their users and society as a whole.”
This chapter examines the ethical use of AI applications in academic libraries. After a general exploration of machine learning (ML), the chapter explains what implicit bias is, how it enters ML applications, and why the problem is insidious and challenging. The authors present an illustrative review of the ethical foundations of the work of academic libraries and draw analogies to other professional interfaces with AI and implicit bias. Possible scenarios of ethically problematic outcomes in academic libraries are explored.
22) Ngulube P, Vincent Mosha NF. Integrating artificial intelligence-based technologies ‘safely’ in academic libraries: An overview through a scoping review. Technical Services Quarterly. 2025 Jan 2;42(1):46-67. https://www.tandfonline.com/doi/abs/10.1080/07317131.2024.2432093
“...Academic libraries are increasingly integrating artificial intelligence (AI), but have limited understanding of how they can be “safely” integrated into their business model. Objective. This scoping review addressed the question on how much research has been conducted on ethical issues and perceived risks associated with the safe integration of AI technologies in academic libraries. Between December 2023 and March 2024, online databases and a search engine were used to identify sources of evidence published before 2024 that focused on ethical concerns and risks to integration of AI-based technologies. Eligibility criteria and a charting form guided data synthesis. Nigeria provided the bulk of the studies. Many studies used the quantitative methodology at the expense of qualitative and mixed methods research approaches. The use of theoretical underpinnings was limited to 18% of the studies. Ethical issues with an impact on the planet were not evident as matters that were covered related to trust and the society. The perceived risk of losing jobs was widely covered at the expense of other perceived risks. Conclusion. Research on the safe use of AI technologies in academic libraries is still in its infancy. More research is necessary to understand the phenomenon.” Summary: “...A scoping review examining AI integration in academic libraries reveals significant ethical and professional concerns, with a systematic methodology exploring risks and implications across multiple dimensions. The study addresses professional issues in academic library settings, focusing on ethical challenges and perceived risks associated with AI technologies, while highlighting the nascent state of research in this domain. While the abstract mentions "ethical issues" and "perceived risks," it doesn't explicitly specify coverage of copyright, privacy, or data protection. However, the focus on ethical concerns suggests these may be covered in the full paper.”
23) Olusipe AA, Adetayo AJ, Enamudu AI, Babalola OO. Safeguarding the digital economy: Librarians’ perspectives on data privacy and ethical use of public AI chatbots. Journal of Electronic Resources Librarianship. 2024 Oct 1;36(4):257-70.
This study surveyed librarians’ perspectives on data privacy and ethical use of AI chatbots by patrons. The survey of 34 librarians at private university libraries in Nigeria revealed gaps in specific regulatory knowledge. Ethical concerns emerged with strong agreement on needs for guidelines around privacy, transparency, consent, and mitigating bias. Major challenges include lack of uniform standards, resource constraints, rapid technology changes, staff training gaps, and difficulties verifying AI vendors’ data practices. Key recommendations emphasize developing policies prioritizing user consent and data transparency, verifying vendor practices, staff training, assessing risks, gathering patron feedback, and cross-institutional collaboration on best practices. Adequate resources and measures safeguarding patron privacy while benefiting from AI capabilities are vital for promoting ethical public AI chatbot use in libraries.
“...libraries should consider the study’s findings before implementing artificial intelligence, particularly concerning technology and facilities, librarians’ proficiency in artificial intelligence, and leadership positions in artificial intelligence initiatives. The research can be used as a resource by library boards and associations to develop policies for implementing artificial intelligence in academic libraries…”
25) Sikhakhane N, Mthombeni N. Intersection of artificial intelligence, legal frameworks and psychological dynamics in academic libraries. South African Journal of Libraries and Information Science. 2024 Oct 1;90(2):1-9. https://journals.co.za/doi/full/10.7553/90-2-2398
This literature study discusses the challenges and opportunities of AI in academic libraries, aiming to contribute insights for informed decision-making and ethical implementation. Following the qualitative approach, the review included 2125 hits, 45 of which were considered for study. The paper looks at AI's acceptance and ethical use, encompassing psychological impacts, AI legalities, skill set alignment and a futuristic perspective. Despite AI's potential in domains such as natural language processing and decision-making, ethical considerations and societal implications remain paramount. Questions regarding AI's impact on the labour market, its potential to replace human labour and ethical application of AI-driven decision-making underscore the necessity for ethical guidelines and policies. It is crucial to approach the integration of AI into society with caution, sensitivity, and adherence to ethical principles, ensuring that AI serves to enhance human capabilities while addressing potential risks and ethical dilemmas. A thorough environmental scan, legislation and continuous discussions with pertinent stakeholders are recommended if we were to harness AI. Quotes: “The study addresses at least one of the professional concerns of copyright, privacy, data protection, or professional values in librarianship, as the abstract mentions "ethical use" and "legalities," suggesting these topics may be covered in the full paper, even though they are not explicitly specified.”
This research investigates the preparedness of higher education librarians to support the ethical use of information within higher and tertiary education. A qualitative approach was used, and interviews were conducted with thirty librarians and academics from universities in Zimbabwe. Findings indicate that many university libraries in Zimbabwe are still at the adoption stage of artificial intelligence. Institutions and libraries are not yet prepared for AI use and are still crafting policies. Libraries seem prepared to adopt AI and to offer training on how to protect intellectual property, but face serious challenges with transparency, data security, plagiarism detection and concerns about job losses. With no major ethical policies yet crafted on AI use, it remains challenging for libraries to fully adopt it.
This paper examines Artificial Intelligence (AI) and Machine Learning (ML) through two lenses: 1) the lens of an academic librarian integrating generative AI tools such as ChatGPT and Gemini in the classroom, and the need for academic libraries to adapt and support faculty, researchers, and students in AI literacy. It details the creation of an AI task force, community of practice, and salon series within the George Mason University Libraries as a model for responding to these challenges. 2) The lens of researchers, offering a perspective on the uses and misuses of AI and ML; it evaluates AI technologies, including their strengths and weaknesses, and the ethical considerations of use, including bias in training data, data ownership, and environmental impact. Sections offer an overview of AI and ML and highlight opportunities and challenges for researchers and patrons.
Librarian with 12,000 X followers said: “...[I’m] starting to see LIS folks use phrase "responsibly implementing" when it comes to AI/genAI in libraries and archives; there is nothing responsible about a plagiarism machine that relies on stolen and personal data speeding up the climate crisis.”
From Facebook:
“...AI consumes enormous amounts of mostly fossil fuel generated energy, on a par with the biggest data centers that are mining bitcoin. Just sharing AI memes encourages people to create more of them. The proliferation of AI for non-research purposes has driven the demand for electricity by 8% at a time when we are struggling to produce additional CLEAN electricity for the transition to EVs. Google, because of its implementation of AI has removed its slogan from its home page that used to say “Carbon Neutral Since 2007”.
We have plenty of pre-existing creative ways to support our candidates without using AI and causing further unnecessary harm to our planet...and that goes for non-political stuff too.”
White papers and library bill of rights
American Library Association (ALA):
Library Bill of Rights: Emphasizes equity of access and intellectual freedom.
Intellectual Freedom guidance on avoiding censorship and maintaining neutrality.
International Federation of Library Associations and Institutions (IFLA):
IFLA Statement on Libraries and Artificial Intelligence: Discusses ethical and policy considerations for AI use in libraries. Offers guidelines for protecting user privacy.
European Union:
Ethics Guidelines for Trustworthy AI: Outlines principles such as transparency, accountability, and fairness.
General Data Protection Regulation (GDPR): Legal framework for data protection and privacy.
UNESCO:
Recommendation on Ethics of Artificial Intelligence: Highlights the importance of ethical AI use in education and cultural institutions, including libraries.
Section IV: Towards a philosophy of information and AI ethics
Floridi’s concept of human dignity in the digital age is tied to his Information Ethics (IE) framework. The framework emphasizes the moral value of informational entities, including humans and artificial agents. According to Floridi, human dignity is maintained when individuals retain autonomy, agency, and control over their information and decision-making. AI, including ChatGPT, challenges human dignity when it manipulates, misinforms, or diminishes human intellectual independence.
Other scholars who have addressed these questions include Stahl (2021), Davenport (2018), Hagendorff (2020), van Otterlo (2018), and Etzioni and Etzioni (2017).
ChatGPT’s potential to provide misleading or biased information undermines users’ epistemic agency, limiting their ability to critically engage with knowledge. If students and researchers depend on AI-generated content without verification, they risk losing their intellectual autonomy, a fundamental aspect of human dignity. Floridi argues that humans must not be treated merely as data points in an AI-driven system but should remain active, responsible agents in information ecosystems.
To preserve dignity in academic librarianship, AI tools like ChatGPT must function as assistants and not replacements, supporting human inquiry without overshadowing critical thinking and interpretative skills that define scholarly work. Also, librarians should be able to use their own agency to use/not use AI if they so choose.
Ethical AI integration must prioritize transparency, accountability, and user empowerment, ensuring that technology enhances rather than diminishes human dignity.
In his 2023 book, Floridi argues humans must not be treated as data points in AI-driven systems but must remain active, responsible, ethical agents in infospheres. We should not be required to cede our intellectual autonomy in our work. The position I've been developing as pushback in my own field is grounded in careful testing and evaluation of all AI, while advocating for strategies and a framework in which AI respects, supports, and enhances our work, so that technology serves libraries.
My perspective, while nascent and evolving, is grounded in Floridi's work on the philosophy of information. To preserve our values as librarians, AI tools such as ChatGPT must function, if at all, as assistants, supporting academic inquiry without overshadowing the critical thinking and interpretative skills that define our scholarly work and our roles within the scholarly enterprise.
Section V: Broader ontological, epistemological and philosophical discussions and papers 2025
Abstract: ...Artificial Intelligence (AI) technologies are revolutionising key sectors such as healthcare, finance, and governance, while raising ethical challenges, including algorithmic bias, privacy violations, and environmental sustainability. Dominant Western ethical paradigms, such as Luciano Floridi’s Information Ethics, emphasise procedural integrity and transparency but often lack spiritual and metaphysical grounding prevalent in global traditions. Most approaches to Islamic ethics for AI have employed Maqasid al-Shariah (objectives of Islamic law) and Qawaid Fiqhiyya (legal maxims), that apply legal principles to ethical questions but face limitations in addressing the complexities of emerging technologies especially that they, initially, were developed to address problems in Islamic law and not in Ethics. This paper introduces the prospects of Taha Abdurrahman’s I’timāni (trusteeship) framework as a unified ethical model for AI technologies. Rooted in the concept of divine trust (amana), the framework integrates three foundational covenants—Ontological, Epistemological, and Existential—offering a comprehensive vision of human responsibility toward God, knowledge, and creation..."
Blackwell, A.F. (2024). Moral codes: Designing alternatives to AI. The MIT Press.
Note: Please use your critical reading skills while reading entries. No warranties, implied or actual, are granted for any health or medical search or AI information obtained while using these pages. Check with your librarian for more contextual, accurate information.