With the rapid integration of Generative AI in education, understanding students' ethical perspectives is crucial for effective AI ethics education. Using a questionnaire, we investigated 594 middle school students' levels of agreement with five AI ethical principles (beneficence, non-maleficence, justice, autonomy, explicability) adapted from previous research, along with the rationales underlying their choices. Results showed that students expressed the highest agreement with “beneficence” and “autonomy,” though overall responses leaned toward neutrality. Independent AI use and family discussions predicted higher agreement; urban-rural differences were non-significant. Qualitative analysis identified themes in students' ethical reasoning. These findings offer evidence-based guidance for adolescent AI ethics education.
Artificial Intelligence (AI) has received unprecedented attention in recent years, raising ethical concerns about the development and use of AI technology. In the present article, we argue that these concerns stem from a blurred understanding of what AI is, how it can be used, and how it has been interpreted in society. We explore the concept of AI based on three descriptive facets and consider ethical issues related to each facet. Finally, we propose a framework for the ethical assessment of the use of AI.
Harshini Sri Ramulu, Helen Schmitt, Bogdan Rerich, et al.
Ethical questions are discussed regularly in computer security. Still, researchers in computer security lack clear guidance on how to make, document, and assess ethical decisions in research when what is morally right or acceptable is not clear-cut. In this work, we give an overview of the discussion of ethical implications in current published work in computer security by reviewing all 1154 top-tier security papers published in 2024. We find inconsistent levels of ethics reporting, with a strong focus on reporting institutional or ethics board approval, human subjects protection, and responsible disclosure, and little discussion of balancing harms and benefits. We further report on the results of a semi-structured interview study with 24 computer security and privacy researchers (among whom were also reviewers, ethics committee members, and/or program chairs) about their ethical decision-making both as authors and during peer review, finding a strong desire for ethical research but a lack of consistency in the values considered, the ethical frameworks applied (if articulated at all), decision-making, and outcomes. We present an overview of the current state of the discussion of ethics and current de facto standards in computer security research, and contribute suggestions for improving the state of ethics in computer security research.
W. Russell Neuman, Chad Coleman, Ali Dasdan, et al.
As generative AI models become increasingly integrated into high-stakes domains, the need for robust methods to evaluate their ethical reasoning becomes increasingly important. This paper introduces a five-dimensional audit model -- assessing Analytic Quality, Breadth of Ethical Considerations, Depth of Explanation, Consistency, and Decisiveness -- to evaluate the ethical logic of leading large language models (LLMs). Drawing on traditions from applied ethics and higher-order thinking, we present a multi-battery prompt approach, including novel ethical dilemmas, to probe the models' reasoning across diverse contexts. We benchmark seven major LLMs, finding that while models generally converge on ethical decisions, they vary in explanatory rigor and moral prioritization. Chain-of-Thought prompting and reasoning-optimized models significantly enhance performance on our audit metrics. This study introduces a scalable methodology for ethical benchmarking of AI systems and highlights the potential for AI to complement human moral reasoning in complex decision-making contexts.
Sonja Rattay, Ville Vakkuri, Marco Rozendaal, et al.
A plethora of toolkits, checklists, and workshops have been developed to bridge the well-documented gap between AI ethics principles and practice. Yet little is known about the effects of such interventions on practitioners. We conducted an ethnographic investigation in a major European city organization that developed, and works to integrate, an ethics toolkit into city operations. We find that the integration of ethics tools by technical teams destabilises their boundaries, roles, and mandates around responsibilities and decisions. This led to emotional discomfort and feelings of vulnerability, which neither the toolkit designers nor the organization had accounted for. We leverage the concept of moral stress to argue that this affective experience is a core challenge to the successful integration of ethics tools in technical practice. Even in this best-case scenario, organisational structures were not able to deal with the moral stress that resulted from attempts to implement responsible technology development practices.
While Medical Large Language Models (MedLLMs) have demonstrated remarkable potential in clinical tasks, their ethical safety remains insufficiently explored. This paper introduces MedEthicsQA, a comprehensive benchmark comprising 5,623 multiple-choice questions and 5,351 open-ended questions for the evaluation of medical ethics in LLMs. We systematically establish a hierarchical taxonomy integrating global medical ethical standards. The benchmark encompasses widely used medical datasets, authoritative question banks, and scenarios derived from PubMed literature. Rigorous quality control involving multi-stage filtering and multi-faceted expert validation ensures the reliability of the dataset, with a low error rate (2.72%). Evaluation shows that state-of-the-art MedLLMs perform worse on medical ethics questions than their foundation counterparts, elucidating deficiencies in medical ethics alignment. The dataset, released under the CC BY-NC 4.0 license, is available at https://github.com/JianhuiWei7/MedEthicsQA.
The rapid evolution of conversational artificial intelligence (AI) has sparked an ongoing debate regarding its ability to replicate, or even experience, human emotions. While early conversational chatbots such as Joseph Weizenbaum’s ELIZA (1966) relied on simple pattern recognition to create the illusion of understanding, modern AI systems like ChatGPT generate highly sophisticated, contextually appropriate responses that can convincingly mimic emotional engagement. This paper draws upon cinematic reflections, such as Spike Jonze’s Her (2013), to offer a critical examination of the question of whether AI is capable of genuine emotional experience or merely simulates such experiences through advanced language modelling. Utilising a theoretical framework grounded in philosophy, psychology and communication studies, this research critically assesses AI’s capacity for emotional experience, positing that while chatbots may convincingly simulate human emotional expression, they lack the subjective element that is integral to genuine emotional experience. This distinction has profound implications today for human-AI interaction, ethics, and our understanding of artificial intelligence’s humanity in contemporary society.
Background: In the translation of traditional Chinese medicine (TCM), it is crucial to preserve the authenticity of its philosophical, historical, and linguistic characteristics. This study aims to conduct a comprehensive analysis of translation strategies for adapting TCM terminology to foreign cultural contexts, based on the classical text The Emperor’s Canon of Eighty-One Difficult Issues. Methods: A comparative analysis was performed to evaluate the effectiveness of foreignization and domestication strategies in translating TCM terminology into Russian, using an experimental study involving 84 respondents (42 in each group). The Student’s t-test was employed to assess medical accuracy, equivalence, pragmatic value, terminological precision, and cultural specificity. Results: The results revealed a statistically significant advantage of the domestication strategy across most metrics, particularly in equivalence (p = 0.001) and pragmatic value (p = 0.008), where domestication achieved higher mean scores (4.12 and 4.03) than foreignization (3.48 and 3.55). It was thus established that domestication facilitates better adaptation of complex Chinese medical concepts for target audiences while maintaining sufficient medical accuracy. This is supported by higher scores in overcoming cultural barriers (48.8% versus 22.0%) and ensuring terminological precision (3.88 versus 3.41), making it a more effective strategy for translating medical terminology from Chinese into Russian. Conclusions: The practical significance of this study lies in determining the effectiveness of translation approaches for TCM terminology into Russian through experimental research. These findings can be applied in the work of medical translators, the development of educational materials on Chinese medicine, and the creation of methodological guidelines for medical text translation.
Consequently, the results hold the potential to improve the quality of educational materials on TCM and enhance intercultural medical communication.
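The group comparison reported above rests on Student's two-sample t-test. As a minimal sketch of the underlying computation (in Python, using hypothetical 5-point ratings rather than the study's actual data), the pooled-variance t-statistic can be computed with the standard library alone:

```python
from math import sqrt
from statistics import mean, variance

def students_t(a, b):
    """Pooled-variance two-sample t-statistic (Student's t-test).

    Assumes equal population variances, as the classical test does.
    Returns (t, degrees_of_freedom); the p-value is then read off a
    t-distribution with that many degrees of freedom.
    """
    na, nb = len(a), len(b)
    # Pooled estimate of the common variance across both samples.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical equivalence ratings from two respondent groups.
domestication = [5, 4, 4, 5, 3, 4]
foreignization = [3, 4, 3, 3, 4, 3]
t, df = students_t(domestication, foreignization)
```

A large positive t here would favor domestication, mirroring the direction of the study's reported means (4.12 versus 3.48).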
Abdoul Jalil Djiberou Mahamadou, Aloysius Ochasi, Russ B. Altman
Data are essential in developing healthcare artificial intelligence (AI) systems. However, patient data collection, access, and use raise ethical concerns, including informed consent, data bias, data protection and privacy, data ownership, and benefit sharing. Various ethical frameworks have been proposed to ensure the ethical use of healthcare data and AI; however, these frameworks often align with Western cultural values, social norms, and institutional contexts that emphasize individual autonomy and well-being. Ethical guidelines must reflect political and cultural settings to account for cultural diversity, inclusivity, and historical factors such as colonialism. Thus, this paper discusses healthcare data ethics in the AI era in Africa from the perspective of the Ubuntu philosophy. It focuses on the contrast between individualistic and communitarian approaches to data ethics. The proposed framework could inform stakeholders, including AI developers, healthcare providers, the public, and policy-makers, about the ethical usage of healthcare data in AI in Africa.
Social norms are standards of behaviour common in a society. However, when agents make decisions without considering how others are impacted, norms can emerge that lead to the subjugation of certain agents. We present RAWL-E, a method to create ethical norm-learning agents. RAWL-E agents operationalise maximin, a fairness principle from Rawlsian ethics, in their decision-making processes to promote ethical norms by balancing societal well-being with individual goals. We evaluate RAWL-E agents in simulated harvesting scenarios. We find that norms emerging in RAWL-E agent societies enhance social welfare, fairness, and robustness, and yield higher minimum experience compared to those that emerge in agent societies that do not implement Rawlsian ethics.
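The maximin principle that RAWL-E operationalises selects the action whose worst-off agent fares best. A minimal sketch of that decision rule (the action names and utility numbers are illustrative, not taken from the paper):

```python
def maximin_action(outcomes):
    """Pick the action maximizing the minimum agent utility (Rawlsian maximin).

    `outcomes` maps each candidate action to the list of per-agent
    utilities it would produce.
    """
    return max(outcomes, key=lambda action: min(outcomes[action]))

# Illustrative harvesting choices: each list holds the utilities of
# three agents under that action.
outcomes = {
    "harvest_all":   [9, 1, 1],   # sum 11, but the worst-off agent gets 1
    "share_harvest": [4, 3, 3],   # sum 10, higher floor of 3
}
best = maximin_action(outcomes)   # "share_harvest", since 3 > 1
```

A total-utility maximizer would pick "harvest_all" (sum 11 over 10), whereas maximin prefers the action with the higher floor, which is how such agents balance societal well-being with individual goals.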
David Gray Widder, Laura Dabbish, James Herbsleb, et al.
Past work has sought to design AI ethics interventions--such as checklists or toolkits--to help practitioners design more ethical AI systems. However, other work demonstrates how these interventions may instead serve to limit critique to what is addressed within the intervention, while rendering broader concerns illegitimate. In this paper, drawing on work examining how standards enact discursive closure and how power relations affect whether and how people raise critique, we recruit three corporate teams and one activist team, each with prior context working with one another, to play a game designed to trigger broad discussion around AI ethics. We use this as a point of contrast to trigger reflection on their teams' past discussions, examining factors which may affect their "license to critique" in AI ethics discussions. We then report on how particular affordances of this game may influence discussion, and find that the hypothetical context created in the game is unlikely to be a viable mechanism for real-world change. We discuss how power dynamics within a group and notions of "scope" affect whether people may be willing to raise critique in AI ethics discussions, and find that while games are unlikely to enable direct changes to products or practice, they may allow members to find critically-aligned allies for future collective action.
AI systems may have transformative and long-term effects on individuals and society. To manage these impacts responsibly and direct the development of AI systems toward optimal public benefit, considerations of AI ethics and governance must be a first priority. In this workbook, we introduce and describe our PBG Framework, a multi-tiered governance model that enables project teams to integrate ethical values and practical principles into their innovation practices and to have clear mechanisms for demonstrating and documenting this.
This study focuses on the ethics of Artificial Intelligence and its application in the United States. The paper highlights the impact AI has on every sector of the US economy and multiple facets of the technological space, and the resultant effect on entities spanning businesses, government, academia, and civil society. Ethical considerations are needed as these entities begin to depend on AI to deliver crucial tasks that immensely influence their operations, decision-making, and interactions with one another. The adoption of ethical principles, guidelines, and standards of work is therefore required throughout the entire process of AI development, deployment, and usage to ensure responsible and ethical AI practices. Our discussion explores eleven fundamental 'ethical principles' structured as overarching themes: Transparency; Justice, Fairness, and Equity; Non-Maleficence; Responsibility and Accountability; Privacy; Beneficence; Freedom and Autonomy; Trust; Dignity; Sustainability; and Solidarity. These principles collectively serve as a guiding framework, directing the ethical path for the responsible development, deployment, and utilization of artificial intelligence (AI) technologies across diverse sectors and entities within the United States. The paper also discusses the revolutionary impact of AI applications, such as Machine Learning, and explores various approaches used to implement AI ethics. This examination is crucial to address the growing concerns surrounding the inherent risks associated with the widespread use of artificial intelligence.
The lack of established rules and regulations in cyberspace is attributed to the absence of agreed-upon ethical principles, making it difficult to establish accountability, regulations, and laws. Addressing this challenge requires examining cyberspace from fundamental philosophical principles. This work focuses on the ethics of using defensive deception in cyberspace, proposing a doctrine of cyber effect that incorporates five ethical principles: goodwill, deontology, no-harm, transparency, and fairness. To guide the design of defensive cyber deception, we develop a reasoning framework, the game of ethical duplicity, which is consistent with the doctrine. While originally intended for cyber deception, this doctrine has broader applicability, including for ethical issues such as AI accountability and controversies related to YouTube recommendations. By establishing ethical principles, we can promote greater accountability, regulation, and protection in the digital realm.
Marcela Plascencia-Cruz, Arturo Plascencia-Hernández, Yaxsier De Armas-Rodríguez, et al.
The prevalence of colonization by <i>Pneumocystis jirovecii</i> (<i>P. jirovecii</i>) has not been studied in Mexico. We aimed to determine the prevalence of colonization by <i>P. jirovecii</i> using molecular detection in a population of Mexican patients with chronic obstructive pulmonary disease (COPD) and describe their clinical and sociodemographic profiles. We enrolled patients discharged from our hospital diagnosed with COPD and without pneumonia (<i>n</i> = 15). The primary outcome of this study was <i>P. jirovecii</i> colonization at the time of discharge, as detected by nested polymerase chain reaction (PCR) of oropharyngeal wash samples. The calculated prevalence of colonization for our study group was 26.66%. There were no statistically significant differences between COPD patients with and without colonization in our groups. Colonization of <i>P. jirovecii</i> in patients with COPD is frequent in the Mexican population; the clinical significance, if any, remains to be determined. Oropharyngeal wash and nested PCR are excellent cost-effective options to simplify sample collection and detection in developing countries and can be used for further studies.
The Czech Bar Association published a text which has the words “code of ethics” in its title. The aim of this paper is to determine whether the norms contained in the code are actually related to ethics or whether they concern different fields. The paper first explains the raison d’être of codes of ethics in general and briefly introduces the Czech Bar Association and the origin of its code of ethics. The principal section of the paper is dedicated to a detailed analysis of the text of the Czech Bar Association’s code of ethics applying a method used in England for similar purposes by Donald Nicolson. The analysis shows that the Czech Bar Association’s code of ethics deals with ethical issues only to a lesser extent and that it contains numerous provisions which do not deal with ethics at all. The paper proposes to remedy this unsuitable state by creating two separate codes. The first would primarily regulate ethically relevant situations in legal practice. The other code would contain “other” rules of the profession.
Artificial intelligence is the science of empowering machines to perform actions similar to human activities. In other words, artificial intelligence is a science and a set of computer technologies designed to think, reason, and imitate human behavior. It is a new technology that has influenced various aspects of human life, from the economy to health and employment. Activists in the field of artificial intelligence regularly point to the capabilities of this technology: in their view, its development and expansion is a powerful tool for addressing human problems and dilemmas. Rising temperatures, declining biodiversity, deforestation, floods, droughts, air pollution, and accumulating waste are among the environmental problems that plague humanity and demand immediate, effective solutions. Resorting to artificial intelligence and its capabilities in environmental care has therefore been proposed as a scientific and technical response to these environmental challenges. The potential of artificial intelligence for environmental protection includes agricultural management, measuring greenhouse gas emissions, managing and monitoring the optimization of energy consumption, recycling waste, and strengthening and optimizing public transportation systems.

On the other hand, the processes of designing, producing, supplying, and using artificial intelligence have been associated with various challenges, such as high energy consumption, extensive use of rare metals, destruction of mineral resources, increased waste production, and environmental pollution. Given the growing tendency to resort to artificial intelligence, these problems have cast serious doubt on the capabilities of the technology, leading environmental activists to raise the question of whether it will provide a toolbox for a sustainable future for humans.

Concerns regarding the performance of artificial intelligence, alongside widespread global support for the technology, prompted the world community to respond to these doubts by regularizing the processes of research, development, production, and supply of artificial intelligence. One such attempt is the First Draft of the Recommendation on the Ethics of Artificial Intelligence, prepared in September 2020 by the United Nations Educational, Scientific and Cultural Organization (UNESCO). This draft, organized into eight sections and prepared with the efforts of UNESCO's international experts with the aim of creating an international framework for the ethical and legal issues related to artificial intelligence systems, was approved at the 41st session of UNESCO's General Conference, held in November 2021, with the votes of the organization's 193 member countries. The document is not binding, but it is significant as the first international instrument that specifically addresses the ethical norms and human rights implications of artificial intelligence. Its drafters articulate four human values: first, respecting, encouraging, and ensuring the basic principles of human rights; second, protecting the environment; third, protecting biodiversity; and fourth, living in peace and reconciliation. The draft demands that all actors in the field of artificial intelligence participate in these activities and adhere to principles such as proportionality, safety, fairness, responsibility, and accountability.

When examining the draft text, however, it appears in some cases to contain ambiguities and defects, especially in its environmental discussions. These defects lead to several questions, such as: “Has UNESCO's ethical draft been able to address the challenges in the environment sector and to provide effective regulations and solutions?” and “Considering the important and ever-increasing role of private companies active in the production and supply of artificial intelligence systems, have the authors of the draft succeeded in attributing responsibility, specifying methods of compensation for environmental damages, and securing commitment to observe the precautionary principle?” This article works on these subjects, questions, and ambiguities with an analytical-descriptive method.
Introduction/Main Objectives: The proposed study is based on the results of quantitative research and an analysis of the theory and practice of leadership. The study's main objective is to determine the essential traits of a leader for effective interaction with team members. Background Problems: Most research on this topic chose a leader's traits based on analyzing literary sources rather than on empirical research. Novelty: The traits for the most effective collaboration between leader and team members were chosen by potential and actual members of a leader's team, namely students and teachers of the university. Research Methods: We conducted a questionnaire survey of 103 teachers and 421 students of Bogomolets National Medical University (Kyiv). The statistical analysis was carried out using the Wald test. Findings/Results: The research confirmed that both respondent categories acknowledged the importance of all leadership traits. At the same time, traits such as passion, effectiveness, self-confidence, determination, and the ability to take risks appeared to be more significant for the students than for the teachers. The teachers ranked the trait of decency higher than the students did. Issues such as the importance of organizational culture, ethical aspects of leadership, and the most effective leadership style for productive interaction with team members were also examined. Conclusion: This study proposes complex recommendations for creating the most productive model of interaction between a leader and team members based on the data obtained.