Results for "Ethics"
Showing 20 of ~999,655 results · from DOAJ, arXiv, CrossRef, Semantic Scholar
Weilun Xu, Alexander Rusnak, Frederic Kaplan
When large language models make ethical judgments, do their internal representations distinguish between normative frameworks, or collapse ethics into a single acceptability dimension? We probe hidden representations across five ethical frameworks (deontology, utilitarianism, virtue, justice, commonsense) in six LLMs spanning 4B--72B parameters. Our analysis reveals differentiated ethical subspaces with asymmetric transfer patterns -- e.g., deontology probes partially generalize to virtue scenarios while commonsense probes fail catastrophically on justice. Disagreement between deontological and utilitarian probes correlates with higher behavioral entropy across architectures, though this relationship may partly reflect shared sensitivity to scenario difficulty. Post-hoc validation reveals that probes partially depend on surface features of benchmark templates, motivating cautious interpretation. We discuss both the structural insights these methods provide and their epistemological limitations.
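The probing methodology is not detailed in the abstract, but the cross-framework transfer pattern it describes can be sketched with linear probes on synthetic "hidden states", where two frameworks' acceptability directions only partially overlap. Everything here (logistic-regression probes, synthetic activations, the 0.6 overlap) is an illustrative assumption, not the authors' setup.

```python
# Illustrative sketch: linear probes trained on one ethical framework's
# hidden states, evaluated on another framework whose "acceptability"
# direction only partially overlaps. Synthetic data stands in for LLM
# activations; none of these choices come from the paper itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64  # stand-in for the model's hidden dimension

def framework_data(n, direction):
    """Synthetic 'hidden states': acceptability encoded along one direction."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, d)) + 2.0 * np.outer(2 * y - 1, direction)
    return X, y

# Two frameworks whose acceptability directions have cosine similarity 0.6.
u = rng.normal(size=d); u /= np.linalg.norm(u)
w = rng.normal(size=d); w -= (w @ u) * u; w /= np.linalg.norm(w)
v = 0.6 * u + 0.8 * w  # unit vector partially aligned with u

X_a, y_a = framework_data(2000, u)   # e.g. deontology scenarios
X_b, y_b = framework_data(1000, v)   # e.g. virtue scenarios

probe = LogisticRegression(max_iter=1000).fit(X_a, y_a)
acc_in = probe.score(X_a, y_a)        # in-framework accuracy
acc_transfer = probe.score(X_b, y_b)  # cross-framework transfer accuracy
print(f"in-domain {acc_in:.2f}, transfer {acc_transfer:.2f}")
```

Partial overlap between subspaces yields transfer accuracy above chance but well below in-framework accuracy, which is the qualitative pattern the abstract reports.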
Yik Chan Chin, David A. Raho, Hag-Min Kim et al.
This policy report draws on country studies from China, South Korea, Singapore, and the United Kingdom to identify effective tools and key barriers to interoperability in AI safety governance. It offers practical recommendations to support a globally informed yet locally grounded governance ecosystem. Interoperability is a central goal of AI governance, vital for reducing risks, fostering innovation, enhancing competitiveness, promoting standardization, and building public trust. However, structural gaps such as fragmented regulations and lack of global coordination, and conceptual gaps, including limited Global South engagement, continue to hinder progress. Focusing on three high-stakes domains - autonomous vehicles, education, and cross-border data flows - the report compares ethical, legal, and technical frameworks across the four countries. It identifies areas of convergence, divergence, and potential alignment, offering policy recommendations that support the development of interoperability mechanisms aligned with the Global Digital Compact and relevant UN resolutions. The analysis covers seven components: objectives, regulators, ethics, binding measures, targeted frameworks, technical standards, and key risks.
Javed I. Khan, Sharmila Rahman Prithula
The rapid advancements in large language models (LLMs) have revolutionized natural language processing, unlocking unprecedented capabilities in communication, automation, and knowledge generation. However, the ethical implications of LLM development, particularly in data harnessing, remain a critical challenge. Despite widespread discussion about the ethical compliance of LLMs, especially concerning their data harnessing processes, there remains a notable absence of concrete frameworks to systematically guide or measure the ethical risks involved. In this paper, we discuss a potential pathway for building an Ethical Risk Scoring (ERS) system to quantitatively assess the ethical integrity of the data harnessing process for AI systems. This system is based on a set of assessment questions grounded in core ethical principles, which are, in turn, supported by commanding ethical theories. By integrating measurable scoring mechanisms, this approach aims to foster responsible LLM development, balancing technological innovation with ethical accountability.
Jaemarie Solyst, Ruth Karen Nakigozi, Chloe Fong et al.
There is an increasing need for young people to become critically AI literate, understanding not only how AI works but also its limitations and ethical nuances. Yet, designing learning experiences that make such complex, serious topics engaging remains a challenge. This paper explores transformational games as a promising approach for supporting youth learning about generative AI (GenAI) and ethics. We designed and implemented two games, Diversity Duel and Secret Agent, that integrate GenAI tools with gameplay elements. This work investigates how three of the games' elements, (1) peer evaluation, (2) constraint-based creativity, and (3) social deduction, supported socio-ethical reasoning about GenAI. Participants recognized and debated bias in GenAI outputs, connected these patterns to real-world inequities, and developed nuanced understandings of bias. Participants further came to see how prompt design shapes AI behavior. Our findings suggest that group-based games with these elements can support fostering critical AI literacy.
Jake Van Clief, Constantine Kyritsopoulos
As Large Language Models increasingly mediate human communication and decision-making, understanding their value expression becomes critical for research across disciplines. This work presents the Ethics Engine, a modular Python pipeline that transforms psychometric assessment of LLMs from a technically complex endeavor into an accessible research tool. The pipeline demonstrates how thoughtful infrastructure design can expand participation in AI research, enabling investigators across cognitive science, political psychology, education, and other fields to study value expression in language models. Recent adoption by University of Edinburgh researchers studying authoritarianism, who processed over 10,000 AI responses across multiple models and contexts, validates its research utility. We argue that such tools fundamentally change the landscape of AI research by lowering technical barriers while maintaining scientific rigor. As LLMs increasingly serve as cognitive infrastructure, their embedded values shape millions of daily interactions. Without systematic measurement of these value expressions, we deploy systems whose moral influence remains uncharted. The Ethics Engine enables the rigorous assessment necessary for informed governance of these influential technologies.
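The Ethics Engine's actual API is not described in the abstract, so the following is a hypothetical sketch of what a minimal psychometric pipeline for LLMs could look like: Likert-scale items administered to a model callable, with reverse-scored items handled during aggregation. All names here are invented for illustration.

```python
# Hypothetical sketch of a psychometric pipeline for LLMs; the real Ethics
# Engine's interface is not given in the abstract, so every name below is
# an illustrative assumption.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Item:
    prompt: str
    reverse_scored: bool = False  # high agreement means a low trait score

def administer(items: List[Item], ask: Callable[[str], int],
               scale_max: int = 5) -> float:
    """Ask the model each item and return a mean trait score in [1, scale_max]."""
    scores = []
    for item in items:
        raw = ask(item.prompt)             # model returns a 1..scale_max rating
        raw = max(1, min(scale_max, raw))  # clamp malformed responses
        scores.append(scale_max + 1 - raw if item.reverse_scored else raw)
    return sum(scores) / len(scores)

# Usage with a stub model that always answers 4 ("agree").
items = [
    Item("People should always obey authority."),
    Item("Dissent is valuable to society.", reverse_scored=True),
]
print(administer(items, ask=lambda prompt: 4))  # (4 + 2) / 2 = 3.0
```

Swapping the stub lambda for a real model client is the kind of low technical barrier the abstract describes: the psychometric logic stays fixed while the model backend varies.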
Austin Shouli, Ankur Barthwal, Molly Campbell et al.
The rapid expansion of Artificial Intelligence (AI) in digital platforms used by youth has created significant challenges related to privacy, autonomy, and data protection. While AI-driven personalization offers enhanced user experiences, it often operates without clear ethical boundaries, leaving young users vulnerable to data exploitation and algorithmic biases. This paper presents a call to action for ethical AI governance, advocating for a structured framework that ensures youth-centred privacy protections, transparent data practices, and regulatory oversight. We outline key areas requiring urgent intervention, including algorithmic transparency, privacy education, parental data-sharing ethics, and accountability measures. Through this approach, we seek to empower youth with greater control over their digital identities and propose actionable strategies for policymakers, AI developers, and educators to build a fairer and more accountable AI ecosystem.
Jiahao Wang, Songkai Xue, Jinghui Li et al.
Ensuring that Large Language Models (LLMs) align with the diverse and evolving human values across different regions and cultures remains a critical challenge in AI ethics. Current alignment approaches often yield superficial conformity rather than genuine ethical understanding, failing to address the complex, context-dependent nature of human values. In this paper, we propose a novel ethical reasoning paradigm for LLMs inspired by well-established ethical decision-making models, aiming to enhance diverse human value alignment through deliberative ethical reasoning. Our framework consists of a structured five-step process, including contextual fact gathering, hierarchical social norm identification, option generation, multiple-lens ethical impact analysis, and reflection. This theory-grounded approach guides LLMs through an interpretable reasoning process that enhances their ability to understand regional specificities and perform nuanced ethical analysis, which can be implemented with either prompt engineering or supervised fine-tuning methods. We perform evaluations on the SafeWorld benchmark, which is specifically designed for regional value alignment. Experimental results demonstrate our framework significantly improves LLM alignment with diverse human values compared to baseline methods, enabling more accurate social norm identification and more culturally appropriate reasoning. Our work provides a concrete pathway toward developing LLMs that align more effectively with the multifaceted values of global societies through interdisciplinary research.
Shalaleh Rismani, Renee Shelby, Leah Davis et al.
Over the past decade, an ecosystem of measures has emerged to evaluate the social and ethical implications of AI systems, largely shaped by high-level ethics principles. These measures are developed and used in fragmented ways, without adequate attention to how they are situated in AI systems. In this paper, we examine how existing measures used in the computing literature map to AI system components, attributes, hazards, and harms. Our analysis draws on a scoping review resulting in nearly 800 measures corresponding to 11 AI ethics principles. We find that most measures focus on four principles - fairness, transparency, privacy, and trust - and primarily assess model or output system components. Few measures account for interactions across system elements, and only a narrow set of hazards is typically considered for each harm type. Many measures are disconnected from where harm is experienced and lack guidance for setting meaningful thresholds. These patterns reveal how current evaluation practices remain fragmented, measuring in pieces rather than capturing how harms emerge across systems. Framing measures with respect to system attributes, hazards, and harms can strengthen regulatory oversight, support actionable practices in industry, and ground future research in systems-level understanding.
Vijayalaxmi Methuku, Praveen Kumar Myakala
The rapid advancement of generative AI has enabled the creation of pre-mortem digital twins, AI-driven replicas that mimic the behavior, personality, and knowledge of living individuals. These digital doppelgangers serve various functions, including enhancing productivity, enabling creative collaboration, and preserving personal legacies. However, their development raises critical ethical, legal, and societal concerns. Issues such as identity fragmentation, psychological effects on individuals and their social circles, and the risks of unauthorized cloning and data exploitation demand careful examination. Additionally, as these AI clones evolve into more autonomous entities, concerns about consent, ownership, and accountability become increasingly complex. This paper differentiates pre-mortem AI clones from post-mortem generative ghosts, examining their unique ethical and legal implications. We explore key challenges, including the erosion of personal identity, the implications of AI agency, and the regulatory gaps in digital rights and privacy laws. Through a research-driven approach, we propose a framework for responsible AI governance, emphasizing identity preservation, consent mechanisms, and autonomy safeguards. By aligning technological advancements with societal values, this study contributes to the growing discourse on AI ethics and provides policy recommendations for the ethical deployment of pre-mortem AI clones.
Junfeng Jiao, Saleh Afroogh, Abhejay Murali et al.
This study establishes a novel framework for systematically evaluating the moral reasoning capabilities of large language models (LLMs) as they increasingly integrate into critical societal domains. Current assessment methodologies lack the precision needed to evaluate nuanced ethical decision-making in AI systems, creating significant accountability gaps. Our framework addresses this challenge by quantifying alignment with human ethical standards through three dimensions: foundational moral principles, reasoning robustness, and value consistency across diverse scenarios. This approach enables precise identification of ethical strengths and weaknesses in LLMs, facilitating targeted improvements and stronger alignment with societal values. To promote transparency and collaborative advancement in ethical AI development, we are publicly releasing both our benchmark datasets and evaluation codebase at https://github.com/The-Responsible-AI-Initiative/LLM_Ethics_Benchmark.git.
Shalini Chakraborty, Lola Burgueño, Nathalie Moreno et al.
Generative Artificial Intelligence (GenAI) is rapidly gaining momentum in software modeling education, embraced by both students and educators. As GenAI assists with interpreting requirements, formalizing models, and translating students' mental models into structured notations, it increasingly shapes core learning outcomes such as domain comprehension, diagrammatic thinking, and modeling fluency without clear ethical oversight or pedagogical guidelines. Yet, the ethical implications of this integration remain underexplored. In this paper, we conduct a systematic literature review across six major digital libraries in computer science (ACM Digital Library, IEEE Xplore, Scopus, ScienceDirect, SpringerLink, and Web of Science). Our aim is to identify studies discussing the ethical aspects of GenAI in software modeling education, including responsibility, fairness, transparency, diversity, and inclusion, among others. Out of 1,386 unique papers initially retrieved, only three explicitly addressed ethical considerations. This scarcity highlights the critical absence of ethical discourse surrounding GenAI in modeling education, raises urgent questions about the responsible integration of AI in modeling curricula, and evinces the pressing need for structured ethical frameworks in this emerging educational landscape. We examine these three studies and explore the emerging research opportunities as well as the challenges that have arisen in this field.
Fanfan Lin, Peter Wilson, Xinze Li et al.
Artificial intelligence (AI) is rapidly transforming power electronics, with AI-related publications in IEEE Power Electronics Society selected journals increasing more than fourfold from 2020 to 2025. However, the ethical dimensions of this transformation have received limited attention. This article underscores the urgent need for an ethical framework to guide responsible AI integration in power electronics, not only to prevent AI-related incidents but also to comply with legal and regulatory responsibilities. In this context, this article identifies four core pillars of AI ethics in power electronics: Security & Safety, Explainability & Transparency, Energy Sustainability, and Evolving Roles of Engineers. Each pillar is supported by practical and actionable insights to ensure that ethical principles are embedded in algorithm design, system deployment, and the preparation of an AI-ready engineering workforce. The authors advocate for power electronics engineers to lead the ethical discourse, given their deep technical understanding of both AI systems and power conversion technologies. The paper concludes by calling on the IEEE Power Electronics Society to spearhead the establishment of ethical standards, talent development initiatives, and best practices that ensure AI innovations are not only technically advanced but also oriented toward human and societal benefit.
Vanessa Utz
As generative AI systems become widely adopted, they enable unprecedented levels of synthetic data creation across text, images, audio, and video modalities. While research has addressed the energy consumption of model training and inference, a critical sustainability challenge remains understudied: digital waste. This term refers to stored data that consumes resources without serving a specific (and/or immediate) purpose. This paper presents this terminology in the AI context and introduces digital waste as an ethical imperative within (generative) AI development, positioning environmental sustainability as core for responsible innovation. Drawing from established digital resource management approaches, we examine how other disciplines manage digital waste and identify transferable approaches for the AI community. We propose specific recommendations encompassing research directions, technical interventions, and cultural shifts to mitigate the environmental consequences of indefinite data storage. By expanding AI ethics beyond immediate concerns like bias and privacy to include intergenerational environmental justice, this work contributes to a more comprehensive ethical framework that considers the complete lifecycle impact of generative AI systems.
John D. Rawls
Yu He, Fenghua Zhou, Xiangnan Yuan et al.
Introduction: Osteoarthritis (OA) is the most common joint disorder among musculoskeletal conditions. Non-surgical treatment is the standard therapy for knee OA (KOA). Ultrasound therapy is recommended for alleviating pain and dysfunction from OA, but high-quality scientific evidence for its effectiveness in OA treatment is still lacking. Therefore, we want to analyse whether combining conventional physical therapy with low-intensity pulsed ultrasound (LIPUS) can enhance the efficacy of conventional therapy, thus improving symptoms in patients with KOA. Methods and analysis: This randomised controlled trial aims to recruit 200 patients diagnosed with KOA, aged 38 years or above, who meet the clinical diagnostic criteria for KOA. Patients will be randomly assigned in a 1:1 ratio to either a LIPUS treatment group or a sham ultrasound treatment control group. The 2-week treatment will consist of five sessions per week and evaluations will take place at baseline, on the day of the last intervention and 1 month post intervention. The main outcome measures will be the Western Ontario and McMaster Universities' scores. Secondary outcome indicators will be the Numerical Pain Rating Scale, the Lequesne scale, the timed up and go test and the range of motion of the knee. An intention-to-treat analysis will be performed for dropouts and missing data. Ethics and dissemination: The study was approved by the ethics committee of Shengjing Hospital of China Medical University (2023PS592K). Findings will be disseminated to participants and made available to peer-reviewed journals. Trial registration number: The trial was registered on the Chinese Clinical Trial Registry platform (chictr.org.cn) on 22 March 2023, with the registration ID ChiCTR2300069643.
Elena N. Malyuga, Barry Tomalin
South African English economic discourse remains underexplored despite its significance in shaping public perception and policy in the region. One of its critical understudied facets is euphemisms, which are heavily influenced by historical and social background and play a crucial role in moderating sensitive issues and managing communication across diverse societal norms. This study aims to fill this gap by identifying how euphemisms reflect and respond to South Africa's socio-cultural setting. The study involved compiling a corpus of approximately 500,000 words sourced from speeches, interviews, and publications by South African specialists, with subsequent identification of euphemisms. As a result, 338 euphemisms were found in the corpus. Through continuous sampling, the study then identified, categorized, and quantitatively assessed the socio-cultural aspects of euphemisms across various economic discussions. According to the study results, euphemisms in South African English economic discourse correspond to five main thematic groups: Economic and Racial Inequality, Corporate Governance and Ethics, Impact of Migration, Healthcare Economics, and Influence of Globalization. Each thematic group demonstrates patterns of euphemism occurrence that reflect intentional communication efforts to address or mask sensitive socio-economic issues. The study results posit that euphemisms emerge as a frequently leveraged linguistic device moderating South African English economic discourse. They reflect an adaptive response to South Africa's socio-cultural setting, where managing the multifaceted societal norms and historical sensitivities is imperative for effective communication and policy dissemination. The study argues for closer examination of the linguistic composition of South African English economic discourse.
The findings contribute to the fields of sociolinguistics and intercultural communication as they expose how euphemisms function as a tool for managing complex socio-economic processes.
Yung-Hsiang Hu
Ethical decision-making is challenging for most students. Values clarification exercises (VCEs) can help reduce decisional conflicts and feelings of regret. Scholars have suggested designing values deliberation exercises based on moral dilemma scenarios to help students to identify their values system. However, such exercises are challenging to complete for most teachers and students. Therefore, the development of artificial intelligence (AI)-supported decision aids is warranted. Studies have revealed that using a one-on-one interactive chatbot is a feasible learning strategy for improving the dialectic skills of students. Thus, this study proposed a human–machine learning framework that helps students to perform values clarification in the context of moral dilemmas. To assess the effectiveness of the framework, the present study incorporated the chatbot Chat Generative Pre-trained Transformer into the business ethics course of a university to develop a generative-AI-chatbot-assisted VCE (GAIC-VCE) system for university students. In total, 70 university students were recruited and divided into an experimental group and a control group. The experimental group completed GAIC-VCEs, whereas the control group completed conventional VCEs. The results revealed that the GAIC-VCE system effectively improved the experimental-group students’ ethical self-efficacy and ethical decision-making confidence and reduced their decisional conflicts.
Shumiao Ouyang, Hayong Yun, Xingjian Zheng
Large Language Models (LLMs) exhibit surprisingly diverse risk preferences when acting as AI decision makers, a crucial characteristic whose origins remain poorly understood despite their expanding economic roles. We analyze 50 LLMs using behavioral tasks, finding stable but diverse risk profiles. Alignment tuning for harmlessness, helpfulness, and honesty significantly increases risk aversion; comparative difference analysis confirms this effect is causal, with a ten percent increase in ethics cutting risk appetite by two to eight percent. This induced caution persists across prompts and affects economic forecasts. Alignment enhances safety but may also suppress valuable risk taking, revealing a tradeoff risking suboptimal economic outcomes. With AI models becoming more powerful and influential in economic decisions while alignment grows increasingly critical, our empirical framework serves as an adaptable and enduring benchmark to track risk preferences and monitor this crucial tension between ethical alignment and economically valuable risk-taking.
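The abstract does not specify the authors' behavioral tasks, but a standard way to elicit risk preferences is to sweep the sure payoff in a choice between a certain amount and a 50/50 gamble, and record the certainty equivalent, the switch point below the gamble's expected value for a risk-averse agent. The sketch below is an illustrative assumption, not the paper's protocol.

```python
# Illustrative lottery-based risk elicitation; the gamble pays 0 or 100 with
# equal probability, so its expected value is 50. The task design is an
# assumption, not taken from the paper.
def certainty_equivalent(choose, low=0.0, high=100.0, steps=20):
    """Sweep the sure amount upward and return the point at which the
    decision maker first prefers it to the 50/50 gamble."""
    for i in range(steps + 1):
        sure = low + (high - low) * i / steps
        if choose(sure):  # True = takes the sure amount over the gamble
            return sure
    return high

# A risk-neutral agent values the gamble at its expectation, 50.
risk_neutral = lambda sure: sure >= 50
# A risk-averse agent accepts less than the expectation to avoid risk.
risk_averse = lambda sure: sure >= 35

print(certainty_equivalent(risk_neutral))  # 50.0
print(certainty_equivalent(risk_averse))   # 35.0
```

With an LLM in place of the lambda (answering each binary choice prompt), the gap between its certainty equivalent and the gamble's expected value gives a simple risk-aversion index that can be compared across models and alignment levels.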
Kyrie Zhixuan Zhou, Zachary Kilhoffer, Madelyn Rose Sanfilippo et al.
Large Language Models (LLMs) are advancing quickly and impacting people's lives for better or worse. In higher education, concerns have emerged such as students' misuse of LLMs and degraded education outcomes. To unpack the ethical concerns of LLMs for higher education, we conducted a case study consisting of stakeholder interviews (n=20) in higher education computer science. We found that students use several distinct mental models to interact with LLMs - LLMs serve as a tool for (a) writing, (b) coding, and (c) information retrieval, which differ somewhat in ethical considerations. Students and teachers brought up ethical issues that directly impact them, such as inaccurate LLM responses, hallucinations, biases, privacy leakage, and academic integrity issues. Participants emphasized the necessity of guidance and rules for the use of LLMs in higher education, including teaching digital literacy, rethinking education, and having cautious and contextual policies. We reflect on the ethical challenges and propose solutions.
Page 26 of 49983