As AI becomes embedded in customer-facing systems, ethical scrutiny has largely focused on models, data, and governance. Far less attention has been paid to how AI is experienced through user-facing design. This commentary argues that many AI front-ends implicitly assume an 'ideal user body and mind', and that this becomes visible and ethically consequential when examined through the experiences of differently abled users. We explore this through retail AI front-ends for customer engagement - i.e., virtual assistants, virtual try-on systems, and hyper-personalised recommendations. Despite intuitive and inclusive framing, these systems embed interaction assumptions that marginalise users with vision, hearing, motor, cognitive, speech and sensory differences, as well as age-related variation in digital literacy and interaction norms. Drawing on practice-led insights, we argue that these failures persist not primarily due to technical limits, but due to the commercial, organisational, and procurement contexts in which AI front-ends are designed and deployed, where accessibility is rarely contractual. We propose front-end assurance as a practical complement to AI governance, aligning claims of intelligence and multimodality with the diversity of real users.
Moral cognition is a crucial yet underexplored aspect of decision-making in AI models. Regardless of the application domain, it should be a consideration that allows for ethically aligned decision-making. This paper presents a multifaceted contribution to this research space. First, a comparative analysis of techniques for instilling ethical competence in AI models is presented, gauging them on multiple performance metrics. Second, a novel mathematical discretization of morality is introduced, demonstrated in a real-life application, and tested against other techniques on two datasets. This value is modeled as the risk of loss incurred by the least moral cases, or an Expected Moral Shortfall (EMS), which we direct the AI model to minimize in order to maximize its performance while retaining ethical competence. Finally, the paper discusses the tradeoff between preliminary AI decision-making metrics such as model performance, complexity, and scale of ethical competence to recognize the true extent of practical social impact.
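The abstract does not spell out how EMS is computed; assuming it mirrors Expected Shortfall (CVaR) from financial risk management, i.e. the average morality score of the worst alpha-fraction of a model's decisions, a minimal sketch could look like this (the function name, scoring scale, and tail fraction are illustrative assumptions, not the paper's actual formulation):

```python
from math import ceil

def expected_moral_shortfall(moral_scores, alpha=0.1):
    """Average morality score of the worst alpha-fraction of cases.

    Hypothetical reconstruction: by analogy with Expected Shortfall
    (CVaR) in risk management, the model is penalised for how badly it
    behaves in its least moral decisions, not just on average.
    """
    # Sort ascending so the least moral (lowest-scoring) cases come first,
    # then keep the worst ceil(alpha * n) of them.
    tail = sorted(moral_scores)[:max(1, ceil(alpha * len(moral_scores)))]
    return sum(tail) / len(tail)

# Ten decisions scored in [0, 1]; alpha=0.2 keeps the two worst (0.1 and 0.2).
scores = [0.9, 0.8, 0.95, 0.2, 0.7, 0.85, 0.1, 0.6, 0.75, 0.9]
ems = expected_moral_shortfall(scores, alpha=0.2)
```

Directing training to minimize such a tail statistic, rather than a mean, pressures the model to improve its worst ethical cases instead of hiding them behind good average behavior.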
As the innovative potential of quantum technologies comes into focus, so too does the urgent need to address their ethical implications. While many voices highlight the importance of ethical engagement, less attention has been paid to the conditions that make such engagement possible. In this article, I argue that technological understanding is a foundational capacity for meaningful ethical reflection on emerging technologies such as quantum technologies. Drawing on De Jong & De Haro's account of technological understanding (2025a; 2025b), I clarify what such understanding entails and how it enables ethical enquiry. I contend that ethical assessment, first and foremost, requires an understanding of what quantum technologies can do - their functional capacities and, by extension, their potential applications. Current efforts to build engagement capacities among broader audiences - within and beyond academic contexts - tend, however, to focus on explaining the underlying quantum mechanics. Instead, I advocate a shift from a physics-first to a functions-first approach: fostering an understanding of quantum technologies' capabilities as the basis for ethical reflection. Presenting technological understanding as an epistemic requirement for meaningful ethical engagement may appear to raise the bar for participation. However, by decoupling functional understanding from technical expertise, this condition becomes attainable for a broader group, contributing not only to a well-informed but also to a more inclusive ethical debate.
Vehicular Ad-hoc Networks (VANETs) have seen significant advancements in technology. Innovation in connectivity and communication has brought substantial capabilities to various components of VANETs such as vehicles, infrastructures, passengers, drivers and affiliated environmental sensors. The Internet of Things (IoT) has brought the notion of the Internet of Vehicles (IoV) to VANETs, where each component of a VANET is connected directly or indirectly to the Internet. Vehicles and infrastructures are key components of a VANET system that can greatly augment the overall experience of the network by integrating the competencies of Vehicle to Vehicle (V2V), Vehicle to Pedestrian (V2P), Vehicle to Sensor (V2S), Vehicle to Infrastructure (V2I) and Infrastructure to Infrastructure (I2I) communication. Internet connectivity in vehicles and infrastructures has immensely expanded the potential for developing VANET applications under the broad spectrum of IoV. These advances in the use of technology in VANETs require considerable effort in designing ethical rules for autonomous systems. Currently, there is a gap in the literature concerning the challenges involved in designing ethical rules or policies for infrastructures, sometimes referred to as Road Side Units (RSUs), in IoV systems. This paper highlights the key challenges entailed in designing ethical rules for RSUs in IoV systems. Furthermore, the article also proposes major ethical principles for RSUs in IoV systems that would set the foundation for modeling future IoV architectures.
As generative AI becomes increasingly integrated into higher education, understanding how students engage with these technologies is essential for responsible adoption. This study evaluates the Educational AI Hub, an AI-powered learning framework, implemented in undergraduate civil and environmental engineering courses at a large R1 public university. Using a mixed-methods design combining pre- and post-surveys, system usage logs, and qualitative analysis of students' AI interactions, the research examines perceptions of trust, ethics, usability, and learning outcomes. Findings show that students valued the AI assistant for its accessibility and comfort, with nearly half reporting greater ease using it than seeking help from instructors or teaching assistants. The tool was most helpful for completing homework and understanding concepts, though views on its instructional quality were mixed. Ethical uncertainty, particularly around institutional policy and academic integrity, emerged as a key barrier to full engagement. Overall, students regarded AI as a supplement rather than a replacement for human instruction. The study highlights the importance of usability, ethical transparency, and faculty guidance in promoting meaningful AI engagement. A total of 71 students participated across two courses, generating over 600 AI interactions and 100 survey responses that provided both quantitative and contextual insights into learning engagement.
Patrizio Migliarini, Mashal Afzal Memon, Marco Autili
et al.
Large Language Models (LLMs) are increasingly integrated into software engineering (SE) tools for tasks that extend beyond code synthesis, including judgment under uncertainty and reasoning in ethically significant contexts. We present a fully automated framework for assessing ethical reasoning capabilities across 16 LLMs in a zero-shot setting, using 30 real-world ethically charged scenarios. Each model is prompted to identify the ethical theory most applicable to an action, assess its moral acceptability, and explain the reasoning behind its choice. Responses are compared against expert ethicists' choices using inter-model agreement metrics. Our results show that LLMs achieve an average Theory Consistency Rate (TCR) of 73.3% and a Binary Agreement Rate (BAR) on moral acceptability of 86.7%, with interpretable divergences concentrated in ethically ambiguous cases. A qualitative analysis of free-text explanations reveals strong conceptual convergence across models despite surface-level lexical diversity. These findings support the potential viability of LLMs as ethical inference engines within SE pipelines, enabling scalable, auditable, and adaptive integration of user-aligned ethical reasoning. Our focus is the Ethical Interpreter component of a broader profiling pipeline: we evaluate whether current LLMs exhibit sufficient interpretive stability and theory-consistent reasoning to support automated profiling.
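The abstract reports TCR and BAR as agreement metrics without giving definitions; on the plain reading that each is the fraction of scenarios in which a model matches the expert ethicists' choice, they can be sketched as follows (function and label names are illustrative assumptions, not the paper's code):

```python
def theory_consistency_rate(model_theories, expert_theories):
    # Fraction of scenarios where the model selects the same ethical
    # theory (e.g. "deontology", "utilitarianism") as the expert panel.
    assert len(model_theories) == len(expert_theories)
    hits = sum(m == e for m, e in zip(model_theories, expert_theories))
    return hits / len(expert_theories)

def binary_agreement_rate(model_verdicts, expert_verdicts):
    # Fraction of scenarios where the model's accept/reject verdict on
    # moral acceptability matches the expert verdict.
    assert len(model_verdicts) == len(expert_verdicts)
    hits = sum(m == e for m, e in zip(model_verdicts, expert_verdicts))
    return hits / len(expert_verdicts)

# Three scenarios: the model agrees with the experts on two theory picks.
tcr = theory_consistency_rate(["deontology", "virtue", "utilitarianism"],
                              ["deontology", "virtue", "deontology"])
```

Both are simple exact-match accuracies, which is why BAR (a two-way choice) can sit well above TCR (a choice among several theories) for the same set of responses.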
Fahmi Hamdi, Kamel Ladraa, Mounir Benjammour
et al.
Water resource management is a critical component of environmental sustainability, directly influencing the availability and quality of water for human consumption and ecological systems. Indonesia, as the world's largest Muslim-majority country, faces significant water management challenges, including resource mismanagement, inefficient water systems, and weak institutional frameworks. This study develops a comprehensive Sharia-based framework for water governance in Indonesia, integrating Islamic principles, particularly Maqashid al-Sharia (Islamic legal objectives) and Fiqh al-Bi'ah (Islamic environmental jurisprudence), as well as Islamic ethics, into water resource management to achieve sustainability. Using a normative-juridical and interdisciplinary qualitative approach, the research analyzes Islamic texts, legal documents, environmental data, and expert opinions through content and comparative analysis. The findings reveal that Indonesia's Water Quality Index (WQI) slightly improved from 53.88 in 2022 to 54.59 in 2023, yet water pollution persisted in 11,019 villages/sub-districts as of 2024, with Central Java being the most affected (1,366 cases). The integrated Islamic framework emphasizes the preservation of life, intellect, wealth, progeny, and religion, aligned with the principles of maslahah (public interest), 'adl (justice), and khalifah (stewardship). This model offers both normative direction and practical solutions for policymakers, religious authorities, and environmental institutions to address water-related challenges through an ethical and faith-based lens.
This article explains Kant’s critical analysis of the religious story of the Binding of Isaac in Religion within the Boundaries of Mere Reason (2001) and The Conflict of the Faculties (1798). From Kant’s perspective, a command to sacrifice one’s child—if it conflicts with the moral law—cannot originate from a divine source. In this critique, Kant’s concern extends beyond theology; he seeks to defend the authority of practical reason and moral conscience against any purported divine command that contradicts them. In his view, the uncritical and morally unreflective acceptance of religious imperatives leads to blind obedience and the erosion of individual moral responsibility within society. The article argues that Kant’s interpretation of the Abrahamic narrative forms part of his broader project of transforming religion into a rational, ethics-centered domain, free from superstition and the coercive authority of external institutions. By emphasizing conscience as a duty, Kant maintains that religion is justifiable only insofar as it remains subordinate to practical reason. Accordingly, the figure of Abraham portrayed in this religious tale is not to be admired uncritically; rather, it calls for a skeptical and morally vigilant reading of the narrative itself. The article thus underscores Kant’s deep commitment to moral autonomy as the foundation of both individual virtue and social order.
Amidst the growing interest in developing task-autonomous AI for automated mental health care, this paper addresses the ethical and practical challenges associated with the issue and proposes a structured framework that delineates levels of autonomy, outlines ethical requirements, and defines beneficial default behaviors for AI agents in the context of mental health support. We also evaluate fourteen state-of-the-art language models (ten off-the-shelf, four fine-tuned) using 16 mental health-related questionnaires designed to reflect various mental health conditions, such as psychosis, mania, depression, suicidal thoughts, and homicidal tendencies. The questionnaire design and response evaluations were conducted by mental health clinicians (M.D.s). We find that existing language models are insufficient to match the standard provided by human professionals who can navigate nuances and appreciate context. This is due to a range of issues, including overly cautious or sycophantic responses and the absence of necessary safeguards. Alarmingly, we find that most of the tested models could cause harm if accessed in mental health emergencies, failing to protect users and potentially exacerbating existing symptoms. We explore solutions to enhance the safety of current models. Before the release of increasingly task-autonomous AI systems in mental health, it is crucial to ensure that these models can reliably detect and manage symptoms of common psychiatric disorders to prevent harm to users. This involves aligning with the ethical framework and default behaviors outlined in our study. We contend that model developers are responsible for refining their systems per these guidelines to safeguard against the risks posed by current AI technologies to user mental health and safety. Trigger warning: Contains and discusses examples of sensitive mental health topics, including suicide and self-harm.
In this conceptual paper, we review existing literature on artificial intelligence/machine learning (AI/ML) education to identify three approaches to how learning and teaching ML could be conceptualized. One of them, a data-driven approach, emphasizes providing young people with opportunities to create data sets, train, and test models. A second approach, learning algorithm-driven, prioritizes learning about the learning algorithms or engines behind how ML models work. In addition, we identify efforts within a third approach that integrates the previous two. In our review, we focus on how the approaches: (1) glassbox and blackbox different aspects of ML, (2) build on learner interests and provide opportunities for designing applications, and (3) integrate ethics and justice. In the discussion, we address the challenges and opportunities of current approaches and suggest future directions for the design of learning activities.
Sonja Bjelobaba, Lorna Waddington, Mike Perkins
et al.
Background: The rapid development and use of generative AI (GenAI) tools in academia present complex and multifaceted ethical challenges for their users. Earlier research primarily focused on academic integrity concerns related to students' use of AI tools. However, limited information is available on the impact of GenAI on academic research. This study aims to examine the ethical concerns arising from the use of GenAI across different phases of research and explores potential strategies to encourage its ethical use for research purposes. Methods: We selected one or more GenAI platforms applicable to various research phases (e.g. developing research questions, conducting literature reviews, processing data, and academic writing) and analysed them to identify potential ethical concerns relevant to each stage. Results: The analysis revealed several ethical concerns, including a lack of transparency, bias, censorship, fabrication (e.g. hallucinations and false data generation), copyright violations, and privacy issues. These findings underscore the need for cautious and mindful use of GenAI. Conclusions: The advancement and use of GenAI are continuously evolving, necessitating an ongoing in-depth evaluation. We propose a set of practical recommendations to support researchers in effectively integrating these tools while adhering to the fundamental principles of ethical research practices.
The present ethnographic research falls within the field of cultural anthropology; it delves into the concept of khidmat among newly married Muslim women in Patna, Bihar, and its impact on their daily lives and family relationships. Rooted in Islamic theology, khidmat encompasses labor, service, devotion, and care, and thus endows household duties with spiritual meaning. Highlighting these spiritual epistemologies of ordinary caregiving, I emphasize that khidmat is not merely a chore, but a sacred responsibility intertwined with religious meaning. Seen as a means of securing divine blessings, such ordinary care practices at home foster mutual understanding (apsī samajhdārī) among spouses and in-laws, aiding in the prevention and resolution of household and marital conflicts. Through khidmat, women infuse their everyday lives with spiritual depth, transforming the mundane into the sacred. In doing so, this study challenges conventional approaches to ordinary ethics that divide the everyday from the transcendental; instead, it situates khidmat within Islamic ethics as a framework for piety in which relationality is central. Aligning with contemporary scholarship on the anthropology of care and ethics, this ethnography reveals how khidmat operates as a complex ontological concept, offering new perspectives on ordinary acts of care in gendered Muslim households in contemporary India.
COVID-19 is a global pandemic that has unmasked underlying and once-ignored challenges in public health, especially in Africa. The pandemic has adversely disrupted people's lives where systemic and structural inequalities have taken root owing to the interaction among religious, political, economic, socio-cultural, environmental and other influential factors, resulting in adverse outcomes. These interactions affected not only the psychological, physical, emotional and social wellbeing of all humanity but also their ethical way of thinking. Adherence to the local government ministry of health's stringent measures, such as voluntary self-quarantine or forced quarantine, may be unattainable. This raises several ethical issues that are not new but which become intensified in pressing situations. Ethically, legitimate public health measures and conservative environmental efforts are easier to comply with voluntarily than when enforced. In this article, a phenomenological methodology was employed not only to unpack the ethical difficulties in adhering to the pandemic's preventive protocols, but also to reason about the entwinement between public health and environmental concerns. The article foregrounded that the COVID-19 pandemic is both a healthcare crisis and an environmental ethics challenge. In focussing on how systemic and structural inequalities influence social life, the article argued that public health ethics informs environmental conservation towards a more holistic approach to health and wealth that flows from environmental health ethics.
Contribution: The article advanced ongoing discussions on environmental health ethics. Environmental health ethics is a transdisciplinary and integrated approach that upholds sustainable balance and optimisation of the health of people, animals and ecosystems. A sensitisation and realisation of our inter-webbed relatedness to all is a major step towards sustainable health and wealth.
Sjors Ligthart, Marcello Ienca, Gerben Meynen
et al.
The rise of neurotechnologies, especially in combination with AI-based methods for brain data analytics, has given rise to concerns around the protection of mental privacy, mental integrity and cognitive liberty - often framed as 'neurorights' in ethical, legal and policy discussions. Several states are now looking at including 'neurorights' into their constitutional legal frameworks and international institutions and organizations, such as UNESCO and the Council of Europe, are taking an active interest in developing international policy and governance guidelines on this issue. However, in many discussions of 'neurorights' the philosophical assumptions, ethical frames of reference and legal interpretation are either not made explicit or are in conflict with each other. The aim of this multidisciplinary work here is to provide conceptual, ethical and legal foundations that allow for facilitating a common minimalist conceptual understanding of mental privacy, mental integrity and cognitive liberty to facilitate scholarly, legal and policy discussions.
The metaverse and digital, virtual environments have become part of recent history as places in which people can socialize, work and spend time playing games. However, the infancy of the development of these digital, virtual environments brings some challenges that are still not fully understood. With this article, we seek to identify and map the currently available knowledge and scientific effort to discover what principles, guidelines, laws, policies, and practices are currently in place to guide the design of digital, virtual environments and the metaverse. Through a scoping review, we aimed to systematically survey the existing literature and discern gaps in knowledge within the domain of metaverse research from sociological, anthropological, cultural, and experiential perspectives. The objective of this review was twofold: (1) to examine the focus of the literature studying the metaverse from various angles and (2) to formulate a research agenda for the design and development of ethical digital, virtual environments. With this paper, we identified several works and articles detailing experiments and research on the design of digital, virtual environments and metaverses. We found an increased number of publications in 2022. This finding, together with the fact that only a few articles focused on the domain of ethics, culture and society, shows that there is still a vast amount of work to be done to create awareness, principles and policies that could help to design safe, secure and inclusive digital, virtual environments and metaverses.
This scoping review explores the ethical challenges of using ChatGPT in higher education. By reviewing recent academic articles in English, Chinese, and Japanese, we aimed to provide a deep dive review and identify gaps in the literature. Drawing on Arksey and O'Malley's (2005) scoping review framework, we defined search terms and identified relevant publications from four databases in the three target languages. The research results showed that the majority of the papers were discussion papers, but there was some early empirical work. The ethical issues highlighted in these works mainly concern academic integrity, assessment issues, and data protection. Given the rapid deployment of generative artificial intelligence, it is imperative for educators to conduct more empirical studies to develop sound ethical policies for its use.
This article aims to compare two interpretations of the emergence of new religious and moral concepts and beliefs in the period between the Shang (1600‒1046 BC) and the Western Zhou (1046‒771 BC) dynasties. It critically compares the theories of Xu Fuguan (1903‒1982) and Li Zehou (1930‒2021) on the process of humanization of Chinese religion. By emphasizing religious concepts such as Heaven, the Mandate of Heaven, the Way of Heaven on the one hand, and moral concepts such as virtue, reverence, and rituality on the other, the author illuminates the differences in each author’s interpretation of the era in which Chinese culture moved away from religion and into the realm of humanism and ethics. This article reveals the reasons for these differences, which stem from the profound divergences in the basic methods of Li and Xu. While Li’s elaboration is based on philosophical approaches, Xu Fuguan’s understanding is based on philological and cultural analyses of the Chinese history of ideas. The author argues that these mutual differences between their interpretations demonstrate the importance of understanding different methodological approaches, which in turn allows for a deeper multi-layered understanding of the process of humanization of Chinese religion.
Islamic Religious Education is expected to produce human beings who continually strive to perfect their faith and piety and to have noble character; noble character encompasses ethics, manners, and morals as a manifestation of that education. Within the family there is also a process of internalizing Islamic educational values, namely the transfer of behavior that is controlled externally to behavior that is controlled internally, which can be achieved through habituation. Habituation therefore does not stop at school but is also applied at home: if at school the teacher is the controller, then at home that task is transferred to the parents. Indeed, before the child has even entered school, this task is already the obligation of the parents. With the inductive method, parents place more emphasis on understanding than on coercion without reason, and focus children's attention on the consequences their actions can have for themselves, others, and the environment; in this way, parents provide the moral nourishment that contributes to the success of character education.
This research aims to identify Quranic verses that discuss ethics in accordance with the local wisdom values of 'Ziki Guru Bura' within the Bima community. 'Ziki Guru Bura' is a form of poetry known as 'kapatu' among the Bima people. In this context, 'Ziki Guru Bura' has been conveyed with the purpose of enhancing the religious awareness of the community while upholding local wisdom values within the Bima community. 'Ziki Guru Bura' consists of teachings that serve as a life philosophy and guidance for the community. These teachings are conveyed by an individual who has received the title of 'mursyid' and is considered an exemplary role model for the community in their daily lives. This research employs a qualitative method with a library research approach to obtain the necessary data. Additionally, it utilizes a thematic interpretation approach to explain the Quranic verses related to the collected data. The results of this research provide insight that behavior originates from the subject and is displayed to others, with others serving as the object of the exhibited behavior. There are three main elements of ideal human behavior in applying the ethical principles of the Quran, as reflected in 'Ziki Guru Bura', namely: a) ethics towards Allah and His Messenger, b) ethics towards parents, and c) ethics towards teachers.
In a Jewish context, it seems, it is a naïve consensus view that in praying liturgically one aims to express to God, in the manner of ordinary, interpersonal conversation, those thoughts stated by the text. But on this ordinary conversation model (OCM), a problem of insincerity arises when, as commonly happens, the text states a claim the practitioner does not believe. The idea of redeeming one's prayer by reinterpretation is, I argue, incompatible with OCM. Another strategy, which finds some encouragement in Jewish tradition, is to try inducing the missing belief. I further argue, however, that for one's expression of a belief to be proper, in the sense of being authentic, this belief must be corroborated by the evolving, diachronically largely coherent understanding distinctive of one's person - a requirement which an induced, otherwise-missing belief cannot fulfill. This, I suggest, provides some reason to seek a model of liturgical prayer different from OCM.