Just Peace or Just War? Theological, Ethical and Technological Reflections on Armed Conflict
Nándor Birher, Avraham Weber, Nándor Péter Birher
et al.
Armed conflict management increasingly demands new normative and strategic frameworks that preserve human life while maintaining effective deterrence capabilities. This study develops a multidisciplinary framework for rethinking armed conflict through the concept of just peace, integrating theology, ethics, law, technology, and empirical communication analysis. The research analyzes 7957 YouTube videos from NATO, the United Nations, and the Vatican, published over two years, employing semantic network analysis, modularity-based community detection, and sentiment analysis to identify emerging discourse patterns around peace, technology, and regulatory complexity. The findings suggest that contemporary socio-technological conditions are increasingly framed in ways that open a discursive space for rethinking conflict management beyond exclusive reliance on large-scale lethal force. Positive messaging correlates with higher audience engagement, while concepts such as law, ethics, religion, and technical standards emerge as interconnected regulatory domains. The study concludes that just peace is not naïve pacifism but a strategic, normatively grounded reorientation in contemporary deterrence thinking. Effective implementation requires integrated regulatory frameworks combining legal norms, ethical principles, religious values, and technical standards. The evolving technological landscape may allow deterrence systems to move beyond exclusive reliance on lethal force toward more humane and efficient conflict-management mechanisms.
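The modularity-based community detection mentioned above rests on a single quantity: Newman's modularity Q, which scores how much denser a partition's within-community links are than chance. A minimal pure-Python sketch follows; the toy graph and community labels are illustrative assumptions, not the study's YouTube data:

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity Q of a partition: for each community c,
    (intra-community edges / m) - (total degree in c / 2m)^2."""
    m = len(edges)
    degree = defaultdict(int)
    intra = defaultdict(int)        # edges with both endpoints in c
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    comm_degree = defaultdict(int)  # summed node degree per community
    for node, d in degree.items():
        comm_degree[community[node]] += d
    return sum(intra[c] / m - (comm_degree[c] / (2 * m)) ** 2
               for c in comm_degree)

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
partition = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, partition), 4))  # → 0.3571
```

Community-detection algorithms search for the partition maximizing this score; putting all six nodes in one community here yields Q = 0, so the two-triangle split is preferred.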
Religions. Mythology. Rationalism
A Framework for Ethical Judgment of Smart City Applications
Weichen Shi
As modern cities increasingly adopt a variety of sensors and Internet of Things (IoT) technologies to collect and analyze data about residents, environments, and public services, they are fostering greater interactions among smart city applications, residents, governments, and businesses. This trend makes it essential for regulators to focus on these interactions to manage smart city practices effectively and prevent unethical outcomes. To facilitate ethical analysis for smart city applications, this paper introduces a judgment framework that examines various scenarios where ethical issues may arise. Employing a multi-agent approach, the framework incorporates diverse social entities and applies logic-based ethical rules to identify potential violations. Through a rights-based analysis, we developed a set of 13 ethical principles and rules to guide ethical practices in smart cities. We utilized two specification languages, Prototype Verification System (PVS) and Alloy, to model our multi-agent system. Our analysis suggests that Alloy may be more efficient for formalizing smart cities and conducting ethical rule checks, particularly with the assistance of a human evaluator. Simulations of a real-world smart city application demonstrate that our ethical judgment framework effectively detects unethical outcomes and can be extended for practical use.
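The paper's 13 principles and its PVS/Alloy models are not reproduced here, but the general pattern it describes — applying logic-based ethical rules over interactions among social entities to flag violations — can be sketched in Python. The rule, entity names, and scenario below are hypothetical illustrations, not the paper's actual rule set:

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One interaction in a smart-city scenario: an entity
    receives a resident's data for some purpose."""
    sender: str
    receiver: str
    receiver_kind: str   # e.g. "government" or "business"
    data: str
    consented: bool

def check(flows, rules):
    """Return every (rule name, flow) pair that a rule flags."""
    return [(name, f) for f in flows for name, rule in rules if rule(f)]

# A hypothetical rights-based rule: residents' data must not
# reach a business without consent.
rules = [("no-unconsented-business-use",
          lambda f: f.receiver_kind == "business" and not f.consented)]

flows = [
    DataFlow("resident", "city-hall", "government", "air-quality", True),
    DataFlow("resident", "ad-network", "business", "location", False),
]
for name, f in check(flows, rules):
    print(f"violation of {name}: {f.sender} -> {f.receiver}")
```

A specification language such as Alloy additionally searches all states up to a bound for counterexamples, rather than checking one concrete scenario as this sketch does.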
PenTest++: Elevating Ethical Hacking with AI and Automation
Haitham S. Al-Sinani, Chris J. Mitchell
Traditional ethical hacking relies on skilled professionals and time-intensive command management, which limits its scalability and efficiency. To address these challenges, we introduce PenTest++, an AI-augmented system that integrates automation with generative AI (GenAI) to optimise ethical hacking workflows. Developed in a controlled virtual environment, PenTest++ streamlines critical penetration testing tasks, including reconnaissance, scanning, enumeration, exploitation, and documentation, while maintaining a modular and adaptable design. The system balances automation with human oversight, ensuring informed decision-making at key stages, and offers significant benefits such as enhanced efficiency, scalability, and adaptability. However, it also raises ethical considerations, including privacy concerns and the risks of AI-generated inaccuracies (hallucinations). This research underscores the potential of AI-driven systems like PenTest++ to complement human expertise in cybersecurity by automating routine tasks, enabling professionals to focus on strategic decision-making. By incorporating robust ethical safeguards and promoting ongoing refinement, PenTest++ demonstrates how AI can be responsibly harnessed to address operational and ethical challenges in the evolving cybersecurity landscape.
The Convergent Ethics of AI? Analyzing Moral Foundation Priorities in Large Language Models with a Multi-Framework Approach
Chad Coleman, W. Russell Neuman, Ali Dasdan
et al.
As large language models (LLMs) are increasingly deployed in consequential decision-making contexts, systematically assessing their ethical reasoning capabilities becomes a critical imperative. This paper introduces the Priorities in Reasoning and Intrinsic Moral Evaluation (PRIME) framework--a comprehensive methodology for analyzing moral priorities across foundational ethical dimensions including consequentialist-deontological reasoning, moral foundations theory, and Kohlberg's developmental stages. We apply this framework to six leading LLMs through a dual-protocol approach combining direct questioning and response analysis to established ethical dilemmas. Our analysis reveals striking patterns of convergence: all evaluated models demonstrate strong prioritization of care/harm and fairness/cheating foundations while consistently underweighting authority, loyalty, and sanctity dimensions. Through detailed examination of confidence metrics, response reluctance patterns, and reasoning consistency, we establish that contemporary LLMs (1) produce decisive ethical judgments, (2) demonstrate notable cross-model alignment in moral decision-making, and (3) generally correspond with empirically established human moral preferences. This research contributes a scalable, extensible methodology for ethical benchmarking while highlighting both the promising capabilities and systematic limitations in current AI moral reasoning architectures--insights critical for responsible development as these systems assume increasingly significant societal roles.
Information-Theoretic Aggregation of Ethical Attributes in Simulated-Command
Taylan Akay, Harrison Tolley, Hussein Abbass
In the age of AI, human commanders need to use the computational power available in today's environment to simulate a very large number of scenarios. Within each scenario, situations occur where different decision design options could have ethical consequences. Making these decisions reliant on human judgement is both counter-productive to the aim of exploring a very large number of scenarios in a timely manner and infeasible given the workload needed to involve humans in each of these choices. In this paper, we move human judgement outside the simulation decision cycle: the human designs the ethical metric space, leaving the simulated environment to explore that space. When the simulation completes its testing cycles, the testing environment returns to the human commander with a few options to select from. The human commander then exercises human judgement to select the most appropriate course of action, which is executed accordingly. We assume that the problem of designing metrics sufficiently granular to assess the ethical implications of decisions is solved. The fundamental problem we address in this paper is therefore how to weight ethical decisions during the running of these simulations; that is, how to dynamically weight the ethical attributes when agents face decision options with ethical implications during generative simulations. The multi-criteria decision-making literature has begun to examine nearby problems, where the concept of entropy has been used to determine weights during aggregation. We draw from that literature different approaches to automatically calculating the weights for ethical attributes during simulation-based testing and evaluation.
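The entropy-based weighting that the abstract draws from the multi-criteria decision-making literature can be sketched as follows. Each criterion column of a decision matrix is normalised into a probability distribution; criteria whose scores vary more across options (lower Shannon entropy) carry more information and receive higher weight. The decision matrix below is a made-up example, not data from the paper:

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: normalise each criterion column into a
    probability distribution, compute its Shannon entropy, and weight
    criteria by their divergence (1 - entropy)."""
    m, n = len(matrix), len(matrix[0])
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    p = [[row[j] / col_sums[j] for j in range(n)] for row in matrix]
    k = 1.0 / math.log(m)   # scales entropy into [0, 1]
    entropy = [-k * sum(p[i][j] * math.log(p[i][j])
                        for i in range(m) if p[i][j] > 0)
               for j in range(n)]
    divergence = [1.0 - e for e in entropy]
    total = sum(divergence)
    return [d / total for d in divergence]

# Rows: candidate courses of action; columns: two ethical attributes
# (hypothetical scores). The first attribute is identical everywhere,
# so it carries no information and gets weight ~0.
scores = [[0.2, 5.0],
          [0.2, 1.0],
          [0.2, 3.0]]
print(entropy_weights(scores))  # first weight ~0, second ~1
```

Because the weights are computed from the simulated outcomes themselves, they can be recomputed on each batch of scenarios, which is what makes the aggregation dynamic rather than fixed by a human up front.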
Beyond Technocratic XAI: The Who, What & How in Explanation Design
Ruchira Dhar, Stephanie Brandl, Ninell Oldenburg
et al.
The field of Explainable AI (XAI) offers a wide range of techniques for making complex models interpretable. Yet, in practice, generating meaningful explanations is a context-dependent task that requires intentional design choices to ensure accessibility and transparency. This paper reframes explanation as a situated design process -- an approach particularly relevant for practitioners involved in building and deploying explainable systems. Drawing on prior research and principles from design thinking, we propose a three-part framework for explanation design in XAI: asking Who needs the explanation, What they need explained, and How that explanation should be delivered. We also emphasize the need for ethical considerations, including risks of epistemic inequality, reinforcing social inequities, and obscuring accountability and governance. By treating explanation as a sociotechnical design process, this framework encourages a context-aware approach to XAI that supports effective communication and the development of ethically responsible explanations.
The Ethical Implications of AI in Creative Industries: A Focus on AI-Generated Art
Prerana Khatiwada, Joshua Washington, Tyler Walsh
et al.
As Artificial Intelligence (AI) continues to advance, new and sometimes controversial technologies emerge almost daily, and public skepticism grows alongside them. This paper explores the complications and confusion surrounding the ethics of generative AI art. Stepping back from the excitement, we examine the difficult conundrums this impressive technology produces, covering environmental consequences, celebrity representation, intellectual property, deep fakes, and artist displacement. Our research found that generative AI art contributes to increased carbon emissions, the spread of misinformation, copyright infringement, unlawful depiction, and job displacement. In light of this, we propose multiple possible solutions for these problems, addressing each issue's history, causes, and consequences and offering different viewpoints. At the root of it all, though, the central theme is that generative AI art needs to be properly legislated and regulated.
O chamado pentecostal e o espírito do capitalismo: desejo e ética do trabalho de Deus em Gana = The pentecostal calling and the spirit of capitalism: desire and religious labor ethics in Ghana = La llamada pentecostal y el espíritu del capitalismo: el deseo y la ética de la obra de Dios en Ghana
Reinhardt, Bruno Mafra Ney
In this article, I examine the relationship between value and virtue, utility and conviction, in global Pentecostalism through an economic-theological analysis of the calling to ministry. Max Weber's classic assessment of Beruf (calling, vocation, profession) was concerned with the transvaluation of religious asceticism into a secular economic ethic at the dawn of modernity. Today we observe the inverse process: the transvaluation of the third spirit of capitalism into an ethic of God's work that aspires to be virtuous, eschatological, and professional. Drawing on ethnographic research with the seminary of a transnational denomination in Ghana, I show how this theological encompassment of the economic is driven by a specific notion of calling: fluid, demystified, and associated with a pedagogy of desire.
Social sciences (General)
Informed AI Regulation: Comparing the Ethical Frameworks of Leading LLM Chatbots Using an Ethics-Based Audit to Assess Moral Reasoning and Normative Values
Jon Chun, Katherine Elkins
With the rise of individual and collaborative networks of autonomous agents, AI is deployed in more key reasoning and decision-making roles. For this reason, ethics-based audits play a pivotal role in the rapidly growing fields of AI safety and regulation. This paper undertakes an ethics-based audit of eight leading commercial and open-source large language models, including GPT-4. We assess explicability and trustworthiness by (a) establishing how well different models engage in moral reasoning and (b) comparing the normative values underlying the models as ethical frameworks. We employ an experimental, evidence-based approach that challenges the models with ethical dilemmas in order to probe human-AI alignment. The ethical scenarios are designed to require a decision in which the particulars of the situation may or may not necessitate deviating from normative ethical principles. A sophisticated ethical framework was consistently elicited in one model, GPT-4. Nonetheless, troubling findings include underlying normative frameworks with clear bias towards particular cultural norms. Many models also exhibit disturbing authoritarian tendencies. Code is available at https://github.com/jonchun/llm-sota-chatbots-ethics-based-audit.
Business and ethical concerns in domestic Conversational Generative AI-empowered multi-robot systems
Rebekah Rousi, Hooman Samani, Niko Mäkitalo
et al.
Business and technology are intricately connected through logic and design. They are equally sensitive to societal changes and may be devastated by scandal. Cooperative multi-robot systems (MRSs) are on the rise, allowing robots of different types and brands to work together in diverse contexts. Generative artificial intelligence has been a dominant topic in recent artificial intelligence (AI) discussions due to its capacity to mimic humans through the use of natural language and the production of media, including deep fakes. In this article, we focus specifically on the conversational aspects of generative AI, and hence use the term Conversational Generative artificial intelligence (CGI). Like MRSs, CGIs have enormous potential for revolutionizing processes across sectors and transforming the way humans conduct business. From a business perspective, cooperative MRSs alone, with potential conflicts of interest, privacy practices, and safety concerns, require ethical examination. MRSs empowered by CGIs demand multi-dimensional and sophisticated methods to uncover imminent ethical pitfalls. This study focuses on ethics in CGI-empowered MRSs while reporting the stages of developing the MORUL model.
Authorship and the Politics and Ethics of LLM Watermarks
Tim Räz
Recently, watermarking schemes for large language models (LLMs) have been proposed to distinguish text generated by machines and by humans. The present paper explores philosophical, political, and ethical ramifications of implementing and using watermarking schemes. A definition of authorship that includes both machines (LLMs) and humans is proposed to serve as a backdrop. It is argued that private watermarks may provide private companies with sweeping rights to determine authorship, which is incompatible with traditional standards of authorship determination. Then, possible ramifications of the so-called entropy dependence of watermarking mechanisms are explored. It is argued that entropy may vary for different, socially salient groups. This could lead to group dependent rates at which machine generated text is detected. Specifically, groups more interested in low entropy text may face the challenge that it is harder to detect machine generated text that is of interest to them.
Introduction to AI Safety, Ethics, and Society
Dan Hendrycks
Artificial Intelligence is rapidly embedding itself within militaries, economies, and societies, reshaping their very foundations. Given the depth and breadth of its consequences, it has never been more pressing to understand how to ensure that AI systems are safe, ethical, and have a positive societal impact. This book aims to provide a comprehensive approach to understanding AI risk. Our primary goals include consolidating fragmented knowledge on AI risk, increasing the precision of core ideas, and reducing barriers to entry by making content simpler and more comprehensible. The book has been designed to be accessible to readers from diverse backgrounds. You do not need to have studied AI, philosophy, or other such topics. The content is skimmable and somewhat modular, so that you can choose which chapters to read. We introduce mathematical formulas in a few places to specify claims more precisely, but readers should be able to understand the main points without these.
How Far Artificial Intelligence influenced Mu'allim, Murabbi, and Mudarris? Transhumanism and Diffusion of Innovation Theory's Perspective
Titis Thoriquttyas, Nita Rohmawati
The rapid growth of Artificial Intelligence (AI) has begun to change many elements of education, including Islamic religious education. Traditionally, Mu'allim, Murabbi, and Mudarris have played important roles in teaching, ethics, and spiritual guidance. However, little is known about the impact of AI on the shifting existence of these terminologies. This research delves into the impact of AI on the responsibilities and perspectives of Mu'allim, Murabbi, and Mudarris, especially through the lens of transhumanism and the diffusion of innovation theory. This study uses qualitative methods to examine data, exploring the variations in AI integration in Islamic education across diverse cultural and regional backgrounds while also recognizing the specific best practices and challenges in each setting. The research seeks to close the gap in understanding between contemporary technology and Islamic educational customs by examining the evolving responsibilities of educators in the age of artificial intelligence. The findings raise significant issues regarding the preservation of human-centered values in religious education while also highlighting the potential of AI to improve educational methods. This study bridges the gap between traditional educational philosophies and technology breakthroughs, contributing to the increasing body of literature on AI in education and providing insightful information for Islamic studies researchers.
Reviewing the Fatwa of Digital Da'wah in Indonesia Based on the Paradigm of Contemporary Islamic Law
Athoillah Islamy, Muhammad Abduh, Eko Siswanto
et al.
The abundance of ideologically charged digital da'wah content often creates integration problems amid the plurality of social life, not only among Muslims themselves but also in social relations with other religious communities. This polemic underscores the importance of paradigms and ethics for digital da'wah, regulated by both state norms and religious norms. In this regard, this qualitative study, using a normative-philosophical approach, aims to identify the paradigm of contemporary maqasid sharia in the fatwa of the Indonesian Ulema Council of East Java Province No. 06 of 2022 on Digital Da'wah Ethics. The maqasid sharia theory developed by Jasser Auda was used as the analytical framework, and reduction, presentation, and verification techniques were used in data analysis. The results reveal the development of maqasid sharia values as the basis for the paradigm and ethics of digital da'wah in the fatwa's provisions: the value of hifz 'ird, reflected in the importance of humanist and pluralist da'wah content, and the value of hifz waton, reflected in the importance of da'wah content that prioritises public conduciveness and nationalism. The findings also point to the importance of developing maqasid fiqh for preachers amid the plurality of social, religious, and state life. However, this study has not examined the effectiveness of the fatwa in practice, which merits further research.
Beyond Fairness: Alternative Moral Dimensions for Assessing Algorithms and Designing Systems
Kimi Wenzel, Geoff Kaufman, Laura Dabbish
The ethics of artificial intelligence (AI) systems has risen as an imminent concern across scholarly communities. This concern has propagated a great interest in algorithmic fairness. Large research agendas are now devoted to increasing algorithmic fairness, assessing algorithmic fairness, and understanding human perceptions of fairness. We argue that there is an overreliance on fairness as a single dimension of morality, which comes at the expense of other important human values. Drawing from moral psychology, we present five moral dimensions that go beyond fairness, and suggest three ways these alternative dimensions may contribute to ethical AI development.
A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics
Kai He, Rui Mao, Qika Lin
et al.
The utilization of large language models (LLMs) in the healthcare domain has generated both excitement and concern due to their ability to respond effectively to free-text queries with a degree of professional knowledge. This survey outlines the capabilities of currently developed LLMs for healthcare and explicates their development process, with the aim of providing an overview of the development roadmap from traditional pretrained language models (PLMs) to LLMs. Specifically, we first explore the potential of LLMs to enhance the efficiency and effectiveness of various healthcare applications, highlighting both strengths and limitations. Secondly, we compare the previous PLMs with the latest LLMs, as well as various LLMs with each other. We then summarize related healthcare training data, training methods, optimization strategies, and usage. Finally, we investigate the unique concerns associated with deploying LLMs in healthcare settings, particularly regarding fairness, accountability, transparency, and ethics. Our survey provides a comprehensive investigation from the perspectives of both computer science and the healthcare specialty. Beyond the discussion of healthcare concerns, we support the computer science community by compiling a collection of open-source resources on GitHub, including accessible datasets, the latest methodologies, code implementations, and evaluation benchmarks. In summary, we contend that a significant paradigm shift is underway, transitioning from PLMs to LLMs. This shift encompasses a move from discriminative AI approaches to generative AI approaches, as well as from model-centered methodologies to data-centered methodologies. We also conclude that the biggest obstacles to using LLMs in healthcare are fairness, accountability, transparency, and ethics.
Ethical ChatGPT: Concerns, Challenges, and Commandments
Jianlong Zhou, Heimo Müller, Andreas Holzinger
et al.
Large language models such as ChatGPT are currently contributing enormously to the popularity of artificial intelligence, especially among the general population. However, such chatbot models were developed as tools to support natural language communication between humans. Problematically, a model of this kind is very much a "statistical correlation machine" (correlation instead of causality), and there are indeed ethical concerns associated with the use of AI language models such as ChatGPT, including bias, privacy, and abuse. This paper highlights specific ethical concerns about ChatGPT and articulates key challenges when ChatGPT is used in various applications. Practical commandments for different stakeholders of ChatGPT are also proposed, which can serve as checklist guidelines for those applying ChatGPT in their applications. These commandments are expected to motivate the ethical use of ChatGPT.
A method for the ethical analysis of brain-inspired AI
Michele Farisco, Gianluca Baldassarre, Emilio Cartoni
et al.
Despite its successes, to date Artificial Intelligence (AI) is still characterized by a number of shortcomings with regard to different application domains and goals. These limitations are arguably both conceptual (e.g., related to underlying theoretical models, such as symbolic vs. connectionist) and operational (e.g., related to robustness and the ability to generalize). Biologically inspired AI, and more specifically brain-inspired AI, promises to provide further biological aspects beyond those already traditionally included in AI, making it possible to assess and possibly overcome some of its present shortcomings. This article examines some conceptual, technical, and ethical issues raised by the development and use of brain-inspired AI. Against this background, the paper asks whether there is anything ethically unique about brain-inspired AI. The aim of the paper is to introduce a method of a heuristic nature that can be applied to identify and address the ethical issues arising from brain-inspired AI. The conclusion resulting from the application of this method is that, compared to traditional AI, brain-inspired AI raises new foundational ethical issues and some new practical ethical issues, and exacerbates some of the issues raised by traditional AI.
The ethics of Enlightenment in the foundations of modern science
Kurakina Olga D.
In the Age of Enlightenment, when each person, in the opinion of Kant, was called upon to “think independently”, a transition from the medieval “cult of faith” to the enlightened “cult of reason” was finally formed and the ethical foundations of modern science were laid. The ethos of modern science, as a set of moral imperatives of the scientific community, was reduced in the middle of the twentieth century to a specific set of norms which are currently being challenged in view of the transformation of science into a technological industry, removing the personal responsibility of a scientist for the results of his creativity. The institutionalisation of science in the context of the global world of universal competition leaves the scientist with a choice of “thinking for himself/herself” only through the moral feat of overcoming the evolving corporate system of abandonment of the ethical standards on which the foundations of science were once erected. In place of the ethos of the scientific era of Enlightenment must come the socially responsible ethos of the science of our day, followed by the ethos of Anthropos, which received its most significant development in the theonomic ethics of Russian religious thinkers. The ethics of Enlightenment, in particular the ethics of Kant’s categorical imperative, not only formed the image of modern technological civilisation, thus shaping the moral foundations of modern science, but still remains in demand owing to the boldness of scientific research, approaching the transcendental boundaries of local life which Kant so innovatively substantiated.
Karl Popper and the production of scientific knowledge through the non-recognition of the sacred
Christian Onuorah Agbo, Ndidiamaka Vivian Ugwu, Kanayochukwu M. Okoye
et al.
Africa is a geographical space where the "impossibilities" are given sacred status. Some occurrences are attributed to one or more sacred or spiritual entities whose intervention or presence can never be questioned, and whoever interrogates such a force is often seen as either abnormal or irresponsible. More often than not, one is bound to ask: where are the intellectuals whose ideas should be able to remove these biases or veils from people's minds? The unfortunate thing is that they too are involved in this despondency. The fundamental problem is that there is underdevelopment everywhere, especially as it relates to science. But Popper had a different idea in mind: science flourishes more where nothing is sacred. So, what has Popper done to ensure that sacred entities are overlooked while espousing scientific ideals? Leveraging the critical method, an exercise of careful judgment or evaluation, this work demonstrates that scientific progress is a product of the deconstruction of the spiritual aspect of reality. It will benefit humanity by showing, with instances, that progress is a product of the falsification of the products and processes of the sacred.
Religious ethics, Social sciences (General)