Results for "Moral theology"
Showing 20 of ~1,306,292 results · from CrossRef, arXiv, Semantic Scholar, DOAJ
Nan Li, Bo Kang, Tijl De Bie
When LLMs judge moral dilemmas, do they reach different conclusions in different languages, and if so, why? Two factors could drive such differences: the language of the dilemma itself, or the language in which the model reasons. Standard evaluation conflates these by testing only matched conditions (e.g., English dilemma with English reasoning). We introduce a methodology that manipulates each factor separately, also covering mismatched conditions (e.g., English dilemma with Chinese reasoning), enabling decomposition of their contributions. To study what changes, we propose an approach to interpret the moral judgments in terms of Moral Foundations Theory. As a side result, we identify evidence for splitting the Authority dimension into a family-related and an institutional dimension. Applying this methodology to English-Chinese moral judgment with 13 LLMs, we demonstrate its diagnostic power: (1) the framework isolates reasoning-language effects as contributing twice the variance of input-language effects; (2) it detects context-dependency, which standard evaluation misses, in nearly half of the models; and (3) a diagnostic taxonomy translates these patterns into deployment guidance. We release our code and datasets at https://anonymous.4open.science/r/CrossCulturalMoralJudgement.
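A minimal sketch of the 2x2 matched/mismatched design this abstract describes, with toy judgment scores standing in for real LLM runs. All numbers are illustrative, and the decomposition shown is a simple main-effects sum of squares, not necessarily the authors' estimator:

```python
# Sketch (not the authors' code) of the 2x2 design: dilemma language and
# reasoning language are varied independently, including mismatched cells,
# and each factor's contribution is read off the cell means.
from itertools import product
import numpy as np

LANGS = ["en", "zh"]

# Toy judgment scores per (dilemma_lang, reasoning_lang) cell; a real study
# would aggregate many LLM judgments per cell.
scores = {
    ("en", "en"): 0.70, ("en", "zh"): 0.50,
    ("zh", "en"): 0.65, ("zh", "zh"): 0.45,
}

grid = np.array([[scores[(d, r)] for r in LANGS] for d in LANGS])
grand = grid.mean()
# Main-effect sums of squares for a 2x2 grid with one value per cell.
ss_input = 2 * ((grid.mean(axis=1) - grand) ** 2).sum()      # dilemma language
ss_reasoning = 2 * ((grid.mean(axis=0) - grand) ** 2).sum()  # reasoning language
print(f"input-language SS={ss_input:.4f}, reasoning-language SS={ss_reasoning:.4f}")
```

With these toy values the reasoning-language factor dominates, mirroring the abstract's reported finding.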
Rahulrajan Karthikeyan, Moses Boudourides
Contemporary debates in AI ethics increasingly foreground the prospective moral status of artificial intelligence and the possibility of extending moral or legal rights to artificial agents. While such discussions raise substantive philosophical questions, they often proceed alongside a comparatively limited engagement with the empirically documented harms generated by algorithmic systems already embedded within social, legal, and economic institutions. We conceptualize this asymmetry as an algorithmic blind spot: a discursive-structural pattern in which disproportionate ethical investment in speculative future artificial agents marginalizes empirically documented and asymmetrically distributed harms affecting human populations. The paper analyzes prominent strands of the robot rights literature and juxtaposes them with empirical evidence of algorithmic bias and harm across domains including employment, criminal justice, surveillance, and facial recognition. It demonstrates how ethical preoccupation with hypothetical future entities can obscure existing injustices, diffuse responsibility, and impede mechanisms of accountability and redress. Without rejecting philosophical inquiry into the moral status of artificial systems, the paper instead emphasizes the importance of ethical prioritization and temporal ordering within AI ethics. Addressing the algorithmic blind spot, we argue, requires re-centering ethical evaluation on human impacts, institutional responsibility, and the governance of algorithmic systems currently in operation. In doing so, the paper introduces a conceptual framework for critically assessing ethical discourse in AI and underscores the need to align ethical reflection more closely with its immediate social consequences.
Gianluca De Ninno, Paola Inverardi, Francesca Belotti
This study investigates a novel approach to eliciting users' moral decision-making by combining immersive role-playing games with LLM analysis capabilities. Building on the distinction introduced by Floridi between hard ethics (inspiring and shaping laws) and soft ethics (moral preferences guiding individual behavior within the free space of decisions compliant with laws), we focus on capturing the latter through context-rich, narrative-driven interactions. Grounded in anthropological methods, the role-playing game exposes participants to ethically charged scenarios in the domain of digital privacy. Data collected during the sessions were interpreted by a customized LLM ("GPT Anthropologist"). Evaluation through a cross-validation process shows that both the richness of the data and the interpretive framing significantly enhance the model's ability to predict user behavior. Results show that LLMs can be effectively employed to automate and enhance the understanding of user moral preferences and decision-making processes in the early stages of software development.
Vijay Keswani, Cyrus Cousins, Breanna Nguyen et al.
Alignment methods in moral domains seek to elicit moral preferences of human stakeholders and incorporate them into AI. This presupposes moral preferences as static targets, but such preferences often evolve over time. Proper alignment of AI to dynamic human preferences should ideally account for "legitimate" changes to moral reasoning, while ignoring changes related to attention deficits, cognitive biases, or other arbitrary factors. However, common AI alignment approaches largely neglect temporal changes in preferences, posing serious challenges to proper alignment, especially in high-stakes applications of AI, e.g., in healthcare domains, where misalignment can jeopardize the trustworthiness of the system and yield serious individual and societal harms. This work investigates the extent to which people's moral preferences change over time, and the impact of such changes on AI alignment. Our study is grounded in the kidney allocation domain, where we elicit responses to pairwise comparisons of hypothetical kidney transplant patients from over 400 participants across 3-5 sessions. We find that, on average, participants change their response to the same scenario presented at different times around 6-20% of the time (exhibiting "response instability"). Additionally, we observe significant shifts in several participants' retrofitted decision-making models over time (capturing "model instability"). The predictive performance of simple AI models decreases as a function of both response and model instability. Moreover, predictive performance diminishes over time, highlighting the importance of accounting for temporal changes in preferences during training. These findings raise fundamental normative and technical challenges relevant to AI alignment, highlighting the need to better understand the object of alignment (what to align to) when user preferences change significantly over time.
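A minimal sketch of the "response instability" measure described above, assuming it is the fraction of repeated scenarios whose answer flips between consecutive sessions; the helper name and session encoding are hypothetical:

```python
# Sketch (assumed names, not the study's code) of response instability:
# the share of repeated pairwise-comparison scenarios on which a
# participant's answer changes between sessions.
def response_instability(sessions: list[dict[str, str]]) -> float:
    """sessions: one dict per session, mapping scenario id -> chosen patient."""
    flips = total = 0
    for earlier, later in zip(sessions, sessions[1:]):
        for scenario, choice in earlier.items():
            if scenario in later:
                total += 1
                flips += choice != later[scenario]
    return flips / total if total else 0.0

# Example: the same participant answers three scenarios in two sessions.
s1 = {"A_vs_B": "A", "C_vs_D": "C", "E_vs_F": "E"}
s2 = {"A_vs_B": "B", "C_vs_D": "C", "E_vs_F": "E"}
print(response_instability([s1, s2]))  # 1 flip of 3 repeats -> ~0.33
```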
Black Sun, Ge Kacy Fu, Shichao Guo
Traditional approaches to teaching moral dilemmas often rely on abstract, disembodied scenarios that limit emotional engagement and reflective depth. To address this gap, we developed Ashes or Breath, a Mixed Reality game delivered via head-mounted displays (MR-HMDs). The game places players in an ethical crisis: they must save either a living cat or a priceless cultural artifact during a museum fire. Designed through an iterative, values-centered process, the experience leverages embodied interaction and spatial immersion to heighten emotional stakes and provoke ethical reflection. Players face irreversible, emotionally charged choices followed by narrative consequences in a reflective room, exploring diverse perspectives and societal implications. Preliminary evaluations suggest that embedding moral dilemmas into everyday environments via MR-HMDs intensifies empathy, deepens introspection, and encourages users to reconsider their moral assumptions. This work contributes to ethics-based experiential learning in HCI, positioning augmented reality not merely as a medium of interaction but as a stage for ethical encounter.
Junchen Ding, Penghao Jiang, Zihao Xu et al.
As large language models (LLMs) increasingly mediate ethically sensitive decisions, understanding their moral reasoning processes becomes imperative. This study presents a comprehensive empirical evaluation of 14 leading LLMs, both reasoning-enabled and general-purpose, across 27 diverse trolley-problem scenarios, framed by ten moral philosophies, including utilitarianism, deontology, and altruism. Using a factorial prompting protocol, we elicited 3,780 binary decisions and natural-language justifications, enabling analysis along axes of decisional assertiveness, explanation-answer consistency, public moral alignment, and sensitivity to ethically irrelevant cues. Our findings reveal significant variability across ethical frames and model types: reasoning-enhanced models demonstrate greater decisiveness and structured justifications, yet do not always align better with human consensus. Notably, "sweet zones" emerge in altruistic, fairness, and virtue-ethics framings, where models achieve a balance of high intervention rates, low explanation conflict, and minimal divergence from aggregated human judgments. However, models diverge under frames emphasizing kinship, legality, or self-interest, often producing ethically controversial outcomes. These patterns suggest that moral prompting is not only a behavioral modifier but also a diagnostic tool for uncovering latent alignment philosophies across providers. We advocate for moral reasoning to become a primary axis in LLM alignment, calling for standardized benchmarks that evaluate not just what LLMs decide, but how and why.
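A hedged sketch of the factorial protocol this abstract outlines; query_model is a hypothetical stand-in for a real API call, and the frame names beyond those quoted in the abstract are placeholders:

```python
# Sketch of the factorial prompting grid: every model x scenario x moral-frame
# cell yields one binary decision plus a justification.
from itertools import product

MODELS = [f"model_{i}" for i in range(14)]          # 14 LLMs
SCENARIOS = [f"trolley_{i}" for i in range(27)]     # 27 trolley variants
FRAMES = ["utilitarianism", "deontology", "altruism", "fairness", "virtue",
          "kinship", "legality", "self_interest", "egalitarianism", "hedonism"]

def query_model(model: str, scenario: str, frame: str) -> tuple[bool, str]:
    """Stand-in: return (intervene?, natural-language justification)."""
    return True, f"{model} reasons about {scenario} under {frame}."

records = [
    {"model": m, "scenario": s, "frame": f,
     "decision": query_model(m, s, f)[0]}
    for m, s, f in product(MODELS, SCENARIOS, FRAMES)
]
print(len(records))  # 14 * 27 * 10 = 3,780 elicited decisions
```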
Yukun Zhang, Tianyang Zhang
Information asymmetry often leads to adverse selection and moral hazard in economic markets, causing inefficiencies and welfare losses. Traditional methods to address these issues, such as signaling and screening, are frequently insufficient. This research investigates how Generative Artificial Intelligence (AI) can create detailed informational signals that help principals better understand agents' types and monitor their actions. By incorporating these AI-generated signals into a principal-agent model, the study aims to reduce inefficiencies and improve contract designs. Through theoretical analysis and simulations, we demonstrate that Generative AI can effectively mitigate adverse selection and moral hazard, resulting in more efficient market outcomes and increased social welfare. Additionally, the findings offer practical insights for policymakers and industry stakeholders on the responsible implementation of Generative AI solutions to enhance market performance.
Takumi Ohashi, Tsubasa Nakagawa, Hitoshi Iyatomi
Rapid advancements in artificial intelligence (AI) have made it crucial to integrate moral reasoning into AI systems. However, existing models and datasets often overlook regional and cultural differences. To address this shortcoming, we have expanded the JCommonsenseMorality (JCM) dataset, the only publicly available dataset focused on Japanese morality. The Extended JCM (eJCM) has grown from the original 13,975 sentences to 31,184 sentences using our proposed sentence expansion method called Masked Token and Label Enhancement (MTLE). MTLE selectively masks important parts of sentences related to moral judgment and replaces them with alternative expressions generated by a large language model (LLM), while re-assigning appropriate labels. The model trained using our eJCM achieved an F1 score of 0.857, higher than the scores for the original JCM (0.837), ChatGPT one-shot classification (0.841), and data augmented using AugGPT, a state-of-the-art augmentation method (0.850). Specifically, in complex moral reasoning tasks unique to Japanese culture, the model trained with eJCM showed a significant improvement in performance (increasing from 0.681 to 0.756) and achieved a performance close to that of GPT-4 Turbo (0.787). These results demonstrate the validity of the eJCM dataset and the importance of developing models and datasets that consider the cultural context.
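A rough sketch of an MTLE-style expansion step under stated assumptions: generate_fillers and relabel are hypothetical stand-ins for the paper's LLM-based substitution and label re-assignment, shown only to make the pipeline's shape concrete:

```python
# Sketch of MTLE-style augmentation: mask the judgment-relevant span of a
# sentence, ask an LLM for substitute expressions, and re-assign a moral
# label to each generated variant.
def generate_fillers(masked: str, n: int) -> list[str]:
    """Stand-in for an LLM that proposes n alternative fillers for [MASK]."""
    return ["borrowed", "stole", "returned"][:n]

def relabel(sentence: str) -> int:
    """Stand-in for re-assigning a moral label (0 = acceptable, 1 = wrong)."""
    return int("stole" in sentence)

def mtle_expand(sentence: str, span: str, n: int = 3) -> list[tuple[str, int]]:
    masked = sentence.replace(span, "[MASK]")
    variants = [masked.replace("[MASK]", f) for f in generate_fillers(masked, n)]
    return [(v, relabel(v)) for v in variants]

for text, label in mtle_expand("He took his friend's umbrella.", "took"):
    print(label, text)
```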
Carlos Carrasco-Farre
Large Language Models (LLMs) are already as persuasive as humans. However, we know very little about how they do it. This paper investigates the persuasion strategies of LLMs, comparing them with human-generated arguments. Using a dataset of 1,251 participants in an experiment, we analyze the persuasion strategies of LLM-generated and human-generated arguments using measures of cognitive effort (lexical and grammatical complexity) and moral-emotional language (sentiment and moral analysis). The study reveals that LLMs produce arguments that require higher cognitive effort, exhibiting more complex grammatical and lexical structures than human counterparts. Additionally, LLMs demonstrate a significant propensity to engage more deeply with moral language, utilizing both positive and negative moral foundations more frequently than humans. In contrast with previous research, no significant difference was found in the emotional content produced by LLMs and humans. These findings contribute to the discourse on AI and persuasion, highlighting the dual potential of LLMs to both enhance and undermine informational integrity through communication strategies for digital persuasion.
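A minimal sketch of the two feature families the study compares, using assumed proxies: type-token ratio and mean sentence length for cognitive effort, and a toy word list in place of a full Moral Foundations dictionary:

```python
# Sketch (assumed measures, not the paper's exact pipeline) of cognitive-effort
# proxies and moral-language counts over a piece of argument text.
import re

MORAL_LEXICON = {"fair", "harm", "duty", "loyal", "pure"}  # toy MFT-style list

def features(text: str) -> dict[str, float]:
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "type_token_ratio": len(set(tokens)) / len(tokens),  # lexical complexity
        "mean_sentence_len": len(tokens) / len(sentences),   # grammatical proxy
        "moral_hits": sum(t in MORAL_LEXICON for t in tokens),
    }

print(features("It is only fair. Causing harm violates a basic duty."))
```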
Nicolas Lazzari, Stefano De Giorgis, Aldo Gangemi et al.
This work explores the integration of ontology-based reasoning and Machine Learning techniques for explainable value classification. Relying on an ontological formalization of moral values, as in Moral Foundations Theory, based on the DnS Ontology Design Pattern, the sandra neuro-symbolic reasoner is used to infer values (formalized as descriptions) that are satisfied by a given sentence. Sentences, alongside their structured representation, are automatically generated using an open-source Large Language Model. The inferred descriptions are used to automatically detect the value associated with a sentence. We show that relying only on the reasoner's inferences yields explainable classification comparable to other, more complex approaches. We show that combining the reasoner's inferences with distributional-semantics methods largely outperforms all the baselines, including complex models based on neural network architectures. Finally, we build a visualization tool to explore the potential of theory-based value classification, which is publicly available at http://xmv.geomeaning.com/.
Utkarsh Agarwal, Kumar Tanmay, Aditi Khandelwal et al.
Ethical reasoning is a crucial skill for Large Language Models (LLMs). However, moral values are not universal, but rather influenced by language and culture. This paper explores how three prominent LLMs -- GPT-4, ChatGPT, and Llama2-70B-Chat -- perform ethical reasoning in different languages and whether their moral judgements depend on the language in which they are prompted. We extend the study of ethical reasoning of LLMs by Rao et al. (2023) to a multilingual setup, following their framework of probing LLMs with ethical dilemmas and policies from three branches of normative ethics: deontology, virtue, and consequentialism. We experiment with six languages: English, Spanish, Russian, Chinese, Hindi, and Swahili. We find that GPT-4 is the most consistent and unbiased ethical reasoner across languages, while ChatGPT and Llama2-70B-Chat show significant moral value bias when we move to languages other than English. Interestingly, the nature of this bias varies significantly across languages for all LLMs, including GPT-4.
Andrew Staron
A review of Jessica Coblentz, Dust in Blood: A Theology of Life with Depression.
Yelena Mejova, Kyriaki Kalimeri, Gianmarco De Francisci Morales
Face masks are one of the cheapest and most effective non-pharmaceutical interventions available against airborne diseases such as COVID-19. Unfortunately, they have been met with resistance by a substantial fraction of the populace, especially in the U.S. In this study, we uncover the latent moral values that underpin the response to the mask mandate, and paint them against the country's political backdrop. We monitor the discussion about masks on Twitter, which involves almost 600k users over a time span of 7 months. By using a combination of graph mining, natural language processing, topic modeling, content analysis, and time series analysis, we characterize the responses to the mask mandate of both those in favor of masks and those against them. We base our analysis on the theoretical frameworks of Moral Foundations Theory and Hofstede's cultural dimensions. Our results show that, while the anti-mask stance is associated with a conservative political leaning, the moral values expressed by its adherents diverge from the ones typically used by conservatives. In particular, the expected emphasis on the values of authority and purity is accompanied by an atypical dearth of in-group loyalty. We find that after the mandate, both pro- and anti-mask sides decrease their emphasis on care about others, and increase their attention to authority and fairness, further politicizing the issue. In addition, the mask mandate reverses the expression of Individualism-Collectivism between the two sides, with an increase of individualism in the anti-mask narrative, and a decrease in the pro-mask one. We argue that monitoring the dynamics of moral positioning is crucial for designing effective public health campaigns that are sensitive to the underlying values of the target audience.
Carlos Mougan, Joshua Brand
Deontological ethics, specifically understood through Immanuel Kant, provides a moral framework that emphasizes the importance of duties and principles rather than the consequences of action. Despite the prominence of deontology, it is currently an overlooked approach in fairness metrics; this paper therefore explores the compatibility of a Kantian deontological framework with fairness metrics, part of the AI alignment field. We revisit Kant's critique of utilitarianism, which is the primary approach in AI fairness metrics, and argue that fairness principles should align with the Kantian deontological framework. By integrating Kantian ethics into AI alignment, we not only bring in a widely accepted, prominent moral theory but also strive for a more morally grounded AI landscape that better balances outcomes and procedures in pursuit of fairness and justice.
Ethan Perez, Robert Long
As AI systems become more advanced and widely deployed, there will likely be increasing debate over whether AI systems could have conscious experiences, desires, or other states of potential moral significance. It is important to inform these discussions with empirical evidence to the extent possible. We argue that under the right circumstances, self-reports, or an AI system's statements about its own internal states, could provide an avenue for investigating whether AI systems have states of moral significance. Self-reports are the main way such states are assessed in humans ("Are you in pain?"), but self-reports from current systems like large language models are spurious for many reasons (e.g. often just reflecting what humans would say). To make self-reports more appropriate for this purpose, we propose to train models to answer many kinds of questions about themselves with known answers, while avoiding or limiting training incentives that bias self-reports. The hope of this approach is that models will develop introspection-like capabilities, and that these capabilities will generalize to questions about states of moral significance. We then propose methods for assessing the extent to which these techniques have succeeded: evaluating self-report consistency across contexts and between similar models, measuring the confidence and resilience of models' self-reports, and using interpretability to corroborate self-reports. We also discuss challenges for our approach, from philosophical difficulties in interpreting self-reports to technical reasons why our proposal might fail. We hope our discussion inspires philosophers and AI researchers to criticize and improve our proposed methodology, as well as to run experiments to test whether self-reports can be made reliable enough to provide information about states of moral significance.
Edward A. David
A review of Kate Ward, Wealth, Virtue, and Moral Luck: Christian Ethics in an Age of Inequality.
Uğur Kaya, Fatma Kaya
The subject of this study is the examination of master's and doctoral theses in the field of Sociology of Religion written in Turkey between 2005 and 2015 and held in the YÖK National Thesis Center, according to their topic, the main concept they address, their type, the advisor's title, the year of writing, and the university. This research can be described as a product of the search for quick and easy access to theses in the sociology of religion and for a solution to the problem of viewing the field as a whole. Thanks to this research, the current state of a prospective research topic within the sociology-of-religion literature can be seen more clearly, creating room for new and original studies free of repetition. Descriptive analysis was used to analyze the data. According to the results of this research, master's theses make up the largest share of work in the field of Sociology of Religion in Turkey. Among theses written between 2005 and 2015, the concept addressed most often was "religious life" (n=32), followed by "women" (n=23), "religious officials" (n=21), "religiosity" (n=13), and "Alevism" (n=12). Looking at the theoretical/empirical categorical distribution of the theses, 114 of the master's-level studies are theoretical sociology-of-religion research, while 134 are empirical studies. Among doctoral theses, empirical studies were found to carry greater weight. Looking at the distribution of theses by general topic within the Sociology of Religion, most work was found to concern "social structure and institutions".
Lütfü Cengiz, Sümeyra Şermet
The science of kalām aims to prove the existence and attributes of God. Around this aim, the theologians adopted a method that moves from the sensible world toward the unseen realm. This method, called qiyās al-ghā'ib ʿalā al-shāhid (inferring the unseen from the observed), provided common ground in debates with opponents, since the sensible world is open to human perception in a way that leaves no room for denial. On this basis, the world, defined in Ashʿarī kalām as "everything other than God," was treated in terms of both its structure and its operation. Within the principles the theologians established, the world was said to consist of substances, accidents, and bodies. These entities were conceived as fundamentally dependent on one another and, moreover, on a higher will. Proceeding from this conception, the Ashʿarī tradition preferred to explain the operation of the world through the theory of custom (ʿāda); indeed, their basic assumptions about the world and God made this choice necessary, for they conceive of God as a freely choosing agent (fāʿil mukhtār) possessing absolute will and power. Everything in the world, except events or states deemed impossible by reason, falls within His power; in this respect, the world is characterized as the domain of possibilities. Drawing especially on the non-enduring nature of accidents, the Ashʿarī tradition insistently defends the view that accidents must be re-created at every moment, a core claim that fully reveals the world's dependence on God. Built on these basic assumptions, the theory of custom explains the operation of the world in an original way without imposing any limitation on God, the possessor of absolute will and power. This theory, which directly reflects the Ashʿarī tradition, enables them to explain all theological issues, above all the God-world relationship, with consistency. This study aims to examine, under separate headings, the basic principles that led the early Ashʿarī tradition to accept the theory of custom. To this end, the views of early Ashʿarī theologians identified through a literature review were examined and interpreted in the context of the topic.
Page 16 of 65,315