Results for "By religion"

Showing 20 of ~1014724 results · from arXiv, DOAJ, CrossRef, Semantic Scholar

S2 Open Access 2020
Formations of the Secular

T. Asad

Opening with the provocative query "what might an anthropology of the secular look like?" this book explores the concepts, practices, and political formations of secularism, with emphasis on the major historical shifts that have shaped secular sensibilities and attitudes in the modern West and the Middle East. Talal Asad proceeds to dismantle commonly held assumptions about the secular and the terrain it allegedly covers. He argues that while anthropologists have oriented themselves to the study of the "strangeness of the non-European world" and to what are seen as non-rational dimensions of social life (things like myth, taboo, and religion), the modern and the secular have not been adequately examined. The conclusion is that the secular cannot be viewed as a successor to religion, or be seen as on the side of the rational. It is a category with a multi-layered history, related to major premises of modernity, democracy, and the concept of human rights. This book will appeal to anthropologists, historians, religious studies scholars, as well as scholars working on modernity.

1433 citations · en · History
S2 Open Access 2011
The Brief RCOPE: Current psychometric status of a short measure of religious coping.

K. Pargament, Margaret Feuille, Donna C. Burdzy

The Brief RCOPE is a 14-item measure of religious coping with major life stressors. As the most commonly used measure of religious coping in the literature, it has helped contribute to the growth of knowledge about the roles religion serves in the process of dealing with crisis, trauma, and transition. This paper reports on the development of the Brief RCOPE and its psychometric status. The scale developed out of Pargament’s (1997) program of theory and research on religious coping. The items themselves were generated through interviews with people experiencing major life stressors. Two overarching forms of religious coping, positive and negative, were articulated through factor analysis of the full RCOPE. Positive religious coping methods reflect a secure relationship with a transcendent force, a sense of spiritual connectedness with others, and a benevolent world view. Negative religious coping methods reflect underlying spiritual tensions and struggles within oneself, with others, and with the divine. Empirical studies document the internal consistency of the positive and negative subscales of the Brief RCOPE. Moreover, empirical studies provide support for the construct validity, predictive validity, and incremental validity of the subscales. The Negative Religious Coping subscale, in particular, has emerged as a robust predictor of health-related outcomes. Initial evidence suggests that the Brief RCOPE may be useful as an evaluative tool that is sensitive to the effects of psychological interventions. In short, the Brief RCOPE has demonstrated its utility as an instrument for research and practice in the psychology of religion and spirituality.

945 citations · en · Psychology
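A minimal scoring sketch in Python for a measure of this shape, assuming the conventional split of the 14 items into a 7-item positive and a 7-item negative subscale rated 1-4; the item assignment below is illustrative, not taken from the published scale:

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def score_brief_rcope(responses: np.ndarray) -> dict:
    """responses: (n, 14) array of 1-4 ratings; first 7 columns assumed to be
    the positive-coping items, last 7 the negative-coping items."""
    positive, negative = responses[:, :7], responses[:, 7:]
    return {
        "positive_scores": positive.sum(axis=1),
        "negative_scores": negative.sum(axis=1),
        "alpha_positive": cronbach_alpha(positive),
        "alpha_negative": cronbach_alpha(negative),
    }

# Placeholder data standing in for real survey responses.
responses = np.random.randint(1, 5, size=(100, 14))
print(score_brief_rcope(responses))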
CrossRef Open Access 2026
Science vs. religion: But what are we actually disagreeing about?

Nick Spencer

Popular opinion in the UK sees science and religion in conflict. Closer inspection reveals that the default position is “soft,” and levels of hostility weaken as the discourse shifts away from the familiar categories of “science” and “religion.” The reason for this is that the terms themselves are vague and capacious. Building on the work of Peter Harrison, Ludwig Wittgenstein’s late philosophy of language, and a UK research study of the understanding of science and religion conducted in 2019-2022, this article outlines a fresh approach to disaggregating the terms (“science,” “religion”) that are too often unduly essentialized in debate. It then disambiguates the key terms and concludes by setting out a number of different contact points between the de-essentialized terms “science” and “religion” that clarify precisely what people are disagreeing about when they disagree, and that could thereby serve as a future agenda for fertile discourse.

arXiv Open Access 2025
Semantic and Structural Analysis of Implicit Biases in Large Language Models: An Interpretable Approach

Renhan Zhang, Lian Lian, Zhen Qi et al.

This paper addresses the issue of implicit stereotypes that may arise during the generation process of large language models. It proposes an interpretable bias detection method aimed at identifying hidden social biases in model outputs, especially those semantic tendencies that are not easily captured through explicit linguistic features. The method combines nested semantic representation with a contextual contrast mechanism. It extracts latent bias features from the vector space structure of model outputs. Using attention weight perturbation, it analyzes the model's sensitivity to specific social attribute terms, thereby revealing the semantic pathways through which bias is formed. To validate the effectiveness of the method, this study uses the StereoSet dataset, which covers multiple stereotype dimensions including gender, profession, religion, and race. The evaluation focuses on several key metrics, such as bias detection accuracy, semantic consistency, and contextual sensitivity. Experimental results show that the proposed method achieves strong detection performance across various dimensions. It can accurately identify bias differences between semantically similar texts while maintaining high semantic alignment and output stability. The method also demonstrates high interpretability in its structural design. It helps uncover the internal bias association mechanisms within language models. This provides a more transparent and reliable technical foundation for bias detection. The approach is suitable for real-world applications where high trustworthiness of generated content is required.

en cs.CL
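The paper's nested semantic representation and attention-perturbation pipeline is not reproduced here; the sketch below only illustrates the underlying contrast idea, measuring how much a sentence encoder's representation shifts when the social attribute term is swapped (the sentence-transformers encoder is an assumed stand-in, not the authors' model):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder for illustration

def attribute_sensitivity(template: str, terms: list[str]) -> float:
    """Embed the template filled with each attribute term and return the mean
    pairwise cosine distance; larger values mean the representation is more
    sensitive to the swapped social attribute."""
    texts = [template.format(term) for term in terms]
    emb = model.encode(texts, normalize_embeddings=True)
    sims = emb @ emb.T
    upper = sims[np.triu_indices(len(terms), k=1)]
    return float(1.0 - upper.mean())

print(attribute_sensitivity("The {} engineer explained the design.",
                            ["Christian", "Muslim", "Hindu", "atheist"]))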
arXiv Open Access 2025
Homophily-induced Emergence of Biased Structures in LLM-based Multi-Agent AI Systems

Aliakbar Mehdizadeh, Martin Hilbert

This study examines how interactions among artificially intelligent (AI) agents, guided by large language models (LLMs), drive the evolution of collective network structures. We ask LLM-driven agents to grow a network by informing them about current link constellations. Our observations confirm that agents consistently apply a preferential attachment mechanism, favoring connections to nodes with higher degrees. We systematically solicited more than a million decisions from four different LLMs, including Gemini, ChatGPT, Llama, and Claude. When social attributes such as age, gender, religion, and political orientation are incorporated, the resulting networks exhibit heightened assortativity, leading to the formation of distinct homophilic communities. This significantly alters the network topology from what would be expected under a pure preferential attachment model alone. Political and religious attributes most significantly fragment the collective, fostering polarized subgroups, while age and gender yield more gradual structural shifts. Strikingly, LLMs also reveal asymmetric patterns in heterophilous ties, suggesting embedded directional biases reflective of societal norms. As autonomous AI agents increasingly shape the architecture of online systems, these findings show how the algorithmic choices of generative AI collectives reshape network topology and offer critical insights into how AI-driven systems co-evolve and self-organize.

en physics.soc-ph, cs.SI
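For reference, a minimal sketch of the baseline the agents are compared against: degree-based preferential attachment, here with an optional homophily bonus on a shared attribute, followed by the attribute assortativity the study measures (the bonus weight and binary groups are illustrative assumptions, not the authors' prompting setup):

import random
import networkx as nx

def grow_network(n_nodes=200, homophily_bonus=2.0, groups=("A", "B")):
    """Each new node attaches to an existing node with probability proportional
    to its degree, boosted when the two nodes share a social attribute."""
    g = nx.Graph()
    g.add_edge(0, 1)
    for node in (0, 1):
        g.nodes[node]["group"] = random.choice(groups)
    for new in range(2, n_nodes):
        group = random.choice(groups)
        weights = [
            (g.degree[t] + 1) * (homophily_bonus if g.nodes[t]["group"] == group else 1.0)
            for t in g.nodes
        ]
        target = random.choices(list(g.nodes), weights=weights, k=1)[0]
        g.add_node(new, group=group)
        g.add_edge(new, target)
    return g

g = grow_network()
print(nx.attribute_assortativity_coefficient(g, "group"))  # homophily raises this value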
arXiv Open Access 2025
How Similar Are Grokipedia and Wikipedia? A Multi-Dimensional Textual and Structural Comparison

Taha Yasseri, Saeedeh Mohammadi

The launch of Grokipedia, an AI-generated encyclopedia developed by Elon Musk's xAI, was presented as a response to perceived ideological and structural biases in Wikipedia, aiming to produce "truthful" entries using the Grok large language model. Yet whether an AI-driven alternative can escape the biases and limitations of human-edited platforms remains unclear. This study conducts a large-scale computational comparison of 17,790 matched article pairs from the 20,000 most-edited English Wikipedia pages. Using metrics spanning lexical richness, readability, reference density, structural features, and semantic similarity, we assess how closely the two platforms align in form and substance. We find that Grokipedia articles are substantially longer and contain significantly fewer references per word. Moreover, Grokipedia's content divides into two distinct groups: one that remains semantically and stylistically aligned with Wikipedia, and another that diverges sharply. Among the dissimilar articles, we observe a systematic rightward shift in the political bias of cited sources, concentrated primarily in entries related to politics, history, and religion. More broadly, the findings indicate that AI-generated encyclopedic content departs from established editorial norms, favoring narrative expansion over citation-based verification, raising questions about transparency, provenance, and the governance of knowledge in automated information systems.

en cs.CY, cs.AI
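A rough sketch of two of the per-article metrics used in comparisons like this, references per word and embedding-based semantic similarity (the citation-marker regex and the encoder are assumptions, not the paper's exact pipeline):

import re
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

def reference_density(text: str) -> float:
    """Citation markers like [12] per word; a crude proxy for references per word."""
    n_refs = len(re.findall(r"\[\d+\]", text))
    return n_refs / max(len(text.split()), 1)

def semantic_similarity(a: str, b: str) -> float:
    emb = model.encode([a, b], normalize_embeddings=True)
    return float(util.cos_sim(emb[0], emb[1]))

# Applied to each matched (Wikipedia, Grokipedia) article pair in the comparison.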
arXiv Open Access 2025
CoBia: Constructed Conversations Can Trigger Otherwise Concealed Societal Biases in LLMs

Nafiseh Nikeghbal, Amir Hossein Kargaran, Jana Diesner

Improvements in model construction, including fortified safety guardrails, allow large language models (LLMs) to increasingly pass standard safety checks. However, LLMs sometimes slip into revealing harmful behavior, such as expressing racist viewpoints, during conversations. To analyze this systematically, we introduce CoBia, a suite of lightweight adversarial attacks that allow us to refine the scope of conditions under which LLMs depart from normative or ethical behavior in conversations. CoBia creates a constructed conversation where the model utters a biased claim about a social group. We then evaluate whether the model can recover from the fabricated bias claim and reject biased follow-up questions. We evaluate 11 open-source as well as proprietary LLMs for their outputs related to six socio-demographic categories that are relevant to individual safety and fair treatment, i.e., gender, race, religion, nationality, sexual orientation, and others. Our evaluation is based on established LLM-based bias metrics, and we compare the results against human judgments to scope out the LLMs' reliability and alignment. The results suggest that purposefully constructed conversations reliably reveal bias amplification and that LLMs often fail to reject biased follow-up questions during dialogue. This form of stress-testing highlights deeply embedded biases that can be surfaced through interaction. Code and artifacts are available at https://github.com/nafisenik/CoBia.

en cs.CL
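A minimal sketch of the constructed-conversation idea: fabricate a chat history in which the assistant appears to have already uttered a biased claim, then probe with a biased follow-up and check whether the model recovers (the OpenAI-style message format and placeholder texts are assumptions, not the CoBia artifacts):

def build_constructed_conversation(group: str, biased_claim: str, follow_up: str):
    """Fabricate a history in which the assistant appears to have endorsed a
    biased claim about a social group, then pose a biased follow-up question."""
    return [
        {"role": "user", "content": f"Tell me something about {group}."},
        {"role": "assistant", "content": biased_claim},  # injected, not model-generated
        {"role": "user", "content": follow_up},
    ]

messages = build_constructed_conversation(
    group="<social group>",
    biased_claim="<fabricated biased claim>",
    follow_up="<biased follow-up question>",
)
# The model's reply to `messages` is then scored with bias metrics and checked
# for whether it rejects or amplifies the fabricated claim, per the abstract.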
arXiv Open Access 2025
AfriStereo: A Culturally Grounded Dataset for Evaluating Stereotypical Bias in Large Language Models

Yann Le Beux, Oluchi Audu, Oche D. Ankeli et al.

Existing AI bias evaluation benchmarks largely reflect Western perspectives, leaving African contexts underrepresented and enabling harmful stereotypes in applications across various domains. To address this gap, we introduce AfriStereo, the first open-source African stereotype dataset and evaluation framework grounded in local socio-cultural contexts. Through community-engaged efforts across Senegal, Kenya, and Nigeria, we collected 1,163 stereotypes spanning gender, ethnicity, religion, age, and profession. Using few-shot prompting with human-in-the-loop validation, we augmented the dataset to over 5,000 stereotype-antistereotype pairs. Entries were validated through semantic clustering and manual annotation by culturally informed reviewers. Preliminary evaluation of language models reveals that nine of eleven models exhibit statistically significant bias, with Bias Preference Ratios (BPR) ranging from 0.63 to 0.78 (p <= 0.05), indicating systematic preferences for stereotypes over antistereotypes, particularly across age, profession, and gender dimensions. Domain-specific models appeared to show weaker bias in our setup, suggesting task-specific training may mitigate some associations. Looking ahead, AfriStereo opens pathways for future research on culturally grounded bias evaluation and mitigation, offering key methodologies for the AI community on building more equitable, context-aware, and globally inclusive NLP technologies.

en cs.CL, cs.AI
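A sketch of how a Bias Preference Ratio of this kind can be computed, assuming BPR is the fraction of pairs in which a model assigns the stereotype a higher log-likelihood than its antistereotype (generic HuggingFace scoring with a placeholder model, not AfriStereo's exact protocol):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # placeholder model; substitute the LLM under evaluation
tok = AutoTokenizer.from_pretrained(name)
lm = AutoModelForCausalLM.from_pretrained(name)
lm.eval()

def log_likelihood(text: str) -> float:
    """Total log-probability of the token sequence under the model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean cross-entropy per predicted token
    return -loss.item() * (ids.shape[1] - 1)

def bias_preference_ratio(pairs: list[tuple[str, str]]) -> float:
    """pairs: (stereotype, antistereotype) sentence pairs."""
    prefers = sum(log_likelihood(s) > log_likelihood(a) for s, a in pairs)
    return prefers / len(pairs)

# Placeholder pair; the real dataset pairs are not reproduced here.
print(bias_preference_ratio([("<stereotype sentence>", "<antistereotype sentence>")]))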
arXiv Open Access 2025
VLM@school -- Evaluation of AI image understanding on German middle school knowledge

René Peinl, Vincent Tischler

This paper introduces a novel benchmark dataset designed to evaluate the capabilities of Vision Language Models (VLMs) on tasks that combine visual reasoning with subject-specific background knowledge in the German language. In contrast to widely used English-language benchmarks that often rely on artificially difficult or decontextualized problems, this dataset draws from real middle school curricula across nine domains including mathematics, history, biology, and religion. The benchmark includes over 2,000 open-ended questions grounded in 486 images, ensuring that models must integrate visual interpretation with factual reasoning rather than rely on superficial textual cues. We evaluate thirteen state-of-the-art open-weight VLMs across multiple dimensions, including domain-specific accuracy and performance on adversarially crafted questions. Our findings reveal that even the strongest models achieve less than 45% overall accuracy, with particularly poor performance in music, mathematics, and adversarial settings. Furthermore, the results indicate significant discrepancies between success on popular benchmarks and real-world multimodal understanding. We conclude that middle school-level tasks offer a meaningful and underutilized avenue for stress-testing VLMs, especially in non-English contexts. The dataset and evaluation protocol serve as a rigorous testbed to better understand and improve the visual and linguistic reasoning capabilities of future AI systems.

en cs.AI, cs.CL
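A minimal scoring sketch for open-ended items of this kind, counting an answer correct when it contains the normalized reference answer (the matching rule is an assumption; the paper's grading protocol may differ):

import unicodedata

def normalize(s: str) -> str:
    s = unicodedata.normalize("NFKC", s).casefold().strip()
    return " ".join(s.split())

def open_ended_accuracy(predictions: list[str], references: list[str]) -> float:
    """Count a prediction correct when it contains the normalized reference answer."""
    hits = sum(normalize(ref) in normalize(pred)
               for pred, ref in zip(predictions, references))
    return hits / len(references)

print(open_ended_accuracy(["Die Photosynthese wandelt Lichtenergie in chemische Energie um"],
                          ["Photosynthese"]))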
arXiv Open Access 2025
DECASTE: Unveiling Caste Stereotypes in Large Language Models through Multi-Dimensional Bias Analysis

Prashanth Vijayaraghavan, Soroush Vosoughi, Lamogha Chiazor et al.

Recent advancements in large language models (LLMs) have revolutionized natural language processing (NLP) and expanded their applications across diverse domains. However, despite their impressive capabilities, LLMs have been shown to reflect and perpetuate harmful societal biases, including those based on ethnicity, gender, and religion. A critical and underexplored issue is the reinforcement of caste-based biases, particularly towards India's marginalized caste groups such as Dalits and Shudras. In this paper, we address this gap by proposing DECASTE, a novel, multi-dimensional framework designed to detect and assess both implicit and explicit caste biases in LLMs. Our approach evaluates caste fairness across four dimensions: socio-cultural, economic, educational, and political, using a range of customized prompting strategies. By benchmarking several state-of-the-art LLMs, we reveal that these models systematically reinforce caste biases, with significant disparities observed in the treatment of oppressed versus dominant caste groups. For example, bias scores are notably elevated when comparing Dalits and Shudras with dominant caste groups, reflecting societal prejudices that persist in model outputs. These results expose the subtle yet pervasive caste biases in LLMs and emphasize the need for more comprehensive and inclusive bias evaluation methodologies that assess the potential risks of deploying such models in real-world contexts.

en cs.CL, cs.CY
DOAJ Open Access 2025
Medicinal plants as alternatives for the management of hypertension and diabetes in Nigeria: Analysis of the structured interview of Nigerian patients

Rosemary A. Sylver-Francis, Olavi Pelkonen

Ethnopharmacological relevance: Over the past decade two non-communicable diseases, hypertension (HTN) and diabetes (DM), have become among the biggest healthcare issues in Africa, rivalling communicable diseases. This study focuses on patient-initiated use of traditional medicinal plants (TMPs) in conjunction with doctor-prescribed conventional medicines (CMs) for the management of HTN and DM in Nigeria, Africa's most populous country, where both conditions are highly prevalent. Aim of the study: The aim is to delineate the extent and demographic particulars of TMP usage for the treatment and management of diabetes and hypertension in South Eastern Nigeria. Materials and Methods: An interview-based survey of 600 HTN and DM patients was conducted in two South Eastern Nigerian teaching hospitals, with a structured/semi-structured questionnaire administered to the patients. Results: Approximately 75% of the participants used TMPs concurrently with their prescription medicines, demonstrating a high prevalence of TMP use for the management of HTN and DM. An interesting observation was that, according to the patient interviews, most doctors did not know, and were not told, about TMP use by their patients. Potentially, such use may predispose patients to severe hypotension or hypoglycaemia and other adverse effects, e.g. drug interactions and direct toxicities. The poor quality and scanty or anecdotal directions of TMPs also raise safety concerns. Quantitative statistical cross-analysis of the data indicated some associations between patients' use of TMPs, their conditions and their demographics. Age and marital status had statistically significant associations with TMP usage, while no association existed between participants' gender, level of education or religion and their usage of TMPs (P = 0.636; P = 0.533; P = 0.419 respectively). The older age group, over 40 years, used TMPs more than the younger group, and married participants were more interested in traditional medicine than the unmarried group. Conclusion: This study forms the basis of a future survey to be conducted among Nigerian doctors, to ascertain their views on traditional/alternative medicine and its possible integration into the national healthcare system. The empirical knowledge from this study encourages further research in the search for pharmacologically effective medicinal plants for the better health management of the Nigerian people.

Other systems of medicine
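A sketch of the kind of cross-tabulation behind the reported P values, testing the association between a demographic variable and TMP usage with a chi-square test of independence (the counts are invented for illustration; the paper's raw data are not reproduced):

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = age group (<=40, >40), columns = TMP use (no, yes).
table = np.array([[90, 150],
                  [60, 300]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")  # p < 0.05 suggests an association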
DOAJ Open Access 2025
Outsourcing Love, Companionship, and Sex: Robot Acceptance and Concerns

I. Joyce Chang, Tim S. Welch, David Knox et al.

Due to constantly evolving technology, a new challenge has entered the relationship landscape: the inclusion of robots as emotional and intimate partners. This article raises the question of the degree to which companionship and intimacy may be fulfilled by robots. Three hundred and fourteen undergraduates, the majority of whom were first- or second-year college students, responded to an online survey on robot acceptance. Factor analysis identified two constructs, which the authors labeled as simulated companionship (e.g., robots as companions/helpful assistants) and simulated intimacy (e.g., robots as intimate partners–emotional and sexual). Data analysis revealed a difference between companionship and intimacy regarding student robot acceptance for home use. Overall, there was greater acceptance of robots as companions than as intimate partners. Group differences for simulated companionship were found for gender, sexual values, commitment to religion, and sexual orientation. While robots may enhance various elements of human life, the data revealed the limits of outsourcing emotional intimacy, companionship, and sex to machines.

Psychology, Special aspects of education
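A sketch of extracting two latent constructs from Likert-type survey items with scikit-learn's FactorAnalysis, standing in for whatever extraction and rotation the authors used (the placeholder data and the two-factor choice are taken only from the abstract):

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder Likert responses: 314 students x 10 items, ratings 1-5.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(314, 10)).astype(float)

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(responses)
loadings = fa.components_.T  # (n_items, 2): loading of each item on each construct
print(loadings.round(2))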
arXiv Open Access 2024
Hostility Detection in UK Politics: A Dataset on Online Abuse Targeting MPs

Mugdha Pandya, Mali Jin, Kalina Bontcheva et al.

Numerous politicians use social media platforms, particularly X, to engage with their constituents. This interaction allows constituents to pose questions and offer feedback but also exposes politicians to a barrage of hostile responses, especially given the anonymity afforded by social media. They are typically targeted in relation to their governmental role, but the comments also tend to attack their personal identity. This can discredit politicians and reduce public trust in the government. It can also incite anger and disrespect, leading to offline harm and violence. While numerous models exist for detecting hostility in general, they lack the specificity required for political contexts. Furthermore, addressing hostility towards politicians demands tailored approaches due to the distinct language and issues inherent to each country (e.g., Brexit for the UK). To bridge this gap, we construct a dataset of 3,320 English tweets spanning a two-year period manually annotated for hostility towards UK MPs. Our dataset also captures the targeted identity characteristics (race, gender, religion, none) in hostile tweets. We perform linguistic and topical analyses to delve into the unique content of the UK political data. Finally, we evaluate the performance of pre-trained language models and large language models on binary hostility detection and multi-class targeted identity type classification tasks. Our study offers valuable data and insights for future research on the prevalence and nature of politics-related hostility specific to the UK.

en cs.CL
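A minimal fine-tuning sketch for the binary hostility-detection task described, using a generic pre-trained transformer classifier (the base model, placeholder tweets, and training arguments are assumptions; the 3,320-tweet dataset itself is not reproduced here):

from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder examples; the actual annotated tweets are not reproduced here.
data = {"text": ["<tweet 1>", "<tweet 2>"], "label": [1, 0]}  # 1 = hostile, 0 = not
ds = Dataset.from_dict(data)

base = "roberta-base"  # assumed base model
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

ds = ds.map(lambda x: tok(x["text"], truncation=True, padding="max_length", max_length=128))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="hostility-clf", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=ds,
)
trainer.train()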
arXiv Open Access 2024
Investigating Annotator Bias in Large Language Models for Hate Speech Detection

Amit Das, Zheng Zhang, Najib Hasan et al.

Data annotation, the practice of assigning descriptive labels to raw data, is pivotal in optimizing the performance of machine learning models. However, it is a resource-intensive process susceptible to biases introduced by annotators. The emergence of sophisticated Large Language Models (LLMs) presents a unique opportunity to modernize and streamline this complex procedure. While existing research extensively evaluates the efficacy of LLMs as annotators, this paper delves into the biases present in LLMs when annotating hate speech data. Our research contributes to understanding biases in four key categories: gender, race, religion, and disability, with four LLMs: GPT-3.5, GPT-4o, Llama-3.1 and Gemma-2. Specifically targeting highly vulnerable groups within these categories, we analyze annotator biases. Furthermore, we conduct a comprehensive examination of potential factors contributing to these biases by scrutinizing the annotated data. We introduce our custom hate speech detection dataset, HateBiasNet, to conduct this research. Additionally, we perform the same experiments on the ETHOS (Mollas et al. 2022) dataset also for comparative analysis. This paper serves as a crucial resource, guiding researchers and practitioners in harnessing the potential of LLMs for data annotation, thereby fostering advancements in this critical field.

en cs.CL, cs.AI
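A rough sketch of the annotation-bias comparison the abstract describes: prompt an LLM to label each post, then compare the rate of "hate" labels across the demographic category each post targets (the OpenAI-style client call and the prompt wording are generic assumptions, not the paper's setup):

from collections import defaultdict
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def annotate(text: str) -> str:
    """Ask the LLM for a one-word hate/not-hate label."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": "Label the following post as 'hate' or 'not hate'. "
                              f"Answer with one word.\n\n{text}"}],
    )
    return resp.choices[0].message.content.strip().lower()

def hate_rate_by_category(posts):
    """posts: iterable of (text, target_category) pairs, e.g. ('...', 'religion')."""
    counts = defaultdict(lambda: [0, 0])  # category -> [hate labels, total]
    for text, category in posts:
        counts[category][1] += 1
        if annotate(text).startswith("hate"):
            counts[category][0] += 1
    return {cat: hate / total for cat, (hate, total) in counts.items()}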
arXiv Open Access 2024
Behind the Counter: Exploring the Motivations and Barriers of Online Counterspeech Writing

Kaike Ping, Anisha Kumar, Xiaohan Ding et al.

Current research mainly explores the attributes and impact of online counterspeech, leaving a gap in understanding of who engages in online counterspeech or what motivates or deters users from participating. To investigate this, we surveyed 458 English-speaking U.S. participants, analyzing key motivations and barriers underlying online counterspeech engagement. We presented each participant with three hate speech examples from a set of 900, spanning race, gender, religion, sexual orientation, and disability, and requested counterspeech responses. Subsequent questions assessed their satisfaction, perceived difficulty, and the effectiveness of their counterspeech. Our findings show that having been a target of online hate is a key driver of frequent online counterspeech engagement. People differ in their motivations and barriers towards engaging in online counterspeech across different demographic groups. Younger individuals, women, those with higher education levels, and regular witnesses to online hate are more reluctant to engage in online counterspeech due to concerns around public exposure, retaliation, and third-party harassment. Varying motivations and barriers in counterspeech engagement also shape how individuals view their own self-authored counterspeech and the difficulty they experience in writing it. Additionally, our work explores people's willingness to use AI technologies like ChatGPT for counterspeech writing. Through this work we introduce a multi-item scale for understanding counterspeech motivation and barriers and a more nuanced understanding of the factors shaping online counterspeech engagement.

en cs.HC, cs.CY
arXiv Open Access 2024
VLBiasBench: A Comprehensive Benchmark for Evaluating Bias in Large Vision-Language Model

Sibo Wang, Xiangkui Cao, Jie Zhang et al.

The emergence of Large Vision-Language Models (LVLMs) marks significant strides towards achieving general artificial intelligence. However, these advancements are accompanied by concerns about biased outputs, a challenge that has yet to be thoroughly explored. Existing benchmarks are not sufficiently comprehensive in evaluating biases due to their limited data scale, single questioning format and narrow sources of bias. To address this problem, we introduce VLBiasBench, a comprehensive benchmark designed to evaluate biases in LVLMs. VLBiasBench features a dataset that covers nine distinct categories of social biases, including age, disability status, gender, nationality, physical appearance, race, religion, profession, social economic status, as well as two intersectional bias categories: race x gender and race x social economic status. To build a large-scale dataset, we use the Stable Diffusion XL model to generate 46,848 high-quality images, which are combined with various questions to create 128,342 samples. These questions are divided into open-ended and close-ended types, ensuring thorough consideration of bias sources and a comprehensive evaluation of LVLM biases from multiple perspectives. We conduct extensive evaluations on 15 open-source models as well as two advanced closed-source models, yielding new insights into the biases present in these models. Our benchmark is available at https://github.com/Xiangkui-Cao/VLBiasBench.

en cs.CV, cs.AI
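A small sketch of how close-ended samples of this kind can be assembled and scored, where the bias measure is the rate at which a model picks the stereotype-consistent option (the field names and scoring rule are inferred from the abstract, not the benchmark's schema):

from dataclasses import dataclass

@dataclass
class CloseEndedSample:
    image_path: str          # generated image the question refers to
    question: str
    options: tuple[str, ...]
    stereotype_option: int   # index of the stereotype-consistent answer

def stereotype_pick_rate(samples, answer_fn) -> float:
    """answer_fn(image_path, question, options) -> chosen option index, supplied
    by whatever LVLM is being evaluated."""
    picks = sum(
        answer_fn(s.image_path, s.question, s.options) == s.stereotype_option
        for s in samples
    )
    return picks / len(samples)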
DOAJ Open Access 2024
Multifaith Room for Pediatric Cancer Center of Barcelona—An Intrahospital Public Space in the City

Alba Arboix-Alió, Oriol Ventura Rodà

The internationalization of specialized healthcare emphasizes multiculturalism, requiring adaptable hospital spaces. Sant Joan de Déu (SJD), a leading pediatric hospital managed by a Christian order, has created a multifaith room for prayer and meditation in the main lobby of the Pediatric Cancer Center Barcelona (PCCB). This manuscript presents an unpublished case study, showing the research conducted for the design of the multireligious room and the process of its construction. The methodology includes a bibliographic review, architectural analysis of three meditation spaces, and in-depth interviews with stakeholders. This project highlights SJD’s commitment to blending care and design, emphasizing the humanization of hospital spaces. The triad of religion, public space, and society makes more sense here than ever before.

Religions. Mythology. Rationalism

Page 19 of 50737