Evaluating Artificial Intelligence Through a Christian Understanding of Human Flourishing
Nicholas Skytland, Lauren Parsons, Alicia Llewellyn
et al.
Artificial intelligence (AI) alignment is fundamentally a formation problem, not only a safety problem. As Large Language Models (LLMs) increasingly mediate moral deliberation and spiritual inquiry, they do more than provide information; they function as instruments of digital catechesis, actively shaping and ordering human understanding, decision-making, and moral reflection. To make this formative influence visible and measurable, we introduce the Flourishing AI Benchmark: Christian Single-Turn (FAI-C-ST), a framework designed to evaluate Frontier Model responses against a Christian understanding of human flourishing across seven dimensions. By comparing 20 Frontier Models against both pluralistic and Christian-specific criteria, we show that current AI systems are not worldview-neutral. Instead, they default to a Procedural Secularism that lacks the grounding necessary to sustain theological coherence, resulting in a systematic performance decline of approximately 17 points across all dimensions of flourishing. Most critically, there is a 31-point decline in the Faith and Spirituality dimension. These findings suggest that the performance gap in values alignment is not a technical limitation, but arises from training objectives that prioritize broad acceptability and safety over deep, internally coherent moral or theological reasoning.
Writing literature reviews with AI: principles, hurdles and some lessons learned
Saadi Lahlou, Annabelle Gouttebroze, Atrina Oraee
et al.
We qualitatively compared literature reviews produced with varying degrees of AI assistance. The same LLM, given the same corpus of 280 papers but different selections, produced dramatically different reviews, from mainstream and politically neutral to critical and post-colonial, though neither orientation was intended. LLM outputs always appear at first glance to be well written, well informed and thought out, but closer reading reveals gaps, biases and lack of depth. Our comparison of six versions shows a series of pitfalls and suggests precautions necessary when using AI assistance to make a literature review. Main issues are: (1) The bias of ignorance (you do not know what you do not get) in the selection of relevant papers. (2) Alignment and digital sycophancy: commercial AI models slavishly take you further in the direction they understand you give them, reinforcing biases. (3) Mainstreaming: because of their statistical nature, LLM productions tend to favor mainstream perspectives and content; in our case there was only 20% overlap between paper selections by humans and the LLM. (4) Limited capacity for creative restructuring, with vague and ambiguous statements. (5) Lack of critical perspective, coming from distant reading and political correctness. Most pitfalls can be addressed by prompting, but only if the user knows the domain well enough to detect them. There is a paradox: producing a good AI-assisted review requires expertise that comes from reading the literature, which is precisely what AI was meant to reduce. Overall, AI can improve the span and quality of the review, but the gain of time is not as massive as one would expect, and a press-button strategy leaving AI to do the work is a recipe for disaster. We conclude with recommendations for those who write, or assess, such LLM-augmented reviews.
A blessing or a burden? Exploring worker perspectives of using a social robot in a church
Andrew Blair, Peggy Gregory, Mary Ellen Foster
Recent technological advances have allowed robots to assist in the service sector, and consequently accelerate job and sector transformation. Less attention has been paid to the use of robots in real-world organisations where social benefits, as opposed to profits, are the primary motivator. To explore these opportunities, we have partnered with a working church and visitor attraction. We conducted interviews with 15 participants from a range of stakeholder groups within the church to understand worker perspectives of introducing a social robot to the church and analysed the results using reflexive thematic analysis. Findings indicate mixed responses to the use of a robot, with participants highlighting the empathetic responsibility the church has towards people and the potential for unintended consequences. However, information provision and alleviation of menial or mundane tasks were identified as potential use cases. This highlights the need to consider not only the financial aspects of robot introduction, but also how social and intangible values shape what roles a robot should take on within an organisation.
Are You There God? Lightweight Narrative Annotation of Christian Fiction with LMs
Rebecca M. M. Hicke, Brian W. Haggard, Mia Ferrante
et al.
In addition to its more widely studied cultural movements, American Evangelicalism has a well-developed but less externally visible literary side. Christian Fiction, however, has been little studied, and what scholarly attention there is has focused on the explosively popular Left Behind series. In this work, we use computational tools to provide both a broad topical overview of Christian Fiction as a genre and a more directed exploration of how its authors depict divine acts. Working with human annotators, we first developed a codebook for identifying "acts of God." We then adapted the codebook for use by a recent, lightweight LM with the assistance of a much larger model. The laptop-scale LM is largely capable of matching human annotations, even when the task is subtle and challenging. Using these annotations, we show that significant and meaningful differences exist between divine acts depicted by the Left Behind books and Christian Fiction more broadly.
Trust and Trustworthiness from Human-Centered Perspective in HRI -- A Systematic Literature Review
Debora Firmino de Souza, Sonia Sousa, Kadri Kristjuhan-Ling
et al.
The Industry 5.0 transition highlights EU efforts to design intelligent devices that can work alongside humans to enhance human capabilities, a vision in which users' preferences and their need to feel safe while collaborating with such systems take priority. This demands a human-centric research vision and requires a societal and educational shift in how we perceive technological advancements. To better understand this perspective, we conducted a systematic literature review focusing on how trust and trustworthiness can be key aspects of supporting this move towards Industry 5.0. The review provides an overview of the most common methodologies and measurements and collects insights about barriers and facilitators for fostering trustworthy HRI. After a rigorous quality assessment following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, using strict inclusion criteria and screening by at least two reviewers, 34 articles were included in the review. The findings underscore the significance of trust and safety as foundational elements for promoting secure and trustworthy human-machine cooperation. They confirm that almost 30% of the reviewed articles do not present a definition of trust, which is problematic because this lack of conceptual clarity can undermine research efforts to address the problem from a shared perspective. The findings also highlight that the domain and area of application should influence the choice of methods and approaches to fostering trust in HRI, as those choices can significantly affect user preferences and their perceptions and assessment of robot capabilities. Additionally, this lack of conceptual clarity is a potential barrier to fostering trust in HRI and explains the sometimes contradictory findings, and the varying methods and instruments used to investigate trust in robots and other autonomous systems, found in the literature.
Context in object detection: a systematic literature review
Mahtab Jamali, Paul Davidsson, Reza Khoshkangini
et al.
Context is an important factor in computer vision as it offers valuable information to clarify and analyze visual data. Utilizing the contextual information inherent in an image or a video can improve the precision and effectiveness of object detectors. For example, where recognizing an isolated object might be challenging, context information can improve comprehension of the scene. This study explores the impact of various context-based approaches to object detection. Initially, we investigate the role of context in object detection and survey it from several perspectives. We then review and discuss the most recent context-based object detection approaches and compare them. Finally, we conclude by addressing research questions and identifying gaps for further studies. More than 265 publications are included in this survey, covering different aspects of context in different categories of object detection, including general object detection, video object detection, small object detection, camouflaged object detection, zero-shot, one-shot, and few-shot object detection. This literature review presents a comprehensive overview of the latest advancements in context-based object detection, providing valuable contributions such as a thorough understanding of contextual information and effective methods for integrating various context types into object detection, thus benefiting researchers.
Automated Literature Review Using NLP Techniques and LLM-Based Retrieval-Augmented Generation
Nurshat Fateh Ali, Md. Mahdi Mohtasim, Shakil Mosharrof
et al.
This research presents and compares multiple approaches to automating the generation of literature reviews using several Natural Language Processing (NLP) techniques and retrieval-augmented generation (RAG) with a Large Language Model (LLM). The ever-increasing number of research articles poses a major challenge for manual literature review and has increased the demand for automation. The primary objective of this work is a system capable of automatically generating literature reviews from PDF files alone. To meet this objective, we evaluate several NLP strategies: a frequency-based method (spaCy), a transformer model (Simple T5), and retrieval-augmented generation with a Large Language Model (GPT-3.5-turbo). The SciTLDR dataset is chosen for the experiments, and the three techniques are used to implement three distinct systems for auto-generating literature reviews, all evaluated with ROUGE scores. Based on the evaluation, the Large Language Model GPT-3.5-turbo achieved the highest ROUGE-1 score, 0.364; the transformer model comes second and spaCy last. Finally, a graphical user interface is created for the best system, based on the large language model.
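For readers unfamiliar with the evaluation metric above: ROUGE-1 measures unigram overlap between a generated summary and a reference. A minimal sketch of the F1 variant, assuming whitespace tokenization (illustrative only; the study presumably used a standard ROUGE implementation):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simple ROUGE-1 F1: clipped unigram overlap between two texts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # each word counted up to its min frequency
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

A score of 0.364 thus means that, on average, roughly a third of the unigrams are shared between generated and reference summaries (harmonically weighted between precision and recall).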
Recommendations for Early Definition Science with the Nancy Grace Roman Space Telescope
Robyn E. Sanderson, Ryan Hickox, Christopher M. Hirata
et al.
The Nancy Grace Roman Space Telescope (Roman), NASA's next flagship observatory, has significant mission time to be spent on surveys for general astrophysics in addition to its three core community surveys. We considered what types of observations outside the core surveys would most benefit from early definition, given 700 hours of mission time in the first two years of Roman's operation. We recommend that a survey of the Galactic plane be defined early, based on the broad range of stakeholders for such a survey, the added scientific value of a first pass to obtain a baseline for proper motions complementary to Gaia's, and the significant potential synergies with ground-based surveys, notably the Legacy Survey of Space and Time (LSST) on Rubin. We also found strong motivation to follow a community definition process for ultra-deep observations with Roman.
The PLATO Mission
Heike Rauer, Conny Aerts, Juan Cabrera
et al.
PLATO (PLAnetary Transits and Oscillations of stars) is ESA's M3 mission designed to detect and characterise extrasolar planets and perform asteroseismic monitoring of a large number of stars. PLATO will detect small planets (down to <2 R_(Earth)) around bright stars (<11 mag), including terrestrial planets in the habitable zone of solar-like stars. With the complement of radial velocity observations from the ground, planets will be characterised for their radius, mass, and age with high accuracy (5 %, 10 %, 10 % for an Earth-Sun combination respectively). PLATO will provide us with a large-scale catalogue of well-characterised small planets up to intermediate orbital periods, relevant for a meaningful comparison to planet formation theories and to better understand planet evolution. It will make possible comparative exoplanetology to place our Solar System planets in a broader context. In parallel, PLATO will study (host) stars using asteroseismology, allowing us to determine the stellar properties with high accuracy, substantially enhancing our knowledge of stellar structure and evolution. The payload instrument consists of 26 cameras with 12 cm aperture each. For at least four years, the mission will perform high-precision photometric measurements. Here we review the science objectives, present PLATO's target samples and fields, provide an overview of expected core science performance as well as a description of the instrument and the mission profile at the beginning of the serial production of the flight cameras. PLATO is scheduled for launch at the end of 2026. This overview therefore provides a summary of the mission to the community in preparation for the upcoming operational phases.
Early Career Perspectives For the NASA SMD Bridge Program
Jenna M. Cann, Arturo O. Martinez, Amethyst Barnes
et al.
In line with the Astro2020 Decadal Report State of the Profession findings and the NASA core value of Inclusion, the NASA Science Mission Directorate (SMD) Bridge Program was created to provide financial and programmatic support to efforts that work to increase the representation and inclusion of students from under-represented minorities in the STEM fields. To ensure an effective program, particularly for those who are often left out of these conversations, the NASA SMD Bridge Program Workshop was developed as a way to gather feedback from a diverse group of people about their unique needs and interests. The Early Career Perspectives Working Group was tasked with examining the current state of bridge programs, academia in general, and its effect on students and early career professionals. The working group, comprised of 10 early career and student members, analyzed the discussions and responses from workshop breakout sessions and two surveys, as well as their own experiences, to develop specific recommendations and metrics for implementing a successful and supportive bridge program. In this white paper, we will discuss the key themes that arose through our work, and highlight select recommendations for the NASA SMD Bridge Program to best support students and early career professionals.
Completeness Thresholds for Memory Safety of Array Traversing Programs: Early Technical Report
Tobias Reinhard
In this early technical report on an ongoing project, we present -- to the best of our knowledge -- the first study of completeness thresholds for memory safety proofs. Specifically we consider heap-manipulating programs that iterate over arrays without allocating or freeing memory. We present the first notion of completeness thresholds for program verification which reduce unbounded memory safety proofs to bounded ones. Moreover, we present some preliminary ideas on how completeness thresholds can be computed for concrete programs.
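The completeness-threshold idea can be illustrated informally: when a loop's index arithmetic does not depend on the array's contents, checking memory safety for all array lengths up to a small bound can already exhibit any out-of-bounds access. A toy sketch of this reduction (our illustration under that assumption, not the paper's formalism):

```python
def accessed_indices(n, start, step, fuel=10_000):
    """Indices touched by the loop: i = start; while i < n: read a[i]; i += step.
    `fuel` only guards this sketch against non-terminating parameter choices."""
    idx, i = [], start
    while i < n and fuel > 0:
        idx.append(i)
        i += step
        fuel -= 1
    return idx

def memory_safe(n, start, step):
    """The traversal is memory safe iff every access lands inside a[0..n-1]."""
    return all(0 <= i < n for i in accessed_indices(n, start, step))

def bounded_safety_proof(start, step, threshold):
    """Bounded proof: check memory safety only for array lengths 0..threshold.
    For data-independent index arithmetic, a small threshold suffices to
    expose any out-of-bounds access (the completeness-threshold intuition)."""
    return all(memory_safe(n, start, step) for n in range(threshold + 1))
```

Here an off-by-one starting index (`start = -1`) is already caught at tiny array lengths, which is the sense in which a bounded proof can stand in for an unbounded one.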
Christopher B. Butler's Concept of Patristic Mariology
Kazimierz Pek
During the first session of the Second Vatican Council in 1962, it was decided by a narrow margin of votes that the teaching on the Mother of the Lord would be included in the constitution on the Church. Basil Christopher Butler (1902–1986), a Benedictine abbot in England, was one of those who prepared the draft of the Mariological document, which received the support of the English episcopate. Butler demonstrated, on the basis of biblical and above all patristic evidence, that the Mother of Christ should be presented in the perspective of salvation history. His patristic concept rested on the ancient figure of the new Eve, who is the prototype of the Church. He pointed to many writings of the Church Fathers, which were used in the eighth chapter of Lumen gentium, whereas the preliminary drafts of the conciliar Mariology contained few patristic references. It should be noted that Butler drew on the writings of J. H. Newman; like Newman, he converted from Anglicanism to Catholicism. The patristic tradition of Anglicanism in the nineteenth and early twentieth centuries provided sufficient theological arguments not to isolate the Mother of God from the teaching on the Church.
Early Risk Detection of Pathological Gambling, Self-Harm and Depression Using BERT
Ana-Maria Bucur, Adrian Cosma, Liviu P. Dinu
Early risk detection of mental illnesses has a massive positive impact upon the well-being of people. The eRisk workshop has been at the forefront of enabling interdisciplinary research in developing computational methods to automatically estimate early risk factors for mental issues such as depression, self-harm, anorexia and pathological gambling. In this paper, we present the contributions of the BLUE team in the 2021 edition of the workshop, in which we tackle the problems of early detection of gambling addiction, self-harm and estimating depression severity from social media posts. We employ pre-trained BERT transformers and data crawled automatically from mental health subreddits and obtain reasonable results on all three tasks.
Church Synthesis on Register Automata over Linearly Ordered Data Domains
Léo Exibard, Emmanuel Filiot, Ayrat Khalimov
In a Church synthesis game, two players, Adam and Eve, alternately pick some element in a finite alphabet, for an infinite number of rounds. The game is won by Eve if the omega-word formed by this infinite interaction belongs to a given language S, called the specification. It is well-known that for omega-regular specifications, it is decidable whether Eve has a strategy to enforce the specification no matter what Adam does. We study the extension of Church synthesis games to the linearly ordered data domains (Q, <) and (N, <). In this setting, the infinite interaction between Adam and Eve results in an omega-data word, i.e., an infinite sequence of elements in the domain. We study this problem when specifications are given as register automata. These automata are finite automata equipped with a finite set of registers in which they can store data values, which they can then compare with incoming data values with respect to the linear order. Church games over (N, <) are, however, undecidable, even for deterministic register automata. Thus, we introduce one-sided Church games, where Eve instead operates over a finite alphabet, while Adam still manipulates data. We show that they are determined, and that deciding the existence of a winning strategy is in ExpTime, both for Q and N. This follows from a study of constraint sequences, which abstract the behaviour of register automata, and allow us to reduce Church games to omega-regular games. We present an application of one-sided Church games to a transducer synthesis problem. In this application, a transducer models a reactive system (Eve) which outputs data stored in its registers, depending on its interaction with an environment (Adam) which inputs data to the system.
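As background for the reduction to omega-regular games mentioned above: such games over finite arenas are solved by classical fixed-point (attractor) computations. A minimal sketch for a safety objective, where Eve wins if the play never reaches an unsafe state (our illustration, not the paper's construction):

```python
def adam_attractor(states, eve_states, edges, targets):
    """States from which Adam can force the play into `targets`.
    `edges[s]` lists successors of s; states in `eve_states` are Eve's."""
    attr = set(targets)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in attr:
                continue
            succs = edges[s]
            if s in eve_states:
                # Eve's state: she is dragged in only if ALL her moves lead to attr
                pull = bool(succs) and all(t in attr for t in succs)
            else:
                # Adam's state: one successor inside attr suffices
                pull = any(t in attr for t in succs)
            if pull:
                attr.add(s)
                changed = True
    return attr

def eve_wins_safety(states, eve_states, edges, unsafe, init):
    """Eve wins the safety game iff Adam cannot attract the play to `unsafe`."""
    return init not in adam_attractor(states, eve_states, edges, unsafe)
```

In a tiny arena where Eve controls the initial state and can steer the play into a safe self-loop, she wins; if Adam controls that state instead, he moves straight to the unsafe state and wins.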
Learning Anatomical Segmentations for Tractography from Diffusion MRI
Christian Ewert, David Kügler, Anastasia Yendiki
et al.
Deep learning approaches for diffusion MRI have so far focused primarily on voxel-based segmentation of lesions or white-matter fiber tracts. A drawback of representing tracts as volumetric labels, rather than sets of streamlines, is that it precludes point-wise analyses of microstructural or geometric features along a tract. Traditional tractography pipelines, which do allow such analyses, can benefit from detailed whole-brain segmentations to guide tract reconstruction. Here, we introduce fast, deep learning-based segmentation of 170 anatomical regions directly on diffusion-weighted MR images, removing the dependency of conventional segmentation methods on T1-weighted images and slow pre-processing pipelines. Working natively in diffusion space avoids non-linear distortions and registration errors across modalities, as well as interpolation artifacts. We demonstrate consistent segmentation results between 0.70 and 0.87 Dice depending on the tissue type. We investigate various combinations of diffusion-derived inputs and show generalization across different numbers of gradient directions. Finally, integrating our approach to provide anatomical priors for tractography pipelines, such as TRACULA, removes hours of pre-processing time and permits processing even in the absence of high-quality T1-weighted scans, without degrading the quality of the resulting tract estimates.
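The Dice scores reported above measure volumetric overlap between a predicted and a reference segmentation mask. A minimal sketch for flat binary masks (illustrative; real pipelines operate on 3D label volumes):

```python
def dice(pred, target):
    """Dice overlap between two binary masks given as flat 0/1 sequences:
    2|P ∩ T| / (|P| + |T|), with the empty-vs-empty case defined as 1.0."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    denom = sum(map(bool, pred)) + sum(map(bool, target))
    return 2.0 * inter / denom if denom else 1.0
```

A Dice of 0.70 to 0.87 therefore indicates that most of each predicted region coincides with the reference anatomy, with the score varying by tissue type.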
Index of Geographical Terms
Antonio Ignacio Molina Marín
Christian Horrebow's Sunspot Observations -- I. Life and Published Writings
Carsten Sønderskov Jørgensen, Christoffer Karoff, V. Senthamizh Pavai
et al.
Between 1761 and 1776, Christian Horrebow made regular observations of sunspots from Rundetaarn in Copenhagen. Based on these observations he writes in 1775 that "it appears that after the course of a certain number of years, the appearance of the Sun repeats itself with respect to the number and size of the spots". Thus, Horrebow hypothesized the idea of a cyclic Sun several decades before Heinrich Schwabe discovered the solar cycle and estimated its period. This proves the ability of Horrebow as a sunspot observer. In this article, we present a general overview of the work of Christian Horrebow, including a brief biography and a complete bibliography. We also present a translation from Danish to English of his writings on sunspots in the Dansk Historisk Almanak. These writings include tables of daily sunspot measurements of which we discuss the completeness.
100+ Metrics for Software Startups - A Multi-Vocal Literature Review
Kai-Kristian Kemell, Xiaofeng Wang, Anh Nguyen-Duc
et al.
Metrics can be used by businesses to make more objective decisions based on data. Software startups in particular are characterized by the uncertain or even chaotic nature of the contexts in which they operate. Using data in the form of metrics can help software startups to make the right decisions amidst uncertainty and limited resources. However, whereas conventional business metrics and software metrics have been studied in the past, metrics in the specific context of software startup are not widely covered within academic literature. To promote research in this area and to create a starting point for it, we have conducted a multi-vocal literature review focusing on practitioner literature in order to compile a list of metrics used by software startups. Said list is intended to serve as a basis for further research in the area, as the metrics in it are based on suggestions made by practitioners and not empirically verified.
Relation algebras of Sugihara, Belnap, Meyer, and Church
Richard L. Kramer, Roger D. Maddux
Algebras introduced by, or attributed to, Sugihara, Belnap, Meyer, and Church are representable as algebras of binary relations with set-theoretically defined operations. They are definitional reducts or subreducts of proper relation algebras. The representability of Sugihara matrices yields sound and complete set-theoretical semantics for R-mingle.
An Empirical Survey on the Early Adoption of DNS Certification Authority Authorization
Jukka Ruohonen
A new certification authority authorization (CAA) resource record for the domain name system (DNS) was standardized in 2013. Motivated by the later 2017 decision to enforce mandatory CAA checking for most certificate authorities, this paper surveys the early adoption of CAA by using an empirical sample collected from Alexa's top-million domains. According to the results, (i) the adoption of CAA is still at a modest level; only a little below two percent of the popular domains sampled have adopted CAA. Among the domains that have adopted CAA, (ii) authorizations dealing with wildcard certificates are rare compared to conventional certificates. Interestingly, (iii) the results only partially reflect the market structure of the global certificate business. With these timely results, the paper contributes to the ongoing large-scale empirical research on the use of encryption technologies.
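A CAA record (RFC 6844) has a simple presentation format, `flags tag value`, where the `issue` and `issuewild` tags distinguish the conventional and wildcard authorizations contrasted in the survey above. A minimal parser sketch (illustrative only, not the paper's tooling):

```python
def parse_caa(record: str):
    """Parse a CAA record's presentation form, e.g. '0 issue "letsencrypt.org"'.
    Flag bit 128 marks the record as critical per RFC 6844."""
    flags_str, tag, value = record.split(None, 2)
    flags = int(flags_str)
    return {
        "critical": bool(flags & 0x80),
        "tag": tag.lower(),
        "value": value.strip().strip('"'),
    }

def allows_wildcard(records):
    """True if any record grants wildcard issuance via the issuewild tag
    (a bare ';' value denies issuance rather than granting it)."""
    return any(parse_caa(r)["tag"] == "issuewild" and parse_caa(r)["value"] != ";"
               for r in records)
```

Classifying sampled record sets with a helper like `allows_wildcard` is one straightforward way to reproduce the survey's issue-vs-issuewild comparison.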