In the collection of Austrasian Letters, five letters written in the name of the Austrasian queen Brunhild draw the reader's attention. These writings belonged to two separate packets of diplomatic correspondence prepared for envoys travelling to Constantinople between 585 and 593. The two missions undertaken at that time concerned the difficult and tense political situation that had developed in relations between the Merovingians and Constantinople since 582. The chief aim was to soothe the displeasure of the Byzantine emperor Maurice over the failure to fulfil earlier obligations, which consisted in the active involvement of the Franks in the war against the Lombards. Brunhild's grandson Athanagild, carried off to Constantinople, became the bargaining chip between the two courts. The queen's letters attached to the diplomatic packet stand out clearly against the rest of the correspondence. They are distinguished by personal content and emotional language that did not conform to the accepted diplomatic messages and formulas. A close analysis of Brunhild's epistles shows, however, that they constituted an extremely precise and well-considered instrument of communication. The personal expression and emotional scenarios employed in them became a tactic of sorts, intended to exert indirect influence on the emperor. The letters confirm not only the queen's thorough education but also the position and political influence she held at the royal court. The cool political tactics and strategies she pursued also attest to her leadership competence and authority.
Duarte Sampaio de Almeida, Fernando Brito e Abreu, Inês Boavida-Portugal
Purpose: This systematic literature review (SLR) characterizes the current state of the art on digital twinning (DT) technology in tourism-related applications. We aim to evaluate the types of DTs described in the literature, identifying their purposes, the areas of tourism where they have been proposed, their main components, and possible future directions based on current work.
Design/methodology/approach: We conducted this SLR with bibliometric analysis based on an existing, validated methodology. Thirty-four peer-reviewed studies from three major scientific databases were selected for review. They were categorized using a taxonomy that included tourism type, purpose, spatial scale, data sources, data linkage, visualization, and application.
Findings: The topic is at an early, evolving stage, as the oldest study found dates back to 2021. Most reviewed studies deal with cultural tourism, focusing on digitising cultural heritage. Destination management is the primary purpose of these DTs, with mainly site-level spatial scales. In many studies, the physical-digital data linkage is unilateral, lacking twin synchronization. In most DTs considered bilateral, the linkage is indirect. There are more applied than theoretical studies, suggesting progress in applying DTs in the field. Finally, there is an extensive research gap regarding DT technology in tourism, which is worth filling.
Originality/Value: This paper presents a novel SLR with a bibliometric analysis of DTs' applied and theoretical application in tourism. Each reviewed publication is assessed and characterized, identifying the current state of the topic, possible research gaps, and future directions.
The aim of this Systematic Literature Review (SLR) is to identify the main contributions related to the Automated Test Production (ATP) of computer programs and to provide an overview of the models, methodologies, and tools used for this purpose. The results will enable a comprehensive analysis and the insight needed to evaluate their applicability. A previously produced Systematic Literature Mapping (SLM) contributed to the formulation of the research questions and the parameters defining the qualitative analysis protocol of this review.
Jessica L. Ross, Arman Sabbaghi, Run Zhuang, et al.
Clinical trials are critical in advancing medical treatments but often suffer from immense time and financial burden. Advances in statistical methodologies and artificial intelligence (AI) present opportunities to address these inefficiencies. Here we introduce Prognostic Covariate-Adjusted Mixed Models for Repeated Measures (PROCOVA-MMRM) as an advantageous combination of prognostic covariate adjustment (PROCOVA) and Mixed Models for Repeated Measures (MMRM). PROCOVA-MMRM utilizes time-matched prognostic scores generated from AI models to enhance the precision of treatment effect estimators for longitudinal continuous outcomes, enabling reductions in sample size and enrollment times. We first provide a description of the background and implementation of PROCOVA-MMRM, followed by two case study reanalyses where we compare the performance of PROCOVA-MMRM versus the unadjusted MMRM. These reanalyses demonstrate significant improvements in statistical power and precision in clinical indications with unmet medical need, specifically Alzheimer's Disease (AD) and Amyotrophic Lateral Sclerosis (ALS). We also explore the potential for sample size reduction with the prospective implementation of PROCOVA-MMRM, finding that the same or better results could have been achieved with fewer participants in these historical trials if the enhanced precision provided by PROCOVA-MMRM had been prospectively leveraged. We also confirm the robustness of the statistical properties of PROCOVA-MMRM in a variety of realistic simulation scenarios. Altogether, PROCOVA-MMRM represents a rigorous method of incorporating advances in the prediction of time-matched prognostic scores generated by AI into longitudinal analysis, potentially reducing both the cost and time required to bring new treatments to patients while adhering to regulatory standards.
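The variance-reduction mechanism behind prognostic covariate adjustment can be illustrated with a toy cross-sectional example; the full PROCOVA-MMRM method operates on repeated measures with time-matched scores, and the data, coefficients, and names below are entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
treat = rng.integers(0, 2, n)        # 1:1 randomized treatment indicator
prog = rng.normal(0.0, 1.0, n)       # AI-generated prognostic score (hypothetical)
true_effect = 0.5
# Outcome correlated with the prognostic score plus noise.
y = true_effect * treat + 0.9 * prog + rng.normal(0.0, 0.5, n)

def fit_se(X, y):
    """OLS fit; return coefficients and their standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

X_unadj = np.column_stack([np.ones(n), treat])          # treatment only
X_adj = np.column_stack([np.ones(n), treat, prog])      # + prognostic score
b0, se0 = fit_se(X_unadj, y)
b1, se1 = fit_se(X_adj, y)
# The prognostic covariate absorbs outcome variance, shrinking the
# standard error of the treatment-effect estimate (index 1).
```

The same principle, applied per visit within an MMRM, is what enables the sample-size reductions described above.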
As part of the Euclid Early Release Observations (ERO) programme, we analyse deep, wide-field imaging from the VIS and NISP instruments of two Milky Way globular clusters (GCs), namely NGC 6254 (M10) and NGC 6397, to look for observational evidence of their dynamical interaction with the Milky Way. We search for such an interaction in the form of structural and morphological features in the clusters' outermost regions, which are suggestive of the development of tidal tails on scales larger than those sampled by the ERO programme. Our multi-band photometric analysis results in deep and well-behaved colour-magnitude diagrams that, in turn, enable an accurate membership selection. The surface brightness profiles built from these samples of member stars are the deepest ever obtained for these two Milky Way GCs, reaching down to $\sim30.0$ mag arcsec$^{-2}$, which is about $1.5$ mag arcsec$^{-2}$ below the current limit. The investigation of the two-dimensional density map of NGC 6254 reveals an elongated morphology of the cluster peripheries in the direction and with the amplitude predicted by $N$-body simulations of the cluster's dynamical evolution, at high statistical significance. We interpret this as strong evidence for the first detection of tidally induced morphological distortion around this cluster. The density map of NGC 6397 reveals a slightly elliptical morphology, in agreement with previous studies, which requires further investigation on larger scales to be properly interpreted. This ERO project thus demonstrates the power of Euclid in studying the outer regions of GCs at an unprecedented level of detail, thanks to the combination of large field of view, high spatial resolution, and depth enabled by the telescope. Our results highlight the future Euclid survey as the ideal data set to investigate GC tidal tails and stellar streams.
We present the first analysis of the Euclid Early Release Observations (ERO) program that targets fields around two lensing clusters, Abell 2390 and Abell 2764. We use VIS and NISP imaging to produce photometric catalogs for a total of $\sim 500\,000$ objects. The imaging data reach a $5\,\sigma$ typical depth in the range 25.1-25.4 AB in the NISP bands, and 27.1-27.3 AB in the VIS band. Using the Lyman-break method in combination with photometric redshifts, we identify $30$ Lyman-break galaxy (LBG) candidates at $z>6$ and 139 extremely red sources (ERSs), most likely at lower redshift. Because the VIS imaging is deeper than the NISP imaging, we can routinely identify high-redshift Lyman breaks of order $3$ magnitudes, which reduces contamination by brown dwarf stars and low-redshift galaxies. Spectroscopic follow-up campaigns of such bright sources will help constrain both the bright end of the ultraviolet galaxy luminosity function and the quasar luminosity function at $z>6$, and constrain the physical nature of these objects. Additionally, we have performed a combined strong lensing and weak lensing analysis of A2390, and demonstrate how Euclid will contribute to better constraining the virial mass of galaxy clusters. From these data, we also identify optical and near-infrared counterparts of known $z>0.6$ clusters, which exhibit strong lensing features, establishing the ability of Euclid to characterize high-redshift clusters. Finally, we provide a glimpse of Euclid's ability to map the intracluster light out to larger radii than current facilities, enabling a better understanding of the cluster assembly history and mapping of the dark matter distribution. This initial dataset illustrates the diverse spectrum of legacy science that will be enabled by the Euclid survey.
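The Lyman-break (dropout) selection mentioned above can be sketched as a set of colour cuts; the thresholds and magnitudes below are illustrative placeholders, not the paper's actual criteria:

```python
import numpy as np

# Hypothetical AB magnitudes for a few sources in VIS and two NISP bands (Y, J).
# np.nan marks a non-detection, which we replace with an assumed VIS depth.
vis_limit = 27.2
catalog = np.array([
    # VIS,   Y,    J
    [np.nan, 24.0, 23.9],   # undetected in VIS: strong break
    [24.5,  24.3, 24.2],    # flat SED: likely low-z interloper
    [27.0,  23.8, 23.7],    # >3 mag break: candidate
])
vis = np.where(np.isnan(catalog[:, 0]), vis_limit, catalog[:, 0])
y_mag, j_mag = catalog[:, 1], catalog[:, 2]

# Illustrative dropout criteria: a large VIS-Y break selects the Lyman break,
# while a blue Y-J colour rejects dusty low-z galaxies and brown dwarfs.
candidates = (vis - y_mag > 3.0) & (y_mag - j_mag < 0.5)
```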
Scientific literature review generation aims to extract and organize important information from an abundant collection of reference papers and to produce corresponding reviews; however, generated reviews often lack a clear and logical hierarchy. We observe that a high-quality catalogue-guided generation process can effectively alleviate this problem. Therefore, we present an atomic and challenging task named Hierarchical Catalogue Generation for Literature Review as the first step for review generation, which aims to produce a hierarchical catalogue of a review paper given various references. We construct a novel English Hierarchical Catalogues of Literature Reviews Dataset with 7.6k literature review catalogues and 389k reference papers. To accurately assess model performance, we design two evaluation metrics that measure informativeness and similarity to the ground truth in terms of semantics and structure. Our extensive analyses verify the high quality of our dataset and the effectiveness of our evaluation metrics. We further benchmark diverse state-of-the-art summarization models, such as BART, and large language models, such as ChatGPT, to evaluate their capabilities. Finally, we discuss potential directions for this task to motivate future research.
Tetsu Kasanishi, Masaru Isonuma, Junichiro Mori, et al.
Automatic literature review generation is one of the most challenging tasks in natural language processing. Although large language models have tackled literature review generation, the absence of large-scale datasets has been a stumbling block to progress. We release SciReviewGen, consisting of over 10,000 literature reviews and 690,000 papers cited in the reviews. Based on the dataset, we evaluate recent transformer-based summarization models on the literature review generation task, including Fusion-in-Decoder extended for literature review generation. Human evaluation results show that some machine-generated summaries are comparable to human-written reviews, while also revealing challenges of automatic literature review generation, such as hallucinations and a lack of detailed information. Our dataset and code are available at https://github.com/tetsu9923/SciReviewGen.
Abel Goedegebuure, Indika Kumara, Stefan Driessen, et al.
Data mesh is an emerging domain-driven decentralized data architecture that aims to minimize or avoid operational bottlenecks associated with centralized, monolithic data architectures in enterprises. The topic has piqued practitioners' interest, and there is considerable gray literature on it. At the same time, we observe a lack of academic attempts at defining and building upon the concept. Hence, in this article, we aim to start from the foundations and characterize the data mesh architecture regarding its design principles, architectural components, capabilities, and organizational roles. We systematically collected, analyzed, and synthesized 114 industrial gray literature articles. The review provides insights into practitioners' perspectives on the four key principles of data mesh: data as a product, domain ownership of data, self-serve data platform, and federated computational governance. Moreover, due to the comparability of data mesh and SOA (service-oriented architecture), we mapped the findings from the gray literature into the reference architectures from the SOA academic literature to create reference architectures describing three key dimensions of data mesh: organization of capabilities and roles, development, and runtime. Finally, we discuss open research issues in data mesh, partially based on the findings from the gray literature.
Early dark energy has emerged as one of the more promising approaches to address the Hubble tension - the statistically significant disparity between measurements of the Hubble constant made using data from different epochs in cosmic history. However, the idea is not without its own set of challenges, both from the data, in the effects it has on other measurements, such as the large-scale structure tension, and from theoretical concerns such as technical naturalness and the introduction of a new coincidence problem in cosmology. In this brief note, delivered as an invited plenary lecture at the {\it 15th Frontiers of Fundamental Physics conference}, I discuss how some of the fine-tuning problems of early dark energy can be ameliorated by using couplings to other fields already present in cosmology, and for which the epoch of matter-radiation equality is already a special one. The resulting models - neutrino assisted early dark energy, and chameleon early dark energy - provide testable, theoretically robust implementations of this general idea. I will discuss the formulation and the cosmology of such approaches, including some constraints arising from both observational and theoretical considerations.
Christian Toth, Lars Lorch, Christian Knoll, et al.
Causal discovery and causal reasoning are classically treated as separate and consecutive tasks: one first infers the causal graph, and then uses it to estimate causal effects of interventions. However, such a two-stage approach is uneconomical, especially in terms of actively collected interventional data, since the causal query of interest may not require a fully-specified causal model. From a Bayesian perspective, it is also unnatural, since a causal query (e.g., the causal graph or some causal effect) can be viewed as a latent quantity subject to posterior inference -- other unobserved quantities that are not of direct interest (e.g., the full causal model) ought to be marginalized out in this process and contribute to our epistemic uncertainty. In this work, we propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning, which jointly infers a posterior over causal models and queries of interest. In our approach to ABCI, we focus on the class of causally-sufficient, nonlinear additive noise models, which we model using Gaussian processes. We sequentially design experiments that are maximally informative about our target causal query, collect the corresponding interventional data, and update our beliefs to choose the next experiment. Through simulations, we demonstrate that our approach is more data-efficient than several baselines that only focus on learning the full causal graph. This allows us to accurately learn downstream causal queries from fewer samples while providing well-calibrated uncertainty estimates for the quantities of interest.
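The core idea of choosing the experiment that is most informative about the causal query can be sketched in a discrete toy setting with two Markov-equivalent hypotheses over binary variables; the probabilities below are hypothetical, unlike the paper's Gaussian-process models:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = np.asarray(p)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Two causal hypotheses over binary X, Y with the same observational joint
# (Markov equivalent) but different behaviour under intervention:
# H_A: X -> Y,  H_B: Y -> X.  Uniform prior.
prior = np.array([0.5, 0.5])

# Observational experiment: observe (X, Y).  Both hypotheses imply the same
# joint over outcomes (1,1), (1,0), (0,1), (0,0), so nothing can be learned.
obs_lik = np.array([
    [0.45, 0.05, 0.05, 0.45],   # H_A
    [0.45, 0.05, 0.05, 0.45],   # H_B: identical joint
])
# Interventional experiment do(X=1): observe Y.  Under H_B the intervention
# cuts Y's dependence on X, so the hypotheses now predict differently.
do_lik = np.array([
    [0.9, 0.1],                 # H_A: P(Y=1), P(Y=0) given do(X=1)
    [0.5, 0.5],                 # H_B: Y unaffected by setting X
])

def expected_info_gain(lik, prior):
    """Expected reduction in posterior entropy over the hypotheses."""
    marg = prior @ lik                       # P(outcome)
    eig = entropy(prior)
    for k, pk in enumerate(marg):
        post = prior * lik[:, k] / pk        # Bayes update for outcome k
        eig -= pk * entropy(post)
    return eig

eig_obs = expected_info_gain(obs_lik, prior)   # zero: observation cannot tell
eig_do = expected_info_gain(do_lik, prior)     # positive: intervention can
```

An active learner following this criterion would select the intervention, mirroring ABCI's preference for maximally informative experiments.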
Fuzzy rule-based systems (FRBSs) are rule-based systems that use linguistic fuzzy variables as antecedents and consequents to represent human-understandable knowledge. They have been applied to various applications and areas throughout the soft computing literature. However, FRBSs suffer from many drawbacks, such as limited uncertainty representation, a high number of rules, loss of interpretability, and high computational time for learning. To overcome these issues, many extensions of FRBSs have been proposed. This paper presents an overview and literature review of recent trends in various types and prominent areas of fuzzy systems (FRBSs), namely genetic fuzzy systems (GFSs), hierarchical fuzzy systems (HFSs), neuro-fuzzy systems (NFSs), evolving fuzzy systems (eFSs), FRBSs for big data, FRBSs for imbalanced data, interpretability in FRBSs, and FRBSs that use cluster centroids as fuzzy rules. The review covers the years 2010-2021. The paper also highlights important contributions, publication statistics, and current trends in the field, and addresses several open research areas that need further attention from the FRBSs research community.
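A minimal Mamdani-style FRBS can be sketched in a few lines: triangular memberships, min for rule firing, max for aggregation, and centroid defuzzification. The variables and rule base here are illustrative, not drawn from the surveyed systems:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Linguistic terms for input "temperature" (0-40 C).
temp_cool = lambda t: tri(t, 0.0, 10.0, 20.0)
temp_warm = lambda t: tri(t, 10.0, 20.0, 30.0)
temp_hot = lambda t: tri(t, 20.0, 30.0, 40.0)

# Discretized output universe "fan speed" (0-10) and its linguistic terms.
speed = np.linspace(0.0, 10.0, 201)
speed_low = tri(speed, 0.0, 2.0, 5.0)
speed_mid = tri(speed, 2.0, 5.0, 8.0)
speed_high = tri(speed, 5.0, 8.0, 10.0)

def infer(t):
    """Mamdani inference: clip each consequent by its rule's firing strength,
    aggregate with max, and defuzzify with the centroid."""
    rules = [
        (temp_cool(t), speed_low),    # IF temp is cool THEN speed is low
        (temp_warm(t), speed_mid),    # IF temp is warm THEN speed is mid
        (temp_hot(t), speed_high),    # IF temp is hot  THEN speed is high
    ]
    agg = np.zeros_like(speed)
    for strength, consequent in rules:
        agg = np.maximum(agg, np.minimum(strength, consequent))
    return float((speed * agg).sum() / agg.sum())
```

Most of the surveyed extensions (GFS, NFS, eFS) learn or adapt exactly these memberships and rules rather than hand-coding them.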
Compliance of organizations with internal and external norms is a highly relevant topic for both practitioners and academics nowadays. However, the substantive, elementary compliance tactics that organizations can use for achieving internal compliance have been described only in a fragmented manner, scattered across the literatures of distinct academic disciplines. Using a multidisciplinary structured literature review of 134 publications, this study offers three contributions. First, we present a typology of 45 compliance tactics, which constitutes a comprehensive and rich overview of elementary ways of bringing the organization into compliance. Second, we provide an overview of fundamental concepts in the theory of compliance, which forms the basis for the framework we developed for positioning compliance tactics and for analyzing or developing compliance strategies. Third, we present insights for moving from compliance tactics to compliance strategies. In the process, using the multidisciplinary literature review to take a bird's-eye view, we demonstrate that compliance strategies need to be regarded as a richer concept than hitherto perceived. We also show that opportunities for innovation exist.
Enrique Orduna-Malea, Alberto Martin-Martin, Emilio Delgado Lopez-Cozar
Recently, a review concluded that Google Scholar (GS) is not a suitable source of information "for identifying recent conference papers or other gray literature publications". The goal of this letter is to demonstrate that GS can be an effective tool for searching and finding gray literature, as long as appropriate search strategies are used. To do this, we took as examples the same two case studies used by the original review, first describing how GS processes the original search strategies, then proposing alternative search strategies, and finally generalizing each case study into a general search procedure aimed at finding gray literature in Google Scholar for two broad selected case studies: a) all contributions belonging to a congress (the ASCO Annual Meeting); and b) indexed guidelines as well as gray literature within medical institutions (National Institutes of Health) and governmental agencies (U.S. Department of Health & Human Services). The results confirm that the original search strategies were poorly formulated, offering misleading results and leading to erroneous conclusions. Google Scholar lacks many of the advanced search features available in other bibliographic databases (such as PubMed); however, it is one thing to have a friendly search experience, and quite another to find gray literature. We conclude that Google Scholar is a powerful tool for searching gray literature, as long as users are familiar with all the possibilities it offers as a search engine. Poorly formulated searches will undoubtedly return misleading results.
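By way of illustration, queries of the kind the letter advocates might look like the following; the `source:` and `site:` operators are ones Google Scholar supports, but the exact strings here are illustrative rather than the letter's actual search strategies:

```text
# Restrict results to a specific publication venue:
source:"ASCO Annual Meeting"

# Find guidelines hosted on an institution's own domain:
guideline site:nih.gov

# Phrase search within a governmental agency's domain:
"clinical practice guideline" site:hhs.gov
```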
Licia Verde, Emilio Bellini, Cassio Pigozzo, et al.
We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the $\Lambda$CDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95\% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter $\Omega_{\rm MR} < 0.006$ and extra radiation parameterised as extra effective neutrino species $2.3 < N_{\rm eff} < 3.2$ when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond $\Lambda$CDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way that does not depend on late-time Universe assumptions, but depends strongly on early-time physics and in particular on additional components that behave like radiation. We find that the standard ruler length determined in this way is $r_{\rm s} = 147.4 \pm 0.7$ Mpc if the radiation and neutrino components are standard, but the uncertainty increases by an order of magnitude when non-standard dark radiation components are allowed, to $r_{\rm s} = 150 \pm 5$ Mpc.
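The standard ruler discussed above is the comoving sound horizon at the drag epoch, whose sensitivity to extra radiation-like components is explicit in its textbook definition (this expression is standard, not a formula specific to this paper): $r_{\rm s} = \int_{z_{\rm d}}^{\infty} \frac{c_{\rm s}(z)}{H(z)}\, {\rm d}z$, with sound speed $c_{\rm s}(z) = c\left[3\left(1 + \frac{3\rho_{\rm b}(z)}{4\rho_{\gamma}(z)}\right)\right]^{-1/2}$. Any additional component that behaves like radiation increases $H(z)$ before the drag redshift $z_{\rm d}$ and therefore shrinks $r_{\rm s}$, which is why the inferred ruler length depends strongly on $N_{\rm eff}$-like extensions while remaining insensitive to late-time assumptions.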
Cloud computing datacenters host millions of virtual machines (VMs) in real-world scenarios. In this context, Virtual Machine Placement (VMP) is one of the most challenging problems in cloud infrastructure management, considering the large number of possible optimization criteria and the different formulations that could be studied. The VMP literature includes relevant topics such as energy efficiency, Service Level Agreements (SLAs), cloud service markets, Quality of Service (QoS), and carbon dioxide emissions, all of them with high economic and ecological impact. This work presents an extensive, up-to-date review of the most relevant VMP literature in order to identify research opportunities.
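As a minimal illustration of the placement problem itself, a first-fit-decreasing heuristic packs VMs onto hosts by CPU demand. This is a single-resource toy; real VMP formulations are multi-dimensional and multi-objective, and the numbers below are hypothetical:

```python
def first_fit_decreasing(vm_cpu, host_capacity):
    """Place VMs (given as CPU demands) onto identical hosts, opening a new
    host only when no existing one has room. Returns a list of hosts, each
    a list of (vm_index, demand) pairs."""
    hosts, free = [], []
    # Placing the largest demands first tends to tighten the packing.
    for idx, demand in sorted(enumerate(vm_cpu), key=lambda p: -p[1]):
        for h, slack in enumerate(free):
            if demand <= slack:
                hosts[h].append((idx, demand))
                free[h] -= demand
                break
        else:
            hosts.append([(idx, demand)])
            free.append(host_capacity - demand)
    return hosts

# Six VMs with hypothetical CPU demands, hosts with 10 CPU units each.
placement = first_fit_decreasing([4, 8, 1, 4, 2, 1], host_capacity=10)
```

Energy-aware VMP objectives typically build on exactly this kind of consolidation: fewer active hosts means less idle power.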