This Delphi study investigates whether linguists from diverse theoretical backgrounds can reach consensus on core metaconcepts at the syntax–semantics interface, and how these metaconcepts are perceived as interconnected within linguistic theory and education. Expanding on a previous study conducted primarily with Dutch experts, this research draws on an international sample of 58 linguists across generative, cognitive and functional traditions. Through iterative Delphi rounds and Perceived Causal Network analysis, participants evaluated and refined a shared set of metaconcepts and their perceived relationships. The study identifies a stable core of foundational metaconcepts valued across theoretical traditions and shows that the perceived importance of 23 metaconcepts is largely reproduced from the earlier study, despite the broader linguistic and theoretical diversity of the current expert group. Structural metaconcepts continue to be rated as the most important ones for both theoretical linguistics and language education. The network analyses further illuminate how experts view interdependencies among key metaconcepts, revealing several that function as threshold concepts and may need to be acquired before others can be fully understood. These findings strengthen the validity of a metaconceptual approach to grammar teaching, in which the school grammar curriculum is enriched with metaconcepts that are relevant in linguistic theory. Ultimately, the study helps bridge divisions between theoretical schools of thought and between linguistics and grammar education, offering a shared and empirically grounded foundation for developing learners’ metalinguistic understanding.
Research on underrepresented populations is essential for fostering greater diversity within the software industry. Team diversity is important for reasons that go beyond ethics. Diversity contributes to greater innovation and productivity, helping decrease turnover rates and reduce team conflicts. Within this context, LGBTQIA+ software engineering professionals face unique challenges, e.g., self-isolation and feelings of invisibility. Developer Experience (DX) encompasses cognitive, emotional, and motivational considerations, supporting the idea that improving DX can enhance team performance, strengthen collaboration, and lead to more successful software projects. This study aimed to examine traditional and grey literature data through a Multivocal Literature Review focused on the DX of LGBTQIA+ professionals in agile teams. Our findings reveal that issues such as invisibility, prejudice, and discrimination adversely affect their experiences, compounded by the predominance of heterosexual males in the field. Conversely, professionals who feel welcomed by their teams and organizations, especially in processes tailored to their needs, report more positive team dynamics and engagement.
Jungsoo Park, Junmo Kang, Gabriel Stanovsky
et al.
The surge of LLM studies makes synthesizing their findings challenging. Analysis of experimental results from the literature can uncover important trends across studies, but the time-consuming nature of manual data extraction limits its use. Our study presents a semi-automated approach for literature analysis that accelerates data extraction using LLMs. It automatically identifies relevant arXiv papers, extracts experimental results and related attributes, and organizes them into a structured dataset, LLMEvalDB. We then conduct an automated literature analysis of frontier LLMs, reducing the effort of paper surveying and data extraction by more than 93% compared to manual approaches. We validate LLMEvalDB by showing that it reproduces key findings from a recent manual analysis of Chain-of-Thought (CoT) reasoning and also uncovers new insights that go beyond it, showing, for example, that in-context examples benefit coding and multimodal tasks but offer limited gains in math reasoning tasks compared to zero-shot CoT. Our automatically updatable dataset enables continuous tracking of target models by extracting evaluation studies as new data becomes available. Through LLMEvalDB and empirical analysis, we provide insights into LLMs while facilitating ongoing literature analyses of their behavior.
Aleksandra Walczyńska, Merel Braeckman, Nuno Capela
et al.
The aim of the study was to collect data on plant phenology, density of flowers and production of floral resources in European countries, using published and grey literature written in local languages. The search was conducted in 11 European languages (Danish, Dutch, French, German, Greek, Italian, Norwegian Bokmål, Portuguese, Romanian, Spanish and Swedish) and included published and unpublished data from local journals, books, databases or master and doctoral theses. The collection contains 2382 records for 1132 plant species from 113 families. Most of the data collected are on flowering phenology, with a relatively large amount of data on nectar/sugar production and less on pollen production and floral density (1474, 1141, 325 and 152 records, respectively). Our study is unique in collecting data on floral resource traits in local European languages. The data collected are a valuable addition to existing floral resource trait databases and can help to quantify the floral resources available to pollinators and other organisms that depend on them for food in different habitats and ecosystems. At the same time, our collection, in combination with other databases on floral resource traits, allows the identification of plant genera and families for which information is scarce, as well as the best‐studied plants and countries where research on floral resource traits has a long tradition. Synthesis and applications. Our research shows the value of using data published in local languages, often as grey literature, especially when building ecological databases of various kinds. Much of the basic data we collected on floral resource traits is available only in literature published long ago, making it even more difficult for the research community to access. We discuss several technical issues that may be encountered when collecting such floristic data, especially if the data are to be further used in modelling.
Kristian Espeland, Eidi Christensen, Astrid Aandahl
et al.
Background/Objectives: With the increasing prevalence of Crohn’s disease (CD), treatment options for patients who fail conventional and advanced therapy are highly needed. Therefore, we explored the safety and efficacy of extracorporeal photopheresis (ECP) using 5-aminolevulinic acid (ALA) and blue light (405 nm). Methods: Patients with active CD who failed or were intolerant to biological therapy were eligible. Mononuclear cells (90 mL) were collected from each patient using a Spectra Optia® apheresis system and diluted with 100 mL of 0.9% sodium chloride in a collection bag. The cells were incubated with ALA at a concentration of 3 millimolar (mM) for 60 min ex vivo and then illuminated with an LED blue light (405 nm) source (BLUE-PIT®) before reinfusion to the patient. Vital signs and adverse events were recorded regularly. At week 13, we assessed the patients with colonoscopy, the Harvey Bradshaw Index (HBI), the Inflammatory Bowel Disease Health-Related Quality of Life Questionnaire, and the measurement of serum C-reactive protein and fecal calprotectin (FC) levels. Biopsies of the intestines were taken for immunohistochemistry. Results: Seven patients were included. Four patients completed the treatments, with a total of 24 treatments. Three of the four patients achieved a favorable response, including a lower HBI, lower FC levels, and/or endoscopic improvement. No significant adverse events were observed. The remaining three patients received only one, three, or five treatments due to technical difficulties, medical reasons, or the withdrawal of informed consent. Conclusions: ALA-based ECP appears safe and seems to give some clinical improvement for patients with active CD who failed to respond to conventional and advanced therapies.
Roman Bögli, Leandro Lerena, Christos Tsigkanos
et al.
TLA+ is a formal specification language used for designing, modeling, documenting, and verifying systems through model checking. Despite significant interest from the research community, knowledge about usage of the TLA+ ecosystem in practice remains scarce. Industry reports suggest that software engineers could benefit from insights, innovations, and solutions to the practical challenges of TLA+. This paper addresses this gap by conducting a systematic literature review of TLA+'s industrial usage over the past decade. We analyze the trend in industrial application, characterize its use, examine whether its promised benefits resonate with practitioners, and identify challenges that may hinder further adoption.
Augmented Reality (AR) is an emerging technology that ranks among the top innovations in interactive media. With the emergence of new technologies, the question arises of which factors influence user acceptance. Many research models of technology acceptance have been developed and extended over the last decades to answer this question. This research paper provides an overview of the current state of the scientific literature on user acceptance factors of AR in training and education. We conducted a systematic literature review, identifying 45 scientific papers on technology acceptance of augmented reality. Twenty-two papers refer more specifically to the field of training and education. Overall, 33 different technology acceptance models and 34 acceptance variables were identified. Based on the results, there is great potential for further research.
Scientific progress depends on researchers' ability to synthesize the growing body of literature. Can large language models (LMs) assist scientists in this task? We introduce OpenScholar, a specialized retrieval-augmented LM that answers scientific queries by identifying relevant passages from 45 million open-access papers and synthesizing citation-backed responses. To evaluate OpenScholar, we develop ScholarQABench, the first large-scale multi-domain benchmark for literature search, comprising 2,967 expert-written queries and 208 long-form answers across computer science, physics, neuroscience, and biomedicine. On ScholarQABench, OpenScholar-8B outperforms GPT-4o by 5% and PaperQA2 by 7% in correctness, despite being a smaller, open model. While GPT-4o hallucinates citations 78 to 90% of the time, OpenScholar achieves citation accuracy on par with human experts. OpenScholar's datastore, retriever, and self-feedback inference loop also improve off-the-shelf LMs: for instance, OpenScholar-GPT4o improves GPT-4o's correctness by 12%. In human evaluations, experts preferred OpenScholar-8B and OpenScholar-GPT4o responses over expert-written ones 51% and 70% of the time, respectively, compared to GPT-4o's 32%. We open-source all of our code, models, datastore, data, and a public demo.
Gauri Shankar, Md Raihan Uddin, Saddam Mukta
et al.
Blockchain technology is an emerging digital innovation that has gained immense popularity in enhancing individual security and privacy within Information Systems (IS). This surge in interest is reflected in the exponential increase in research articles published on blockchain technology, highlighting its growing significance in the digital landscape. However, the rapid proliferation of published research presents significant challenges for manual analysis and synthesis due to the vast volume of information. The complexity and breadth of topics, combined with the inherent limitations of human data processing capabilities, make it difficult to comprehensively analyze and draw meaningful insights from the literature. To this end, we adopted the Computational Literature Review (CLR) to analyze the impact of the pertinent literature and to model its topics using the Latent Dirichlet Allocation (LDA) technique. We identified 10 topics related to security and privacy and provide a detailed description of each topic. From the critical analysis, we observed several limitations, and several future directions are provided as an outcome of this review.
<p>Understanding carbon exchange processes between land reservoirs and the atmosphere is essential for predicting carbon–climate feedbacks. Still, considerable uncertainty remains in the representation of the terrestrial carbon cycle in Earth system models. An emerging strategy to constrain these uncertainties is to include the role of different microbial groups explicitly. Following this approach, we extend the framework of the MIcrobial-MIneral Carbon Stabilization (MIMICS) model with additional mycorrhizal groups and a nitrogen cycle that includes a novel representation of inorganic nitrogen sorption to particles via a Langmuir isotherm. MIMICS+ v1.0 is designed to capture and quantify relationships between soil microorganisms and their environment, with a particular emphasis on boreal ecosystems. We evaluated MIMICS+ against podzolic soil profiles in Norwegian forests as well as the conventional Community Land Model (CLM). MIMICS+ matched observed carbon stocks better than CLM and gave a broader range of <span class="inline-formula">C:N</span> ratios, more in line with observations. This is mainly explained by a higher directly plant-derived fraction into the soil organic matter (SOM) pools. The model produces microbial biomass estimates in line with numbers reported in the literature. MIMICS+ also showed better representation of climate gradients than CLM, especially in terms of temperature. To investigate responses to changes in nutrient availability, we performed an N enrichment experiment and found that nitrogen sorbed to particles through the sorption algorithm served as a long-term storage of nutrients for the microbes. Furthermore, although the microbial groups responded considerably to the nitrogen enrichment, we only saw minor responses for carbon storage and respiration. Together, our results present MIMICS+ as an attractive tool for further investigations of interactions between microbial functioning and their (changing) environment.</p>
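The Langmuir isotherm invoked above for inorganic nitrogen sorption has a standard saturating form; the symbol names below are ours for illustration, not necessarily those used in MIMICS+ v1.0:

```latex
% Langmuir sorption isotherm: sorbed N as a saturating function of the
% dissolved inorganic nitrogen concentration [N]
N_{\mathrm{sorbed}} = \frac{N_{\max}\, K_L\, [\mathrm{N}]}{1 + K_L\, [\mathrm{N}]}
```

Here $N_{\max}$ is the sorption capacity of the soil particles and $K_L$ the Langmuir affinity constant: sorption grows roughly linearly at low $[\mathrm{N}]$ and saturates at $N_{\max}$, which is what allows particle-bound nitrogen to act as the long-term nutrient store described in the enrichment experiment.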
ABSTRACT Background: Lack of information about the economic burden of COPD is a major cause of the lack of attention this chronic condition receives from governments and policymakers. Objective: To find the economic burden of COPD in Asia, the USA and Europe, and to identify the key cost-driving factors in the management of COPD patients. Methodology: Relevant studies assessing the cost of COPD from a patient or societal perspective were retrieved by thoroughly searching the PubMed, ScienceDirect, Google Scholar, Scopus, and SAGE Premier databases. Results: In the USA, annual per-patient direct medical cost and hospitalization cost were reported as $10,367 and $6852, respectively. In Asia, annual per-patient direct medical cost in Iran, Korea and Singapore was reported as $1544, $3077, and $2335, respectively, while annual per-patient hospitalization cost in Iran, Korea, Singapore, India, China, and Turkey was reported as $865, $1371, $1868, $296, $1477 and $1031, respectively. In Europe, annual per-patient direct medical cost was reported as $11,787, $10,552, $8644, $8203, $7760, $3190, $1889, $2162, and $2254 in Norway, Denmark, Germany, Italy, Sweden, Greece, Spain, Belgium, and Serbia, respectively. Conclusion: Limiting the disease to an early stage and preventing exacerbations may reduce the cost of managing COPD.
Saara Tenhunen, Tomi Männistö, Matti Luukkainen
et al.
Tertiary education institutions aim to prepare their computer science and software engineering students for working life. While most of the technical principles are covered in lower-level courses, team-based capstone projects are a common way to provide students with hands-on experience and teach soft skills. This paper explores the characteristics of software engineering capstone courses presented in the literature. The goal of this work is to understand the pros and cons of different approaches by synthesising the various aspects of software engineering capstone courses and related experiences. In a systematic literature review for 2007-2022, we identified 127 primary studies. These studies were analysed based on their presented course characteristics and the reported course outcomes. The characteristics were synthesised into a taxonomy consisting of duration, team sizes, client and project sources, project implementation, and student assessment. We found that capstone courses generally last one semester and divide students into groups of 4-5 in which they work on a project for a client. For a slight majority of courses, the clients are external to the course staff, and students are often expected to produce a proof-of-concept-level software product as the main end deliverable. The courses also offer versatile assessments for students throughout the project. This paper provides researchers and educators with a classification of characteristics of software engineering capstone courses based on previous research. We further synthesise insights on the reported outcomes of capstone courses. Our review aims to help educators identify various ways of organising capstones and effectively plan and deliver their own capstone courses. The characterisation also helps researchers to conduct further studies on software engineering capstones.
Federico Quin, Danny Weyns, Matthias Galster
et al.
In A/B testing, two variants of a piece of software are compared in the field from an end user's point of view, enabling data-driven decision making. While widely used in practice, no comprehensive study has been conducted on the state of the art in A/B testing. This paper reports the results of a systematic literature review that analyzed 141 primary studies. The results show that the main targets of A/B testing are algorithms and visual elements. Single classic A/B tests are the dominant type of test. Stakeholders have three main roles in the design of A/B tests: concept designer, experiment architect, and setup technician. The primary types of data collected during the execution of A/B tests are product/system data and user-centric data. The dominant uses of the test results are feature selection, feature rollout, and continued feature development. Stakeholders have two main roles during A/B test execution: experiment coordinator and experiment assessor. The main reported open problems are the enhancement of proposed approaches and their usability. Interesting lines for future research include: strengthening the adoption of statistical methods in A/B testing, improving the process of A/B testing, and enhancing the automation of A/B testing.
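The statistical core of a single classic A/B test of the kind surveyed above can be sketched as a two-proportion z-test on conversion counts; the traffic and conversion numbers below are illustrative, not drawn from any of the reviewed studies:

```python
# Hedged sketch: two-proportion z-test for one classic A/B test.
# Counts are illustrative; variant B converts 5% vs. A's 4%.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p) for H0: both variants convert equally."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

z, p = two_proportion_z(200, 5000, 250, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these counts the difference is significant at the conventional 5% level, which is the kind of decision rule behind the feature-selection and rollout uses reported in the review.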
Lucas Francisco da Matta Vegi, Marco Tulio Valente
Elixir is a new functional programming language whose popularity is rising in the industry. However, there are few works in the literature focused on studying the internal quality of systems implemented in this language. Particularly, to the best of our knowledge, there is currently no catalog of code smells for Elixir. Therefore, in this paper, through a grey literature review, we investigate whether Elixir developers discuss code smells. Our preliminary results indicate that 11 of the 22 traditional code smells cataloged by Fowler and Beck are discussed by Elixir developers. We also propose a list of 18 new smells specific to Elixir systems and investigate whether these smells are currently identified by Credo, a well-known static code analysis tool for Elixir. We conclude that only two traditional code smells and one Elixir-specific code smell are automatically detected by this tool. Thus, these early results represent an opportunity for extending tools such as Credo to detect code smells and thereby contribute to improving the internal quality of Elixir systems.
Information Architecture (IA) is a blueprint for the information system in websites or other information-rich environments. It corresponds to how we organize, label and structure information. The importance of Information Architecture and its influence on a system's usability is vastly discussed in the literature. Because of the inherent connection between Information Architecture concepts and the Human Computer Interaction (HCI) field, we decided to investigate how previous research has used Information Architecture in the context of Human Computer Interaction (IAinHCI). In order to do that, we followed a two-phase process. First, we conducted a Systematic Literature Review (SLR). We queried both the ACM and IEEE databases. We filtered and assessed 311 papers that spanned a decade of research on Information Architecture. We found 25 papers that utilized Information Architecture in the context of Human Computer Interaction. Then, we followed a Background Reference Search process using the SLR resulting papers as a starting set. We assessed the eligibility of the reference lists of all 25 papers and found eight additional papers that were relevant to our research question. Results of our review show that IAinHCI papers fall under seven main categories, from IoT to the semantic web and ubiquitous technology. The website category, however, was both the most consistent over the years and the most prevalent, accounting for 67% of the papers. Our findings suggest that IA has not yet uncovered its full potential and there is still room for research to leverage and expand the IA knowledge base, promising a prosperous future for Information Architecture.
Anderson Yoshiaki Iwazaki, Vinicius dos Santos, Katia Romero Felizardo
et al.
Graduate courses can provide specialized knowledge for Ph.D. and Master's students and contribute to developing their hard and soft skills. At the same time, the Systematic Literature Review (SLR) has been increasingly adopted in the computing area as a valuable technique to synthesize the state of the art of a given research topic. However, there is still a poor understanding of the real benefits and drawbacks of offering an SLR course for graduate students. This paper reports an experience that examines such benefits and drawbacks, the difficulties for professors (i.e., educators), and the essential SLR topics to be taught as well as ways to teach them better. We also surveyed computer science graduate students who attended the SLR course, which we have offered for almost ten years to Ph.D. and Master's students at our institution. We found that attending the SLR course is a valuable opportunity for graduate students to conduct the required deep literature review of their research topic, improve their research skills, and strengthen their academic training. Hence, we recommend that Ph.D. and Master's programs offer the SLR course to contribute to their students' academic achievement.
A software pattern is a reusable solution to address a commonly occurring problem within a given context when designing software. Using patterns is a common practice for software architects to ensure software quality. Many pattern collections have been proposed for a large number of application domains. However, because the technology is so recent, only a few collections are available, and they lack extensive testing in industrial blockchain applications. It is also difficult for software architects to adequately apply blockchain patterns in their applications, as doing so requires deep knowledge of blockchain technology. Through a systematic literature review, this paper has identified 120 unique blockchain-related patterns and proposes a pattern taxonomy composed of multiple categories, built from the extracted pattern collection. The purpose of this collection is to map, classify, and describe all available patterns across the literature to help readers make adequate decisions regarding blockchain pattern selection. This study also shows potential applications of those patterns and identifies the relationships between blockchain patterns and other non-blockchain software patterns.
Like the Russian Federation, the United States is a multilingual, multicultural society. A nation of immigrants and indigenous peoples, it has produced a rich body of literature in dozens of languages in addition to English that scholars have only in recent decades begun to pay attention to. Of particular note are texts in Spanish, Yiddish, Chinese, French, Hebrew, German, Arabic, Norwegian, Welsh, Greek, Turkish, Italian, Korean, Polish, Portuguese, Russian, Vietnamese and numerous American Indian languages. In this paper we survey the most significant texts of multilingual American literature. The corpus of literary works shows that, despite Americans' pervasive and enduring xenolinguaphobia (an aversion to other languages), the United States, like other large countries, is a heterogeneous amalgam. Ignoring the variety of works written in languages other than English impoverishes the national culture and handicaps serious readers.
M. García-Llorente, Radha Rubio-Olivar, Inés Gutiérrez-Briceño
Green care is an innovative approach that combines simultaneously caring for people and caring for land through three elements that have not been previously connected: (1) multifunctional agriculture and recognition of the plurality of agricultural system values; (2) social services and health care; and (3) the possibility of strengthening the farming sector and local communities. The current research provides a comprehensive overview of green care in Europe as a scientific discipline through a literature review (n = 98 studies). According to our results, the Netherlands, the UK, Norway and Sweden followed by Italy have led the scientific studies published in English. Green care research comprises a wide range of perspectives and frameworks (social farming, care farming, nature-based solutions, etc.) with differences in their specificities. Green care studies have mainly focused on measuring the effectiveness of therapeutic interventions. Studies that evaluate its relevance in socio-economic and environmental terms are still limited. According to our results, the most common users studied were people suffering from psychological and mental ill health, while the most common activities were horticulture, animal husbandry and gardening. Finally, we discuss the potential of green care to reconnect people with nature and to diversify the farming sector providing new public services associated with the relational values society obtains from the contact with agricultural systems.