Powder bed fusion (PBF) is recognized as one of the most common additive manufacturing technologies because of its attractive capability of fabricating complex geometries from a wide range of materials. However, the quality and reliability of parts produced by this technology remain crucial concerns. The shortcomings of PBF-produced parts are actively debated among stakeholders because such parts still fall short of the strict requirements of high-tech industries. This paper discusses the present state of the art in PBF and its technological challenges, with a focus on selective laser melting (SLM). The review concentrates on articles that address the status and challenges of metal-based PBF, and the study is primarily limited to open-access sources, with special attention given to process parameters and flaws as determining factors for printed-part quality and reliability. Moreover, the common defects arising from unconstrained SLM process parameters, and the parameters that must be monitored to sustain component quality and reliability, are covered. This review finds that several factors, such as laser parameters, powder characteristics, powder material properties and the printing-chamber environment, affect the SLM printing process and the mechanical properties of printed parts. It also concludes that the SLM process is not only expensive and slow compared with conventional manufacturing processes but also suffers from key drawbacks in reliability and quality, in terms of dimensional accuracy, mechanical strength and surface roughness.
Contemporary automated scientific discovery has focused on agents for generating scientific experiments, while systems that perform higher-level scientific activities such as theory building remain underexplored. In this work, we formulate the problem of synthesizing theories consisting of qualitative and quantitative laws from large corpora of scientific literature. We study theory generation at scale, using 13.7k source papers to synthesize 2.9k theories, and examine how literature-grounded versus parametric-knowledge generation, and accuracy-focused versus novelty-focused objectives, change the properties of the resulting theories. Our experiments show that, compared to generation from parametric LLM memory, our literature-supported method creates theories that are significantly better both at matching existing evidence and at predicting future results from 4.6k subsequently written papers.
Hita Kambhamettu, Bhavana Dalvi Mishra, Andrew Head
et al.
Developing a novel research idea is hard. It must be distinct enough from prior work to claim a contribution while also building on it. This requires iteratively reviewing literature and refining an idea based on what a researcher reads; yet when an idea changes, the literature that matters often changes with it. Most tools offer limited support for this interplay: literature tools help researchers understand a fixed body of work, while ideation tools evaluate ideas against a static, pre-curated set of papers. We introduce literature-initiated pivots, a mechanism where engagement with literature prompts revision to a developing idea, and where that revision changes which literature is relevant. We operationalize this in LitPivot, where researchers concurrently draft and vet an idea. LitPivot dynamically retrieves clusters of papers relevant to a selected part of the idea and proposes literature-informed critiques for how to revise it. A lab study ($n{=}17$) shows researchers produced higher-rated ideas with stronger self-reported understanding of the literature space; an open-ended study ($n{=}5$) reveals how researchers use LitPivot to iteratively evolve their own ideas.
The “graying of the fleet” has been a persistent challenge in many fisheries worldwide, with an aging workforce and declining youth participation raising concerns about recruitment and knowledge transfer. However, since 2014–2015, Norway has experienced a reversal of this trend. This paper explores the phenomenon of “ungraying” in the Norwegian fishing fleet. Drawing on survey data and the Fisheries Employment System (FES) theoretical framework, the study finds that recruitment challenges are not widespread and current recruitment patterns reveal a strong reliance on social networks, though formal education is becoming more important. The Norwegian case illustrates how targeted policies, combined with evolving social and economic conditions, can address demographic challenges in fisheries. However, sustaining this trend requires adaptive strategies that balance the need for formal qualifications with mechanisms that maintain community-based engagement, ensuring the long-term vitality of coastal communities and the fisheries. This study contributes to the literature on fisheries recruitment and employment and introduces the Fisher Pathway Model (FPM), which is an analytical framework to capture the evolving FES and the interplay between primary and secondary socialization.
Eleonora Cappuccio, Andrea Esposito, Francesco Greco
et al.
Artificial Intelligence (AI) is one of the major technological advancements of this century, bearing incredible potential for users through AI-powered applications and tools in numerous domains. Because AI models are often black-boxes (i.e., their decision-making processes are unintelligible), developers typically resort to eXplainable Artificial Intelligence (XAI) techniques to interpret their behaviour and produce systems that are transparent, fair, reliable, and trustworthy. However, presenting explanations to the user is not trivial and is often left as a secondary aspect of the system's design process, leading to AI systems that are not useful to end-users. This paper presents a Systematic Literature Review on Explanation User Interfaces (XUIs) to gain a deeper understanding of the solutions and design guidelines employed in the academic literature to effectively present explanations to users. To improve the contribution and real-world impact of this survey, we also present a platform to support Human-cEnteRed developMent of Explainable user interfaceS (HERMES) and guide practitioners and scholars in the design and evaluation of XUIs.
Code review is the practice of assessing code written by teammates with the goal of increasing code quality. Empirical studies have documented the benefits of the practice, which, however, comes at a cost in terms of developers' time. For this reason, researchers have proposed techniques and tools to automate code review tasks such as reviewer selection (i.e., identifying suitable reviewers for a given code change) or the actual review of a given change (i.e., recommending improvements to the contributor as a human reviewer would). Given the substantial number of papers recently published on the topic, it may be challenging for researchers and practitioners to get a complete overview of the state of the art. We present a systematic literature review (SLR) featuring 119 papers on the automation of code review tasks. We provide: (i) a categorization of the code review tasks automated in the literature; (ii) an overview of the under-the-hood techniques used for the automation, including the datasets used for training data-driven techniques; (iii) publicly available techniques and the datasets used for their evaluation, with a description of the evaluation metrics usually adopted for each task. The SLR concludes with a discussion of the current limitations of the state of the art and insights for future research directions.
Christine Helle, Elisabet R. Hillesund, Nina Cecilie Øverby
Abstract Diet during the child's first years is important for growth and development. In toddlerhood, higher diet quality is reported among children eating meals together with family. Although previous literature has documented several associations between maternal mental health and early child feeding practices, less is known about the relationship between maternal mental health and the frequency of shared family meals. This study explores associations between maternal symptoms of anxiety and depression, measured by the Hopkins Symptoms Checklist (SCL‐8), and toddler participation in family meals. We used cross‐sectional data from the Norwegian study Early Food for Future Health, in which participants responded to questionnaires at child age 12 (n = 455) and 24 months (n = 295). Logistic regression was used to explore associations between maternal mental health and the child having regular (≥5 per week) or irregular (<5 per week) family meals (breakfast and dinner), adjusting for relevant child and maternal confounding variables. Children of mothers with higher anxiety and depression scores had higher odds of irregular family meals at both timepoints (OR: 2.067, p = 0.015 and OR: 2.444, p = 0.023, respectively). This is one of few studies exploring associations between maternal mental health and the frequency of shared family meals in early childhood, a period when the foundation for life‐long health is shaped. Given the high prevalence of mental ailments and disorders, these findings are important and may inform future public health interventions. Further exploration of this relationship is needed, including longitudinal research to test predictive associations and qualitative studies to increase insight and understanding.
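The odds ratios reported above come from logistic regression, but their interpretation can be sketched with a plain 2×2 table: the odds of the outcome in one group divided by the odds in the other. A minimal sketch, using hypothetical counts that are not from the study:

```python
import math

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds ratio for a 2x2 table: odds of the outcome in the exposed
    group divided by the odds in the unexposed group."""
    odds_exposed = exposed_cases / exposed_noncases
    odds_unexposed = unexposed_cases / unexposed_noncases
    return odds_exposed / odds_unexposed

# Hypothetical counts (not from the study): among 100 children of mothers
# with high symptom scores, 40 had irregular family meals; among 300
# children of mothers with low scores, 60 did.
or_ = odds_ratio(40, 60, 60, 240)
print(round(or_, 3))        # → 2.667
# In logistic regression, the fitted coefficient is log(OR).
print(round(math.log(or_), 3))
```

An OR above 1, as in the study's 2.067 and 2.444, means the exposed group has higher odds of the outcome; an adjusted logistic model additionally holds the listed confounders fixed.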
AI holds promise for transforming scientific processes, including hypothesis generation. Prior work on hypothesis generation can be broadly categorized into theory-driven and data-driven approaches. While both have proven effective in generating novel and plausible hypotheses, it remains an open question whether they can complement each other. To address this, we develop the first method that combines literature-based insights with data to perform LLM-powered hypothesis generation. We apply our method to five different datasets and demonstrate that integrating literature and data outperforms other baselines (by 8.97\% over few-shot, 15.75\% over literature-based alone, and 3.37\% over data-driven alone). Additionally, we conduct the first human evaluation to assess the utility of LLM-generated hypotheses in assisting human decision-making on two challenging tasks: deception detection and AI-generated content detection. Our results show that human accuracy improves significantly, by 7.44\% and 14.19\% on these tasks, respectively. These findings suggest that integrating literature-based and data-driven approaches provides a comprehensive and nuanced framework for hypothesis generation and could open new avenues for scientific inquiry.
The rapid growth of research in Pattern Analysis and Machine Intelligence (PAMI) has rendered literature reviews essential for consolidating and interpreting knowledge across its many subfields. In this work, we present a comprehensive tertiary analysis of PAMI reviews along three complementary dimensions: (i) identifying structural and statistical regularities in existing surveys; (ii) developing quantitative strategies that help researchers navigate and prioritize within the expanding review corpus; and (iii) critically assessing emerging AI-generated review systems. To support this study, we construct RiPAMI, a large-scale database containing more than 3,000 review articles, and combine narrative synthesis with statistical analysis to capture structural and content-level features. Our analyses reveal distinctive organizational patterns as well as persistent gaps in current review practices. Building on these insights, we propose practical, article-level strategies for indicator-guided navigation that move beyond simple citation counts. Finally, our evaluation of state-of-the-art AI-generated reviews indicates encouraging advances in coherence and organization, yet also highlights enduring weaknesses in reference retrieval, coverage of recent work, and the incorporation of visual elements. Together, these findings provide both a critical appraisal of existing review practices and a forward-looking perspective on how AI-generated reviews can evolve into trustworthy, customizable, and transformative complements to traditional human-authored surveys.
In this paper, we propose a method to automatically classify AI-related documents from large-scale literature databases, leading to the creation of an AI-related literature dataset named DeepDiveAI. The dataset construction approach integrates expert knowledge with the capabilities of advanced models and is structured in two stages. In the first stage, expert-curated classification datasets are used to train an LSTM model, which extracts a coarse set of AI-related records from large-scale datasets. In the second stage, we use Qwen2.5 Plus to annotate a random 10% of these coarse AI-related records, which are then used to train a BERT binary classifier. This step further refines the coarse record set to obtain the final DeepDiveAI dataset. Evaluation results demonstrate that the entire workflow can efficiently and accurately identify AI-related literature from large-scale datasets.
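The two-stage pipeline above can be sketched as a pair of increasingly precise filters. The keyword classifiers below are hypothetical stand-ins for the LSTM (stage 1) and the BERT binary classifier trained on Qwen2.5 Plus annotations (stage 2); only the control flow mirrors the described workflow:

```python
# Stand-in for the stage-1 LSTM: a high-recall coarse screen.
def stage1_coarse_filter(record: str) -> bool:
    return any(k in record.lower() for k in ("neural", "learning", "ai"))

# Stand-in for the stage-2 BERT classifier: a higher-precision refinement.
def stage2_refine(record: str) -> bool:
    return "learning" in record.lower() or "neural" in record.lower()

def build_dataset(records):
    coarse = [r for r in records if stage1_coarse_filter(r)]   # stage 1
    return [r for r in coarse if stage2_refine(r)]             # stage 2

papers = [
    "Deep neural networks for image segmentation",
    "AI winters: a historical perspective",
    "Reinforcement learning for robotics",
    "Sediment transport in alpine rivers",
]
# Keeps only records that pass both stages; the history paper survives
# the coarse screen but is dropped by the refinement step.
print(build_dataset(papers))
```

The design point is that a cheap, recall-oriented first stage shrinks the corpus so a more expensive, precision-oriented model only has to score a fraction of the records.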
In the dynamic landscape of contemporary business, the surge in data and technological advancement has driven companies to embrace data-driven decision-making processes. Despite the vast potential that data holds for strategic insights and operational efficiencies, substantial challenges arise in the form of data issues. Recognizing these obstacles, the imperative for effective data governance (DG) becomes increasingly apparent. This research endeavors to bridge the gap in DG research within the Operations and Supply Chain Management (OSCM) domain through a comprehensive literature review. Initially, we redefine DG through a synthesis of existing definitions, complemented by insights gained from DG practices. Subsequently, we delineate the constituent elements of DG. Building upon this foundation, we develop an analytical framework to scrutinize the collected literature from the perspectives of both OSCM and DG. Beyond a retrospective analysis, this study provides insights for future research directions. Moreover, this study makes a valuable contribution to industry, as the insights gained from the literature are directly applicable to real-world scenarios.
Kristin Haraldstad, Eirik Abildsnes, Tormod Bøe
et al.
Abstract Background Child poverty has been gradually rising, and about 12% of all Norwegian children are living in a state of relative poverty. This study was part of the New Patterns project, which recruits low-income families requiring long-term welfare services. Included families receive integrated welfare services with the help of a family coordinator. The current study objectives were to explore the associations between HRQoL, demographic variables (age, gender, immigration status) and leisure activities in children and adolescents in low-income families. Methods A cross-sectional survey was conducted among low-income families. Participating families had children (N = 214) aged 8–18 years. Eligible families had a household income below 60% of the equivalized median population income for three consecutive years and needed long-term welfare services. HRQoL was measured using the KIDSCREEN-27 self-report instrument. Descriptive statistics, including means, standard deviations, and proportions, were calculated, and ordinary least squares regressions were performed, clustering standard errors at the family level. Results Compared with boys, girls reported lower HRQoL on only one out of five dimensions, physical wellbeing. In the regression analysis we found statistically significant positive associations between migrant status and HRQoL on all five dimensions: physical wellbeing, psychological wellbeing, parents and autonomy, peers and social support, and school environment. In addition, age was associated with school environment, and age, gender and participation in leisure activities were associated with better physical wellbeing. Conclusions Baseline results regarding HRQoL among children and adolescents in low-income families indicate that they have overall good HRQoL, though some participants had low HRQoL scores, especially on the physical and social support dimensions.
Children with an immigrant background report higher HRQoL than do children without an immigrant background.
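The regressions above cluster standard errors at the family level, since siblings' responses are unlikely to be independent. A minimal sketch of cluster-robust (sandwich) standard errors for a one-regressor OLS model, summing score contributions within each cluster before forming the variance; small-sample corrections and multiple regressors are omitted, and the data are made up:

```python
def ols_cluster_se(x, y, cluster):
    """Simple OLS of y on x (with intercept) and a cluster-robust
    standard error for the slope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    xd = [xi - mx for xi in x]                       # demeaned regressor
    sxx = sum(xi * xi for xi in xd)
    b = sum(xi * (yi - my) for xi, yi in zip(xd, y)) / sxx
    a = my - b * mx
    resid = [yi - a - b * xi for xi, yi in zip(x, y)]
    # Sum score contributions x_i * e_i within each cluster first, so that
    # correlated errors inside a family inflate the variance estimate.
    scores = {}
    for xi, ei, g in zip(xd, resid, cluster):
        scores[g] = scores.get(g, 0.0) + xi * ei
    var_b = sum(s * s for s in scores.values()) / (sxx * sxx)
    return b, var_b ** 0.5

# Made-up data: four children in two families (clusters 0 and 1).
b, se = ols_cluster_se([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8], [0, 0, 1, 1])
print(round(b, 3), round(se, 4))   # → 1.94 0.0141
```

With only one observation per cluster, this reduces to the usual heteroskedasticity-robust estimator; grouping first is what accounts for within-family correlation.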
Abstract Background The implementation of Integrated Care Models (ICMs) represents a strategy for addressing the increasing issues of system fragmentation and improving service customization according to user needs. Available ICMs have been developed for adult populations, and less is known about ICMs specifically designed for children and youth. The study objective was to summarize and assess emerging ICMs for mental health services targeting children and youth in Norway. Methods A horizon scanning study was conducted in the field of child and youth mental health. The study encompassed two key components: (i) the identification of ICMs through a review of both scientific and grey literature, as well as input from key informants, and (ii) the evaluation of selected ICMs using semi-structured interviews with key informants. The aim of the interviews was to identify factors that either promote or hinder the successful implementation or scale up of these ICMs. Results Fourteen ICMs were chosen for analysis. These models encompassed a range of treatment philosophies, spanning from self-care and community care to specialized care. Several models placed emphasis on the referral process, prioritizing low-threshold access, and incorporating other sectors such as housing and child welfare. Four of the selected models included family or parents in their target group and five models extended their services to children and youth beyond the legal age of majority. Nine experts in the field willingly participated in the interview phase of the study. Identified challenges and facilitating factors associated with implementation or scale up of ICMs were related to the Norwegian healthcare system, mental health care delivery, as well as child and youth specific factors. Conclusion Care delivery targeting children and youth’s mental health requires further adaptation to accommodate the intricate nature of their lives. 
ICMs have been identified as a means to address this complexity by offering accessible services and adopting a holistic approach. This study highlights a selection of promising ICMs that appear capable of meeting some of the specific needs of children and youth. However, it is recommended to subject these models to further assessment and refinement to ensure their effectiveness and the fulfilment of their intended outcomes.
Lisa Victoria Burrell, Hanne Marie Rostad, Tore Wentzel-Larsen
et al.
Abstract Background Variation in service allocation between municipalities may arise as a result of prioritisation. Both individual and societal characteristics determine service allocation, but previous literature has often investigated these factors separately. The present study aims to map variation in allocation of long-term care services and investigate the extent to which service allocation is associated with characteristics related to the individual care recipient and the municipality. Methods This cross-sectional study used register data from the Norwegian Registry for Primary Health Care on all 250 687 individuals receiving municipal health and care services in Norway on 31 December 2019. These individual level data were paired with municipal level data from the Municipality-State-Reporting register and information on the care models in Norwegian long-term care services, derived from a nationwide survey. Multilevel analyses were used to identify individual and municipal factors that were associated with allocation of home care, practical assistance and long-term stay in institutions. Results In total, 164 634 people received home care services and 97 380 received practical assistance as of 31 December 2019. Furthermore, 64 404 received both types of home-based services and 31 342 people had a long-term stay in an institution. Increased disability was strongly associated with being allocated more hours of home care and practical assistance, as well as allocation of a long-term institutional stay. The amount of home care and practical assistance declined with increasing age, but the odds of institutional stay increased with age. Care recipients living alone received more home-based services, and women had higher odds of a long-term institutional stay. Significant associations between the proportion of elderly in nursing homes and allocation of a long-term institutional stay and more practical assistance emerged.
Other associations with municipalities’ structural characteristics and care service models were weak. Conclusions The influence of individual characteristics outweighed the contribution of municipality characteristics, and the results point to a limited influence of municipality characteristics on allocation of long-term care services.
Bernardo Caldarola, Dario Mazzilli, Lorenzo Napolitano
et al.
Economic Complexity (EC) methods have gained increasing popularity across fields and disciplines. In particular, the EC toolbox has proved particularly promising in the study of complex and interrelated phenomena, such as the transition towards a greener economy. Using the EC approach, scholars have been investigating the relationship between EC and sustainability, proposing ways to identify the distinguishing characteristics of green products and to assess the readiness of productive and technological structures for the sustainability transition. This article reviews and summarizes the data, methods, and empirical literature that are relevant to the study of the sustainability transition from an EC perspective. We review three distinct but connected blocks of literature on EC and environmental sustainability. First, we survey the evidence linking measures of EC to indicators related to environmental sustainability. Second, we review articles that strive to assess the green competitiveness of productive systems. Third, we examine evidence on green technological development and its connection to non-green knowledge bases. Finally, we summarize the findings for each block and identify avenues for further research in this recent and growing body of empirical literature.
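Most EC measures build on the Revealed Comparative Advantage (RCA, or Balassa) index, which flags a country as a competitive exporter of a product when RCA ≥ 1. A minimal sketch with hypothetical export values (arbitrary units, not real data):

```python
def rca(exports, country, product):
    """Balassa RCA: the product's share in the country's export basket,
    relative to the product's share in total (world) exports."""
    x_cp = exports[country][product]
    x_c = sum(exports[country].values())                  # country total
    x_p = sum(e[product] for e in exports.values())       # product total
    x_total = sum(sum(e.values()) for e in exports.values())
    return (x_cp / x_c) / (x_p / x_total)

# Hypothetical two-country, two-product export matrix.
exports = {
    "A": {"solar_panels": 80, "textiles": 20},
    "B": {"solar_panels": 10, "textiles": 90},
}
print(round(rca(exports, "A", "solar_panels"), 3))   # → 1.778
```

Binarizing this matrix at RCA ≥ 1 yields the country-product network from which complexity indicators (and, in the green-EC literature, green competitiveness measures) are typically computed.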
Pramit Bhattacharyya, Joydeep Mondal, Subhadip Maji
et al.
Bangla (or Bengali) is the fifth most spoken language globally; yet, the state-of-the-art NLP in Bangla is lagging for even simple tasks such as lemmatization, POS tagging, etc. This is partly due to the lack of a varied, high-quality corpus. To alleviate this need, we build Vacaspati, a diverse corpus of Bangla literature. The literary works are collected from various websites; only those works that are publicly available without copyright violations or restrictions are collected. We believe that published literature captures the features of a language much better than newspapers, blogs or social media posts, which tend to follow only a certain literary pattern and, therefore, miss out on language variety. Our corpus Vacaspati is varied from multiple aspects, including type of composition, topic, author, time, space, etc. It contains more than 11 million sentences and 115 million words. We also built a word embedding model, Vac-FT, using FastText from Vacaspati, as well as trained an Electra model, Vac-BERT, using the corpus. Vac-BERT has far fewer parameters and requires only a fraction of resources compared to other state-of-the-art transformer models and yet performs either better or similar on various downstream tasks. On multiple downstream tasks, Vac-FT outperforms other FastText-based models. We also demonstrate the efficacy of Vacaspati as a corpus by showing that similar models built from other corpora are not as effective. The models are available at https://bangla.iitk.ac.in/.
Minakshi Kaushik, Rahul Sharma, Iztok Fister
et al.
Numerical association rule mining is a widely used variant of the association rule mining technique, and it has been extensively used in discovering patterns and relationships in numerical data. Initially, researchers and scientists integrated numerical attributes in association rule mining using various discretization approaches; however, over time, a plethora of alternative methods have emerged in this field. Unfortunately, this proliferation of alternative methods has resulted in a significant knowledge gap in understanding the diverse techniques employed in numerical association rule mining; this paper attempts to bridge that gap by conducting a comprehensive systematic literature review. We provide an in-depth study of diverse methods, algorithms, metrics, and datasets derived from 1,140 scholarly articles published from the inception of numerical association rule mining in 1996 through 2022. In compliance with the inclusion, exclusion, and quality evaluation criteria, 68 papers were chosen to be extensively evaluated. To the best of our knowledge, this systematic literature review is the first of its kind to provide an exhaustive analysis of the current literature and previous surveys on numerical association rule mining. The paper discusses important research issues, the current status, and future possibilities of numerical association rule mining. On the basis of this systematic review, the article also presents a novel discretization measure that contributes by providing a partitioning of numerical data that aligns well with human perception of partitions.
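Classical numerical association rule mining starts from a discretization step such as the equal-width binning sketched below, which turns a numeric attribute into categorical intervals that Apriori-style miners can consume. This is a generic illustration of the standard approach, not the novel discretization measure proposed in the paper:

```python
def equal_width_bins(values, k):
    """Partition [min, max] into k equal-width intervals and return,
    for each value, the index of the interval it falls into."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    bins = []
    for v in values:
        i = int((v - lo) / width)
        bins.append(min(i, k - 1))   # clamp the maximum into the last bin
    return bins

# A numeric attribute (e.g., age) mapped to three interval items.
ages = [18, 22, 25, 31, 40, 44, 59, 60]
print(equal_width_bins(ages, 3))     # → [0, 0, 0, 0, 1, 1, 2, 2]
```

The choice of binning matters: equal-width bins are simple but sensitive to outliers, which is one reason the field moved toward optimization-based and clustering-based interval discovery surveyed in the review.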
Abstract We exploit changes in tax subsidies for union members in Norway to identify the effects of changes in firm-level union density on productivity and wages. Increased deductions in taxable income for union members led to higher membership rates and contributed to a lower decline in union membership rates over time in Norway. Accounting for selection effects and the potential endogeneity of unionisation, the results show that increasing union density at the firm level leads to a substantial increase in both productivity and wages. The wage effect is larger in more productive firms, consistent with rent-sharing models.
This article argues that an increased focus on the inherent conceptual metaphors of chronotopes in canonical literature may contribute to students' awareness of historical and literary development in time and space, thus expanding their literacy-skills acquisition beyond the linear chronological periodization, author portraits and text readings that typically characterize the reading of canonical literature. Furthermore, the article argues that an increased focus on bi- and multilingual students' interpretation of conceptual metaphors may also contribute to this awareness of historical and literary development.
Confusion over different kinds of secondary research, and their divergent purposes, is undermining the effectiveness and usefulness of secondary studies in software engineering. This short paper therefore explains the differences between ad hoc review, case survey, critical review, meta-analysis (aka systematic literature review), meta-synthesis (aka thematic analysis), rapid review and scoping review (aka systematic mapping study). These definitions and associated guidelines help researchers better select and describe their literature reviews, while helping reviewers select more appropriate evaluation criteria.