Systematic literature reviews in the social sciences overwhelmingly follow arborescent logics -- hierarchical keyword filtering, linear screening, and taxonomic classification -- that suppress the lateral connections, ruptures, and emergent patterns characteristic of complex research landscapes. This research note presents the Rhizomatic Research Agent (V3), a multi-agent computational pipeline grounded in Deleuzian process-relational ontology, designed to conduct non-linear literature analysis through 12 specialized agents operating across a seven-phase architecture. The system was developed in response to the methodological groundwork established by Narayan (2023), who employed rhizomatic inquiry in her doctoral research on sustainable energy transitions but relied on manual, researcher-driven exploration. The Rhizomatic Research Agent operationalizes the six principles of the rhizome -- connection, heterogeneity, multiplicity, asignifying rupture, cartography, and decalcomania -- into an automated pipeline integrating large language model (LLM) orchestration, dual-source corpus ingestion from OpenAlex and arXiv, SciBERT semantic topography, and dynamic rupture detection protocols. Preliminary deployment demonstrates the system's capacity to surface cross-disciplinary convergences and structural research gaps that conventional review methods systematically overlook. The pipeline is open-source and extensible to any phenomenon zone where non-linear knowledge mapping is required.
This paper presents ACT (Allocate Connections between Texts), a novel three-stage algorithm for the automatic detection of biblical quotations in Rabbinic literature. Unlike existing text reuse frameworks that struggle with short, paraphrased, or structurally embedded quotations, ACT combines a morphology-aware alignment algorithm with a context-sensitive enrichment stage that identifies complex citation patterns such as "Wave" and "Echo" quotations. Our approach was evaluated against leading systems, including Dicta, Passim, and Text-Matcher, as well as human-annotated critical editions. We further assessed three ACT configurations to isolate the contribution of each component. Results demonstrate that the full ACT pipeline (ACT-QE) outperforms all baselines, achieving an F1 score of 0.91, with superior Recall (0.89) and Precision (0.94). Notably, ACT-2, which lacks stylistic enrichment, achieves higher Recall (0.90) but suffers in Precision, while ACT-3, using longer n-grams, offers a tradeoff between coverage and specificity. In addition to improving quotation detection, ACT's ability to classify stylistic patterns across corpora opens new avenues for genre classification and intertextual analysis. This work contributes to digital humanities and computational philology by addressing the methodological gap between exhaustive machine-based detection and human editorial judgment. ACT lays a foundation for broader applications in historical textual analysis, especially in morphologically rich and citation-dense traditions like Aggadic literature.
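The three headline numbers in the ACT-QE evaluation are mutually constrained: F1 is the harmonic mean of Precision and Recall, so the reported scores can be cross-checked against one another. A minimal sketch of that check:

```python
# F1 is the harmonic mean of Precision and Recall; the three values reported
# for ACT-QE (P = 0.94, R = 0.89, F1 = 0.91) should therefore be consistent.

def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

p, r = 0.94, 0.89
print(round(f1_score(p, r), 2))  # 0.91, matching the reported F1
```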
Dusit Niyato, Octavia A. Dobre, Trung Q. Duong
et al.
The rapid growth of communications and networking research has created an unprecedented demand for high-quality survey and tutorial papers that can synthesize vast bodies of literature into a coherent understanding and actionable insights. However, writing impactful survey papers presents multifaceted challenges that demand substantial effort beyond traditional research article composition. This article provides a systematic, practical roadmap for prospective authors in the communications research community, drawing upon extensive editorial experience from premier venues such as the IEEE Communications Surveys & Tutorials. We present structured guidelines covering seven essential aspects: strategic topic selection emphasizing novelty and importance, systematic literature collection, effective structural organization, critical review writing, tutorial content development with emphasis on case studies, comprehensive illustration design that enhances comprehension, and identification of future directions. Our goal is to enable junior researchers to craft exceptional survey and tutorial articles that enhance understanding and accelerate innovation within the communications and networking research ecosystem.
Infrastructure is an indispensable part of human life. Over the past decades, the Human-Computer Interaction (HCI) community has paid increasing attention to human interactions with infrastructure. In this paper, we conducted a systematic literature review on infrastructure studies in SIGCHI, one of the most influential communities in HCI. We collected a total of 190 primary studies, covering works published between 2006 and 2024. Most of these studies are inspired by Susan Leigh Star's notion of infrastructure. We identify three major themes in infrastructure studies: growing infrastructure, appropriating infrastructure, and coping with infrastructure. Our review highlights a prevailing trend in SIGCHI's infrastructure research: a focus on informal infrastructural activities across various sociotechnical contexts. In particular, we examine studies that problematize infrastructure and alert the HCI community to its potentially harmful aspects.
This article presents a comprehensive historical overview and analysis of Norwegian descendant literature written by children and grandchildren of World War II perpetrators—specifically Nazis, Waffen-SS front fighters and members of the fascist party Nasjonal Samling (NS)—from the 1980s to the 2020s. Based on an analysis of twenty works, it shows how these narratives articulate the emotional and social burden of family history and engage with an evolving national memory culture. The analysis identifies generational and temporal patterns, including a significant divergence within the second generation. Early publications (1980s) and later “NS children’s” accounts (2010s) foreground stigmatisation, bullying, exclusion and long-term repercussions, whereas self-reflective second- and third-generation works (2000s–2020s) increasingly portray internalised responses, such as inherited shame, guilt and emotional ambivalence. By tracing these developments, the analysis shows that descendant narratives both reflect and reshape existing frameworks of remembrance. Across periods and generations, the burden is marked by strong emotional responses and interwoven with national memory culture. These findings offer new insights into the emotional dimensions of Norway’s evolving memory of World War II, highlighting the interplay between personal, familial and collective memories.
History of scholarship and learning. The humanities
Electricity or heat production from waste incineration is often inefficient and costly, posing challenges for Norway’s ambition to achieve net-zero carbon emissions and a hydrogen-based economy by 2050. To address these challenges, this study aims to develop and evaluate two advanced thermochemical pathways, Sorption-Enhanced Chemical Looping Gasification (SE-CLG) and Pyrolysis-Integrated SE-CLG (Pyro-SE-CLG), for tri-generation (hydrogen, heat, and electricity) from Norwegian Municipal Solid Waste (MSW) and Industrial Solid Waste (ISW), while improving waste management efficiency and environmental performance. Experimental characterization of typical Norwegian ISW (HHV = 17.43 MJ/kg; LHV = 16.22 MJ/kg) revealed substantial energy potential. Literature reports of heavy metals in this type of waste and of oxygen carrier (OC) deactivation through ash interaction prompted the development of the Pyro-SE-CLG model to enhance feedstock flexibility, facilitate heavy metal removal, and align waste utilization with national decarbonization goals. Both models were simulated using Aspen Plus and assessed via a 4-E (Energy, Exergy, Environment, and Economic) analysis. SE-CLG maximized hydrogen yield (170.6 kg H2/ton MSW; 142.8 kg H2/ton ISW), energy efficiency (up to 69.11 %), exergy efficiency (up to 57.29 %), and hot water recovery (up to 4,300 L/ton MSW) for district heating applications. Pyro-SE-CLG, while yielding 16–20 % less hydrogen and requiring five times more OC, enabled complete heavy metal removal using 200 kg of 1 M HCl per ton ISW and improved OC reusability, thereby reducing operational costs. Sensitivity analysis identified optimal hydrogen production at 800 °C (fuel reactor) and 200 °C (WGSR), with Ca2Fe2O5 ensuring stable performance across both configurations.
Environmental analysis highlighted SE-CLG(MSW) as the most favorable option, achieving 25.55 % lower global warming potential (GWP) and 66.80 % lower acidification potential (AP) than ISW, while Pyro-SE-CLG reduced GWP during pyrolysis but exhibited higher post-PSA emissions due to lower CO2 capture efficiency. Economically, Pyro-SE-CLG(ISW) achieved the lowest hydrogen sale price (3.32 USD/kg), whereas SE-CLG(ISW) recorded the highest sustainability index (SI = 2.34). By optimizing hydrogen and heat recovery while addressing heavy metal contamination, this study supports Norway’s transition toward a circular, low-carbon energy system and demonstrates the potential of waste-to-hydrogen pathways to meet national 2050 sustainability targets.
Kelly Azevedo, Luigi Quaranta, Fabio Calefato
et al.
Context. Advancements in Machine Learning (ML) are revolutionizing every application domain, driving unprecedented transformations and fostering innovation. However, despite these advances, several organizations are experiencing friction in the adoption of ML-based technologies, mainly due to the shortage of ML professionals. In this context, Automated Machine Learning (AutoML) techniques have been presented as a promising solution to democratize ML adoption. Objective. We aim to provide an overview of the evidence on the benefits and limitations of using AutoML tools. Method. We conducted a multivocal literature review, which allowed us to identify 54 sources from the academic literature and 108 sources from the grey literature reporting on AutoML benefits and limitations. We extracted reported benefits and limitations from the papers and applied thematic analysis. Results. We identified 18 benefits and 25 limitations. Concerning the benefits, we highlight that AutoML tools can help streamline the core steps of ML workflows, namely data preparation, feature engineering, model construction, and hyperparameter tuning, with concrete benefits on model performance, efficiency, and scalability. In addition, AutoML empowers both novice and experienced data scientists, promoting ML accessibility. On the other hand, we highlight several limitations that may represent obstacles to the widespread adoption of AutoML. For instance, AutoML tools may introduce barriers to transparency and interoperability, exhibit limited flexibility for complex scenarios, and offer inconsistent coverage of the ML workflow. Conclusions. The effectiveness of AutoML in facilitating the adoption of machine learning by users may vary depending on the tool and the context in which it is used. As of today, AutoML tools are used to augment human expertise rather than replace it, and, as such, they require skilled users.
Francesco Salzano, Simone Scalabrino, Rocco Oliveto
et al.
Smart Contracts are programs running logic in the Blockchain network by executing operations through immutable transactions. The Blockchain network validates such transactions, storing them in sequential blocks whose integrity is ensured. Smart Contracts deal with value at stake: if a damaging transaction is validated, it may never be reverted, leading to unrecoverable losses. To prevent this, security aspects have been explored in several fields, with research providing catalogs of security defects, secure code recommendations, and possible solutions to fix vulnerabilities. In our study, we refer to the vulnerability-fixing approaches found in the literature as guidelines. However, it is not clear to what extent developers adhere to these guidelines, nor whether there are other viable common solutions and what they are. The goal of our research is to fill knowledge gaps related to developers' observance of existing guidelines and to propose new and viable solutions to security vulnerabilities. To reach our goal, we will obtain from Solidity GitHub repositories the commits that fix vulnerabilities included in the DASP TOP 10, and we will conduct a manual analysis of the fixing approaches employed by developers. Our analysis aims to determine the extent to which literature-based fixing strategies are followed. Additionally, we will identify and discuss emerging fixing techniques not currently documented in the literature. Through qualitative analysis, we will evaluate the suitability of these new fixing solutions and discriminate between valid approaches and potential mistakes.
We are living in an era of "big literature", where scientific literature is expanding exponentially. While this growth presents new opportunities, it complicates mapping global scientific research landscapes, as manual review methods become infeasible. Recent advancements in machine learning, complex networks, and natural language processing have enabled numerous data-driven discovery methods. Building upon these tools, we introduce an end-to-end workflow for analyzing large-scale literature landscapes, LitLA. This workflow first integrates diverse publication metadata into a bibliographic knowledge graph (KG) representing the research landscape. It then offers tools for exploratory analysis of various landscape aspects. We demonstrate the effectiveness of LitLA via a case study on follow-up works of the multi-objective evolutionary algorithm based on decomposition (MOEA/D). In doing so, we constructed the MOEA/D research landscape as a KG comprising over 5,400 papers, 10,000 authors, 1,600 institutions, and 78,000 keywords. With this landscape, we start with descriptive statistics, investigate prominent topics pertaining to MOEA/D, and interrogate their spatio-temporal and bilateral relationships. We then map the collaboration and citation networks to reveal the community's growth over time. We further experiment with whether learning latent patterns of this landscape can hint at future research directions.
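The first LitLA step, folding publication metadata into a bibliographic knowledge graph, can be illustrated with a minimal sketch. This is not LitLA's actual code, and the record field names below are assumptions for illustration; the idea is simply that papers, authors, and keywords become typed nodes while authorship, tagging, and citation become labelled edges.

```python
from collections import defaultdict

# Toy bibliographic KG builder. Records and field names are invented examples,
# not LitLA's real schema.
records = [
    {"id": "p1", "authors": ["Zhang"], "keywords": ["decomposition"], "cites": []},
    {"id": "p2", "authors": ["Li"], "keywords": ["decomposition"], "cites": ["p1"]},
]

nodes = {}                 # node -> node type
edges = defaultdict(list)  # (source, relation) -> list of targets
for rec in records:
    nodes[rec["id"]] = "paper"
    for a in rec["authors"]:
        nodes[a] = "author"
        edges[(a, "wrote")].append(rec["id"])
    for k in rec["keywords"]:
        nodes[k] = "keyword"
        edges[(rec["id"], "tagged")].append(k)
    for ref in rec["cites"]:
        edges[(rec["id"], "cites")].append(ref)

print(len(nodes))  # 5 typed nodes: 2 papers, 2 authors, 1 shared keyword
```

Once metadata sits in this typed-node/labelled-edge form, descriptive statistics, topic queries, and collaboration or citation networks are all projections over subsets of the edge relations.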
Lilian Mayerhofer, Ragnhild Bang Nes, Xiaoyu Lan
et al.
Background: Physical and sexual violence against pregnant women have been associated with detrimental mental health outcomes for victims. Few studies have examined both positive (wellbeing) and negative (illbeing) mental health indicators in the same sample. Additionally, the literature assessing mental health based on different forms of violence is limited. Objective: To compare both wellbeing (life satisfaction) and illbeing (anxiety and depression) trajectories between non-victimized women and victims of physical, sexual, or both forms of violence that occurred during or shortly before pregnancy. Further, we analyse whether social support moderates these trajectories. Method: This longitudinal study is based on the Norwegian Mother, Father and Child Cohort, covering the period from early pregnancy to toddlerhood (3 years). We compared wellbeing and illbeing trajectories of non-victims (n = 73,081), victims of physical abuse (n = 1076), sexual abuse (n = 683), and both forms of abuse (n = 107) using Growth Curve Modelling. Finally, social support was included as a moderator of wellbeing and illbeing trajectories. Results: Results indicated that victims scored systematically lower in wellbeing and higher in illbeing. Exposure to violence did not significantly change the wellbeing trajectory, pointing to similar developments in wellbeing among victims and non-victims for the considered period. On the other hand, different trajectories in illbeing occurred between victims and non-victims, as well as between victimized groups. Victims experienced greater change in illbeing scores, with a steeper decrease in illbeing compared to non-victims. Both victims and non-victims returned to their respective baseline scores 3 years after birth. All women benefited from social support, but victims of physical abuse were particularly protected by social support. Conclusions: There is an alarming persistence of mental health problems in women exposed to violence during peripregnancy.
Different forms of violence differentially impact women’s mental health. Social support is beneficial among all pregnant women.
Norske myndigheter har i flere år tatt aktive grep for å styrke internasjonaliseringen av kunnskapssektoren. Dette gjelder også samarbeid med flere autoritære land som Kina og Russland, som ikke inngår i Norges sikkerhetspolitiske samarbeid. De seneste årene har vi imidlertid sett en klar dreining mot at spørsmål om nasjonal sikkerhet og statusen til liberale verdier blir mer aktualisert, også knyttet til kunnskapsrelasjoner. Vi ser det i form av både skarpere advarsler fra sikkerhetstjenestene, endringer i regelverk og nye retningslinjer for kunnskapssamarbeid med land som Kina og Russland. I denne artikkelen presenterer vi disse endringene og diskuterer mulige implikasjoner. Empirisk bygger vi på data fra spørreundersøkelser og intervju, samt en gjennomgang av dokumenter og mediesaker om aktuelle hendelser. Teoretisk støtter vi oss på forklaringer med bakgrunn i geopolitikk- og sikkerhetiseringslitteraturen. Vi argumenterer for at tiltak som blir gjort for å beskytte nasjonal sikkerhet og liberale verdier, også kan begrense den frie forskningens handlingsrom og dermed endre rammene for akademisk frihet, spesielt for aktiviteter med tilknytning til aktører fra ikke-allierte land. For å unngå overdrevent strenge rammer, bør forskere og deres institusjoner aktivt vise og kommunisere hvordan de jobber med ansvarlighet i sine kunnskapsrelasjoner. Det gjelder ikke minst i situasjoner der etiske og sikkerhetsrelaterte utfordringer framstår som åpenbare.
Abstract in English:
Norway’s handling of knowledge relations with states outside its security cooperation
Norwegian authorities have for several years actively promoted internationalization of the knowledge sector. This includes collaboration with authoritarian countries such as China and Russia, which are not part of Norway’s security cooperation. However, in the last few years, we have seen a clear turn towards questions of national security and the status of liberal norms garnering more attention, also with consideration to knowledge relations. We observe this in sharper warnings from the security services, revised legislation and regulations and new guidelines for knowledge collaboration with countries such as China and Russia. In this article we study these changes and discuss their possible implications. Empirically, we build on survey and interview data, and we examine policy documents and media reports on relevant incidents. In terms of theory, we draw on explanations grounded in the geopolitics and securitization literature. We argue that measures that are introduced to protect national security and liberal norms may also limit the operational space for independent research and thus change the parameters for academic freedom, especially in relation to activities with connection to actors from non-allied states. To avoid unnecessarily restrictive conditions, researchers and their institutions should actively demonstrate and communicate how they work to ensure responsibility in their knowledge relations. This is especially important in situations where ethical and security-related challenges are obvious.
The theme for CUI 2023 is 'designing for inclusive conversation', but who are CUIs really designed for? The field has its roots in computer science, which has a long-acknowledged diversity problem. Inspired by studies mapping out the diversity of the CHI and voice assistant literature, we set out to investigate how these issues have (or have not) shaped the CUI literature. To do this, we reviewed the 46 full-length research papers that have been published at CUI since its inception in 2019. After detailing the eight papers that engage with accessibility, social interaction, and performance of gender, we show that 90% of papers published at CUI with user studies recruit participants from Europe and North America (or do not specify). To complement existing work in the community towards diversity, we discuss the factors that have contributed to the current status quo, and offer some initial suggestions as to how we as a CUI community can continue to improve. We hope that this will form the beginning of a wider discussion at the conference.
Biomedical knowledge is growing at an astounding pace, with the majority of this knowledge represented as scientific publications. Text mining tools and methods represent automatic approaches for extracting hidden patterns and trends from this semi-structured and unstructured data. In biomedical text mining, Literature Based Discovery (LBD) is the process of automatically discovering novel associations between medical terms otherwise mentioned in disjoint literature sets. LBD approaches have proven successful in reducing the discovery time of potential associations hidden in the vast amount of scientific literature. The process focuses on creating concept profiles for medical terms such as a disease or symptom and connecting them with a drug or treatment based on the statistical significance of the shared profiles. This knowledge discovery approach, introduced in 1989, still remains a core task in text mining. Currently, two approaches based on the ABC principle, open discovery and closed discovery, are the most explored in the LBD process. This review starts with a general introduction to text mining, followed by biomedical text mining, and introduces various literature resources such as MEDLINE, UMLS, MeSH, and SemMedDB. This is followed by a brief introduction to the core ABC principle and its two associated approaches, open discovery and closed discovery, in the LBD process. The review also discusses deep learning applications in LBD by examining the role of transformer models and neural-network-based LBD models, along with their future prospects. Finally, it reviews key biomedical discoveries generated through LBD approaches in biomedicine and concludes with the current limitations and future directions of LBD.
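The ABC open-discovery principle mentioned above can be sketched in a few lines: starting from a term A, collect intermediate B-terms that co-occur with A in some documents, then propose C-terms that co-occur with those B-terms but never directly with A. The mini-corpus below is invented for illustration (echoing Swanson's classic Raynaud's/fish-oil example); real LBD systems work over concept profiles with statistical weighting rather than raw set intersections.

```python
# Toy open discovery over documents represented as sets of terms.
docs = [
    {"raynaud", "blood_viscosity"},   # A co-occurs with B
    {"blood_viscosity", "fish_oil"},  # B co-occurs with C
    {"migraine", "magnesium"},        # unrelated literature
]

def open_discovery(a_term, docs):
    # B-terms: everything co-occurring with A.
    b_terms = set().union(*(d for d in docs if a_term in d)) - {a_term}
    # C-terms: co-occur with some B, but never directly with A.
    c_terms = set()
    for d in docs:
        if a_term not in d and d & b_terms:
            c_terms |= d - b_terms
    return c_terms

print(open_discovery("raynaud", docs))  # {'fish_oil'}
```

Closed discovery is the inverse direction: given both A and C, the same machinery enumerates the B-terms that connect them.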
Marita Heintz, Gyri Hval, Ragnhild Agathe Tornes
et al.
Objective: The aim of this study was to investigate whether the included references in a set of completed systematic reviews are indexed in Ovid MEDLINE and Ovid Embase, and how many references would be missed if we were to restrict our literature searches to one of these sources, or to the two databases in combination.
Methods: We conducted a cross-sectional study where we searched for each included reference (n = 4,709) in 274 reviews produced by the Norwegian Institute of Public Health to find out if the references were indexed in the respective databases. The data was recorded in an Excel spreadsheet where we calculated the indexing rate. The reviews were sorted into eight categories to see if the indexing rate differs from subject to subject.
Results: The indexing rate in MEDLINE (86.6%) was slightly lower than in Embase (88.2%). Without the MEDLINE records in Embase, the indexing rate in Embase was 71.8%. The highest indexing rate was achieved by combining both databases (90.2%). The indexing rate was highest in the category "Physical health - treatment" (97.4%). The category "Welfare" had the lowest indexing rate (58.9%).
Conclusion: Our data reveals that 9.8% of the references are not indexed in either database. Furthermore, in 5% of the reviews, the indexing rate was 50% or lower.
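The indexing-rate calculation behind these figures is straightforward: the rate is the share of included references found in a database, and the combined rate counts a reference as indexed if either database has it. A back-of-the-envelope sketch with made-up reference IDs:

```python
# Toy indexing-rate computation; the reference IDs are invented examples.
medline = {"ref1", "ref2"}
embase = {"ref2", "ref3"}
included = ["ref1", "ref2", "ref3", "ref4"]  # references in the reviews

def indexing_rate(db, refs):
    """Share of included references indexed in the given database."""
    return sum(r in db for r in refs) / len(refs)

print(indexing_rate(medline, included))           # 0.5  (single database)
print(indexing_rate(medline | embase, included))  # 0.75 (union of both)
```

As in the study, the union of the two databases can only raise the rate, and any reference outside both (here `ref4`) corresponds to the 9.8% that neither MEDLINE nor Embase indexes.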
Bibliography. Library science. Information resources, Medicine
Quality aspects such as ethics, fairness, and transparency have been proven to be essential for trustworthy software systems. Explainability has been identified not only as a means to achieve all these three aspects in systems, but also as a way to foster users' sentiments of trust. Despite this, research has only marginally focused on the activities and practices to develop explainable systems. To close this gap, we recommend six core activities and associated practices for the development of explainable systems based on the results of a literature review and an interview study. First, we identified and summarized activities and corresponding practices in the literature. To complement these findings, we conducted interviews with 19 industry professionals who provided recommendations for the development process of explainable systems and reviewed the activities and practices based on their expertise and knowledge. We compared and combined the findings of the interviews and the literature review to recommend the activities and assess their applicability in industry. Our findings demonstrate that the activities and practices are not only feasible, but can also be integrated in different development processes.
Berrak Özer, Martin A. Karlsen, Zachary Thatcher
et al.
We investigate a prototype application for machine-readable literature. The program, called "pyDataRecognition", serves as an example of a data-driven literature search in which the search query is an experimental dataset provided by the user. The user uploads a powder pattern together with the radiation wavelength. The program compares the user data to a database of existing powder patterns associated with published papers and produces a ranking ordered by similarity score. The program returns the digital object identifier (DOI) and full reference of the top-ranked papers, together with a stack plot of the user data alongside the top five database entries. The paper describes the approach and explores its successes and challenges.
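The ranking idea can be sketched as follows. This is an illustrative assumption rather than pyDataRecognition's actual implementation: each powder pattern is treated as an intensity curve on a common grid, and database entries are rank-ordered by a similarity score against the user's pattern, here a Pearson correlation. The patterns and DOIs below are synthetic placeholders.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length intensity curves."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

user = [0.1, 0.9, 0.2, 0.8, 0.1]  # uploaded powder pattern (toy data)
database = {                       # placeholder DOIs -> stored patterns
    "doi:10.0000/a": [0.1, 0.8, 0.3, 0.7, 0.2],  # similar peak shape
    "doi:10.0000/b": [0.9, 0.1, 0.8, 0.2, 0.9],  # anti-correlated
}

# Rank database entries by similarity to the user's pattern, best first.
ranked = sorted(database, key=lambda d: pearson(user, database[d]), reverse=True)
print(ranked[0])  # doi:10.0000/a
```

In practice the patterns must first be brought onto a common momentum-transfer grid (hence the required wavelength) before any point-wise score is meaningful.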
Does technological change destroy or create jobs? New technologies may replace human workers, but can simultaneously create jobs if workers are needed to use these technologies or if new economic activities emerge. Furthermore, technology-driven productivity growth may increase disposable income, stimulating a demand-induced expansion of employment. To synthesize the existing knowledge on this question, we systematically review the empirical literature on the past four decades of technological change and its impact on employment, distinguishing between five broad technology categories (ICT, Robots, Innovation, TFP-style, Other). Overall, we find across studies that the labor-displacing effect of technology appears to be more than offset by compensating mechanisms that create or reinstate labor. This holds for most types of technology, suggesting that previous anxieties over widespread technology-driven unemployment lack an empirical base, at least so far. Nevertheless, low-skill, production, and manufacturing workers have been adversely affected by technological change, and effective up- and reskilling strategies should remain at the forefront of policy making along with targeted social support systems.