Thumeera R. Wanasinghe, Leah Wroblewski, Búi K. Petersen
et al.
With the emergence of Industry 4.0, the oil and gas (O&G) industry is now considering a range of digital technologies to enhance the productivity, efficiency, and safety of its operations while minimizing capital and operating costs, health and environmental risks, and variability across O&G project life cycles. The deployment of emerging technologies allows O&G companies to construct digital twins (DT) of their assets. In terms of DT adoption, the O&G industry is still at an early stage, with implementations limited to isolated, selective applications rather than industry-wide deployment, which limits the benefits DT can deliver. To realize the full potential of DT and related technologies, a comprehensive understanding of DT technology, the current status of O&G-related DT research, and the opportunities and challenges associated with deploying DT in the O&G industry is of paramount importance. To develop this understanding, this paper presents a literature review of DT within the context of the O&G industry. The paper follows a systematic approach to select articles for the review. First, a keyword-based publication search was performed on scientific databases such as Elsevier, IEEE Xplore, OnePetro, Scopus, and Springer. The filtered articles were then analyzed using online text-analytics software (Voyant Tools), followed by a manual review of the abstract, introduction, and conclusion sections to select the most relevant articles for our study. These articles, and the industrial publications cited by them, were thoroughly reviewed to present a comprehensive overview of DT technology and to identify the current research status, opportunities, and challenges of DT deployment in the O&G industry.
From this literature review, it was found that asset integrity monitoring, project planning, and life cycle management are the key application areas of digital twins in the O&G industry, while cyber security, lack of standardization, and uncertainty in scope and focus are the key challenges of DT deployment. In terms of the geographical distribution of DT-related research in the O&G industry, the United States (US) is the leading country, followed by Norway, the United Kingdom (UK), Canada, China, Italy, the Netherlands, Brazil, Germany, and Saudi Arabia. The overall publication rate was approximately ten articles or fewer per year until 2017, with a significant increase in 2018 and 2019. The number of journal publications was noticeably lower than the number of conference publications, and the majority of publications presented theoretical concepts rather than industrial implementations. Both observations suggest that DT implementation in the O&G industry is still at an early stage.
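The keyword-based screening step described in the abstract above can be sketched as follows; the keyword set, scoring rule, and threshold here are illustrative assumptions, not the authors' actual selection criteria.

```python
# Illustrative sketch of a keyword-based screening step for a literature
# review. The keywords and the relevance threshold are hypothetical,
# not the settings used in the study above.

KEYWORDS = {"digital twin", "oil and gas", "asset integrity", "industry 4.0"}

def relevance_score(abstract: str) -> int:
    """Count how many target keywords occur in an abstract (case-insensitive)."""
    text = abstract.lower()
    return sum(1 for kw in KEYWORDS if kw in text)

def screen(articles: dict[str, str], threshold: int = 2) -> list[str]:
    """Keep articles whose abstracts mention at least `threshold` keywords."""
    return [title for title, abstract in articles.items()
            if relevance_score(abstract) >= threshold]

articles = {
    "A": "A digital twin framework for oil and gas asset integrity monitoring.",
    "B": "A survey of convolutional networks for image classification.",
}
print(screen(articles))  # article A passes, B does not
```

In the actual study, a pass like this would only pre-filter candidates; the manual review of abstracts, introductions, and conclusions remains the decisive step.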
Background Understanding the resilience of healthcare is critically important. A resilient healthcare system might be expected to consistently deliver high quality care, withstand disruptive events, and continually adapt, learn, and improve. However, there are many different theories, models, and definitions of resilience, and most are contested and debated in the literature. Clear and unambiguous conceptual definitions are important for both theoretical and practical considerations of any phenomenon, and resilience is no exception. A large international research programme on Resilience in Healthcare (RiH) is seeking to address these issues in a 5-year study across Norway, England, the Netherlands, Australia, Japan, and Switzerland (2018–2023). The aims of this debate paper are: 1) to identify and select core operational concepts of resilience from the literature in order to consider their contributions, implications, and boundaries for researching resilience in healthcare; and 2) to propose a working definition of healthcare resilience that underpins the international RiH research programme. Main text To fulfil these aims, we first introduce an overview of the three core perspectives, or metaphors, that underpin theories of resilience in ecology, engineering, and psychology. Second, we present a brief overview of key definitions of, and approaches to, resilience applicable in healthcare. We position our research programme with collaborative learning and user involvement as vital prerequisite pillars in our conceptualisation and operationalisation of resilience for maintaining the quality of healthcare services. Third, our analysis addresses four core questions that studies of resilience in healthcare need to consider when defining and operationalising resilience: resilience ‘for what’, ‘to what’, ‘of what’, and ‘through what’? Finally, we present our operational definition of resilience.
Conclusion The RiH research program is exploring resilience as a multi-level phenomenon and considers adaptive capacity to change as a foundation for high quality care. We, therefore, define healthcare resilience as: the capacity to adapt to challenges and changes at different system levels, to maintain high quality care. This working definition of resilience is intended to be comprehensible and applicable regardless of the level of analysis or type of system component under investigation.
Purpose: Most companies include a commitment to sustainable growth involving a switch towards the circular economy (CE) model. The purpose of this paper is to present barriers to CE adoption identified by a literature review. The paper also addresses the particular challenges faced by manufacturers by answering the research question: What are the dominant barriers faced by the manufacturing industry in moving towards a CE? Design/methodology/approach: This paper presents a literature review of research identifying barriers to CE adoption in the manufacturing sector. The literature review is followed by a case study identifying barriers to CE as seen by ten companies within manufacturing, including the GS1 global information standardisation agency used by all manufacturers. Findings: The manufacturers investigated focus mostly on recycling and waste reduction. These policies have low or very low CE effect. High CE-effect policies such as maintenance and reuse, which target the CE ideal of no waste, are nearly non-existent. The results identified seven main barriers to the CE: (1) high start-up costs, (2) complex supply chains, (3) challenging business-to-business (B2B) cooperation, (4) lack of information on product design and production, (5) lack of technical skills, (6) quality compromise, and (7) time-consuming and expensive disassembly of products. Research limitations/implications: The data come from participants in a single country, Norway, although the manufacturers are multinational companies adhering to enterprise policies. Practical implications: This research shows that all the companies interviewed are well aware of the growing need for their company to move towards more sustainable operations involving CE concepts.
The barriers identified are explored, and the findings could guide such companies in their efforts to move to a maintenance, reuse, remanufacture and recycle (M+3R) operational model. Social implications: The study found that the major barriers to implementation of CE are quality issues in recycled materials, supply chain complexities, coordination problems between companies, design and production of the product, disassembly of products, and high start-up/investment costs. Originality/value: The research shows how the transition towards a CE takes place in manufacturing industries by studying the manufacturing sector.
P. Mikalef, Kristina Lemmer, Cindy Schaefer
et al.
Abstract Artificial Intelligence (AI) is gradually becoming an integral part of the digital strategy of organizations. Yet, the use of AI in public organizations is still lagging significantly compared to private organizations. Prior literature looking into aspects that facilitate the adoption and use of AI has concentrated on challenges concerning technical aspects of AI technologies, providing little insight into the organizational deployment of AI, particularly in public organizations. Addressing this gap, this study seeks to examine what aspects enable public organizations to develop AI capabilities. To answer this question, we built an integrated and extended model based on the Technology-Organization-Environment (TOE) framework and asked high-level technology managers from municipalities in Europe about factors that influence their development of AI capabilities. We collected data from 91 municipalities in three European countries (Germany, Norway, and Finland) and analyzed responses by means of structural equation modeling. Our findings indicate that five factors – perceived financial costs, organizational innovativeness, perceived governmental pressure, government incentives, and regulatory support – have an impact on the development of AI capabilities. We also find that perceived citizen pressure and perceived value of AI solutions are not important determinants of AI capability formation. Our findings have the potential to stimulate more deliberate adoption of AI by supporting managers in public organizations in developing AI capabilities.
Victor Morel, Leonardo Iwaya, Simone Fischer-Hübner
In recent years, several personalized assistants based on AI have been researched and developed to help users make privacy-related decisions. These AI-driven Personalized Privacy Assistants (AI-driven PPAs) can provide significant benefits for users, who might otherwise struggle to make decisions about their personal data in online environments that often overload them with privacy decision requests. So far, no studies have systematically investigated the emerging topic of AI-driven PPAs, classifying their underlying technologies, architectures, and features, including decision types or the accuracy of their decisions. To fill this gap, we present a Systematic Literature Review (SLR) to map the existing solutions found in the scientific literature, which allows reasoning about existing approaches and open challenges for this research field. We screened several hundred unique research papers published between 2013 and 2025, constructing a classification from the 41 included papers. As a result, this SLR reviews several aspects of existing research on AI-driven PPAs in terms of types of publications, contributions, methodological quality, and other quantitative insights. Furthermore, we provide a comprehensive classification of AI-driven PPAs, delving into their architectural choices, system contexts, types of AI used, data sources, types of decisions, and control over decisions, among other facets. Based on our SLR, we further underline the research gaps and challenges, and formulate recommendations for the design and development of AI-driven PPAs as well as avenues for future research.
Large language models (LLMs) are rapidly transforming various domains, including biomedicine and healthcare, and demonstrate remarkable potential in applications ranging from scientific research to new drug discovery. Graph-based retrieval-augmented generation (RAG) systems, a useful application of LLMs, can improve contextual reasoning through structured entity and relationship identification from long-context knowledge, e.g. the biomedical literature. Despite their many advantages over naive RAG, most graph-based RAG systems are computationally intensive, which limits their application to large-scale datasets. To address this issue, we introduce fastbmRAG, a fast graph-based RAG optimized for biomedical literature. Utilizing the well-organized structure of biomedical papers, fastbmRAG divides the construction of the knowledge graph into two stages: first, drafting graphs using abstracts; and second, refining them using main texts guided by vector-based entity linking, which minimizes redundancy and computational load. Our evaluations demonstrate that fastbmRAG is over 10x faster than existing graph-RAG tools and achieves superior coverage and accuracy with respect to the input knowledge. FastbmRAG provides a fast solution for quickly understanding, summarizing, and answering questions about biomedical literature on a large scale. FastbmRAG is publicly available at https://github.com/menggf/fastbmRAG.
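The two-stage draft-then-refine idea can be illustrated with a toy sketch; the capitalized-token "extractor" below is a hypothetical stand-in for the LLM-based entity and relation extraction a real graph-RAG system such as fastbmRAG would use.

```python
# Toy sketch of the two-stage graph construction idea: draft a knowledge
# graph from abstracts first, then refine it with main texts. The naive
# capitalized-token "extractor" stands in for LLM-based entity and
# relation extraction; it is not fastbmRAG's actual implementation.
import itertools
import re

def extract_entities(text: str) -> set[str]:
    """Hypothetical stand-in extractor: capitalized tokens of 3+ letters."""
    return set(re.findall(r"\b[A-Z][a-zA-Z]{2,}\b", text))

def draft_graph(abstract: str) -> dict:
    """Stage 1: seed the graph with entities co-occurring in the abstract."""
    ents = extract_entities(abstract)
    return {"nodes": ents, "edges": set(itertools.combinations(sorted(ents), 2))}

def refine_graph(graph: dict, main_text: str) -> dict:
    """Stage 2: link only main-text entities that match the drafted node set,
    which keeps redundancy and per-paper compute low."""
    linked = extract_entities(main_text) & graph["nodes"]  # entity linking
    graph["edges"] |= set(itertools.combinations(sorted(linked), 2))
    return graph

g = draft_graph("Aspirin inhibits Cyclooxygenase in Platelets.")
g = refine_graph(g, "We show Aspirin acetylates Cyclooxygenase irreversibly.")
print(sorted(g["nodes"]))
```

The design point the sketch captures is that the expensive full-text pass only confirms and connects entities already drafted from the abstract, rather than re-extracting the whole graph from scratch.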
Introduction. In accordance with the guidelines of the World Health Organisation (WHO) as part of the Global Strategy to Accelerate the Elimination of Cervical Cancer, the Norwegian Institute of Public Health coordinates the implementation of primary and secondary prevention measures, aiming to achieve the near-total elimination of this cancer in the general female population in Norway within the next fifteen years.
The aim of this study is to present the current epidemiological situation of cervical cancer among women in Norway and the prevention of this cancer.
Materials and methods. The study involved a review and analysis of literature from the Cancer in Norway database from 2023 and 2024, as well as data published by: Folkehelseinstituttet (FHI) (2020–2025); Årsrapport, Screeningaktivitet og resultater fra Livmorhalsprogrammet (2023 and 2024); Helsedirektoratet, Livmorhalskreft – pakkeforløp (2022); Helsenorge, Livmorhalskreft (2025); and Kreftregisteret (2022–2025).
Results. Every year, approximately 25,000 Norwegian women are diagnosed with human papillomavirus (HPV) infection, which, when persistent, is responsible for the development of dysplastic changes and cervical cancer. In 2024, 269 new cases of cervical cancer were reported, most of them in the early stages of the disease, and the incidence rate was 9.4 per 100,000, showing a significant downward trend compared to previous years. The average age of women at diagnosis was 48, and the disease rarely affected women under the age of 25.
Conclusions. Norway's experience shows that comprehensive preventive measures significantly contribute to reducing morbidity and mortality from cervical cancer. This experience confirms that consistent implementation of primary and secondary prevention strategies can lead to long-term improvements in women's health and serve as a model for other countries.
In well-being research, the term happiness is often used as a synonym for life satisfaction. However, little is known about lay people's understanding of happiness. Building on the available literature, this study explored lay definitions of happiness across nations and cultural dimensions, analyzing their components and relationship with participants' demographic features. Participants were 2799 adults (age range = 30–60, 50% women) living in urban areas of Argentina, Brazil, Croatia, Hungary, India, Italy, Mexico, New Zealand, Norway, Portugal, South Africa, and the United States. They completed the Eudaimonic and Hedonic Happiness Investigation (EHHI), reporting, among other information, their own definition of happiness. Answers comprised definitions referring to a broad range of life domains, covering both the contextual-social sphere and the psychological sphere. Across countries, and with little variation by age and gender, inner harmony predominated among psychological definitions, and family and social relationships among contextual definitions. Whereas relationships are widely acknowledged as a basic component of happiness, inner harmony is substantially neglected. Nevertheless, its cross-national primacy, together with relationships, is consistent with the view of an ontological interconnectedness characterizing living systems, shared by several conceptual frameworks across disciplines and cultures. At the methodological level, these findings suggest the potential of a bottom-up, mixed-method approach to contextualize psychological dimensions within culture and lay understanding.
Abstract Generating prediction models from high-dimensional data often results in large models with many predictors. Causal inference for such models can therefore be difficult or even impossible in practice. The stand-alone software package MinLinMo emphasizes small linear prediction models over the highest possible predictability, with a particular focus on including variables correlated with the outcome, minimal memory usage, and speed. MinLinMo is demonstrated on large epigenetic datasets with prediction models for chronological age, gestational age, and birth weight comprising, respectively, 15, 14, and 10 predictors. The parsimonious MinLinMo models perform comparably to established prediction models requiring hundreds of predictors.
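The general idea, small linear models built from the variables most correlated with the outcome, can be sketched as below; this is not MinLinMo's actual algorithm, only an illustration of correlation-guided predictor selection.

```python
# Minimal sketch of correlation-guided variable selection for a small
# linear prediction model, in the spirit of (but not identical to) the
# MinLinMo approach described above. Data are synthetic.
import numpy as np

def select_predictors(X: np.ndarray, y: np.ndarray, k: int = 10) -> list[int]:
    """Rank columns of X by absolute Pearson correlation with y; keep top k."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc**2).sum(axis=0)) * np.sqrt((yc**2).sum())
    )
    return [int(i) for i in np.argsort(-np.abs(corr))[:k]]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                              # 50 candidate predictors
y = 3.0 * X[:, 7] - 2.0 * X[:, 21] + rng.normal(scale=0.1, size=200)
print(select_predictors(X, y, k=2))  # columns 7 and 21 dominate
```

A parsimonious model would then be fitted on just the selected columns, trading a little predictability for interpretability, memory, and speed, as the abstract emphasizes.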
Sarah Ghidalia, Ouassila Labbani Narsis, Aurélie Bertaux
et al.
Motivated by the desire to explore the process of combining inductive and deductive reasoning, we conducted a systematic literature review of articles that investigate the integration of machine learning and ontologies. The objective was to identify diverse techniques that incorporate both inductive reasoning (performed by machine learning) and deductive reasoning (performed by ontologies) into artificial intelligence systems. Our review, which included the analysis of 128 studies, allowed us to identify three main categories of hybridization between machine learning and ontologies: learning-enhanced ontologies, semantic data mining, and learning and reasoning systems. We provide a comprehensive examination of all these categories, emphasizing the various machine learning algorithms utilized in the studies. Furthermore, we compared our classification with similar recent work in the field of hybrid AI and neuro-symbolic approaches.
Background: Accurate effort estimation is crucial for planning in Agile iterative development. Agile estimation generally relies on consensus-based methods like planning poker, which require less time and information than other formal methods (e.g., COSMIC) but are prone to inaccuracies. Understanding the common reasons for inaccurate estimations and how proposed approaches can assist practitioners is essential. However, prior systematic literature reviews (SLRs) focus only on estimation practices (e.g., [26, 127]) and effort estimation approaches (e.g., [6]). Aim: We aim to identify themes of reasons for inaccurate estimations and classify approaches to improve effort estimation. Method: We conducted an SLR and identified the key themes and a taxonomy. Results: The reasons for inaccurate estimation are related to information quality, team, estimation practice, project management, and business influences. Effort estimation approaches were the most investigated in the literature, while only a few aim to support the effort estimation process. Moreover, some of the automated approaches are at risk of data leakage and indirect validation scenarios. Recommendations: Practitioners should enhance the quality of information for effort estimation, potentially by adopting an automated approach. Future research should aim to improve information quality while avoiding data leakage and indirect validation scenarios.
In this paper, we explore the relevance of large language models (LLMs) for annotating references to Roman and Greek mythological entities in modern and contemporary French literature. We present an annotation scheme and demonstrate that recent LLMs can be directly applied to follow this scheme effectively, although not without occasionally making significant analytical errors. Additionally, we show that LLMs (and, more specifically, ChatGPT) are capable of offering interpretative insights into the use of mythological references by literary authors. However, we also find that LLMs struggle to accurately identify relevant passages in novels (when used as an information retrieval engine), often hallucinating and generating fabricated examples, an issue that raises significant ethical concerns. Nonetheless, when used carefully, LLMs remain valuable tools for performing annotations with high accuracy, especially for tasks that would be difficult to annotate comprehensively on a large scale through manual methods alone.
Bintang Noor Prabowo, Alenka Temeljotov Salaj, Jardar Lohne
This study validated the theoretical key points obtained from a previously published scoping literature review within the context of three Norwegian World Heritage sites: Røros, Rjukan, and Notodden. The cross-sectional table of the urban heritage facility management (UHFM) framework, based on interviews and correspondence, demonstrates the connection between the tasks of the six clusters of technical departments responsible for providing urban-scale support services and the modified critical steps of the Historic Urban Landscape approach, to which an additional step for “monitoring and evaluation” was added. UHFM operates at the intersection of heritage preservation, urban-scale facility management, and stakeholder coordination. It requires a careful balance between urban heritage conservation and sustainable urban management practices, thus enabling the preservation of World Heritage status, which, among other benefits, fosters sustainable tourism. The three case studies highlighted the significance of UHFM in preserving heritage value, authenticity, visual quality, and significance. Besides providing comprehensive support services that extend beyond the daily tasks of conservators and World Heritage managers, UHFM also allows feedback mechanisms for continuous improvement. This study highlighted the complex relationship between the provision of urban-scale support services and the preservation of Outstanding Universal Value as the core business of World Heritage sites.
We highlight the complexities in estimating the valuation effects of board gender quotas by critically revisiting studies of Norway’s pioneering board gender-quota law. We use the short-run event study of Ahern and Dittmar [Ahern KR, Dittmar A (2012) The changing of the boards: The impact on firm valuation of mandated female board representation. Quart. J. Econom. 127(1):137–197] to illustrate (1) the difficulties in attributing quota-related news to specific dates, (2) the need to account for contemporaneous cross-correlation of stock returns when judging the statistical significance of event-related abnormal stock returns, and (3) the fundamental difficulty of separating quota-induced valuation effects from the influences of firm characteristics and macroeconomic events such as the financial crisis. We provide new evidence suggesting that the valuation effect of Norway’s quota law was statistically insignificant. Overall, our evidence suggests that, at the time of the Norwegian quota, the supply of qualified female director candidates was high enough to avoid the negative consequences of the quota highlighted previously in the literature. This paper was accepted by Renee Adams, finance.
Zander S. Venter, Ruben E. Roos, Megan S. Nowell
et al.
Mapping the spatial and temporal dynamics of species distributions is necessary for biodiversity conservation and land-use planning decisions. Recent advances in remote sensing and machine learning have allowed for high-resolution species distribution modeling that can inform landscape-level decision-making. Here we compare the performance of three popular Sentinel-2 (10-m) land cover maps, Dynamic World (DW), European Land Cover (ELC10), and World Cover (WC), in predicting wild bee species richness over southern Norway. The proportion of grassland habitat within 250 m (derived from the land cover maps), along with temperature and distance to sandy soils, was used as a predictor in both Bayesian regularized neural network and random forest models. Models using grassland habitat from DW performed best (RMSE = 2.8 ± 0.03; average ± standard deviation across models), followed by ELC10 (RMSE = 2.85 ± 0.03) and WC (RMSE = 2.87 ± 0.02). All satellite-derived maps outperformed a manually mapped Norwegian land cover dataset called AR5 (RMSE = 3.02 ± 0.02). When validating the model predictions of bee species richness against citizen science data on solitary bee occurrences using generalized linear models, we found that ELC10 performed best (AIC = 2278 ± 4), followed by WC (AIC = 2367 ± 3) and DW (AIC = 2376 ± 3). While the differences in RMSE we observed between models were small, they may be significant when such models are used to prioritize grassland patches within a landscape for conservation subsidies or management policies. Partial dependencies in our models showed that increasing the proportion of grassland habitat is positively associated with wild bee species richness, thereby justifying bee conservation schemes that aim to enhance semi-natural grassland habitat.
Our results confirm the utility of satellite-derived land cover maps in supporting high-resolution species distribution modeling and suggest there is scope to monitor changes in species distributions over time given the dense time series provided by products such as DW.
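The comparison logic, fitting the same richness model with grassland predictors derived from different land cover maps and ranking the maps by out-of-sample error, can be sketched with synthetic data; the noise levels and simple linear model below are assumptions for illustration, not the study's Bayesian neural network or random forest.

```python
# Sketch of map comparison by RMSE: each "map" measures grassland
# proportion with a different error level, and we ask which one best
# predicts species richness. All data are synthetic stand-ins.
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

rng = np.random.default_rng(42)
n = 500
true_grass = rng.uniform(0, 1, n)                       # "true" grassland proportion
richness = 5 + 10 * true_grass + rng.normal(0, 1, n)    # bee species richness

maps = {"DW": 0.05, "ELC10": 0.10, "WC": 0.20}          # hypothetical map error (sd)
scores = {}
for name, noise in maps.items():
    grass = np.clip(true_grass + rng.normal(0, noise, n), 0, 1)
    b, a = np.polyfit(grass, richness, 1)               # richness ~ a + b * grass
    scores[name] = rmse(richness, a + b * grass)

best = min(scores, key=scores.get)
print(best)  # the least noisy map yields the lowest RMSE
```

As in the study, small RMSE differences between maps can still matter when the fitted models are used to rank individual grassland patches for conservation decisions.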
Ahmad Haji Mohammadkhani, Nitin Sai Bommi, Mariem Daboussi
et al.
Context: In recent years, leveraging machine learning (ML) techniques has become one of the main solutions for tackling many software engineering (SE) tasks in research studies (ML4SE). This has been achieved by utilizing state-of-the-art models that tend to be more complex and black-box, which has led to less explainable solutions that reduce trust in, and uptake of, ML4SE solutions by professionals in industry. Objective: One potential remedy is to offer explainable AI (XAI) methods to provide the missing explainability. In this paper, we aim to explore to what extent XAI has been studied in the SE community (XAI4SE) and provide a comprehensive view of the current state of the art, as well as challenges and a roadmap for future work. Method: We conducted a systematic literature review of the 24 most relevant published studies in XAI4SE (out of 869 primary studies selected by keyword search). We posed three research questions, answered by a meta-analysis of the data collected per paper. Results: Our study reveals that among the identified studies, software maintenance (68%), and particularly defect prediction, has the highest share of the SE stages and tasks being studied. Additionally, we found that XAI methods were mainly applied to classic ML models rather than more complex models. We also noticed a clear lack of standard evaluation metrics for XAI methods in the literature, which has caused confusion among researchers and a lack of benchmarks for comparisons. Conclusions: XAI has been identified as a helpful tool by most studies covered in the systematic review. However, XAI4SE is a relatively new domain with much untapped potential, including the SE tasks to help with, the ML4SE methods to explain, and the types of explanations to offer. This study encourages researchers to work on the identified challenges and the roadmap reported in the paper.
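As a concrete example of the kind of simple, model-agnostic XAI technique that reviews like this find applied to classic ML models, a permutation-importance sketch on synthetic data is shown below; the data and the "trained" classifier stub are hypothetical.

```python
# Minimal permutation-importance sketch: measure how much accuracy drops
# when each feature is shuffled. This is a standard model-agnostic XAI
# technique; the data and classifier here are synthetic stand-ins.
import numpy as np

def permutation_importance(model, X, y, rng):
    """Accuracy drop when each feature is shuffled (larger = more important)."""
    base = np.mean(model(X) == y)
    imps = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])                 # destroy feature j's information
        imps.append(base - np.mean(model(Xp) == y))
    return imps

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] > 0).astype(int)                 # only feature 0 matters

def model(X):
    """Stub for a 'trained' classifier that relies solely on feature 0."""
    return (X[:, 0] > 0).astype(int)

imps = permutation_importance(model, X, y, rng)
print([round(v, 2) for v in imps])  # feature 0 dominates; others near zero
```

Explanations like these attach a per-feature score to an otherwise opaque prediction, which is exactly the gap, trust and uptake by practitioners, that XAI4SE work aims to close.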
This article explores and summarizes the characteristics and findings of Norwegian research on mentoring for inclusion, using a scoping literature review. Mentoring matches younger or less experienced individuals with non-parental mentors to provide support and promote skills, personal development, and/or attainment of specific goals, such as employment. Searches were conducted in databases and in grey literature, with 19 publications included in our final analyses. The included publications encompass various approaches to organizing mentoring: by public sector organizations such as NAV and by non-public organizations (non-profit organizations, social entrepreneurships). Over half of the mentoring programs in the included publications had immigrants or individuals with minority backgrounds as target groups. Nearly all the included publications assessed program results, concluding that mentoring generally achieved its (often broadly defined) objectives and/or that participants were satisfied. Notably, a robust assessment of the effects of mentoring remains an area for future inquiry. The included studies provide valuable insights into mentoring for supporting welfare state institutions in the inclusion of vulnerable groups. Mentoring represents an individualized and flexible approach with the potential to supplement public services. Based on the findings, future directions for research on mentoring in the welfare state context are discussed.
The current paper aims to present the didactic use of Norwegian concrete poems. The concept of concrete poetry is approached through Jan Erik Vold’s literary perspective as the promoter of concretism in Norway. In order to prove the effectiveness of these poems in the teaching process, a survey was conducted using a questionnaire with closed-ended and open-ended questions. The respondents were 1st- and 2nd-year students of the BA in Norwegian language and literature and a group of 3rd-year students from The Centre for Language Industries, enrolled at the Faculty of Letters, Babeș-Bolyai University. This research assessed the effectiveness of using Vold’s concrete poems when teaching specific language structures in Norwegian. The survey was administered at the end of a one-semester experiment in which I used Jan Erik Vold’s concrete poems with my students during my Norwegian language courses and seminars. The results showed that, during these seminars in particular, students read, analysed, and designed concrete poems in Norwegian, including grammatical and typographical poems, ready-mades, tongue twisters, and nursery-rhyme-like poems, in order to better understand and revise specific grammatical, syntactic, or lexical structures in Norwegian.