Financial fraudsters exploit generative AI, digital channels, global networks, and synthetic identities, making fraudulent activity difficult to identify. Traditional rule-based systems fail to detect frauds that use multi-step transaction routing through multiple institutions and across borders. Graph databases built on labelled property graphs represent customers, accounts, and transactions as interconnected nodes and edges. By ingesting live transaction data, they apply pattern matching and community detection to expose suspicious subgraphs, such as money-laundering rings or collusive clusters, and let investigators trace multi-hop links to “hub” accounts with clear visual audit trails. Machine learning models are trained on vast historical datasets using supervised classifiers (e.g., gradient boosting) and unsupervised anomaly detectors. Features such as transaction amounts, geolocation consistency, device fingerprints, and temporal sequences feed these models, while recurrent architectures capture evolving fraud tactics. Yet these models often suffer from concept drift, require extensive labelled data, underperform on imbalanced cases, and behave as opaque black boxes, generating false positives and hampering trust. A hybrid framework that combines relational graph insights with statistical scoring boosts detection accuracy, reduces false alarms, and enhances investigators’ confidence in fraud detection and prevention.
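To make the hybrid idea concrete, below is a minimal, hedged sketch (not the authors' implementation) that pairs community detection on a transaction graph with an unsupervised anomaly score; the account names, features, and thresholds are invented for illustration.

```python
# Hedged sketch: hybrid graph + ML fraud triage (illustrative only; the node/edge
# schema, feature names, and thresholds are assumptions, not the paper's code).
import networkx as nx
import numpy as np
from sklearn.ensemble import IsolationForest

# Build a transaction graph: accounts as nodes, transfers as directed edges.
G = nx.DiGraph()
transactions = [
    ("acct_A", "acct_B", 9_500.0),
    ("acct_B", "acct_C", 9_400.0),
    ("acct_C", "acct_A", 9_300.0),   # a small routing loop
    ("acct_D", "acct_E", 120.0),
]
for src, dst, amount in transactions:
    G.add_edge(src, dst, amount=amount)

# 1) Graph side: community detection on the undirected projection surfaces
#    tightly connected clusters that may indicate collusive rings.
communities = nx.algorithms.community.greedy_modularity_communities(G.to_undirected())

# 2) ML side: unsupervised anomaly scores over simple per-account features
#    (total outflow, out-degree); real systems would add device, geo, timing features.
accounts = list(G.nodes)
features = np.array([
    [sum(d["amount"] for _, _, d in G.out_edges(a, data=True)), G.out_degree(a)]
    for a in accounts
])
scores = IsolationForest(random_state=0).fit(features).decision_function(features)

# 3) Hybrid triage: flag accounts that are both anomalous and inside a dense community.
for comm in communities:
    for a in comm:
        if len(comm) >= 3 and scores[accounts.index(a)] < 0:
            print(f"review {a}: community size {len(comm)}, anomaly score {scores[accounts.index(a)]:.3f}")
```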
José Peixoto, Alexis Gonzalez, Janki Bhimani
et al.
Programmable caching engines like CacheLib are widely used in production systems to support diverse workloads in multi-tenant environments. CacheLib's design focuses on performance, portability, and configurability, allowing applications to inherit caching improvements with minimal implementation effort. However, its behavior under dynamic and evolving workloads remains largely unexplored. This paper presents an empirical study of CacheLib in multi-tenant settings under dynamic and volatile environments. Our evaluation across multiple CacheLib configurations reveals several limitations that hinder its effectiveness in such environments, including rigid configuration, limited runtime adaptability, and a lack of quality-of-service support and coordination, which lead to suboptimal performance, inefficient memory usage, and tenant starvation. Based on these findings, we outline future research directions to improve the adaptability, fairness, and programmability of future caching engines.
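As a toy illustration of why static per-tenant configuration can break down when workloads shift, here is a hedged simulation sketch; it is not CacheLib code, and the tenant names, capacities, and access trace are assumptions.

```python
# Hedged sketch: a toy simulation of statically partitioned, per-tenant LRU caches,
# illustrating why rigid configurations can underperform when workloads shift.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity, self.store, self.hits, self.misses = capacity, OrderedDict(), 0, 0

    def access(self, key):
        if key in self.store:
            self.store.move_to_end(key)
            self.hits += 1
        else:
            self.misses += 1
            self.store[key] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # evict least recently used

# Static split: both tenants get the same capacity regardless of demand.
caches = {"tenant_a": LRUCache(50), "tenant_b": LRUCache(50)}

# Phase 1: balanced load; Phase 2: tenant_a's working set grows far beyond its share.
trace = [("tenant_a", k % 40) for k in range(2000)] + \
        [("tenant_b", k % 40) for k in range(2000)] + \
        [("tenant_a", k % 400) for k in range(4000)]
for tenant, key in trace:
    caches[tenant].access(key)

for name, c in caches.items():
    total = c.hits + c.misses
    print(f"{name}: hit ratio {c.hits / total:.2%}")
```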
Remanufacturing has become a mainstream sustainable manufacturing paradigm for energy conservation and environmental protection. Disassembly and reprocessing operations are the two main activities in remanufacturing. This work proposes multiobjective integrated scheduling of disassembly and reprocessing operations considering product structures and random processing times. First, a stochastic programming model is developed to minimize the maximum completion time and total tardiness. Second, a reinforcement learning-based multiobjective evolutionary algorithm is devised that incorporates problem-specific knowledge. Three search strategy combinations are formed: crossover and mutation, crossover and key product-based iterated local search, and mutation and key product-based iterated local search. At each iteration, a Q-learning method intelligently chooses the most promising combination of strategies. A stochastic simulation is incorporated to evaluate the objective values of the searched solutions. Finally, the formulated model and method are compared with an exact solver, CPLEX, and three well-known metaheuristics from the literature on a set of test instances. The results confirm the excellent competitiveness of the developed model and algorithm for solving the considered problem.
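A minimal sketch of the Q-learning-driven choice among the three strategy combinations is given below; the single-state Q-table, reward definition, and parameters are illustrative assumptions, not the authors' exact design.

```python
# Hedged sketch: Q-learning that picks one of three search-strategy combinations per
# iteration, in the spirit of the described algorithm. All parameters are assumptions.
import random

ACTIONS = ["crossover+mutation", "crossover+key_product_ILS", "mutation+key_product_ILS"]
q = {a: 0.0 for a in ACTIONS}          # single-state Q-table for simplicity
alpha, epsilon, gamma = 0.1, 0.2, 0.9

def choose_action():
    if random.random() < epsilon:                      # explore
        return random.choice(ACTIONS)
    return max(q, key=q.get)                           # exploit best-known strategy

def update(action, reward):
    # Standard Q-learning update collapsed to a single state.
    q[action] += alpha * (reward + gamma * max(q.values()) - q[action])

def apply_strategy(action, population):
    # Placeholder: would run the chosen operators and return the improvement in a
    # multiobjective indicator (e.g., hypervolume gain) estimated via stochastic simulation.
    return random.random()

population = None
for iteration in range(100):
    a = choose_action()
    improvement = apply_strategy(a, population)
    update(a, improvement)

print("learned preferences:", {k: round(v, 3) for k, v in q.items()})
```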
Electronic computers. Computer science, Information technology
Роман Штонда, Світлана Паламарчук, Олена Бокій
et al.
In today's environment of intensive development of information and communication technologies and the rapid growth in the number of cyber threats, protecting the endpoints and the information and communication systems of organizations is of critical importance. Antivirus software therefore remains a key tool for providing cyber defense against malware and targeted attack scenarios. However, selecting the optimal antivirus product requires an objective and comprehensive approach to assessing its functional capabilities. The goal of this article is to develop a comprehensive methodology for evaluating the functional capabilities of antivirus software. The proposed methodology covers a wide range of tests that model typical and atypical malware penetration vectors: from infected ZIP archives, phishing emails, and changes to system files (hosts, registry) to the detection of beacon activity, autorun scripts, obfuscated PowerShell commands, macros in Office documents, and more. The study evaluates four popular antivirus products: ESET Endpoint Security, Avast Business Antivirus, Zillya, and Windows Defender. Within the experiment, the research group assessed the functions of each antivirus product against 21 criteria. Scoring used a 0-2 point scale with a corresponding criticality weight (1 for critical, 0.8 for high, 0.5 for medium). The methodology makes it possible to determine the overall level of functionality and effectiveness as a percentage. This enables an objective approach to selecting antivirus software depending on the nature of the information infrastructure and the level of risk. The proposed approach is universal, can be adapted to other platforms and conditions, and can be extended to interact with Endpoint Detection and Response (Extended Detection and Response) class systems. The results of the study confirm the importance of a comprehensive approach to cyber defense that accounts for the characteristics of modern cyberattacks.
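A small sketch of the weighted scoring described above (0-2 points per criterion, weights 1, 0.8, and 0.5, result expressed as a percentage) is shown below; the criteria and scores are placeholders, not the study's measurements.

```python
# Hedged sketch: weighted functionality score as described (0-2 points per criterion,
# criticality weights 1 / 0.8 / 0.5, result as a percentage of the maximum).
WEIGHTS = {"critical": 1.0, "high": 0.8, "medium": 0.5}

# Each entry: (criterion, criticality level, score awarded on the 0-2 scale).
results = [
    ("infected ZIP archive detection", "critical", 2),
    ("phishing email detection", "high", 1),
    ("hosts/registry change detection", "critical", 2),
    ("obfuscated PowerShell detection", "high", 0),
    ("Office macro detection", "medium", 1),
]

earned = sum(WEIGHTS[level] * score for _, level, score in results)
maximum = sum(WEIGHTS[level] * 2 for _, level, _ in results)
print(f"functionality: {earned / maximum:.1%}")
```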
P. Vijaya Bharati, J. S. V. Siva Kumar, Sathish K Anumula
et al.
The Fourth Industrial Revolution has ushered in a new era of smart manufacturing in which the application of the Internet of Things (IoT) and data-driven methodologies is revolutionizing conventional maintenance. With the help of real-time IoT data and machine learning algorithms, predictive maintenance allows industrial systems to anticipate failures and optimize machine life. This paper presents the synergy between the IoT and predictive maintenance in industrial engineering, with an emphasis on the technologies, methodologies, and data analytics techniques that constitute the integration. The systematic collection, processing, and predictive modeling of data is discussed. The outcomes emphasize greater operational efficiency, decreased downtime, and cost savings, which makes a strong argument for implementing predictive maintenance in contemporary industries.
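As a hedged illustration of the predictive-maintenance pipeline described here, the sketch below trains a simple failure classifier on synthetic sensor features; the feature set and model choice are assumptions, not the paper's implementation.

```python
# Hedged sketch: training a simple failure-prediction model on IoT-style sensor features.
# Feature names, the synthetic data, and the model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Features: vibration RMS, bearing temperature (C), operating hours since last service.
X = np.column_stack([
    rng.normal(1.0, 0.3, n),
    rng.normal(60, 8, n),
    rng.uniform(0, 5000, n),
])
# Synthetic label: failures become more likely with high vibration and long service intervals.
y = ((X[:, 0] > 1.3) & (X[:, 2] > 3000)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")

# In production, the model would score streaming sensor readings and trigger a work
# order when the predicted failure probability crosses a maintenance threshold.
```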
Bianca Trinkenreich, Fabio Calefato, Geir Hanssen
et al.
The adoption of Large Language Models (LLMs) is not only transforming software engineering (SE) practice but is also poised to fundamentally disrupt how research is conducted in the field. While perspectives on this transformation range from viewing LLMs as mere productivity tools to considering them revolutionary forces, we argue that the SE research community must proactively engage with and shape the integration of LLMs into research practices, emphasizing human agency in this transformation. As LLMs rapidly become integral to SE research - both as tools that support investigations and as subjects of study - a human-centric perspective is essential. Ensuring human oversight and interpretability is necessary for upholding scientific rigor, fostering ethical responsibility, and driving advancements in the field. Drawing from discussions at the 2nd Copenhagen Symposium on Human-Centered AI in SE, this position paper employs McLuhan's Tetrad of Media Laws to analyze the impact of LLMs on SE research. Through this theoretical lens, we examine how LLMs enhance research capabilities through accelerated ideation and automated processes, make some traditional research practices obsolete, retrieve valuable aspects of historical research approaches, and risk reversal effects when taken to extremes. Our analysis reveals opportunities for innovation and potential pitfalls that require careful consideration. We conclude with a call to action for the SE research community to proactively harness the benefits of LLMs while developing frameworks and guidelines to mitigate their risks, to ensure continued rigor and impact of research in an AI-augmented future.
Context: Jupyter Notebook has emerged as a versatile tool that transforms how researchers, developers, and data scientists conduct and communicate their work. As the adoption of Jupyter notebooks continues to rise, so does the interest from the software engineering research community in improving the software engineering practices for Jupyter notebooks. Objective: The purpose of this study is to analyze trends, gaps, and methodologies used in software engineering research on Jupyter notebooks. Method: We selected 146 relevant publications from the DBLP Computer Science Bibliography up to the end of 2024, following established systematic literature review guidelines. We explored publication trends, categorized them based on software engineering topics, and reported findings based on those topics. Results: The most popular venues for publishing software engineering research on Jupyter notebooks are related to human-computer interaction instead of traditional software engineering venues. Researchers have addressed a wide range of software engineering topics on notebooks, such as code reuse, readability, and execution environment. Although reusability is one of the research topics for Jupyter notebooks, only 64 of the 146 studies can be reused based on their provided URLs. Additionally, most replication packages are not hosted on permanent repositories for long-term availability and adherence to open science principles. Conclusion: Solutions specific to notebooks for software engineering issues, including testing, refactoring, and documentation, are underexplored. Future research opportunities exist in automatic testing frameworks, refactoring clones between notebooks, and generating group documentation for coherent code cells.
Hydro-Science and Engineering (Hydro-SE) is a critical and irreplaceable domain that secures human water supply, generates clean hydropower energy, and mitigates flood and drought disasters. Featuring multiple engineering objectives, Hydro-SE is an inherently interdisciplinary domain that integrates scientific knowledge with engineering expertise. This integration necessitates extensive expert collaboration in decision-making, which poses challenges for intelligent automation. With the rapid advancement of large language models (LLMs), their potential application in the Hydro-SE domain is being increasingly explored. However, the knowledge and application abilities of LLMs in Hydro-SE have not been sufficiently evaluated. To address this issue, we propose the Hydro-SE LLM evaluation benchmark (Hydro-SE Bench), which contains 4,000 multiple-choice questions. Hydro-SE Bench covers nine subfields and enables evaluation of LLMs in terms of basic conceptual knowledge, engineering application ability, and reasoning and calculation ability. The evaluation results on Hydro-SE Bench show that accuracy ranges from 0.74 to 0.80 for commercial LLMs and from 0.41 to 0.68 for small-parameter LLMs. While LLMs perform well in subfields closely related to the natural and physical sciences, they struggle with domain-specific knowledge such as industry standards and hydraulic structures. Model scaling mainly improves reasoning and calculation abilities, but there is still great potential for LLMs to better handle problems in practical engineering applications. This study highlights the strengths and weaknesses of LLMs for Hydro-SE tasks, providing model developers with clear training targets and Hydro-SE researchers with practical guidance for applying LLMs.
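A hedged sketch of how such a multiple-choice benchmark can be scored per subfield is shown below; the item format and the ask_model stub are assumptions, not the Hydro-SE Bench API.

```python
# Hedged sketch: scoring a model on multiple-choice items and reporting accuracy per
# subfield, in the spirit of Hydro-SE Bench. The item format, the ask_model function,
# and the data are placeholders, not the benchmark's actual interface.
from collections import defaultdict

def ask_model(question: str, options: dict) -> str:
    # Placeholder for a call to the LLM under evaluation; returns an option letter.
    return "A"

items = [
    {"subfield": "hydrology", "question": "...", "options": {"A": "...", "B": "..."}, "answer": "A"},
    {"subfield": "hydraulic structures", "question": "...", "options": {"A": "...", "B": "..."}, "answer": "B"},
]

correct, total = defaultdict(int), defaultdict(int)
for item in items:
    prediction = ask_model(item["question"], item["options"])
    total[item["subfield"]] += 1
    correct[item["subfield"]] += int(prediction == item["answer"])

for subfield in total:
    print(f"{subfield}: accuracy {correct[subfield] / total[subfield]:.2f}")
```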
Covid has made online teaching and learning acceptable, and students, faculty, and industry professionals are all comfortable with this mode. This comfort can be leveraged to offer an online, multi-institutional, research-level course in an area where individual institutions may not have the requisite faculty to teach it and/or research students to enroll. If the subject is of interest to industry, an online offering also allows industry experts to contribute and participate with ease. Advanced topics in software engineering are ideally suited for experimenting with this approach, as industry, which is often looking to incorporate advances in software engineering into its practices, is likely to agree to contribute and participate. In this paper we describe an experiment in teaching a course titled "AI in Software Engineering" jointly between two institutions with active industry participation, and share our and our students' experiences. We believe this collaborative teaching approach can be used to offer research-level courses in any applied area of computer science by institutions that are small and find it difficult to offer such courses on their own.
Background: Harnessing advanced computing for scientific discovery and technological innovation demands scientists and engineers well-versed in both domain science and computational science and engineering (CSE). However, few universities provide access to both integrated domain science/CSE cross-training and Top-500 High-Performance Computing (HPC) facilities. National laboratories offer internship opportunities capable of developing these skills. Purpose: This study presents an evaluation of federally funded postgraduate internship outcomes at a national laboratory. It seeks to answer three questions: 1) What computational skills, research skills, and professional skills do students improve through internships at the selected national laboratory? 2) Do students gain knowledge in domain science topics through their internships? 3) Do students' career interests change after these internships? Design/Method: We developed a survey, collected responses from past participants of five federally funded internship programs, and compared participants' ratings of their prior experience to their internship experience. Findings: Our results indicate that participants improve CSE skills and domain science knowledge, and are more interested in working at national laboratories. Participants go on to degree programs and positions in relevant domain science topics after their internships. Conclusions: We show that national laboratory internships are an opportunity for students to build CSE skills that may not be available at all institutions. We also show a growth in domain science skills during internships through direct exposure to research topics. The survey instrument and approach may be adapted to other studies to measure the impact of postgraduate internships in multiple disciplines and internship settings.
A flaky test yields inconsistent results upon repetition, posing a significant challenge to software developers. An extensive study of their presence and characteristics has been done in classical computer software but not quantum computer software. In this paper, we outline challenges and potential solutions for the automated detection of flaky tests in bug reports of quantum software. We aim to raise awareness of flakiness in quantum software and encourage the software engineering community to work collaboratively to solve this emerging challenge.
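To make the detection task concrete, a naive, hedged heuristic for flagging possibly flaky-test bug reports is sketched below; the keyword list and sample reports are invented and far simpler than what robust detection would require.

```python
# Hedged sketch: a naive keyword heuristic for flagging possibly-flaky-test reports in
# an issue tracker. The keyword list and the sample reports are invented; real
# detection would need far more robust signals.
import re

FLAKY_HINTS = [
    r"\bflaky\b", r"\bintermittent(ly)?\b", r"passes? on re-?run", r"fails? sometimes",
    r"non-?deterministic", r"random(ly)? fail",
]

def looks_flaky(report_text: str) -> bool:
    text = report_text.lower()
    return any(re.search(pattern, text) for pattern in FLAKY_HINTS)

reports = [
    "Test test_measurement_counts fails sometimes on CI but passes on rerun.",
    "Compilation error when targeting the new backend.",
]
for r in reports:
    print(looks_flaky(r), "-", r)
```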
Mohammad Ehsanifar, Fatemeh Dekamini, Moein Khazaei
et al.
This research investigated the effect of cultural, economic, and technological factors on the risk management of construction companies in Iran. The research was applied in terms of purpose and descriptive-survey in terms of method. The statistical population consisted of senior managers and engineers of grade 1 construction companies in Iran, from which 120 people were selected through convenience sampling. A researcher-made questionnaire was distributed among them, and they were asked to rate each item according to its importance from one (lowest) to five (highest). The data were analyzed with the partial least squares technique using SmartPLS software. The results showed that cultural and economic factors do not affect the risk management of Iranian construction companies, whereas technological factors do have an effect.
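The study analyzed its survey data with PLS-SEM in SmartPLS; as a loose, simplified analogue only, the sketch below uses PLS regression on synthetic scores for the three factors, with the sample size of 120 taken from the study and everything else assumed.

```python
# Hedged sketch: the study used partial least squares structural equation modeling in
# SmartPLS; this simplified analogue relates survey scores for the three factors to a
# risk-management score with PLS regression. All data here are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n = 120  # sample size reported in the study
# Columns: mean item scores (1-5) for cultural, economic, and technological factors.
X = rng.uniform(1, 5, size=(n, 3))
# Synthetic outcome in which only the technological factor carries signal.
y = 0.6 * X[:, 2] + rng.normal(0, 0.5, n)

pls = PLSRegression(n_components=2).fit(X, y)
print("approximate effect weights (cultural, economic, technological):",
      np.round(pls.coef_.ravel(), 3))
```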
Commercial geography. Economic geography, Economics as a science
The passive mobilization of the hand joints by means of dedicated equipment accelerates patient recovery and significantly decreases the cost of therapy. For this reason, research and development of such equipment is essential. Important reductions in the development cycle of such equipment can be achieved by means of a specific technique known as Model-Based Design. Starting from these considerations, this paper puts forward a Model-Based Design approach to the study of a new concept of rehabilitation equipment for the hand joints actuated by a pneumatic muscle. The originality of the paper consists in the MATLAB-based rendering of the functional model of the rehabilitation equipment's actuation system and in the presented simulation results. The purpose of this research was to obtain information on the behavior of the proposed system and to predict its performance before it is built physically. After simulation, the results are compared to the operational performance of the experimental model. The conclusion shows that the proposed operational model accurately describes the actual behavior of the system and can be used for future optimization of the rehabilitation equipment.
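The paper's functional model is built in MATLAB; as a rough, hedged analogue of what such a simulation looks like, the sketch below integrates a first-order pneumatic-muscle contraction model, with the time constant and gain chosen arbitrarily.

```python
# Hedged sketch: a first-order pneumatic-muscle contraction response to a pressure step,
# as a rough analogue of a Model-Based Design functional model. The model structure,
# time constant, and gain are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

TAU = 0.3      # assumed actuator time constant [s]
GAIN = 0.25    # assumed steady-state contraction per unit pressure [-/bar]

def muscle(t, x, pressure_bar):
    # First-order lag: contraction x approaches GAIN * pressure with time constant TAU.
    return (GAIN * pressure_bar - x[0]) / TAU

sol = solve_ivp(muscle, t_span=(0, 2), y0=[0.0], args=(3.0,), max_step=0.01)
print(f"contraction after 2 s at 3 bar: {sol.y[0, -1]:.3f} (fraction of muscle length)")
```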
In this paper, we propose a method to estimate the mean square error (MSE) of the estimated channel for ATSC (Advanced Television Systems Committee) 3.0 systems. By combining the channel MSE with the noise variance, we can better estimate the a priori LLR (log-likelihood ratio) for the sum–product algorithm. The experimental results show that doing so yields better BER (bit error rate) performance in the 0 dB echo channel. The improvement in the 2-D channel estimation case is about 0.2 dB. In the 1-D estimation case, the proposed approach is essential for decoding codewords.
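A hedged sketch of the underlying idea, folding the channel-estimation MSE into the noise term of a BPSK LLR, is given below; the exact LLR formulation used for ATSC 3.0 in the paper may differ.

```python
# Hedged sketch: folding an estimated channel MSE into the noise term when computing
# BPSK LLRs, to illustrate combining channel MSE and noise variance. The LLR
# formulation actually used for ATSC 3.0 in the paper may differ.
import numpy as np

def bpsk_llr(y, h_est, noise_var, channel_mse, symbol_energy=1.0):
    # Treat channel-estimation error as extra Gaussian noise:
    # sigma_eff^2 = sigma_n^2 + Es * MSE.
    eff_var = noise_var + symbol_energy * channel_mse
    return 4.0 * np.real(np.conj(h_est) * y) / eff_var   # LLR of the transmitted BPSK bit

y = np.array([0.9 + 0.1j, -1.1 - 0.05j])      # received symbols
h_est = np.array([1.0 + 0.0j, 1.0 + 0.0j])    # estimated channel gains
print(bpsk_llr(y, h_est, noise_var=0.5, channel_mse=0.1))
```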
Matteo Gavazzoni, Nicola Ferro, Simona Perotto
et al.
We present a new algorithm to design lightweight cellular materials with required properties in a multi-physics context. In particular, we focus on a thermo-elastic setting, promoting the design of unit cells characterized both by isotropic and by anisotropic behavior with respect to mechanical and thermal requirements. The proposed procedure generalizes the microSIMPATY algorithm to a thermo-elastic framework while preserving all the good properties of the reference design methodology. The resulting layouts exhibit non-standard topologies and are characterized by very sharp contours, thus limiting the post-processing required before manufacturing. The new cellular materials are compared with the state of the art in engineering practice in terms of thermo-elastic properties, highlighting the good performance of the new layouts, which in some cases outperform the consolidated choices.
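As background for readers unfamiliar with SIMP-based design, the sketch below shows a generic density-based interpolation of stiffness and thermal conductivity, the kind of ingredient a thermo-elastic extension relies on; the penalization exponents and base values are assumptions, not the microSIMPATY settings.

```python
# Hedged sketch: generic SIMP-style interpolation of mechanical and thermal properties
# as a function of the design density. Penalization exponents and base values are
# illustrative assumptions, not the paper's settings.
def interpolate_properties(rho, E0=1.0, k0=1.0, p=3, q=3, eps=1e-6):
    """Return (Young's modulus, thermal conductivity) for a density rho in [0, 1]."""
    E = eps + (1.0 - eps) * rho**p * E0   # stiffness penalization
    k = eps + (1.0 - eps) * rho**q * k0   # conductivity penalization
    return E, k

for rho in (0.2, 0.5, 1.0):
    E, k = interpolate_properties(rho)
    print(f"rho={rho:.1f}: E={E:.3f}, k={k:.3f}")
```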
The supply chain is a dynamic and uncertain system consisting of material, information, and fund flows between different organizations, from the acquisition of raw materials to the delivery of finished products to end customers. Closed-loop supply chains (CLSCs) do not end with the delivery of finished products to end customers; the process continues until economic value is recovered from the returned products or they are disposed of properly in landfills. Incorporating reverse flows into supply chains increases uncertainty and complexity and further complicates the management of supply chains that are already composed of different actors and have a dynamic structure. Since agent-based modeling and simulation handles the dynamic and complex nature of supply chains more efficiently than traditional analytical methods, this study uses an agent-based modeling methodology to model a generic closed-loop supply chain network design problem, with the aims of integrating customer behavior into the network, coping with the dynamism, and obtaining a more realistic structure by eliminating the assumptions required to solve the model with analytical methods. The actors in the CLSC network are defined as agents with goals, properties, and behaviors. In the proposed model, dynamic customer arrivals, the changing aspects of customers' purchasing preferences for new and refurbished products, and the time, quantity, and quality uncertainties of returns are handled via the proposed agent-based architecture. To observe the behavior of the supply chain under several conditions, various scenarios were developed according to different parameter settings for the supplier capacities, the rate of customers affected by advertising, the market incentive threshold values, and the environmental awareness of customers. From the scenarios, it is concluded that the system should be fed in the right amounts of new and refurbished products to increase the effectiveness of factors such as advertising, incentives, and environmental awareness in achieving the desired sales amounts and cost targets.
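A minimal, hedged sketch of the agent-based idea, customer agents choosing between new and refurbished products and occasionally returning them, is shown below; agent attributes, probabilities, and the horizon are invented for illustration.

```python
# Hedged sketch: a minimal agent-based loop with customer agents choosing between new
# and refurbished products and sometimes returning them. Agent attributes,
# probabilities, and the time horizon are invented for illustration.
import random

class Customer:
    def __init__(self, eco_awareness: float):
        self.eco_awareness = eco_awareness      # higher -> more likely to buy refurbished

    def purchase(self, incentive: float) -> str:
        p_refurbished = min(1.0, self.eco_awareness + incentive)
        return "refurbished" if random.random() < p_refurbished else "new"

    def maybe_return(self) -> bool:
        return random.random() < 0.15           # assumed return probability

random.seed(0)
customers = [Customer(random.uniform(0.1, 0.6)) for _ in range(500)]
sales = {"new": 0, "refurbished": 0}
returns = 0

for period in range(12):                        # e.g., 12 simulated periods
    incentive = 0.1 if period >= 6 else 0.0     # market incentive introduced mid-run
    for c in customers:
        sales[c.purchase(incentive)] += 1
        returns += c.maybe_return()

print(sales, "returns:", returns)
```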
Electronic computers. Computer science, Information technology