Diana Hawashin, Khaled Salah, Raja Jayaraman et al.
Efficiently matching patients to clinical trials is essential for advancing medical research and ensuring reliable outcomes. However, current matching methods face several challenges. These include data integrity issues from tampered records, privacy risks caused by weak anonymization, and manual processes that delay recruitment. In addition, centralized systems lack transparency, expose sensitive patient data to security vulnerabilities, and suffer from single points of failure that reduce resilience and trust. In this paper, we propose a solution driven by blockchain and Large Language Models (LLMs) for secure, trustworthy, traceable, decentralized, and transparent patient–clinical trial matching. Blockchain ensures data integrity, security, and transparency by eliminating single points of failure and enabling tamper-proof records. LLMs enhance patient–trial matching by automating the interpretation of complex eligibility criteria, improving accuracy, and significantly reducing the time required for manual review. Our approach uses Ethereum-based smart contracts to automate workflows such as trial registration, eligibility assessment, and consent tracking. We fine-tune GPT-4, T5, and Gemini on synthetic data derived from real clinical trial records and employ majority voting to ensure consistent and unbiased eligibility decisions. A prototype Gradio interface was developed as a minimum viable product (MVP) to demonstrate seamless interaction between LLMs and smart contracts. Performance evaluation based on accuracy (0.800), precision (0.733), recall (1.000), and F1-score (0.846) demonstrates reliable eligibility prediction. Cost analysis confirms affordability, and security evaluation verifies resilience against known threats. Comparison with existing solutions highlights the framework’s advantages in transparency, trust, and automation. The smart contract code is publicly available on GitHub.
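The majority-voting step described above can be sketched as follows; the model names and the binary verdict labels are illustrative assumptions, not the paper's actual API:

```python
from collections import Counter

def majority_vote(decisions):
    """Return the eligibility label agreed on by most models.

    `decisions` maps a model name to its binary verdict
    ("eligible" / "ineligible"); with three models a binary
    vote cannot tie.
    """
    counts = Counter(decisions.values())
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical verdicts from the three fine-tuned models.
verdicts = {"gpt4": "eligible", "t5": "ineligible", "gemini": "eligible"}
print(majority_vote(verdicts))  # eligible
```

Using an odd number of models, as the paper does, is what guarantees a decisive outcome for a binary eligibility decision.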
Diffusion models were initially developed for text-to-image generation and are now being utilized to generate high-quality synthetic images. Preceded by generative adversarial networks (GANs), diffusion models have shown impressive results using various evaluation metrics. However, commonly used metrics such as Fréchet inception distance and inception score are not suitable for determining whether diffusion models are simply reproducing the training images. Here we train StyleGAN and a diffusion model, using BRATS20, BRATS21 and a chest x-ray (CXR) pneumonia dataset, to synthesize brain MRI and CXR images, and measure the correlation between the synthetic images and all training images. Our results show that diffusion models are more likely to memorize the training images, compared to StyleGAN, especially for small datasets and when using 2D slices from 3D volumes. Researchers should be careful when using diffusion models (and to some extent GANs) for medical imaging, if the final goal is to share the synthetic images.
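A simple way to screen for the memorization this abstract warns about is to compute the maximum correlation between a synthetic image (flattened to a vector) and every training image; a minimal pure-Python sketch, not the authors' exact procedure:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length pixel vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def max_training_correlation(synthetic, training_set):
    """Highest correlation between one synthetic image and any training image.

    A value near 1.0 suggests the generator may have copied a
    training sample rather than synthesized a novel image.
    """
    return max(pearson(synthetic, t) for t in training_set)

# Toy 4-pixel "images": the synthetic sample exactly matches one training image.
training = [[0, 1, 2, 3], [3, 2, 1, 0]]
print(max_training_correlation([0, 1, 2, 3], training))  # 1.0
```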
Ummay Maria Muna, Shanta Biswas, Syed Abu Ammar Muhammad Zarif et al.
Automated video anomaly detection (VAD) is a challenging task due to its context-dependent and sporadic nature. However, recent deep learning advancements offer promising solutions. In this paper, we propose a novel framework for detecting anomalies in videos by uniquely analyzing spatial and temporal (spatio-temporal) features. We address challenges such as the processing of lengthy videos and the sparse occurrence of anomalies by segmenting and labeling anomalous parts within videos. We employ a modified pre-trained vision transformer for video feature extraction, leveraging its ability to capture complex spatio-temporal patterns and the global context. Additionally, we incorporate a parameter-efficient recurrent model, the Simple Recurrent Unit Plus Plus (SRU++), which processes long sequential video embeddings efficiently by reducing computational costs by ten times compared to traditional methods. To further enhance the multiclass prediction performance, we develop a cluster-based weighting mechanism that assigns weights to classification scores based on feature similarity. We extensively evaluated our approach on three popular datasets — UCF-Crime, RWF-2000, and Smart City CCTV Violence Detection (SCVD) — achieving superior performance compared to state-of-the-art methods, making it well-suited for real-world surveillance applications.
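The cluster-based weighting mechanism is not specified in detail here; one plausible reading, scaling each class score by the cosine similarity between a clip's feature vector and a per-class cluster centroid, can be sketched as follows (the centroid representation is an assumption):

```python
def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def weighted_scores(scores, feature, centroids):
    """Scale each class score by the clip's similarity to that
    class's cluster centroid, then renormalize to sum to 1."""
    sims = [max(cosine(feature, c), 0.0) for c in centroids]
    raw = [s * w for s, w in zip(scores, sims)]
    total = sum(raw) or 1.0
    return [r / total for r in raw]

# Two classes; the clip's feature aligns with the centroid of class 0.
print(weighted_scores([0.5, 0.5], [1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))
# [1.0, 0.0]
```

The effect is that an ambiguous classifier output is pushed toward the class whose feature cluster the clip actually resembles.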
As one of the major threats in cybersecurity, malware has been growing continuously and steadily. In recent years, researchers have proposed a number of graph representation learning based malware detection methods by leveraging the intrinsic topological features of malware, which has led to considerable development in this area. However, these existing malware studies still have two major limitations. (1) The complex topological structures of malware graphs often result in high computational overhead during feature extraction and processing. (2) Most existing approaches rely on conventional graph neural networks that are not specifically designed for malware classification tasks, leading to suboptimal performance, especially when dealing with minority class samples. To address these problems, we propose MalGEA, a novel malware detection and classification framework based on matrix factorization and graph external attention mechanisms. First, MalGEA extracts function call information from malware and constructs corresponding function call graphs. These graphs are then processed using sparse matrix factorization and spectral propagation to efficiently generate node embeddings. Finally, we employ a graph external attention network to model inter-graph relationships and perform malware detection and classification. To evaluate our approach, we utilized a benchmark malware dataset that contains 6 categories and 35 families, including 50k benign and 50k malicious samples. Experimental results demonstrate that our method significantly outperforms existing node embedding approaches in terms of computational efficiency, while also achieving high accuracy in malware detection and family classification tasks.
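The embed-then-propagate pipeline can be illustrated on a toy function call graph; this sketch substitutes simple power iteration (the leading eigenvector as a 1-D spectral embedding) for the paper's sparse matrix factorization, so it shows the idea, not the authors' implementation:

```python
def power_iteration(adj, steps=50):
    """Leading eigenvector of an adjacency matrix (list of lists),
    used here as a 1-D spectral node embedding."""
    n = len(adj)
    v = [1.0 / n] * n
    for _ in range(steps):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return v

def propagate(adj, emb):
    """One step of spectral propagation: average each node's
    embedding with those of its neighbours."""
    n = len(adj)
    out = []
    for i in range(n):
        neigh = [emb[j] for j in range(n) if adj[i][j]]
        out.append((emb[i] + sum(neigh)) / (1 + len(neigh)))
    return out

# Toy call graph: three functions that all call each other (a triangle).
adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
emb = power_iteration(adj)  # symmetric graph -> equal embedding entries
```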
<p>In modern educational settings, overcrowded classrooms challenge student engagement and learning efficiency. To address these issues, we propose a novel smart seating system powered by Fog Computing that leverages Wireless Sensor Networks (WSN), Internet of Things (IoT), Fog Computing (FC) and Cloud Computing (CC) technologies. Our work introduces the first fog computing-driven smart seating system for classroom settings. Using iFogSim, we conducted a comparative study between traditional cloud-centric architectures and our fog-based system across various classroom scenarios. The results demonstrate significant improvements in latency (3.29 ms in the fog-based vs. 108.69 ms in the cloud-based system) while maintaining comparable network efficiency, and show that the fog-based architecture delivers superior real-time responsiveness, making it particularly suitable for dynamic educational environments. Our findings highlight fog computing’s potential to transform real-time classroom management. This research provides both technical insights into performance improvements and practical implementation guidelines for educational institutions seeking to optimize classroom management systems.</p><p><strong>Received on, 06 May 2025</strong></p><p><strong>Accepted on, 03 June 2025</strong></p><p><strong>Published on, 19 June 2025</strong></p>
Mathematical modelling of physiological processes is a key component of intelligent medical systems, as it describes disease mechanisms in greater detail and contributes to early diagnosis. This study presents an analytical model for assessing eye health, incorporating key ophthalmological parameters: intraocular pressure (IOP), perfusion coefficient (Pperf), best-corrected visual acuity (BCVA), visual field index (VFI), retinal nerve fibre layer thickness (RNFL), and neuroretinal rim area (Rim_area). The study aimed to develop a model that can accurately evaluate the nonlinear interactions between these parameters, improving diagnostic accuracy and predicting glaucoma progression. The study also aimed to determine critical threshold values of these ophthalmological indicators to improve clinical decision-making. The results demonstrated that application of numerical optimisation techniques such as L-BFGS-B and logarithmic-exponential transformations significantly improves the accuracy of glaucoma risk prediction; critical threshold values of ophthalmological parameters have been identified, improving the precision of detection of glaucoma stages. Additionally, the study facilitates a systematic evaluation of the association between intraocular pressure and optic nerve condition, a factor deemed critical for accurate prediction of disease progression. The practical significance of this research is determined by the potential for integration into medical IT systems for automated glaucoma screening and patient monitoring. The proposed approach can assist ophthalmologists in clinical decision-making by optimising treatment strategies and preventing irreversible vision loss. The model’s adaptability also enables its use in telemedicine applications, facilitating remote diagnostics and continuous patient assessment.
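The abstract does not give the fitted model, so the following logistic score with placeholder weights only illustrates how a few of the named parameters could combine nonlinearly into a risk estimate; the weights, bias, and choice of inputs are assumptions, not the study's fitted values:

```python
import math

def glaucoma_risk(iop, rnfl, vfi, w=(0.25, -0.08, -0.05), bias=-1.0):
    """Illustrative logistic risk score.

    iop  : intraocular pressure (mmHg), higher raises risk
    rnfl : retinal nerve fibre layer thickness (um), higher lowers risk
    vfi  : visual field index (%), higher lowers risk
    The weights/bias are placeholders for demonstration only.
    """
    z = bias + w[0] * iop + w[1] * rnfl + w[2] * vfi
    return 1.0 / (1.0 + math.exp(-z))

# Elevated IOP should yield a higher score than normal IOP,
# all else being equal.
r_high = glaucoma_risk(30, 90, 95)
r_low = glaucoma_risk(15, 90, 95)
```

A sigmoid of a weighted sum is the simplest nonlinear combination consistent with the abstract's mention of logarithmic-exponential transformations; the real model would be calibrated with L-BFGS-B on clinical data.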
SHA Yuyang, LU Jingtao, DU Haofan, ZHAI Xiaobing, MENG Weiyu, LIAN Xu, LUO Gang, LI Kefeng
Image segmentation is a crucial technology for environmental perception, and it is widely used in various scenarios such as autonomous driving and virtual reality. With the rapid development of technology, computer vision-based blind guiding systems are attracting increasing attention as they outperform traditional solutions in terms of accuracy and stability. The semantic segmentation of road images is an essential feature of a visual guiding system. By analyzing the output of algorithms, the guiding system can understand the current environment and aid blind people in safe navigation, which helps them avoid obstacles, move efficiently, and follow an optimal path. Visual blind guiding systems are often used in complex environments, which require high running efficiency and segmentation accuracy. However, commonly used high-precision semantic segmentation algorithms are unsuitable for use in blind guiding systems owing to their low running speed and large number of model parameters. To solve this problem, this paper proposes a lightweight road image segmentation algorithm based on multiscale features. Unlike existing methods, the proposed model contains two feature extraction branches, namely, the Detail Branch and Semantic Branch. The Detail Branch extracts low-level detail information from the image, while the Semantic Branch extracts high-level semantic information. Multiscale features from the two branches are processed and used by the designed feature mapping module, which can further improve the feature modeling performance. Subsequently, a simple and efficient feature fusion module is designed for the fusion of features with different scales to enhance the ability of the model in terms of encoding contextual information by fusing multiscale features. A large amount of road segmentation data suitable for blind guiding scenarios is collected and labeled, and a corresponding dataset is generated. The model is trained and tested on the dataset.
The experimental results show that the mean Intersection over Union (mIoU) of the proposed method is 96.5%, which is better than that of existing image segmentation models. The proposed model can achieve a running speed of 201 frames per second on an NVIDIA RTX 3090 Ti, which is higher than that of existing lightweight image segmentation models. The model can be deployed on NVIDIA AGX Xavier to obtain a running speed of 53 frames per second, which can meet the requirements for practical applications.
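The mIoU metric reported above is computed per class and then averaged; a minimal sketch over flattened label masks (classes absent from both prediction and ground truth are skipped, one common convention):

```python
def miou(pred, target, num_classes):
    """Mean Intersection over Union over classes present in either mask.

    pred, target: flat lists of per-pixel class labels.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Tiny 4-pixel example with 2 classes:
# class 0: intersection 1, union 2 -> IoU 0.5
# class 1: intersection 2, union 3 -> IoU 2/3
print(miou([0, 0, 1, 1], [0, 1, 1, 1], 2))  # 0.5833...
```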
Session-based recommendation systems serve as essential tools for assisting users in identifying matching interests and requirements from large volumes of data. These systems aim to predict the next user actions based on anonymous sessions. However, existing methods inadequately represent the overall interests of a user and frequently neglect the positional relationships among items. To address these limitations, an enhanced memory network-based session recommendation model, SR-MAN, is proposed to analyze global user interest representations and item sequence problems. Initially, the method introduces position encoding during the generation of item embedding vectors to emphasize the impact of different positions on the sequence. Subsequently, a neural Turing machine is employed to store recent session information, and an attention network is designed to learn long-term user preferences by integrating the most recent user interaction as the current interest indicator. Finally, the method integrates long-term and current preferences to predict and recommend items of interest. Bayesian Personalized Ranking (BPR) is employed to estimate the model parameters. Experiments on three datasets demonstrate the effectiveness of the proposed method.
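The form of position encoding used by SR-MAN is not specified here; a standard Transformer-style sinusoidal encoding, shown as an assumption, illustrates how positional information can be added to item embeddings:

```python
import math

def positional_encoding(seq_len, dim):
    """Sinusoidal position encodings: one vector per sequence position.

    Even embedding indices use sine, odd indices cosine, with
    geometrically increasing wavelengths (as in the Transformer).
    """
    pe = []
    for pos in range(seq_len):
        row = []
        for i in range(dim):
            angle = pos / (10000 ** (2 * (i // 2) / dim))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe

def add_position(embeddings, pe):
    """Element-wise sum of item embeddings and position encodings."""
    return [[e + p for e, p in zip(erow, prow)]
            for erow, prow in zip(embeddings, pe)]

pe = positional_encoding(seq_len=2, dim=4)
print(pe[0])  # [0.0, 1.0, 0.0, 1.0]  (sin 0 = 0, cos 0 = 1)
```

Because the encoding is deterministic and position-dependent, two occurrences of the same item at different positions in a session receive distinct input vectors.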
On 30 December 2006, the REACH regulation was published in the Official Journal and came immediately into force in the EU member states plus Norway, Iceland and Liechtenstein, which adopted it. Compared with past legislation on the subject, the novelty introduced is the reversal of the burden of proof: industry must guarantee that the goods it produces and markets are not harmful to the environment or human health. Manufacturers, importers and users are obliged to manufacture, use or place on the market only substances that do not cause harm to health or the environment, regulating each substance throughout its life cycle, even in the products in which it is present. The regulation also establishes the European Chemicals Agency, whose mission is to secure the use and circulation of chemicals in Europe. This change in scenario has created serious difficulties in assessing the conformity of products. In order to deal with this situation and to support members in the chemical safety assessment process, the Associazione Nazionale Fabbricanti Articoli Ottici (Italian National Association of Optical Goods Manufacturers), through its Technical Committee, in 2010 prepared a list of materials used by manufacturers of glasses, lenses and cases, matching substances with their risk level. An ad hoc Product Restricted Substance List has therefore been developed to define specific market limits and the related test methods, taking into account the main international regulations. The document has been divided into two parts, one for spectacles and lenses and one for cases.
Chemical engineering, Computer engineering. Computer hardware
Nowadays, crude oil demand is increasing with growing energy and plastic product demand; nevertheless, crude oil stocks are finite. The main part of the crude oil contained in a reservoir is not recoverable with conventional (primary and secondary) recovery methods. A widespread way of enhancing crude oil recovery is tertiary recovery, which often uses chemical agents. One such method is polymer-surfactant flooding, in which surfactants and surfactant mixtures are used. In this case, effective surfactant selection is required, and an important step in the selection is the investigation of the emulsification effect.
Classic manual measurements of emulsifying effect involve many variables and human factors that can cause errors. Our aim was to reduce this possible inaccuracy. To achieve this goal, we investigated the application of the standardised methods (ISO 6611 and ASTM D1401), normally used for crude oil derivatives, to a crude oil-surfactant-brine system. Surfactants and surfactant mixtures based on vegetable oil were used to investigate the emulsifying effect with both the classic manual method and a new automated method. The results were compared, and conclusions were drawn on the applicability of the automatic method to surfactant selection.
Chemical engineering, Computer engineering. Computer hardware
Information Technology (IT) services are services developed by service provider organizations for their customers. In implementing IT services, provider organizations aim to keep IT systems in good condition and avoid incidents, that is, unexpected interruptions or reductions in the quality of IT services. When an incident occurs, the service must be recovered as quickly as possible; this process is called incident management. Incident management is very important for service provider organizations, because incidents that are not managed and analyzed carefully will recur. On this basis, a study was conducted to develop an IT service incident management analysis model based on the Information Technology Infrastructure Library (ITIL) V3, intended to serve as a guide for analyzing incidents and making recommendations for incident management. The model is applied to a case study in the PPTI section at Dinamika University and comprises three stages: an incident data collection stage, which produces IT service incident data; an incident management analysis stage, which produces a gap analysis; and a recommendation stage, which produces recommendations for improvement.
Taniza Marium, S.M. Ishraqul Huq, Oli Lowna Baroi et al.
In terms of mechanical flexibility, organic SRAM offers better designs and a commercially feasible option with the ability to deliver acceptable performance. This paper investigates the implementation of different SRAM topologies based on organic thin film transistors (OTFTs). In this work, a compact spice model is used to simulate pOTFT and nOTFT in LTSpice software. Time delays, power consumption, the power delay product (PDP), and static noise margin (SNM) for read and write operations are calculated, and a comparative analysis of OTFT based 6T, 7T, 8T, and 9T SRAM topologies is performed. Among the different topologies, the 9T OTFT SRAM cell achieves a 1.67× increase in SNM compared to the conventional 6T OTFT-based SRAM cell. The highest figure of merit value of the 9T SRAM cell indicates its suitability for various applications.
Electric apparatus and materials. Electric circuits. Electric networks, Computer engineering. Computer hardware
Federico Canfarini, Andrea Reverberi, Marco Vocciante et al.
The continuous changes in socio-political scenarios of the last decades have led to an impressive increase in terrorist events related to the use of improvised explosive devices (IEDs). The energetic material contained in them, representing the essential part of the apparatus, is the object of intense investigation owing to the need to optimize many variables, namely the chemical energy storage of the detonating compound, the availability of raw materials required for its synthesis, the ease of synthesis by commonly used tools, and the stability of the chemical energy carrier during transport and handling. This critical analysis proposes a classification of the detonating compounds or mixtures according to the chemical, thermodynamic and ballistic properties that make them basic ingredients in IEDs and homemade explosives. The wide and ever-growing variety of ingredient combinations poses a challenging problem of chemical identification, owing to an interference of signals in analytical data regression. Finally, a discussion on technical realizations of such improvised weapons is outlined in light of the recent protocols of process safety and disaster control.
Chemical engineering, Computer engineering. Computer hardware
Carlos Enrique Gomez Camacho, Loris Giansante, Bernardo Ruggeri
The management of Municipal Solid Waste (MSW) has always represented an important challenge for our society, and it is a constantly evolving issue characterized by variable temporal and spatial specificities. Despite recent technological developments, Waste-to-Energy (WtE) approaches considering environmental and energy sustainability factors and their implementation in real contexts are rather scarce. Unsorted MSW (the residue remaining after the separate collection of recyclables) can be divided into key fractions for WtE applications, such as the gasification of the Refuse Derived Fuel (RDF) cut or the anaerobic digestion (AD) of the Organic Fraction of Municipal Solid Waste (OFMSW); however, certain residues of these processes, as well as other fractions of MSW, typically end up in landfills. The present ex-ante study evaluates the convenience of introducing a Plasma Torch (PT) into the management strategy for unsorted MSW, replacing the landfill option. The research is based on the modeling of the PT, considering the possible feed streams to this unit. In addition, sensitivity studies are carried out to shed light on suitable operating conditions (temperature, equivalence ratio, gasifying agent), and the convenience of the process is assessed from an environmental and energy point of view by comparing the PT scenario with the baseline case of landfilling. The Life Cycle Assessment (LCA) suggests that the environmental loads can be significantly reduced (>80%) by introducing the PT unit, while the system shows an increase of more than 50% in the Energy Sustainability Index (ESI).
Chemical engineering, Computer engineering. Computer hardware
Studies show that counterfeit semiconductors, or Integrated Circuits (ICs), are increasingly penetrating advanced electronic defense systems. Traditional supply chain management policies have proven unsuccessful in protecting the IC supply chain. Our study demonstrates that the recently launched threat-mitigation initiative of the Defense Advanced Research Projects Agency (DARPA), the Supply Chain Hardware Integrity for Electronics Defense (SHIELD) scheme, has not yet matured, and that the proposed authentication protocol improvements remain vulnerable to known non-invasive side-channel attacks. In the present work, a novel authentication protocol based on strong mutual authentication is proposed that resists the demonstrated attacks on previous schemes. A security and performance comparison with previous work is provided to inform the IC community of the seriousness of the weaknesses in earlier schemes. The comparison results show that our proposed protocol exchanges more information, uses more memory, and performs more encryption computations. Thus, although the proposed scheme consumes more energy, it provides the security required by SHIELD. This outcome compels IC producers to provide sufficient memory and processing power in a small die area if the electronic defense IC supply chain is to achieve the expected security.
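The strong mutual authentication the abstract proposes is, generically, a two-way challenge-response; this sketch uses HMAC-SHA256 over a pre-shared symmetric key and is purely illustrative, not DARPA's or the authors' actual protocol:

```python
import hmac, hashlib, os

def hmac_tag(key, *parts):
    """Authentication tag over the concatenated message parts."""
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# Hypothetical pre-shared key between a reader and a SHIELD-style dielet.
K = os.urandom(32)

# Reader -> dielet: fresh challenge (nonce prevents replay).
c_reader = os.urandom(16)
# Dielet answers the reader's challenge and issues its own.
c_dielet = os.urandom(16)
tag_dielet = hmac_tag(K, b"dielet", c_reader, c_dielet)

# Reader verifies the dielet knows K, then proves its own knowledge of K.
assert hmac.compare_digest(tag_dielet, hmac_tag(K, b"dielet", c_reader, c_dielet))
tag_reader = hmac_tag(K, b"reader", c_dielet)
# Dielet verifies the reader in turn.
assert hmac.compare_digest(tag_reader, hmac_tag(K, b"reader", c_dielet))
print("mutual authentication succeeded")
```

Binding each tag to a fresh nonce from the other party is what distinguishes mutual authentication from the one-way schemes the abstract criticizes: a passive eavesdropper cannot replay either tag in a later session.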