Cristian Cosentino, Simone Gatto, Pietro Liò
et al.
Machine Learning (ML) models trained on large-scale datasets learn useful predictive patterns, but they may also memorize undesired information, leading to risks such as information leakage, bias, copyright violations, and privacy attacks. As these models are increasingly deployed in real-world and regulated settings, the consequences of such memorization become practical and high-stakes, reinforced by data-protection frameworks that grant individuals a Right to be Forgotten (e.g., the GDPR). Simply removing a record from the training dataset does not guarantee the elimination of its influence from the model, while retrain-from-scratch procedures are often prohibitive for modern architectures, including Transformers and Large Language Models (LLMs). In this work, we provide a perspective on Machine Unlearning (MU) in supervised learning settings, with a particular focus on Natural Language Processing (NLP) scenarios, grounded in a PRISMA-driven systematic review. We propose a multi-level taxonomy that organizes MU techniques along practical and conceptual dimensions, including exactness (exact versus approximate), unlearning granularity, guarantees, and application constraints. To complement this perspective, we run an illustrative benchmark evaluation using a standardized unlearning protocol on DistilBERT trained on a public corpus of news headlines for topic classification, contrasting the retraining gold standard with representative design-for-unlearning and approximate post hoc techniques. For completeness, we also report two oracle-assisted upper-bound baselines (distillation and scrubbing) that rely on a clean retrained reference model, and we account for their incremental cost separately. Our analysis jointly considers model utility, probabilistic quality, forgetting and privacy indicators, as well as computational efficiency. 
The results highlight systematic trade-offs between accuracy, computational cost, and removal effectiveness, providing practical guidance for selecting machine unlearning techniques in realistic deployment scenarios.
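The "retraining gold standard" contrasted above can be sketched in a few lines: drop the forget set and retrain from scratch on the retain set only. This is a toy illustration with synthetic data and scikit-learn logistic regression (assumed available), not the paper's DistilBERT protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Exact unlearning by retraining: remove the forget set from the training
# data and fit a fresh model on what remains. All names and data here are
# illustrative placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

forget_idx = np.arange(20)                       # records to be "forgotten"
retain = np.setdiff1d(np.arange(len(X)), forget_idx)

full_model = LogisticRegression().fit(X, y)      # trained on everything
retrained = LogisticRegression().fit(X[retain], y[retain])  # exact unlearning

acc_full = full_model.score(X[retain], y[retain])
acc_retrained = retrained.score(X[retain], y[retain])
print(acc_full, acc_retrained)
```

The retrained model has provably zero influence from the forget set, which is why it serves as the reference point; the cost is a full training run per deletion request.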
Efficient thermal management is vital in modern mechanical and energy systems, where conventional engine oils often exhibit limited heat transfer capabilities. This study investigates the enhancement of thermal convection in engine oil by dispersing molybdenum tetrasulfide nanoparticles (MoS₄) to form a high-performance nanofluid. The natural convection behavior of this nanofluid is analyzed within a square porous cavity featuring uniformly heated horizontal walls and isothermally cooled vertical walls. The governing equations are developed using scaling variables and the Boussinesq approximation and solved numerically through the finite element method. The effects of nanoparticle volume fraction (0–0.07), Rayleigh number (10³–10⁶), and Darcy number (10⁻⁵–10⁻²) are systematically examined. Results show that increasing the MoS₄ nanoparticle concentration substantially enhances convective heat transfer, with the average Nusselt number rising by up to 28% and the peak stream function reaching 17.0 at a volume fraction of 0.07 under low Darcy and Rayleigh conditions. These findings demonstrate that even minimal nanoparticle addition can significantly improve the heat transport capability of engine oils in porous enclosures. The study introduces a novel combination of molybdenum tetrasulfide-based nanofluids and porous media analysis, extending beyond prior work by quantifying the coupled effects of nanoparticle concentration and porous resistance on buoyancy-driven flow performance.
Mining safety heavily depends on ventilation, which constitutes a significant portion of the energy costs in operations. Optimizing mine ventilation systems (MVSO) is crucial for minimizing this energy expenditure. However, current algorithms encounter challenges when applied to large-scale mines, primarily due to the complexity of variables and limited attention to optimizing main fans. This study introduces a theoretical-knowledge-enhanced genetic algorithm for MVSO, incorporating main fan adjustments. The algorithm models changes in the main fan's operational status and integrates ventilation network equivalent simplification (VNES) and the minimum spanning tree (MST) to reduce the number of variables in the mine ventilation network. Additionally, leveraging mine ventilation sensitivity theory (MVST) enhances the quality of the initial algorithmic population. A simple case and two engineering cases collectively validated that the algorithm consistently provides effective and reliable optimization solutions for mine ventilation systems across varying scales. Specifically, the algorithm reduced energy consumption from 326.94 to 186.99 kW, from 433.14 to 239.48 kW, and from 520.53 to 324.90 kW across three different scales of mine ventilation systems. Comparative analysis with four other algorithms shows that, although this algorithm has a longer runtime due to the need to identify the minimum spanning tree during iterations, its ability to reduce problem dimensionality and improve population quality results in more stable and superior convergence performance, especially for large-scale mine ventilation systems.
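The minimum-spanning-tree step used to reduce the number of network variables can be illustrated with Kruskal's algorithm; the toy graph and its edge weights below are made up for the example, not taken from the study's ventilation networks.

```python
# Kruskal's algorithm with a union-find structure: one standard way to
# compute the minimum spanning tree step described above.
def kruskal(n, edges):
    """edges: list of (weight, u, v); returns (total_weight, mst_edges)."""
    parent = list(range(n))

    def find(x):
        # path-halving find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, mst = 0, []
    for w, u, v in sorted(edges):          # consider edges by ascending weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # keep the edge only if it joins two components
            parent[ru] = rv
            total += w
            mst.append((u, v))
    return total, mst

# toy airway network: 4 junctions, hypothetical branch weights
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
total, mst = kruskal(4, edges)
print(total, mst)
```

A spanning tree of an n-node network has exactly n-1 edges, which is what lets the tree edges serve as the reduced set of independent variables.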
Electronic computers. Computer science, Information technology
In biomedical engineering, the behavior of gyrotactic microorganisms in non-Newtonian fluids such as tangent hyperbolic fluids improves the design of targeted drug delivery systems, where control over microorganism movement is essential. The present study deals with the synergistic influence of gyrotactic microorganisms and bimolecular reactions on the bidirectional flow of tangent hyperbolic fluids under Nield boundary conditions. Further, the flow characteristics of the non-Newtonian fluid are enriched by incorporating the effects of thermal radiation, heat sources, Brownian motion, and thermophoresis. These phenomena are relevant to a wide range of applications, including industrial processes, biomedical engineering, and environmental management. The analysis employs advanced mathematical modeling, with suitable transformation rules to obtain the non-dimensional form, and the numerical simulation is carried out with a shooting-based fourth-order Runge–Kutta technique; results for the several contributing factors are obtained with the built-in bvp4c function in MATLAB. Validation of the study against prior research provides a benchmark for further work in this direction. The key results are: the fluid velocity is controlled by increasing the non-Newtonian Weissenberg number, whereas velocity slip shows dual characteristics in the axial velocity distribution; further, the motile microorganism profile is controlled by an enhanced bioconvection Lewis number.
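The shooting-based fourth-order Runge–Kutta approach can be sketched on a much simpler boundary-value problem than the coupled flow equations: the linear BVP y'' = -y, y(0) = 0, y(1) = 1 below is a stand-in chosen only to show the mechanics (guess the missing initial slope, integrate with RK4, bisect until the far boundary condition is met).

```python
import math

# Shooting method with classical RK4 on the stand-in BVP y'' = -y,
# y(0) = 0, y(1) = 1. The unknown is the initial slope y'(0).
def rk4_shoot(slope, n=100):
    """Integrate from x=0 to x=1 with the given initial slope; return y(1)."""
    h = 1.0 / n
    y, v = 0.0, slope                      # y and y'
    f = lambda y, v: (v, -y)               # first-order system (y', v')
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

# bisection on the unknown slope: y(1) grows monotonically with it here
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if rk4_shoot(mid) < 1.0 else (lo, mid)
slope = (lo + hi) / 2
print(slope)  # exact answer for this BVP is 1/sin(1) ≈ 1.1884
```

For the actual momentum/energy/concentration/microorganism system the state vector is larger and several slopes are shot simultaneously, but the structure of the loop is the same.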
Katia Rasheva-Yordanova, Georgi P. Dimitrov, Paulina T. Tsvetkova
et al.
Palimpsests are unique historical sources that hold the potential for new insights into human history. These manuscripts, rewritten and reused over time, pose research challenges related to their readability and interpretation. The present study aims to investigate the readability of palimpsests through the use of image preprocessing techniques. The article focuses on preprocessing methods that could lead to a significant improvement in the readability of the 'hidden' text. The challenges encountered during the processing of palimpsests are explored, and various techniques applicable to improving the readability of these manuscripts are analyzed.
The primary goal of preprocessing in this context is to separate the 'hidden' text from the visible one while neutralizing material defects and aging. The article presents specific methods, such as extracting a specific color range from a palimpsest image.
The experimental techniques are highlighted with sample codes illustrating the application of the respective technology. The current research attempts to advance the development of methods for processing palimpsests and opens up new perspectives for extracting information from those historically valuable manuscripts.
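As a rough sketch of the color-range extraction mentioned above, a NumPy mask can zero out pixels outside a chosen RGB band; the band (a reddish range loosely suggestive of faded under-text ink) and the tiny synthetic image are illustrative only.

```python
import numpy as np

def extract_color_range(img, lower, upper):
    """img: HxWx3 uint8 array; returns (copy with out-of-range pixels zeroed, mask)."""
    lower = np.asarray(lower, dtype=np.uint8)
    upper = np.asarray(upper, dtype=np.uint8)
    # pixel kept only if every channel lies inside [lower, upper]
    mask = np.all((img >= lower) & (img <= upper), axis=-1)
    out = np.zeros_like(img)
    out[mask] = img[mask]
    return out, mask

# tiny synthetic "palimpsest": two reddish pixels among parchment-coloured ones
img = np.array([[[200, 180, 150], [150, 40, 40]],
                [[210, 190, 160], [140, 50, 45]]], dtype=np.uint8)
out, mask = extract_color_range(img, (100, 20, 20), (180, 80, 80))
print(mask.sum())  # number of pixels inside the reddish band
```

In practice the band would be tuned per manuscript (or computed in a different color space such as HSV), since ink and parchment colors vary widely between palimpsests.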
Personality provides deep insight into a person and plays an important part in job performance. Predicting personality through social media has been studied in several works. The problem is how to improve the performance of personality prediction systems. The purpose of this research is to predict the personality of Twitter users and increase the performance of the personality prediction system. An online survey using the Big Five Inventory (BFI) questionnaire was distributed, gathering 295 Twitter users with 511,617 tweets. In this research, we experiment with two different methods: Support Vector Machine (SVM) alone, and the combination of SVM and BERT as a semantic approach. This research also implements Linguistic Inquiry and Word Count (LIWC) as the linguistic feature for the personality prediction system. The results show that the combination of these two methods achieves a 79.35% accuracy score, and that adding LIWC improves the accuracy score to 80.07%. Overall, these results show that the combination of SVM and BERT as a semantic approach, together with LIWC, is recommended for better performance of the personality prediction system.
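The feature-combination idea can be sketched as concatenating a semantic embedding vector with LIWC-style linguistic counts before training an SVM. Both feature blocks below are random placeholders (no real BERT or LIWC output), so only the wiring is meaningful; scikit-learn is assumed available.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Stand-ins for the two feature sources: a dense semantic vector per user
# (placeholder for a BERT embedding) and a small block of LIWC-style counts.
rng = np.random.default_rng(42)
n_users = 295
bert_like = rng.normal(size=(n_users, 32))   # placeholder semantic embeddings
liwc_like = rng.normal(size=(n_users, 8))    # placeholder linguistic features
X = np.hstack([bert_like, liwc_like])        # concatenation = feature combination
y = (bert_like[:, 0] + 0.5 * liwc_like[:, 0] > 0).astype(int)  # synthetic trait label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(acc)
```

A real pipeline would replace `bert_like` with sentence embeddings from a pretrained BERT model and `liwc_like` with per-user LIWC category frequencies, and would typically standardize the blocks before concatenation so neither dominates the kernel.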
Nathan Martindale, Muhammad Ismail, Douglas A. Talbert
As new cyberattacks are launched against systems and networks on a daily basis, the ability of network intrusion detection systems to operate efficiently in the big data era has become critically important, particularly as more low-power Internet-of-Things (IoT) devices enter the market. This has motivated research in applying machine learning algorithms that can operate on streams of data, trained online or "live" on only a small amount of data kept in memory at a time, as opposed to the more classical approaches that are trained solely offline on all of the data at once. In this context, one important concept from machine learning for improving detection performance is the idea of "ensembles", where a collection of machine learning algorithms are combined to compensate for their individual limitations and produce an overall superior algorithm. Unfortunately, existing research lacks a proper performance comparison between homogeneous and heterogeneous online ensembles. Hence, this paper investigates several homogeneous and heterogeneous ensembles, proposes three novel online heterogeneous ensembles for intrusion detection, and compares their accuracy, run-time complexity, and response to concept drift. Out of the proposed novel online ensembles, the heterogeneous ensemble consisting of an adaptive random forest of Hoeffding Trees combined with a Hoeffding Adaptive Tree performed the best, by dealing with concept drift in the most effective way. While this scheme is less accurate than a larger adaptive random forest, it offered a marginally better run-time, which is beneficial for online training.
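A heterogeneous online ensemble of the kind compared here can be sketched as base learners sharing a learn_one/predict_one interface, combined by majority vote in a prequential (test-then-train) loop. The two toy base learners below are stand-ins for Hoeffding-tree variants, and the stream is synthetic.

```python
import random

class OnlinePerceptron:
    """Toy online linear learner (stand-in for one ensemble member)."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b, self.lr = 0.0, lr
    def predict_one(self, x):
        return int(sum(wi * xi for wi, xi in zip(self.w, x)) + self.b > 0)
    def learn_one(self, x, y):
        err = y - self.predict_one(x)
        if err:
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * err

class MeanThreshold:
    """Toy learner: predict 1 when the first feature exceeds its running mean."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def predict_one(self, x):
        return int(x[0] > self.mean)
    def learn_one(self, x, y):
        self.n += 1
        self.mean += (x[0] - self.mean) / self.n

class MajorityVoteEnsemble:
    """Heterogeneous ensemble: different learner types, one vote each."""
    def __init__(self, members):
        self.members = members
    def predict_one(self, x):
        votes = sum(m.predict_one(x) for m in self.members)
        return int(votes * 2 >= len(self.members))
    def learn_one(self, x, y):
        for m in self.members:
            m.learn_one(x, y)

random.seed(0)
ens = MajorityVoteEnsemble([OnlinePerceptron(2), MeanThreshold()])
correct = 0
for _ in range(1000):                       # prequential: test, then train
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = int(x[0] > 0)
    correct += ens.predict_one(x) == y
    ens.learn_one(x, y)
acc = correct / 1000
print(acc)
```

Real streaming libraries expose exactly this per-sample interface, which is what lets Hoeffding Trees, adaptive random forests, and other learner types be mixed behind one voting wrapper.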
Noorbakhsh Amiri Golilarz, Mirpouya Mirmozaffari, Tayyebeh Asgari Gashteroodkhani
et al.
In this research, we propose to utilize the newly introduced Multi-population differential-evolution-assisted Harris Hawks Optimization algorithm (CMDHHO) in the optimization process for satellite image de-noising in the wavelet domain. This optimization algorithm is an improved version of the original HHO algorithm that incorporates chaos, multi-population, and differential evolution strategies. In this study, we applied several optimization algorithms in the optimization procedure and compared the de-noising results of CMDHHO-based noise suppression with those of the other optimized approaches and with Thresholding Neural Network (TNN) approaches. We observe that the CMDHHO algorithm provides better qualitative and quantitative results compared with the other optimized and TNN-based noise removal techniques. In addition to these quality improvements, the method is computationally efficient and reduces processing time. Based on the experimental analysis, optimization-based noise suppression performs better than TNN-based image de-noising. Peak Signal-to-Noise Ratio (PSNR) and Mean Structural Similarity Index (MSSIM) are used to evaluate and measure the performance of the different de-noising methods. Experimental results indicate the superiority of the proposed CMDHHO-based satellite image de-noising over other approaches available in the literature.
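PSNR, one of the two quality metrics used, is straightforward to compute; a minimal NumPy sketch on a synthetic image with additive Gaussian noise:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-shape images."""
    mse = np.mean((np.asarray(ref, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = clean + rng.normal(0, 10, size=clean.shape)   # sigma=10 Gaussian noise
val = psnr(clean, noisy)
print(val)   # higher is better; sigma=10 gives roughly 28 dB here
```

A de-noiser is then judged by how much it raises PSNR of its output (relative to the clean reference) above that of the noisy input.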
In this article, an integrated computer-assisted process-planning and computer-assisted fixture layout planning system is presented for the automatic generation of process and fixture maps. The part feature model is created using various algorithms and the geometric reasoning approach. The feature-based methodology based on the machining database is then applied for the generation of process maps. The setup scheme algorithm allocates each feature to a definite setup based on its location. The part geometric database and setup plan aid the fixture layout planning process. During fixture layout planning, the standard fixture rules are applied to determine the locating arrangement and feasible datum along with the suitable positions, using a fixture database for initial fixture layout planning data. Visual C++ is used to implement the proposed methodology because it interacts with current computer-aided design software. A case study is then presented to develop an initial fixture layout. Afterwards, the ANSYS parametric design language optimisation tool is applied to automatically optimise locator and clamp positions that yield minimum workpiece deformation. Finally, finite element analysis results depicting deformation magnitudes are presented.
[Quality Improvement On Tile Products With Six Sigma Method] The development of small and medium industries has led to competition in the industry. This competition forces every business to pay attention to customer needs. Quality is one of the factors in fulfilling customer needs and is a guarantee that the company must give to its customers. Industries that produce good-quality products will reduce losses due to product failure. Uniform products that meet specifications can be produced by minimizing process variation. The purpose of this study is to implement the Six Sigma concept in an industry that produces tiles. Six Sigma is a method of identifying the causes of defects in products and processes, fixing problems and improving quality through the DMAIC cycle (Define, Measure, Analyze, Improve, Control). The results of the study show a decrease in DPMO from 29311 to 8974.35 and an increase in the sigma level from 3.35 to 3.99.
Keywords: quality; Six Sigma; DMAIC; Taguchi method
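The DPMO-to-sigma-level conversion behind these figures is commonly computed with the standard-normal inverse CDF plus the conventional 1.5σ shift. The sketch below approximately reproduces the reported improvement; the published 3.35 and 3.99 values likely come from a lookup table, so the match is close but not exact.

```python
from statistics import NormalDist

def sigma_level(dpmo):
    """Short-term sigma level: Phi^-1(yield) + conventional 1.5 sigma shift."""
    process_yield = 1 - dpmo / 1_000_000
    return NormalDist().inv_cdf(process_yield) + 1.5

before = sigma_level(29311)      # roughly 3.4 (reported: 3.35)
after = sigma_level(8974.35)     # roughly 3.9 (reported: 3.99)
print(before, after)
```

The 1.5σ shift is a Six Sigma convention accounting for long-term process drift; without it the same DPMO values map to sigma levels about 1.5 lower.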
Анатолій Міщенко, Андрій Шишацький, Тетяна Бондаренко
et al.
At the beginning of the hybrid war in Eastern Ukraine, the Armed Forces of Ukraine were equipped with outdated communication equipment that proved largely ineffective under combat conditions. Since there was no time to develop new equipment, the Armed Forces of Ukraine were outfitted with civilian telecommunications equipment, which performed reasonably well but still did not meet the standards and requirements imposed on armaments and military equipment, for example regarding operation of communication equipment in harsh weather conditions and the ability to counter enemy electronic warfare and electronic intelligence assets. Despite these shortcomings, the data transmission technologies used by commercial companies are finding wide application in the military sphere. In this study, the authors used the classical scientific methods of analysis and synthesis, together with the fundamentals of communication theory, signal theory, interference immunity theory, and signal-code constructions. The study examines the main technologies for signal generation and processing that may find application in the development and modernization of military radio communication equipment. The article focuses on broadband access technologies and on technologies based on complex, composite, and noise-like signals, since these can improve the interference immunity, covertness, and security of radio communication equipment during information transmission. A promising direction for the authors' further research is therefore the development of a mathematical model of the operation of military radio communication equipment when using a particular data transmission technology.
Due to the limitations of the resolution of the imaging system and the influence of scene changes and other factors, sometimes only low-resolution images can be acquired, which cannot satisfy the practical application’s requirements. To improve the quality of low-resolution images, a novel super-resolution algorithm based on an improved sparse autoencoder is proposed. Firstly, in the training set preprocessing stage, the high- and low-resolution image training sets are constructed, respectively, by using high-frequency information of the training samples as the characterization, and then the zero-phase component analysis whitening technique is utilized to decorrelate the formed joint training set to reduce its redundancy. Secondly, a constructed sparse regularization term is added to the cost function of the traditional sparse autoencoder to further strengthen the sparseness constraint on the hidden layer. Finally, in the dictionary learning stage, the improved sparse autoencoder is adopted to achieve unsupervised dictionary learning to improve the accuracy and stability of the dictionary. Experimental results validate that the proposed algorithm outperforms the existing algorithms both in terms of the subjective visual perception and the objective evaluation indices, including the peak signal-to-noise ratio and the structural similarity measure.
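The ZCA (zero-phase component analysis) whitening step used to decorrelate the joint training set can be sketched with an eigendecomposition (via SVD) of the covariance matrix; the data below are random stand-ins for the high-/low-resolution feature pairs.

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """X: (n_samples, n_features). Returns whitened data with ~identity covariance."""
    Xc = X - X.mean(axis=0)                          # center the data
    cov = Xc.T @ Xc / (len(Xc) - 1)                  # sample covariance
    U, S, _ = np.linalg.svd(cov)
    # ZCA transform: rotate, rescale by 1/sqrt(eigenvalue), rotate back,
    # which whitens while keeping the result close to the original data
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # correlated features
Xw = zca_whiten(X)
C = np.cov(Xw, rowvar=False)
print(np.round(C, 2))   # approximately the identity matrix
```

Unlike PCA whitening, the trailing rotation back by U makes ZCA's output maximally similar to the input, which is why it is a natural preprocessing step before dictionary learning on image patches.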
A cement distributor operates a cement warehouse in Pasuruan, where cement is kept ready for shipment to regular customers. Current customer demand remains unfulfilled because the company's transport fleet is not proportional to the volume of cement demanded. The company therefore plans to add trucks, with two alternatives: buying new trucks or renting a Hino Ranger FG 235 JJ. To determine the best alternative, an assessment of the criteria for truck addition is required. In this study, AHP is used to determine the relative importance of the criteria: discounts, shipping security, corporate attributes displayed on vehicles, cost, age, administration, maintenance, risk of damage, and availability. The resulting criteria weights are used as input for selecting the best alternative with the TOPSIS method. The TOPSIS calculation gives values of 0.599 for buying a new truck and 0.401 for renting.
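The TOPSIS closeness values reported (0.599 and 0.401) come from a standard computation that can be sketched as follows; the decision matrix, weights, and benefit/cost flags below are made up for illustration, not the study's nine AHP-weighted criteria.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] True if higher is better.
    Returns the closeness coefficient of each alternative (in [0, 1])."""
    norm = matrix / np.linalg.norm(matrix, axis=0)      # vector normalization
    v = norm * weights                                  # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)           # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)            # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# hypothetical 2-alternative, 3-criterion example (third criterion is a cost)
matrix = np.array([[7.0, 6.0, 3.0],    # alternative A: buy new truck
                   [5.0, 9.0, 8.0]])   # alternative B: rent
weights = np.array([0.5, 0.3, 0.2])    # e.g. AHP-derived weights
closeness = topsis(matrix, weights, benefit=np.array([True, True, False]))
print(closeness)   # higher closeness = better alternative
```

The alternative with the larger closeness coefficient is preferred, which is how the 0.599 vs. 0.401 result selects buying over renting in the study.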
In a connected care environment, more citizens are engaging in their health care through mobile apps and social media tools. Given this growing health care engagement, it is important for health care professionals to have the knowledge and skills to evaluate and recommend appropriate digital tools. The purpose of this article is to identify and review criteria or instruments that can be used to evaluate mobile apps and social media. The analysis will review current literature as well as literature designed by professional health care organizations. This review will facilitate health care professionals’ assessment of mobile apps and social media tools that may be pertinent to their patient population. The review will also highlight strategies which a health care system can use to provide guidance in recommending mobile apps and social media tools for their patients, families, and caregivers.
In this paper, we aim to maximize the sum rate of a full-duplex cognitive femtocell network (FDCFN) while guaranteeing the quality of service (QoS) of users in the form of required signal-to-interference-plus-noise ratios (SINRs). We first consider the case of a pair of channels and develop optimum-achieving power control solutions. Then, for the case of multiple channels, we formulate joint duplex mode selection, power control, and channel allocation as a mixed-integer nonlinear problem (MINLP), and propose an iterative framework to solve it. The proposed iterative framework consists of a duplex mode selection scheme, a near-optimal distributed power control algorithm, and a greedy channel allocation algorithm. We prove the convergence of the proposed iterative framework and establish a lower bound for the greedy channel allocation algorithm. Numerical results show that the proposed schemes effectively improve the sum rate of FDCFNs.
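The sum-rate objective with a per-user SINR QoS constraint can be sketched directly: the rate of each link is log2(1 + SINR) and the objective sums these over links, subject to every link meeting a minimum SINR. The SINR values and threshold below are illustrative.

```python
import math

def sum_rate(sinrs_db, min_sinr_db):
    """Total spectral efficiency in bit/s/Hz, or None if any link violates QoS."""
    if any(s < min_sinr_db for s in sinrs_db):
        return None                               # QoS constraint violated
    # convert each SINR from dB to linear scale, then apply Shannon's formula
    return sum(math.log2(1 + 10 ** (s / 10)) for s in sinrs_db)

rate = sum_rate([10.0, 15.0, 20.0], min_sinr_db=5.0)
print(rate)   # about 3.46 + 5.03 + 6.66 ≈ 15.1 bit/s/Hz
```

In the actual optimization, the SINRs are not fixed inputs but functions of the transmit powers, duplex modes, and channel assignments, which is what makes the joint problem an MINLP.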