The integration of machine learning (ML) algorithms into Internet of Things (IoT) applications has introduced significant advantages alongside vulnerabilities to adversarial attacks, especially within IoT-based intrusion detection systems (IDSs). While theoretical adversarial attacks have been extensively studied, practical implementation constraints have often been overlooked. This research addresses this gap by evaluating the feasibility of evasion attacks on IoT network-based IDSs, employing a novel black-box adversarial attack. Our study aims to bridge theoretical vulnerabilities with real-world applicability, enhancing understanding of and defense against sophisticated threats in modern IoT ecosystems. Additionally, we propose a defense scheme tailored to mitigate the impact of evasion attacks, thereby reinforcing the resilience of ML-based IDSs. Our findings demonstrate successful evasion attacks against IDSs, underscoring their susceptibility to advanced techniques. In turn, the proposed defense mechanism exhibits robust performance, effectively detecting the majority of adversarial traffic and showing promising results compared with current state-of-the-art defenses. By addressing these critical cybersecurity challenges, our research contributes to advancing IoT security and provides insights for developing more resilient IDSs.
Nasim Soltani, Shayan Nejadshamsi, Zakaria Abou El Houda
et al.
Adversarial examples can represent a serious threat to machine learning (ML) algorithms. If used to manipulate the behaviour of ML-based Network Intrusion Detection Systems (NIDS), they can jeopardize network security. In this work, we aim to mitigate such risks by increasing the robustness of NIDS towards adversarial attacks. To that end, we explore two adversarial methods for generating malicious network traffic. The first method is based on Generative Adversarial Networks (GAN) and the second one is the Fast Gradient Sign Method (FGSM). The adversarial examples generated by these methods are then used to evaluate a novel multilayer defense mechanism, specifically designed to mitigate the vulnerability of ML-based NIDS. Our solution consists of one layer of stacking classifiers and a second layer based on an autoencoder. If the incoming network data are classified as benign by the first layer, the second layer is activated to ensure that the decision made by the stacking classifier is correct. We also incorporated adversarial training to further improve the robustness of our solution. Experiments on two datasets, namely UNSW-NB15 and NSL-KDD, demonstrate that the proposed approach increases resilience to adversarial attacks.
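As a rough illustration of the multilayer idea described in this abstract, the sketch below chains a stacking classifier with an autoencoder-style anomaly check trained on benign traffic only. The synthetic data, scikit-learn components, and 95th-percentile threshold are our own assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of a two-layer defense: layer 1 is a stacking classifier,
# layer 2 an autoencoder-style reconstructor (MLPRegressor) trained on benign
# traffic that double-checks samples layer 1 labels as benign.
# Data, models, and threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_benign = rng.normal(0.0, 1.0, size=(500, 20))
X_malicious = rng.normal(1.5, 1.0, size=(500, 20))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 500 + [1] * 500)          # 0 = benign, 1 = malicious

# Layer 1: stacking classifier
layer1 = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
).fit(X, y)

# Layer 2: reconstructor trained on benign traffic only
autoencoder = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                           random_state=0).fit(X_benign, X_benign)
benign_err = np.mean((autoencoder.predict(X_benign) - X_benign) ** 2, axis=1)
threshold = np.percentile(benign_err, 95)     # assumed anomaly threshold

def classify(sample):
    """Return 1 (malicious) or 0 (benign) using both defense layers."""
    if layer1.predict(sample.reshape(1, -1))[0] == 1:
        return 1
    # Only samples deemed benign by layer 1 reach the reconstruction check.
    err = np.mean((autoencoder.predict(sample.reshape(1, -1)) - sample) ** 2)
    return int(err > threshold)

print(classify(X_malicious[0]), classify(X_benign[0]))
```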
Grigorios Tzionis, Georgia Kougka, Ilias Gialampoukidis
et al.
This paper addresses the critical instability of Local Interpretable Model-agnostic Explanations (LIME). We introduce Adaptive Kernel Density Estimation LIME (AKDE-LIME), a novel approach that enhances local explanation stability by incorporating a density-aware weighting scheme. Unlike LIME’s standard proximity kernel, AKDE-LIME combines distance weighting with a Kernel Density Estimate (KDE) of the local sample distribution, assigning more representative weights to generated perturbations. We conduct a comprehensive evaluation of AKDE-LIME against LIME, TreeSHAP, and Anchor on five diverse tree-based models using a real-world dataset. Assessing performance on Stability and Robustness metrics across a matrix of noise levels (5% to 20%), our results consistently demonstrate that AKDE-LIME produces significantly more stable and robust explanations than standard LIME under all conditions. The performance of our method is often comparable to or better than state-of-the-art explainers like TreeSHAP. We conclude that AKDE-LIME is a promising and reliable alternative for generating trustworthy local explanations, addressing a key weakness of the original LIME algorithm.
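To make the density-aware weighting idea concrete, the minimal sketch below (our reading of the abstract, not the authors' code) multiplies LIME's exponential proximity kernel by a kernel density estimate evaluated on the generated perturbations. The kernel width, KDE bandwidth, and data are illustrative assumptions.

```python
# Illustrative sketch of AKDE-style weighting: combine LIME's distance kernel
# with a KDE of the local perturbation distribution (assumed parameters/data).
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)
x0 = rng.normal(size=10)                         # instance to explain
Z = x0 + rng.normal(scale=0.5, size=(200, 10))   # local perturbations

# Standard LIME proximity kernel
d = np.linalg.norm(Z - x0, axis=1)
kernel_width = 0.75 * np.sqrt(Z.shape[1])
w_dist = np.exp(-(d ** 2) / kernel_width ** 2)

# Density term from a KDE fitted on the perturbations themselves
kde = KernelDensity(bandwidth=0.5).fit(Z)
w_dens = np.exp(kde.score_samples(Z))            # density at each perturbation

# Combined density-aware weights (one plausible combination, normalised)
w = w_dist * w_dens
w /= w.sum()
print(w[:5])
```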
Milad Leyli-abadi, Ricardo J. Bessa, Jan Viebahn
et al.
The interaction between humans and AI in safety-critical systems presents a unique set of challenges that remain only partially addressed by existing frameworks. These challenges stem from the complex interplay of requirements for transparency, trust, and explainability, coupled with the necessity for robust and safe decision-making. A framework that holistically integrates human and AI capabilities while addressing these concerns is therefore needed to bridge the critical gaps in designing, deploying, and maintaining safe and effective systems. This paper proposes such a holistic conceptual framework for critical infrastructures by adopting an interdisciplinary approach. It integrates traditionally distinct fields such as mathematics, decision theory, computer science, philosophy, psychology, and cognitive engineering, and draws on specialized engineering domains, particularly energy, mobility, and aeronautics. Its flexibility is further demonstrated through a case study on power grid management.
Deepika Saxena, Smruti Rekha Swain, Jatinder Kumar
et al.
Secure resource management (SRM) within a cloud computing environment is a critical yet infrequently studied research topic. This paper provides a comprehensive survey and comparative performance evaluation of potential cyber threat countermeasure strategies that address security challenges during cloud workload execution and resource management. Cybersecurity is explored specifically in the context of cloud resource management, with an emphasis on identifying the associated challenges. The cyber threat countermeasure methods are categorized into three classes: defensive strategies, mitigating strategies, and hybrid strategies. The existing countermeasure strategies belonging to each class are thoroughly discussed and compared. In addition to conceptual and theoretical analysis, the leading countermeasure strategies within these categories are implemented on a common platform and examined using two real-world virtual machine (VM) data traces. Based on this comprehensive study and performance evaluation, the paper discusses the trade-offs among these countermeasure strategies and their utility, providing imperative concluding remarks on the holistic study of cloud cyber threat countermeasures and secure resource management. Furthermore, the study suggests future methodologies that could effectively address the emerging challenges of secure cloud resource management.
Inclusiveness and economic development have been slowed by recent pandemics and military conflicts. This study investigates the main determinants of inclusiveness at the European level. A multi-method approach is used, with Principal Component Analysis (PCA) applied to create the Inclusiveness Index and Generalised Method of Moments (GMM) analysis used to investigate the determinants of inclusiveness. The data cover 22 years, from 2000 to 2021, for 32 European countries. The determinants of inclusiveness and their effects were identified. First, economic growth, industrial upgrading, electricity consumption, digitalisation, and the quantitative aspect of governance all have a positive impact on inclusive growth in Europe. Second, the level of CO2 emissions and inflation have a negative impact on inclusiveness. Tomorrow's inclusive and sustainable growth requires investments in renewable energy, digital infrastructure, inequality policies, sustainable governance, human capital, and inflation management. These findings can help decision makers design inclusive growth policies.
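As a small illustration of the PCA step mentioned above, the sketch below builds a composite index from the first principal component of standardized indicators; the random data, number of indicators, and min-max rescaling are assumptions for illustration, not the study's panel or weighting.

```python
# Toy sketch: composite index from the first principal component of
# standardized indicators (data and rescaling are assumed for illustration).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
indicators = rng.normal(size=(32, 6))            # 32 countries x 6 indicators

Z = StandardScaler().fit_transform(indicators)
pc1 = PCA(n_components=1).fit_transform(Z).ravel()
index = (pc1 - pc1.min()) / (pc1.max() - pc1.min())   # rescale to [0, 1]
print(index.round(2))
```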
This paper investigates two predictive modeling approaches for estimating the thermal and tribological performance of graphene-enhanced greases, aiming to reduce reliance on protracted endurance tests. Seven grease formulations with varying graphene concentrations (0–4 wt%) were prepared and tested under a uniform load to capture temperature evolution, wear scar area and coefficient of friction. A classical piecewise regression model, augmented by a Linear Quadratic Regulator (LQR), leverages feedback control to correct temperature predictions and subsequently estimate wear using a polynomial fit. This framework demonstrated high accuracy in tracking transient thermal behaviour, maintaining temperature deviations within ±1 °C of measured data. In parallel, a quantum-classical hybrid model employs a fidelity-based quantum kernel with support vector regression. By encoding partial early-cycle temperature measurements (e.g., from 30 to 120 s) into a higher-dimensional Hilbert space, the quantum approach captures subtle nonlinearities and yields strong correlations for both final temperature and wear scar area. Moreover, consistent performance on IBM Quantum models with realistically simulated noise underscores the model’s potential for practical industrial implementation. Collectively, these results confirm the viability of advanced computational tools, both classical and quantum, for rapid, data-driven lubricant assessments. They highlight opportunities to optimize graphene content while minimizing costly trial-and-error testing.
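A hedged sketch of the fidelity-kernel-plus-SVR idea follows. It uses a simple single-qubit angle encoding whose state fidelity has a closed form that can be evaluated classically, so it is only a stand-in for the quantum-classical pipeline described above; the data, encoding, and hyperparameters are assumptions.

```python
# Sketch of a fidelity-based kernel with support vector regression.
# Each feature is angle-encoded into a single-qubit product state, so the
# state fidelity |<psi(x)|psi(y)>|^2 = prod_i cos^2((x_i - y_i)/2) can be
# computed directly. Data and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR

def fidelity_kernel(A, B):
    """K[i, j] = |<psi(a_i)|psi(b_j)>|^2 for product-state angle encoding."""
    diff = A[:, None, :] - B[None, :, :]          # pairwise feature differences
    return np.prod(np.cos(diff / 2.0) ** 2, axis=-1)

rng = np.random.default_rng(2)
X = rng.uniform(0, np.pi, size=(60, 4))           # e.g. early-cycle temperatures
y = np.sin(X).sum(axis=1) + 0.05 * rng.normal(size=60)   # stand-in target

K_train = fidelity_kernel(X, X)
model = SVR(kernel="precomputed", C=10.0).fit(K_train, y)

X_new = rng.uniform(0, np.pi, size=(5, 4))
K_new = fidelity_kernel(X_new, X)                 # kernel against training set
print(model.predict(K_new))
```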
Pareto set learning (PSL) is an emerging approach for acquiring the complete Pareto set of a multi-objective optimization problem. Existing methods primarily rely on mapping preference vectors in the objective space to Pareto optimal solutions in the decision space. However, sampling preference vectors theoretically requires prior knowledge of the Pareto front shape to ensure high performance of PSL methods. Designing a sampling strategy for preference vectors is difficult because the Pareto front shape cannot be known in advance. To make Pareto set learning work effectively for any Pareto front shape, we propose a Pareto front shape-agnostic Pareto set learning method (GPSL) that does not require prior information about the Pareto front. The fundamental idea behind GPSL is to treat the learning of the Pareto set as a distribution transformation problem. Specifically, GPSL can transform an arbitrary distribution into the Pareto set distribution. We demonstrate that training a neural network by maximizing hypervolume enables this distribution transformation. Our proposed method can handle any shape of the Pareto front and learn the Pareto set without requiring prior knowledge. Experimental results show the high performance of our proposed method on diverse test problems compared with recent Pareto set learning algorithms.
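For concreteness, the snippet below computes the two-objective hypervolume indicator that, according to the abstract, serves as the training signal; the reference point and toy solution set are assumptions, and the authors' network and training loop are not reproduced.

```python
# Two-objective (minimisation) hypervolume indicator on a toy solution set.
# Reference point and data are illustrative assumptions.
import numpy as np

def hypervolume_2d(F, ref):
    """Dominated hypervolume of a 2-objective solution set F w.r.t. ref."""
    # Sweep points sorted by the first objective; dominated points are
    # skipped automatically by the best_f2 check.
    F = F[np.argsort(F[:, 0])]
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in F:
        if f2 < best_f2:
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

# Toy Pareto-front approximation
F = np.array([[0.1, 0.9], [0.4, 0.5], [0.8, 0.2]])
print(hypervolume_2d(F, ref=np.array([1.1, 1.1])))   # 0.57
```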
Dynamic community detection is crucial for elucidating the temporal evolution of social structures, information dissemination, and interactive behaviors within complex networks. Nonnegative matrix factorization provides an efficient framework for identifying communities in static networks but falls short in depicting temporal variations in community affiliations. To solve this problem, this paper proposes a Modularity maximization-incorporated Nonnegative Tensor RESCAL Decomposition (MNTD) model for dynamic community detection. The method serves two primary functions: a) nonnegative tensor RESCAL decomposition extracts latent community structures in different time slots, highlighting the persistence and transformation of communities; and b) the modularity maximization algorithm, initialized with this community structure, yields more precise community segmentations. Comparative analysis on real-world datasets shows that MNTD is superior to state-of-the-art dynamic community detection methods in the accuracy of community detection.
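The sketch below evaluates the Newman modularity score that the modularity-maximization step optimizes; it is our illustration on a toy two-community graph, not the authors' tensor-decomposition code.

```python
# Newman modularity Q = (1/2m) * sum_ij [A_ij - k_i k_j / (2m)] * delta(c_i, c_j)
# evaluated on a toy graph; adjacency matrix and labels are assumptions.
import numpy as np

def modularity(A, labels):
    k = A.sum(axis=1)
    two_m = k.sum()
    same = (labels[:, None] == labels[None, :])
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Two clear 3-node communities joined by a single edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))
```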
Ivan Izonin, Roman Tkachenko, Oleh Berezsky
et al.
Today, the field of biomedical engineering spans numerous areas of scientific research that grapple with the challenges of intelligent analysis of small datasets. Analyzing such datasets with existing artificial intelligence tools is a complex task, often complicated by issues like overfitting and other challenges inherent to machine learning methods and artificial neural networks. These challenges impose significant constraints on the practical application of these tools to the problem at hand. While data augmentation can offer some mitigation, existing methods often introduce their own set of limitations, reducing their overall effectiveness in solving the problem. In this paper, the authors present an improved neural network-based technology for predicting outcomes when analyzing small and extremely small datasets. This approach builds on the input doubling method, leveraging response surface linearization principles to improve performance. Detailed flowcharts of the improved technology’s operations are provided, alongside descriptions of new preparation and application algorithms for the proposed solution. The modeling, conducted using two biomedical datasets with optimal parameters selected via differential evolution, demonstrated high prediction accuracy. A comparison with several existing methods revealed a significant reduction in various errors, underscoring the advantages of the improved neural network technology, which does not require training, for the analysis of extremely small biomedical datasets.
In multi-turn dialogue generation, responses are related not only to the topic and background of the context but also to the words and phrases in its sentences. However, widely used hierarchical dialog models rely solely on context representations from the utterance-level encoder, ignoring the sentence representations output by the word-level encoder. This inevitably results in a loss of information during decoding and generation. In this paper, we propose X-ReCoSa, a new dialog model that tackles this problem by aggregating multi-scale context information for hierarchical dialog models. Specifically, we divide the generation decoder into upper and lower parts, namely the intention part and the generation part. First, the intention part takes context representations as input to generate the intention of the response. Then the generation part generates words conditioned on the sentence representations. In this way, hierarchical information is fused into response generation. We conduct experiments on the English dataset DailyDialog. Experimental results show that our method outperforms baseline models on both automatic metric-based and human evaluations.
Younes El koudia, Jarou Tarik, Abdouni Jawad
et al.
The work presented in this paper progresses through a sequence of physics-based models of increasing fidelity, which are used to design robot controllers that respect the limits of the robot's capabilities, to develop a simple reference controller applicable to a large subset of tracking conditions (including mostly non-invasive or highly dynamic movements), to define the path-geometry-following control problem, and to develop both a simple geometric control and a dynamic model predictive control approach. For a nonlinear model with a disturbance effect, we propose the mathematical modeling of the longitudinal and lateral movements using PID with a feed-forward controller; the feedforward controller is introduced to eliminate the disturbance effect.
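As a rough sketch of the PID-with-feedforward idea, the following discrete-time loop regulates a toy first-order plant subject to a step input disturbance; the plant, gains, and disturbance are illustrative assumptions, not the paper's vehicle model.

```python
# Discrete-time PID + feedforward on a toy first-order plant with an input
# disturbance. All parameters are illustrative assumptions.
import numpy as np

dt, T = 0.01, 5.0
kp, ki, kd = 2.0, 1.0, 0.05        # assumed PID gains
a, b = -1.0, 1.0                   # toy plant: v' = a*v + b*(u + d)

v, integ, prev_err = 0.0, 0.0, 0.0
ref = 1.0                          # reference to track
for step in range(int(T / dt)):
    d = 0.3 if step * dt > 2.0 else 0.0       # step disturbance at t = 2 s
    err = ref - v
    integ += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    u_ff = -(a / b) * ref          # feedforward term from the nominal model
    u = u_ff + kp * err + ki * integ + kd * deriv
    v += dt * (a * v + b * (u + d))            # Euler step of the plant
print(round(v, 3))                 # settles near the reference despite d
```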
Relevance. Distributed systems containing hundreds and thousands of objects are usually built in the form of hierarchical structures, in which lower-level objects are combined into subsets for connection to the corresponding centers. Existing algorithms are not capable of successfully solving structuring problems on sets of this dimension; therefore, new algorithms are needed that are suitable for structuring sets containing thousands of objects. Aim. To develop an algorithm for generating a compact partition of high-dimensional sets containing up to a thousand objects located in a given territory. Methods. The study applies graph theory, linear programming methods, the construction and efficiency analysis of algorithms, and the theory of compact partitions, compact sets of objects, and their clusters. Results. It is proposed to represent the territorial location of the objects of a distributed system in the form of a topological graph. The paper introduces the concept of an active search zone for the nearest vertices to increase the efficiency of the algorithm for generating compact sets and identifying clusters. This makes it possible to replace the matrix of distances between graph vertices with a list of vertex incidentors generated on the basis of the active search zone. The authors developed an algorithm for the approximate solution of the problem of compactly partitioning a set of objects of a topological graph, represented by a list of vertex incidentors, into a given number of subsets. For each object, the algorithm recursively increases the cardinality of the compact sets, analyzes the resulting clusters and, under certain conditions, proceeds to the formation of a compact partition. The problem of forming the subsets of a compact partition from the clusters is formulated as a transport-type linear programming problem. The presentation of the algorithm is accompanied by an example.
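To illustrate the transport-type linear program mentioned in the Results, the sketch below assigns objects to centres with balanced loads using scipy's linprog; the toy costs and size constraints are assumptions, and the incidentor-list and active-search-zone machinery is not reproduced.

```python
# Transport-type LP sketch: assign objects to centres, each object exactly
# once, with balanced centre loads. Costs are random stand-ins for the
# graph distances used in the paper.
import numpy as np
from scipy.optimize import linprog

n_objects, n_centres = 6, 2
rng = np.random.default_rng(3)
cost = rng.uniform(1, 10, size=(n_objects, n_centres))

# Decision variables x[i, j] flattened row-wise.
c = cost.ravel()
A_eq, b_eq = [], []
for i in range(n_objects):                     # each object assigned once
    row = np.zeros(n_objects * n_centres)
    row[i * n_centres:(i + 1) * n_centres] = 1
    A_eq.append(row); b_eq.append(1)
for j in range(n_centres):                     # balanced centre loads
    row = np.zeros(n_objects * n_centres)
    row[j::n_centres] = 1
    A_eq.append(row); b_eq.append(n_objects / n_centres)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
print(res.x.reshape(n_objects, n_centres).round(2))
```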
Chi-Ching Hsu, Gaetan Frusque, Mahir Muratovic
et al.
Circuit breakers (CBs) play an important role in modern society because they make power transmission and distribution systems reliable and resilient. It is therefore important to maintain their reliability and to monitor their operation, and a key to ensuring reliable operation of CBs is to monitor their condition. In this work, we performed accelerated life testing for mechanical failures of a vacuum circuit breaker (VCB) by performing close-open operations continuously until failure. We recorded data for each operation and made the collected run-to-failure dataset publicly available. In our experiments, the VCB performed more than 26000 close-open operations without current load over a time span of five months. The long-term run-to-failure monitoring allows the evolution of the VCB condition and its degradation to be tracked over time. Closing time is one indicator of CB condition; it is usually measured when the CB is taken out of operation and completely disconnected from the network. We propose an algorithm that infers the same information on the closing time from a non-intrusive sensor. By utilizing the short-time energy (STE) of the vibration signal, it is possible to identify the key moments when specific events happen, including the time when the latch starts to move and the closing time. The effectiveness of the proposed algorithm is evaluated on the VCB dataset and compared to the binary segmentation (BS) change point detection algorithm. This research highlights the potential for continuous online condition monitoring, which is the basis for applying future predictive maintenance strategies.
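A minimal sketch of the short-time energy computation follows, locating a synthetic burst in a noisy signal; the sampling rate, frame length, and threshold are assumed for illustration, and the VCB dataset itself is not used.

```python
# Short-time energy (STE) of a signal: sum of squared samples per frame,
# used here to locate a synthetic vibration burst (all parameters assumed).
import numpy as np

def short_time_energy(x, frame_len, hop):
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.array([np.sum(x[i * hop:i * hop + frame_len] ** 2)
                     for i in range(n_frames)])

fs = 10_000                                     # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
x = 0.01 * np.random.default_rng(4).normal(size=t.size)
x[int(0.4 * fs):int(0.45 * fs)] += np.sin(2 * np.pi * 800 * t[:int(0.05 * fs)])

ste = short_time_energy(x, frame_len=256, hop=128)
event_frame = int(np.argmax(ste > 5 * np.median(ste)))   # first energetic frame
print("event near t =", round(event_frame * 128 / fs, 3), "s")
```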
Network robustness is critical for various societal and industrial networks against malicious attacks. In particular, connectivity robustness and controllability robustness reflect how well a networked system can maintain its connectedness and controllability under destructive attacks; they can be quantified by a sequence of values that record the remaining connectivity and controllability of the network after a sequence of node- or edge-removal attacks. Traditionally, robustness is determined by attack simulations, which are computationally very time-consuming or even practically infeasible. In this paper, an improved method for network robustness prediction is developed based on learning feature representation using a convolutional neural network (LFR-CNN). In this scheme, higher-dimensional network data are compressed to lower-dimensional representations and then passed to a CNN to perform robustness prediction. Extensive experimental studies on both synthetic and real-world networks, both directed and undirected, demonstrate that 1) the proposed LFR-CNN performs better than two other state-of-the-art prediction methods, with significantly lower prediction errors; 2) LFR-CNN is insensitive to variation of the network size, which significantly extends its applicability; 3) although LFR-CNN needs more time to perform feature learning, it can achieve accurate prediction faster than attack simulations; and 4) LFR-CNN not only accurately predicts network robustness but also provides a good indicator of connectivity robustness, outperforming the classical spectral measures.
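For reference, the short sketch below produces the kind of connectivity-robustness sequence the abstract describes, by simulating a targeted high-degree node-removal attack on a random graph with networkx; the graph model and attack strategy are assumptions, and LFR-CNN itself is not reproduced.

```python
# Connectivity-robustness curve: relative size of the largest connected
# component after each node removal in a degree-targeted attack.
# Graph model and attack strategy are illustrative assumptions.
import networkx as nx

G = nx.erdos_renyi_graph(n=100, p=0.05, seed=0)
n0 = G.number_of_nodes()
robustness = []
while G.number_of_nodes() > 1:
    # Remove the current highest-degree node (a common targeted attack).
    target = max(G.degree, key=lambda kv: kv[1])[0]
    G.remove_node(target)
    giant = max(nx.connected_components(G), key=len)
    robustness.append(len(giant) / n0)
print(robustness[:10])
```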
The article examines the impact of the shock induced by COVID-19 on the Polish stock market. The research object comprises 18 stocks of companies included in the WIG20 index. The impact of the shock is examined in the context of the changing “risk-return” correspondence. Three time intervals are used for the study: before the shock, the shock itself (“shock in fact”), and the aftershock. For the “shock in fact” period, two parameters are introduced that together describe the “reaction” of stocks to a shock: shock deepness and recovery rate. A linear regression relationship between them is identified. In the “before shock” and “aftershock” periods, the “risk-return” correspondence is considered in terms of two approaches: variability and Value-at-Risk. Both approaches show an increased risk in the post-shock period, but to varying degrees: the first approach shows a greater increase than the second, and an explanation of this observation is given. The dynamics of liquidity, measured by the average daily trading volume, is considered as a complementary aspect. These dynamics show an increase in trading volumes in the shock and post-shock periods, which is explained by investors reformatting their portfolios.
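As a small numerical companion to the two risk views compared here, the sketch below computes return variability and a 95% one-day historical Value-at-Risk on a synthetic return series; the simulated data stand in for the WIG20 constituents, which are not used.

```python
# Variability (standard deviation of returns) versus historical Value-at-Risk
# on a synthetic daily-return series (illustrative assumption, not WIG20 data).
import numpy as np

rng = np.random.default_rng(5)
returns = rng.normal(loc=0.0003, scale=0.02, size=250)   # one trading year

volatility = returns.std(ddof=1)
var_95 = -np.percentile(returns, 5)      # 95% one-day historical VaR (loss)
print(f"volatility = {volatility:.4f}, 95% VaR = {var_95:.4f}")
```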
Nathan L.D. Sarmento, José Marques Basílio, Maxsuel F. Cunha
et al.
This paper presents a study of the resistive behavior of a Shape Memory Alloy spring, with a focus on the application of electrical resistance feedback in control systems. Artificial Neural Networks of different topologies were designed to learn the relation between the spring's electrical resistance and the force it exerts. Feedback between layers of the Neural Networks is shown to be a key factor in learning the non-linear and hysteretic behavior of Shape Memory Alloys. Experiments with closed-loop systems showed that the shape memory alloy springs generated forces that converged satisfactorily to the desired reference values. The scientific contribution of this work is the use of electrical resistance variation as feedback for controlling the spring force, eliminating the need for an external force sensor. Neural networks were used for both the sensing process and the system control; in this way, the nonlinear and hysteretic behavior of the shape memory alloy actuator was properly accounted for.
Anesthetic agents are widely used for their hypnotic and sedative effects as part of surgical procedures, but despite their widespread use, dosing of the anesthetic to patients remains suboptimal, which necessitates more effective means of monitoring the depth of anesthesia (DoA) as anesthetic agents are administered. Effective DoA monitoring could improve the optimal dosing for each patient, reducing the incidence of awareness under general anesthesia and post-operative cognitive dysfunction, as well as the incidence of complications associated with overdosing, such as hypertension. This work presents a novel pilot case study of ongoing research into more effective means of DoA prediction, in which patient-specific models are designed using a combination of signal processing and machine learning applied to electroencephalography (EEG) signals acquired from the frontal cortex. This particular case study investigates the use of various intelligence sources, i.e., machine intelligence, representing unsupervised feature extraction from a convolutional neural network (CNN), and expert-based intelligence via handcrafted features, for the prediction of the DoA. The handcrafted features provided the highest prediction accuracy across the various patient data, owing to the ability to bake prior knowledge about the physics of the process into the feature extraction. The highest prediction accuracy was 86.5 ± 9.9 % for the LDA classification model after pre-processing with the Linear Series Decomposition Learner (LSDL) algorithm. The fusion of both intelligence sources provided prediction accuracy equivalent to that of the handcrafted features alone.
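A hedged sketch of the expert-feature route follows: a few handcrafted band-power features computed from synthetic EEG-like signals feed a Linear Discriminant Analysis classifier. The sampling rate, band definitions, and synthetic labels are illustrative assumptions, and the LSDL pre-processing step is not reproduced.

```python
# Handcrafted band-power features + LDA on synthetic EEG-like signals.
# Sampling rate, bands, and labels are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 128                                         # assumed EEG sampling rate
bands = {"delta": (1, 4), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(x):
    f, pxx = welch(x, fs=fs, nperseg=fs)
    return [pxx[(f >= lo) & (f < hi)].sum() for lo, hi in bands.values()]

rng = np.random.default_rng(6)
X, y = [], []
for label, alpha_gain in [(0, 1.0), (1, 4.0)]:   # 0 = lighter, 1 = deeper state
    for _ in range(40):
        t = np.arange(0, 4, 1 / fs)
        x = rng.normal(size=t.size) + alpha_gain * np.sin(2 * np.pi * 10 * t)
        X.append(band_powers(x)); y.append(label)

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))
```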