In this paper, we propose a differential evolution (DE) algorithm specifically tailored for the design of Linear-Quadratic-Gaussian (LQG) controllers in quantum systems. Building upon the foundational DE framework, the algorithm incorporates specialized modules, including relaxed feasibility rules, a scheduled penalty function, adaptive search range adjustment, and the ``bet-and-run'' initialization strategy. These enhancements improve the algorithm's exploration and exploitation capabilities while addressing the unique physical realizability requirements of quantum systems. The proposed method is applied to a quantum optical system, where three distinct controllers with varying configurations relative to the plant are designed. The resulting controllers demonstrate superior performance, achieving lower LQG performance indices compared to existing approaches. Additionally, the algorithm ensures that the designs comply with physical realizability constraints, guaranteeing compatibility with practical quantum platforms. The proposed approach holds significant potential for application to other linear quantum systems in performance optimization tasks subject to physically feasible constraints.
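The specialized modules listed above extend a standard DE loop. A minimal, generic DE/rand/1/bin sketch is shown below; it is not the paper's tailored variant (the relaxed feasibility rules, scheduled penalty, adaptive ranges and bet-and-run initialization are all omitted), only the foundational framework being built upon.

```python
import numpy as np

def de_minimize(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Generic DE/rand/1/bin minimizer (sketch only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: three distinct donors, none equal to i
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover, forcing at least one mutant coordinate
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:           # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

In the paper's setting, `f` would wrap the LQG performance index with the physical-realizability constraints folded in through the penalty schedule.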
Virtualizing the Radio-Access Network (RAN) is increasingly viewed as an enabler of affordable 5G expansion and a stepping stone toward AI-native 6G. Most discussions, however, still approach spectrum policy, cloud engineering and organizational practice as separate topics. This paper offers an integrated perspective spanning four pillars -- science, technology, business strategy and culture. A comparative U.S.\ case study illustrates how mid-band contiguity, complemented by selective mmWave capacity layers, can improve both coverage and churn when orchestrated through software-defined carrier aggregation. We derive analytic capacity and latency bounds for Split 7.2 $\times$ vRAN/O-RAN deployments, quantify the throughput penalty of end-to-end 256-bit encryption, and show how GPU/FPGA off-load plus digital-twin-driven automation keeps the hybrid automatic repeat request (HARQ) round-trip within a 0.5 ms budget. When these technical enablers are embedded in a physics-first delivery roadmap, average vRAN cycle time drops an order of magnitude -- even in the presence of cultural headwinds such as ``dual-ladder'' erosion. Three cybernetic templates -- the Clock-Hierarchy Law, Ashby's Requisite Variety and a delay-cost curve -- are then used to explain why silo-constrained automation can amplify, rather than absorb, integration debt. Looking forward, silicon-paced 6G evolution (9-12 month node shrinks, sub-THz joint communication-and-sensing, chiplet architectures and optical I/O) calls for a ``dual-resolution planning grid'' that couples five-year spectrum physics with six-month silicon sprints. The paper closes with balanced, action-oriented recommendations for operators, vendors and researchers on sub-THz fronthaul, AI-native security, energy-proportional accelerators and zero-touch assurance.
Rusudan Makhachashvili, Nataliia Vinnikova, Ivan Semenist
et al.
In times of war and crisis, higher education institutions (HEIs) face unprecedented challenges requiring transdisciplinary adaptability, resilience, and innovative leadership. Digital transformation plays a crucial role in sustaining transdisciplinary academic processes, institutional governance, and crisis management. This study aims to examine the transdisciplinary strategies deployed by Ukrainian universities in navigating wartime impediments while fostering digital institutional leadership, ensuring academic sustainability, and strengthening governance frameworks. Drawing from the universities' experience in educational leadership, strategic management, and crisis adaptation, the study explores digital governance, AI-enhanced institutional resilience, and leadership frameworks rooted in servant leadership philosophy. The paper highlights key institutional responses, including the integration of digitalized administrative workflows, crisis management systems, and AI-powered strategic decision-making to support academic operations during wartime uncertainty. The applied transdisciplinary lens contributes to the holistic modeling of the processes and outcomes of updating the models and mechanisms of the highly dynamic educational communication system in the digital environment, both as a whole and in its individual formats, under emergency digitization measures of different types.
This paper proposes a distributed optimization algorithm with a convergence time that can be assigned in advance according to task requirements. To this end, a sliding manifold is introduced to drive the sum of local gradients toward zero, based on which a distributed protocol is derived to reach a consensus minimizing the global cost. A novel approach for convergence analysis is derived in a unified settling time framework, resulting in an algorithm that can precisely converge to the optimal solution at the prescribed time. The method is appealing as it only requires the primal states to be shared over the network, which implies lower communication requirements. The result is extended to scenarios with time-varying objective functions by introducing local gradient prediction and non-smooth consensus terms. Numerical simulations are provided to corroborate the effectiveness of the proposed algorithms.
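For orientation, the sketch below shows baseline distributed gradient descent (DGD). It is not the paper's prescribed-time, sliding-manifold protocol, merely the classical primal-sharing scheme such methods improve upon, run on a hypothetical quadratic cost per agent.

```python
import numpy as np

def dgd(grads, W, x0, step=0.01, n_iter=5000):
    """Baseline DGD: each agent mixes its state with neighbours through a
    doubly stochastic weight matrix W, then steps along its own local
    gradient. Only primal states are shared over the network."""
    x = np.array(x0, float)
    for _ in range(n_iter):
        mixed = W @ x                                        # consensus step
        local = np.array([g(xi) for g, xi in zip(grads, x)])
        x = mixed - step * local                             # local gradient step
    return x

# Hypothetical example: agent i holds cost (x - a_i)^2, so the optimum of
# the summed global cost is the mean of the a_i.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])                           # doubly stochastic
targets = [0.0, 1.0, 2.0]
grads = [lambda x, a=a: 2.0 * (x - a) for a in targets]
x_final = dgd(grads, W, x0=[0.0, 0.0, 0.0])                  # all agents near 1.0
```

DGD with a constant step only reaches a neighbourhood of the optimum in asymptotic time; the paper's contribution is exact convergence at a time fixed in advance.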
Industrial Multivariate Time Series (MTS) data provide a critical view of the industrial field, allowing people to understand the state of machines. However, due to data collection difficulty and privacy concerns, the data available for building industrial intelligence and industrial large models is far from sufficient. Therefore, industrial time series data generation is of great importance. Existing research usually applies Generative Adversarial Networks (GANs) to generate MTS. However, GANs suffer from an unstable training process due to the joint training of the generator and discriminator. This paper proposes a temporal-augmented conditional adaptive diffusion model, termed Diff-MTS, for MTS generation. It aims to better handle the complex temporal dependencies and dynamics of MTS data. Specifically, a conditional Adaptive Maximum-Mean Discrepancy (Ada-MMD) method is proposed for the controlled generation of MTS, which does not require a classifier to control the generation and improves the condition consistency of the diffusion model. Moreover, a Temporal Decomposition Reconstruction UNet (TDR-UNet) is established to capture complex temporal patterns and further improve the quality of the synthetic time series. Comprehensive experiments on the C-MAPSS and FEMTO datasets demonstrate that the proposed Diff-MTS performs substantially better in terms of diversity, fidelity, and utility compared with GAN-based methods. These results show that Diff-MTS facilitates the generation of industrial data, contributing to intelligent maintenance and the construction of industrial large models.
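The conditional Ada-MMD builds on the standard Maximum Mean Discrepancy. A minimal (biased) MMD² estimate with an RBF kernel can be sketched as follows; the paper's adaptive and conditional components are not reproduced here.

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy with an RBF
    kernel -- the base quantity an adaptive/conditional MMD builds on.
    X, Y: (n, d) and (m, d) sample matrices."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

A small MMD² indicates the two sample sets are hard to distinguish, which is the sense in which such a term can enforce consistency between generated and real conditional distributions.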
This is a theoretical paper on "Deep Learning" misconduct in particular and Post-Selection in general. As far as the author knows, the first peer-reviewed papers on Deep Learning misconduct are [32], [37], [36]. Regardless of learning modes, e.g., supervised, reinforcement, adversarial, and evolutionary, almost all machine learning methods (except for a few methods that train a sole system) are rooted in the same misconduct -- cheating and hiding -- (1) cheating in the absence of a test and (2) hiding bad-looking data. It was reasoned in [32], [37], [36] that authors must report at least the average error of all trained networks, good and bad, on the validation set (called general cross-validation in this paper). Better still, they should also report five percentile positions of the ranked errors. From the new analysis here, we can see that the hidden culprit is Post-Selection. This is also true for Post-Selection on hand-tuned or searched hyperparameters, because they are random, depending on random observation data. Does cross-validation on data splits rescue Post-Selections from the Misconducts (1) and (2)? The new result here says: No. Specifically, this paper reveals that using cross-validation for data splits is insufficient to exonerate Post-Selections in machine learning. In general, Post-Selections of statistical learners based on their errors on the validation set are statistically invalid.
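The effect of Post-Selection is easy to reproduce with a toy simulation (entirely illustrative; the numbers below are not from the paper): a batch of chance-level "networks" is scored on a validation set, and only the luckiest one is reported.

```python
import numpy as np

# Illustrative toy: 30 "trained networks" that are all chance-level
# classifiers (true error 0.5) are scored on a 100-sample validation set.
# Post-Selection reports only the luckiest one and hides the rest.
rng = np.random.default_rng(0)
n_nets, n_val = 30, 100
val_err = rng.binomial(n_val, 0.5, size=n_nets) / n_val   # noisy estimates of 0.5

best = int(np.argmin(val_err))
print("post-selected validation error:", val_err[best])   # looks well below 0.5
print("average validation error:      ", val_err.mean())  # honest report, near 0.5
```

The gap between the two printed numbers is produced purely by selection on validation noise, which is the paper's point about reporting the average (and ranked percentiles) of all trained networks rather than the minimum.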
Evolutionary algorithms, such as Differential Evolution, excel in solving real-parameter optimization challenges. However, the effectiveness of a single algorithm varies across different problem instances, necessitating considerable effort in algorithm selection or configuration. This paper aims to address this limitation by leveraging the complementary strengths of a group of algorithms and dynamically scheduling them throughout the optimization process for specific problems. We propose a deep reinforcement learning-based dynamic algorithm selection framework to accomplish this task. Our approach models dynamic algorithm selection as a Markov Decision Process, training an agent in a policy gradient manner to select the most suitable algorithm according to the features observed during the optimization process. To empower the agent with the necessary information, our framework incorporates a thoughtful design of landscape and algorithmic features. Meanwhile, we employ a sophisticated deep neural network model to infer the optimal action, ensuring informed algorithm selections. Additionally, an algorithm context restoration mechanism is embedded to facilitate smooth switching among different algorithms. These mechanisms together enable our framework to seamlessly select and switch algorithms in a dynamic online fashion. Notably, the proposed framework is simple and generic, offering potential improvements across a broad spectrum of evolutionary algorithms. As a proof-of-principle study, we apply this framework to a group of Differential Evolution algorithms. The experimental results showcase the remarkable effectiveness of the proposed framework, not only enhancing the overall optimization performance but also demonstrating favorable generalization ability across different problem classes.
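Stripped to its core, learned algorithm selection can be illustrated with a toy REINFORCE bandit over two candidate optimizers. The rewards below are hypothetical, and the paper's landscape features, deep policy network and context restoration are not modeled.

```python
import numpy as np

# Toy sketch: a softmax policy over two candidate optimizers is trained
# with REINFORCE on a bandit-style reward (hypothetical mean per-step
# improvement of each algorithm). State features are omitted entirely.
rng = np.random.default_rng(0)
theta = np.zeros(2)                         # policy logits, one per algorithm
mean_reward = [0.2, 0.8]                    # hypothetical reward of each algorithm
for _ in range(3000):
    p = np.exp(theta) / np.exp(theta).sum() # softmax policy
    a = rng.choice(2, p=p)                  # pick an algorithm
    r = mean_reward[a] + 0.1 * rng.standard_normal()
    grad = -p.copy()
    grad[a] += 1.0                          # gradient of log pi(a)
    theta += 0.05 * r * grad                # REINFORCE update
p = np.exp(theta) / np.exp(theta).sum()
print("final selection probabilities:", p)  # concentrates on the better algorithm
```

In the full framework the action additionally depends on observed landscape and algorithmic features, so the learned policy can switch algorithms as the optimization stage changes rather than settling on one.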
Ahmad Mohammad Saber, Amr Youssef, Davor Svetinovic
et al.
Line Current Differential Relays (LCDRs) are high-speed relays increasingly used to protect critical transmission lines. However, LCDRs are vulnerable to cyberattacks. Fault-Masking Attacks (FMAs) are stealthy cyberattacks performed by manipulating the remote measurements of the targeted LCDR to disguise faults on the protected line. Hence, they remain undetected by this LCDR. In this paper, we propose a two-module framework to detect FMAs. The first module is a Mismatch Index (MI) developed from the protected transmission line's equivalent physical model. The MI is triggered only if there is a significant mismatch between the LCDR's local and remote measurements while the LCDR itself is untriggered, which indicates an FMA. After the MI is triggered, the second module, a neural network-based classifier, promptly confirms that the triggering event is a physical fault lying on the line protected by the LCDR before declaring the occurrence of an FMA. The proposed framework is tested using the IEEE 39-bus benchmark system. Our simulation results confirm that the proposed framework can accurately detect FMAs on LCDRs and is not affected by normal system disturbances, variations, or measurement noise. Our experimental results using OPAL-RT's real-time simulator confirm the proposed solution's real-time performance capability.
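The two ideas can be sketched numerically: the percentage-differential criterion of the kind an LCDR applies, and a hypothetical mismatch check against a model-predicted remote current. The paper derives the MI from the line's full physical model; the settings and signals below are purely illustrative.

```python
import numpy as np

def differential_trip(i_local, i_remote, pickup=0.2, slope=0.3):
    """Simplified percentage-differential criterion (per-unit phasors,
    illustrative settings): trip when the differential current exceeds
    a pickup plus a slope of the restraining current."""
    i_diff = np.abs(i_local + i_remote)
    i_rest = 0.5 * (np.abs(i_local) + np.abs(i_remote))
    return i_diff > pickup + slope * i_rest

def mismatch_index_alarm(i_remote_reported, i_remote_model, threshold=1.0):
    """Hypothetical MI-style check: flag a large gap between the remote
    current the LCDR receives over the channel and the value predicted
    by the line's physical model."""
    return np.abs(i_remote_reported - i_remote_model) > threshold

# Healthy through-load: currents into the line sum to ~zero, relay quiet.
# Internal fault: large differential, relay trips.
# FMA: the attacker reports a remote phasor cancelling the local one, so
# the relay stays quiet -- but the model-predicted remote current exposes
# the manipulation.
```

The second module in the paper then confirms, via a classifier, that the flagged event is indeed a fault on the protected line before an FMA is declared.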
In the contemporary landscape, the fields of cybernetics, artificial intelligence, and digital technology significantly impact society, reshaping production processes, decision-making frameworks, and human behaviors. Training engineers with transversal skills becomes imperative to navigate workflow complexities and communicate across these disciplines. We propose a new learning approach structured around expert prerequisites, integrating AI principles into an Embedded Systems engineering track. Our module focuses on creating an autonomous driving vehicle using an autonomous robot kit, fostering interdisciplinary learning. Real-time demonstrations assess learning outcomes, emphasizing problem-solving skills. Inspired by recent concepts in interdisciplinary assessment, our evaluation criteria emphasize functionality, integrated idea defense, and written reports. The defense organization scheme fosters positive perceptions of interdisciplinary links.
Jeffry Vincent Louis, Noerlina Noerlina, Dicky Hida Syahchari
The purpose of this study is to determine whether artificial intelligence used in e-commerce influences product recommendations for users. The study examines how strongly artificial intelligence affects the product recommendations supplied by e-commerce platforms, in terms of consumer behavior when making purchasing decisions. Research methods. This research used bibliometric analysis to map the topic, drawing on articles published between 2017 and 2023 in the Scopus database. Of the 103 articles retrieved by keyword, 29 were retained after screening their content for relevance. Results. Artificial intelligence influences e-commerce, recommendation systems, decision support systems, customer behavior, and customer trust; product recommendations in turn have an impact on e-commerce. Conclusion. The literature review shows that few journals discuss considerations around implementing AI in e-commerce with respect to consumer behavior, customer trust, and purchasing decisions. This study is also useful for generating additional AI-related research in e-commerce, particularly on the fresh subject of product recommendations in e-commerce.
Sergii Lavreniuk, Yevhen Nazarenko, Daria Tulchynska
et al.
Introduction. The study of borehole acoustic waves is an important stage in geophysical well research. The main acoustic parameters are the compressional (P-wave) velocity, the shear (S-wave) velocity, and the Stoneley (L-wave) velocity along the boundary between the rock and the well fluid.
The "Slowness-Time Coherence" (STC) method of estimating the velocity (slowness) is based on the coherence of signal arrays on two or more receivers of the well sonic tool. Compared with traditional acoustic logging, the main advantage of the STC method is the automation of processing. Its main disadvantages are the high cost and complexity of operating multi-channel sonic tools, and its low quality in layers of high anisotropy, high fracturing, carbonate deposits, and in horizontal wells.
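The coherence computation at the heart of STC can be sketched as a semblance over a trial slowness and time window. This is a simplified illustration; the variable names and the exact normalization are assumptions, not the production algorithm.

```python
import numpy as np

def stc_semblance(traces, dt, offsets, slowness, t0, win):
    """Semblance (coherence) of M receiver traces stacked along a trial
    slowness -- the quantity STC scans over a (slowness, time) grid.
    traces: (M, N) array; dt: sample period [s]; offsets: receiver
    distances [m] from a reference; slowness [s/m]; t0, win [s]."""
    M, _ = traces.shape
    idx0 = int(round(t0 / dt))
    w = int(round(win / dt))
    stacked = np.zeros(w)
    energy = 0.0
    for m in range(M):
        shift = int(round(offsets[m] * slowness / dt))   # moveout per receiver
        seg = traces[m, idx0 + shift: idx0 + shift + w]
        stacked += seg
        energy += np.sum(seg ** 2)
    return np.sum(stacked ** 2) / (M * energy + 1e-12)   # 1.0 = fully coherent
```

Scanning this value over trial slownesses: the semblance peaks near 1 when the trial slowness matches the true wave moveout, and falls off when the windows no longer align across receivers.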
These disadvantages caused the STC method to spread slowly until the last decade. At present, however, the world's leading geophysical service companies (Halliburton, Schlumberger, etc.) use sonic tools with 8-12 receivers and 4 modes of the source signal. Over the past decade, the quality of tools and processing technologies has improved, but the problem of the high cost of using modern tools remains extremely relevant in Ukraine.
The purpose of the article is to investigate modern methods of data processing for well sonic tools; to identify the features of the "Slowness-Time Coherence" (STC) algorithm; to propose improvements to the STC method; and to implement, test, and integrate into production an acoustic data processing technology based on the improved STC algorithm.
Results. An improved "Slowness-Time Coherence" (STC) algorithm for calculating the velocity (slowness) of acoustic waves in geological deposits has been developed. STC processing technology for acoustic waves, based on both the basic and the improved STC algorithms, has been implemented in the "GeoPoshuk" software package. A methodology for comparing the improved STC algorithm with the basic one has also been developed; statistical data show the advantage of the improved STC algorithm over the basic one.
Conclusions. The use of the improved STC algorithm provides better automatic data processing compared to the basic STC algorithm.
In transfer learning, transferability is one of the most fundamental problems, which aims to evaluate the effectiveness of arbitrary transfer tasks. Existing research focuses on classification tasks and neglects domain or task differences. More importantly, there is a lack of research on determining whether to transfer or not. To address these gaps, we propose a new analytical approach and metric, Wasserstein Distance based Joint Estimation (WDJE), for transferability estimation and determination in a unified setting: classification and regression problems with domain and task differences. The WDJE facilitates decision-making on whether to transfer or not by comparing the target risk with and without transfer. To enable the comparison, we approximate the target transfer risk by proposing a non-symmetric, easy-to-understand and easy-to-calculate target risk bound that is workable even with limited target labels. The proposed bound relates the target risk to source model performance and to domain and task differences based on Wasserstein distance. We also extend our bound to unsupervised settings and establish the generalization bound from finite empirical samples. Our experiments in image classification and remaining useful life regression illustrate the effectiveness of the WDJE in determining whether to transfer or not, and of the proposed bound in approximating the target transfer risk.
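Since the bound is built on Wasserstein distance, a minimal empirical 1-Wasserstein computation between equal-size 1-D samples is sketched below. It is one hypothetical ingredient of a WDJE-style comparison, not the full metric, which also involves source performance and task-difference terms.

```python
import numpy as np

def wasserstein_1d(x, y):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples.
    For sorted equal-size samples it reduces to the mean absolute
    difference of order statistics."""
    x = np.sort(np.asarray(x, float))
    y = np.sort(np.asarray(y, float))
    assert x.size == y.size, "equal sample sizes assumed in this sketch"
    return float(np.mean(np.abs(x - y)))
```

Intuitively, the smaller this distance between source and target feature distributions, the tighter a Wasserstein-based target risk bound becomes, tilting the transfer-or-not decision toward transfer.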
For decades, researchers have been trying to understand how people form their opinions. This quest has become even more pressing with the widespread usage of online social networks and social media, which seem to amplify the already existing phenomenon of polarization. In this work, we study the problem of polarization assuming that opinions evolve according to the popular Friedkin-Johnsen (FJ) model. The FJ model is one of the few existing opinion dynamics models that has been validated on small/medium-sized social groups. First, we carry out a comprehensive survey of the FJ model in the literature (distinguishing its main variants) and of the many polarization metrics available, deriving an invariant relation among them. Secondly, we derive the conditions under which the FJ variants are able to induce opinion polarization in a social network, as a function of the social ties between the nodes and their individual susceptibility to the opinion of others. Thirdly, we discuss a methodology for finding concrete opinion vectors that are able to bring the network to a polarized state. Finally, our analytical results are applied to two real social network graphs, showing how our theoretical findings can be used to identify polarizing conditions under various configurations.
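The FJ update itself is compact; a minimal simulation sketch follows, using the common formulation x(t+1) = ΛWx(t) + (I − Λ)s, which may differ in notation from the paper's variants.

```python
import numpy as np

def friedkin_johnsen(W, s, susceptibility, n_steps=500):
    """Iterate the common FJ update x(t+1) = L W x(t) + (I - L) s, where
    W is a row-stochastic influence matrix, s the innate opinions, and
    L = diag(susceptibility): 0 = fully stubborn, 1 = fully conforming."""
    s = np.asarray(s, float)
    L = np.diag(susceptibility)
    I = np.eye(len(s))
    x = s.copy()
    for _ in range(n_steps):
        x = L @ W @ x + (I - L) @ s
    return x
```

When every susceptibility is strictly below 1, the iteration converges to the unique fixed point x* = (I − ΛW)⁻¹(I − Λ)s; polarization conditions of the kind studied in the paper concern how W, Λ and s shape this equilibrium.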
Camera-based passive dietary intake monitoring is able to continuously capture the eating episodes of a subject, recording rich visual information, such as the type and volume of food being consumed, as well as the eating behaviours of the subject. However, there is currently no method able to incorporate these visual cues and provide a comprehensive context of dietary intake from passive recording (e.g., whether the subject is sharing food with others, what food the subject is eating, and how much food is left in the bowl). On the other hand, privacy is a major concern when egocentric wearable cameras are used for capturing. In this paper, we propose a privacy-preserving solution (i.e., egocentric image captioning) for dietary assessment with passive monitoring, which unifies food recognition, volume estimation, and scene understanding. By converting images into rich text descriptions, nutritionists can assess individual dietary intake based on the captions instead of the original images, reducing the risk of privacy leakage from images. To this end, an egocentric dietary image captioning dataset has been built, which consists of in-the-wild images captured by head-worn and chest-worn cameras in field studies in Ghana. A novel transformer-based architecture is designed to caption egocentric dietary images. Comprehensive experiments have been conducted to evaluate the effectiveness and to justify the design of the proposed architecture for egocentric dietary image captioning. To the best of our knowledge, this is the first work that applies image captioning to dietary intake assessment in real-life settings.
Beyond generating long and topic-coherent paragraphs in traditional captioning tasks, the medical image report composition task poses more task-oriented challenges by requiring both highly accurate medical term diagnosis and multiple heterogeneous forms of information, including impression and findings. Current methods often generate the most common sentences due to dataset bias for individual cases, regardless of whether those sentences properly capture key entities and relationships. Such limitations severely hinder their applicability and generalization capability in medical report composition, where the most critical sentences lie in the descriptions of abnormal diseases that are relatively rare. Moreover, some medical terms appearing in one report are often entangled with each other and co-occur, e.g. symptoms associated with a specific disease. To enforce the semantic consistency of medical terms incorporated into the final reports and to encourage sentence generation for rare abnormal descriptions, we propose a novel framework that unifies template retrieval and sentence generation to handle both common and rare abnormalities while ensuring semantic coherency among the detected medical terms. Specifically, our approach exploits hybrid-knowledge co-reasoning: i) explicit relationships among all abnormal medical terms to induce the visual attention learning and topic representation encoding for better topic-oriented symptom descriptions; ii) an adaptive generation mode that switches between template retrieval and sentence generation according to a contextual topic encoder. Experimental results on two medical report benchmarks demonstrate the superiority of the proposed framework in terms of both human and metric-based evaluation.