Frame Replication and Elimination for Reliability (FRER) in Time-Sensitive Networking (TSN) enhances fault tolerance by duplicating critical traffic across disjoint paths. However, always-on FRER configurations introduce persistent redundancy overhead, even under nominal network conditions. This paper proposes a predictive FRER activation framework that anticipates faults using a Key Performance Indicator (KPI)-driven bidirectional Long Short-Term Memory (BiLSTM) model. By continuously analyzing multivariate KPIs—such as latency, jitter, and retransmission rates—the model forecasts potential faults and proactively activates FRER. Redundancy is deactivated upon KPI recovery or after a defined minimum protection window, thereby reducing bandwidth usage without compromising reliability. The framework includes a Python-based simulation environment, a real-time visualization dashboard built with Streamlit, and a fully integrated runtime controller. The experimental results demonstrate substantial improvements in link utilization while preserving fault protection, highlighting the effectiveness of anticipatory redundancy strategies in industrial TSN environments.
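The activate/deactivate logic described above can be sketched as a small controller (an illustrative sketch only: thresholds, the tick-based window, and the class name are ours, not the paper's; the BiLSTM forecaster is assumed to supply the fault probability):

```python
# Hypothetical sketch of the predictive FRER activation logic: redundancy
# turns on when the forecast fault probability crosses a threshold, and
# turns off only after KPIs recover AND a minimum protection window has
# elapsed. All threshold and window values are illustrative.

class PredictiveFrerController:
    def __init__(self, on_threshold=0.7, off_threshold=0.3, min_window=5):
        self.on_threshold = on_threshold    # fault probability that activates FRER
        self.off_threshold = off_threshold  # recovery level that permits deactivation
        self.min_window = min_window        # minimum protection window, in ticks
        self.active = False
        self.ticks_active = 0

    def step(self, fault_probability):
        """Advance one control tick with the forecaster's fault probability."""
        if self.active:
            self.ticks_active += 1
            recovered = fault_probability < self.off_threshold
            if recovered and self.ticks_active >= self.min_window:
                self.active = False          # KPIs recovered and window elapsed
                self.ticks_active = 0
        elif fault_probability >= self.on_threshold:
            self.active = True               # proactively replicate frames
            self.ticks_active = 0
        return self.active
```

Fed one forecast per control tick, the controller holds redundancy on for at least `min_window` ticks even if KPIs recover immediately, matching the minimum-protection-window behavior described above.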
Zhong-Xia Shang, Yukun Zhang, Han-Sen Zhong
et al.
In this work, we give a hybrid quantum-classical algorithm for solving electronic structure problems of molecules using only linear quantum optical systems. The variational ansatz we propose is a hybrid of noninteracting-boson dynamics and classical computational chemistry methods, specifically the Hartree-Fock method and the configuration interaction method. The boson part is built from a linear optical interferometer, which is easier to realize than the well-known unitary coupled cluster (UCC) ansatz composed of quantum gates in the conventional variational quantum eigensolver, and the classical part is merely classical processing acting on the Hamiltonian. The permanents appearing in the boson part provide, with a clear physical intuition, a different kind of resource from the single, double, and higher excitations commonly used in classical methods and the UCC ansatz for exploring chemical quantum states. Such resources can help enhance the accuracy of the methods used in the classical part. We give a scalable hybrid homodyne and photon-number measurement procedure for evaluating the energy value, which has an intrinsic ability to mitigate photon loss errors, and discuss the extra measurement cost induced by the absence of the Pauli exclusion principle for bosons, along with its solutions. To demonstrate our proposal, we run numerical experiments on several molecules and obtain their potential energy curves to chemical accuracy.
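As background for the role of permanents in the boson part, the permanent of an n×n matrix can be computed classically with Ryser's inclusion-exclusion formula (a generic textbook sketch, not the paper's energy-evaluation procedure):

```python
# Ryser's formula: perm(A) = (-1)^n * sum over nonempty column subsets S of
# (-1)^|S| * prod_i (sum_{j in S} a_ij). This plain version is O(2^n * n^2);
# a Gray-code variant reaches O(2^n * n).
from itertools import combinations

def permanent(a):
    """Permanent of a square matrix via Ryser's inclusion-exclusion formula."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

For example, `permanent([[1, 2], [3, 4]])` gives 1·4 + 2·3 = 10; the exponential cost of this classical computation is precisely what makes permanents a plausible quantum resource.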
Graph Neural Networks (GNNs) are powerful tools for learning graph-structured data, but their scalability is hindered by inefficient mini-batch generation, data transfer bottlenecks, and costly inter-GPU synchronization. Existing training frameworks fail to overlap these stages, leading to suboptimal resource utilization. This paper proposes MQ-GNN, a multi-queue pipelined framework that maximizes training efficiency by interleaving GNN training stages and optimizing resource utilization. MQ-GNN introduces Ready-to-Update Asynchronous Consistent Model (RaCoM), which enables asynchronous gradient sharing and model updates while ensuring global consistency through adaptive periodic synchronization. Additionally, it employs global neighbor sampling with caching to reduce data transfer overhead and an adaptive queue-sizing strategy to balance computation and memory efficiency. Experiments on four large-scale datasets and ten baseline models demonstrate that MQ-GNN achieves up to 4.6× faster training time and 30% improved GPU utilization while maintaining competitive accuracy. These results establish MQ-GNN as a scalable and efficient solution for multi-GPU GNN training. The code is available at MQ-GNN.
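The stage-overlap idea can be illustrated with bounded queues and worker threads (a toy stand-in, not MQ-GNN's implementation: the stage names, sentinel protocol, and the doubling "compute" step are ours):

```python
# Illustrative multi-queue pipeline: three training stages (mini-batch
# sampling, host-to-device transfer, compute) run as concurrent workers
# connected by bounded queues, so each stage overlaps with the next
# instead of running serially.
import queue
import threading

SENTINEL = None  # end-of-stream marker passed down the pipeline

def run_pipeline(batches, queue_size=2):
    q_sample = queue.Queue(queue_size)   # bounded: back-pressures the sampler
    q_compute = queue.Queue(queue_size)
    results = []

    def sampler():
        for b in batches:                # stand-in for mini-batch sampling
            q_sample.put(b)
        q_sample.put(SENTINEL)

    def transfer():
        while (b := q_sample.get()) is not SENTINEL:
            q_compute.put(b)             # stand-in for a CPU-to-GPU copy
        q_compute.put(SENTINEL)

    def compute():
        while (b := q_compute.get()) is not SENTINEL:
            results.append(b * 2)        # stand-in for forward/backward pass

    threads = [threading.Thread(target=f) for f in (sampler, transfer, compute)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because the queues are bounded, a slow compute stage automatically throttles sampling, which is the same role the adaptive queue-sizing strategy plays in balancing computation and memory.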
Abstract The development of non-noble metal electrocatalysts for the Oxygen Evolution Reaction (OER) is advancing towards the use of multi-element materials. To reveal the complex correlations in multi-element OER electrocatalysts, we developed an iterative workflow combining high-throughput experiments and AI-generated content (AIGC) processes. A set of 909 universal descriptors for inorganic materials science (compared to 145 in previous literature) was constructed and used as Artificial Neural Network (ANN) input. A large number of statistical ensembles, each individual ANN using a reduced subset of descriptors, were integrated with a new Hierarchical Neural Network (HNN) algorithm. This algorithm addresses the longstanding challenge of balancing overwhelming descriptor numbers against insufficient datasets in traditional ANN approaches to materials science problems. As a result, the combination of AIGC and experimental validation significantly enhanced prediction accuracy, increasing the R2 values from 0.7 to 0.98 for Tafel slopes.
Materials of engineering and construction. Mechanics of materials, Computer software
As autonomous vehicles and other supporting infrastructure (e.g., smart cities and intelligent transportation systems) become more commonplace, the Internet of Vehicles (IoV) is becoming increasingly prevalent. There have been attempts to utilize Digital Twins (DTs) to facilitate the design, evaluation, and deployment of IoV-based systems, for example by supporting high-fidelity modeling, real-time monitoring, and advanced predictive capabilities. However, the literature review undertaken in this paper suggests that integrating DTs into IoV-based system design and deployment remains an understudied topic. In addition, this paper explains how DTs can benefit IoV system designers and implementers, and describes several challenges and opportunities for future researchers.
Kristen Y. Edwards, Stephen J. LeBlanc, Trevor J. DeVries
et al.
Establishing accurate illness and treatment rates in dairy calves is crucial, yet calf health records are often incomplete. Thus, the objective of this study was to investigate dairy farmers' barriers to recording calf illnesses and treatments on dairy farms in Ontario, Canada. An online survey was completed by a convenience sample of 88 Ontario dairy farms in 2022, with 34 questions regarding farm demographics, current practices surrounding record keeping and analysis, and factors that would improve recording compliance. Multivariable models were built to assess associations between explanatory variables and the following outcomes: likelihood of making management or treatment protocol changes based on records analysis, factors that would increase the use of electronic recording methods, and whether all calf illnesses and treatments are recorded. Pearson's chi-squared tests were also used to investigate associations between explanatory variables and whether the respondent agreed or disagreed with a proposed reason for why a calf illness or treatment would not be recorded on their farm. Producers had 3.45 times greater odds of recording all antimicrobial treatments if they used a computer software system compared with those that did not. With respect to anti-inflammatory treatments, producers had 3.11 times greater odds of recording these treatments if records were located in the calf barn than elsewhere. Nonfamily employees had 6.08 times greater odds of recording all supportive therapy treatments than farm owners. When calf health records were kept in the calf barn, respondents were less likely to report that illnesses were not recorded due to time constraints (5% vs. 36% if records were elsewhere) or because calf health records were not analyzed (10% vs. 34% if records were elsewhere).
On farms that recorded calf treatments in a paper booklet, respondents were more likely to report that treatments were not recorded because calf health records were not analyzed (44% for paper records vs. 21% for other systems). The most commonly indicated factors that would increase recording of illness were recording with a mobile app (27% of respondents) and for the recording system to be easy to use (31% of respondents). Overall, these data indicate that recording may be improved by keeping calf health records in close proximity to the calves and using a recording method that allows for data analysis. An easy-to-use mobile app may also improve recording if it could be used in the calf barn, provide data analytics, and allow for time-efficient data entry.
The joint extraction of entities and relations provides key technical support for the construction of knowledge graphs, and the problem of overlapping relations has always been the focus of joint extraction model research. Many existing methods use multi-step modeling; although they achieve good results on overlapping relations, they introduce the problem of exposure bias. To solve the problems of overlapping relations and exposure bias at the same time, a joint entity and relation extraction method (DE-AA) based on word-pair distance embedding and an axial attention mechanism is proposed. Firstly, table features representing word-pair relations are constructed, and word-pair distance information is added to optimize their representation. Secondly, an axial attention model based on row attention and column attention is applied to enhance the table features, which reduces computational complexity while fusing global features. Finally, the table features are mapped to each relation space to generate relation-specific word-pair relation tables; a table-filling method assigns a label to each item in a table, and triples are extracted by triple classification. The proposed model is evaluated on the public datasets NYT and WebNLG. Experimental results show that it achieves better performance than other baseline models and has significant advantages in handling overlapping or multiple relations.
The efficient operation of mobile crowd-sensing (MCS) largely depends on whether a large number of users participate in sensing tasks. In reality, however, owing to rising sensing costs and the risk of privacy disclosure, users' enthusiasm for participation is low, so an effective means is needed to ensure users' privacy while also encouraging them to participate actively. In response to these issues, a new privacy incentive mechanism of bilateral auction with comprehensive scoring (BCS), based on local differential privacy protection technology, is proposed. This incentive mechanism includes three parts: an auction mechanism, a data perturbation and aggregation mechanism, and a reward and punishment mechanism. The auction mechanism comprehensively considers the impact of various factors on users' sensing tasks and, to some extent, improves task matching. The data perturbation and aggregation mechanism strikes a balance between privacy protection and data accuracy, protecting user privacy well while ensuring data quality. The reward and punishment mechanism rewards users of high integrity and activity to encourage active participation in sensing tasks. Experimental results indicate that BCS can improve platform revenue and task matching rate while ensuring the quality of sensing data.
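A common local-differential-privacy perturbation primitive is randomized response; the sketch below is illustrative of the general technique (the paper's actual perturbation and aggregation mechanism may differ, and the epsilon value and proportions are ours):

```python
# Randomized response for a binary report under local differential privacy:
# each user keeps their true bit with probability e^eps / (e^eps + 1) and
# flips it otherwise; the server debiases the aggregate to estimate the
# true proportion without seeing any individual's true value.
import math
import random

def perturb(bit, eps, rng):
    p_truth = math.exp(eps) / (math.exp(eps) + 1)  # probability of reporting truthfully
    return bit if rng.random() < p_truth else 1 - bit

def estimate_proportion(reports, eps):
    p = math.exp(eps) / (math.exp(eps) + 1)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)      # unbiased estimator of the true mean

rng = random.Random(0)
true_bits = [1] * 3000 + [0] * 7000                # true proportion = 0.3
reports = [perturb(b, eps=1.0, rng=rng) for b in true_bits]
est = estimate_proportion(reports, eps=1.0)
```

With 10,000 seeded reports at eps = 1.0, the debiased estimate lands close to the true 0.3 even though every individual report is plausibly deniable, which is the balance between privacy protection and data accuracy described above.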
Abstract Background: It is well recognized that the molar activity of a radioligand is an important pharmacokinetic parameter, especially in positron emission tomography (PET) of small animals. Occupation of a significant number of binding sites by radioligand molecules results in low radioligand accumulation in a target region (mass effect). Nevertheless, small-animal PET studies have often been performed without consideration of the molar activity or molar dose of radioligands. A simulation study would therefore help to assess the importance of the mass effect in small-animal PET. Here, we introduce a new compartmental model-based numerical method, which runs on commonly used spreadsheet software, to simulate the effect of molar activity or molar dose on the pharmacokinetics of radioligands. Results: Assuming a two-tissue compartmental model, time-concentration curves of a radioligand were generated using four simulation methods and the well-known Runge–Kutta numerical method. The values were compared with theoretical values obtained under an ultra-high molar activity condition (pseudo-first-order binding kinetics), a steady-state condition and an equilibrium condition (second-order binding kinetics). For all conditions, the simulation method using the simplest calculation yielded values closest to the theoretical values and comparable with those obtained using the Runge–Kutta method. To satisfy a maximum occupancy less than 5%, simulations showed that a molar activity greater than 150 GBq/μmol is required for a model radioligand when 20 MBq is administered to a 250 g rat and when the concentration of binding sites in target regions is greater than 1.25 nM. Conclusions: The simulation method used in this study is based on a very simple calculation and runs on widely used spreadsheet software.
Therefore, simulation of radioligand pharmacokinetics using this method can be performed on a personal computer and help to assess the importance of the mass effect in small-animal PET. This simulation method also enables the generation of a model time-activity curve for the evaluation of kinetic analysis methods.
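The pseudo-first-order (ultra-high molar activity) case can be sketched with the standard two-tissue compartment equations and the classical Runge–Kutta method mentioned above (an illustrative sketch: the rate constants and the exponential plasma input are ours, not the study's values, and the study's second-order binding case is not shown):

```python
# Pseudo-first-order two-tissue compartment model integrated with classical
# 4th-order Runge-Kutta:
#   dC1/dt = K1*Cp(t) - (k2 + k3)*C1 + k4*C2
#   dC2/dt = k3*C1 - k4*C2
# Cp(t) = exp(-t) is a stand-in plasma input curve, not measured data.
import math

def two_tissue_rk4(K1, k2, k3, k4, t_end=60.0, dt=0.01):
    cp = lambda t: math.exp(-t)
    def f(t, c1, c2):
        return (K1 * cp(t) - (k2 + k3) * c1 + k4 * c2,
                k3 * c1 - k4 * c2)
    c1 = c2 = 0.0
    t = 0.0
    curve = [(t, c1 + c2)]                      # total tissue concentration
    for _ in range(round(t_end / dt)):
        a1, b1 = f(t, c1, c2)
        a2, b2 = f(t + dt / 2, c1 + dt / 2 * a1, c2 + dt / 2 * b1)
        a3, b3 = f(t + dt / 2, c1 + dt / 2 * a2, c2 + dt / 2 * b2)
        a4, b4 = f(t + dt, c1 + dt * a3, c2 + dt * b3)
        c1 += dt / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
        c2 += dt / 6 * (b1 + 2 * b2 + 2 * b3 + b4)
        t += dt
        curve.append((t, c1 + c2))
    return curve
```

This is the same update a spreadsheet row-by-row scheme approximates; the study's point is that even simpler per-row calculations reproduce these Runge–Kutta curves closely.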
Medical physics. Medical radiology. Nuclear medicine, Therapeutics. Pharmacology
WANG Jinwei, ZENG Kehui, ZHANG Jiawei, LUO Xiangyang, MA Bin
The rapid development of generative adversarial networks (GANs) has led to unprecedented success in the field of image generation. The emergence of new GANs such as StyleGAN makes generated images more realistic and deceptive, posing a greater threat to national security, social stability, and personal privacy. In this paper, a detection algorithm based on a space-frequency joint two-stream convolutional neural network is proposed. Since GAN-generated images leave clearly discernible artifacts on the spectrum due to the up-sampling operation in the generation process, a learnable frequency-domain filter kernel and a frequency-domain network are designed to fully learn and extract frequency-domain features. To reduce the influence of information discarded in the transformation to the frequency domain, a spatial-domain network is also designed to learn the discriminative spatial-domain features of the image content itself. Finally, the two features are fused to detect face images generated by GANs. Experimental results on multiple datasets show that the proposed model outperforms existing algorithms in detection accuracy on high-quality generated datasets and in generalization across datasets. The method is also more robust to JPEG compression, random cropping, Gaussian blur, and other operations. In addition, it performs well on a locally generated GAN face dataset, further demonstrating its generality and wide application prospects.
Least Mean Square (LMS) adaptive filtering algorithms, which use the mean square error as the cost function, have the advantages of simple structure, easy implementation, low computational complexity, and good stability. When estimating the impulse response of an unknown system, the traditional Diffusion LMS (DLMS) algorithm is usually corrupted by noise, which reduces its estimation accuracy. To address this problem, a Frequency-domain Correlation DLMS (FCDLMS) algorithm is proposed. Because the correlation coefficient of uncorrelated signals approaches zero, the autocorrelation function of the input signal and the cross-correlation function of the input and desired signals in the DLMS algorithm are used as new observation data, yielding a Correlation DLMS (CDLMS) algorithm. This CDLMS algorithm is then extended to the frequency domain, where a multiplication rather than a convolution is used to update the tap coefficients, reducing computational complexity. Experimental results show that, compared with the traditional DLMS algorithm, the FCDLMS algorithm estimates the impulse response of an unknown system more accurately over distributed adaptive networks in noisy environments, and it adapts better to complex settings with many taps, many nodes, and strong noise.
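The core tap update that the diffusion and frequency-domain-correlation variants build on is plain single-node LMS system identification, which can be sketched as follows (illustrative: the step size, tap count, and the noiseless 2-tap "unknown system" are ours):

```python
# Baseline single-node LMS identifying an unknown FIR system:
# tap update  w[k+1] = w[k] + mu * e[k] * x_vec[k],  e[k] = d[k] - w[k]^T x_vec[k].
import random

def lms_identify(x, d, n_taps, mu):
    w = [0.0] * n_taps
    for n in range(n_taps - 1, len(x)):
        x_vec = x[n - n_taps + 1 : n + 1][::-1]       # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, x_vec))  # adaptive filter output
        e = d[n] - y                                  # estimation error
        w = [wi + mu * e * xi for wi, xi in zip(w, x_vec)]
    return w

rng = random.Random(1)
h_true = [0.8, -0.4]                                  # unknown 2-tap impulse response
x = [rng.uniform(-1, 1) for _ in range(5000)]
d = [h_true[0] * x[n] + (h_true[1] * x[n - 1] if n > 0 else 0.0)
     for n in range(len(x))]
w_est = lms_identify(x, d, n_taps=2, mu=0.05)         # converges toward h_true
```

In the noiseless case the taps converge to the true impulse response; the noise sensitivity of this plain update in distributed networks is exactly what motivates the correlation-based observations above.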
Saif Ur Rehman, Noha Alnazzawi, Jawad Ashraf
et al.
Internet of Things (IoT)-backed smart shopping carts are generating an extensive amount of data in shopping markets around the world. This data can be cleaned and utilized for setting business goals and strategies. Artificial intelligence (AI) methods are used to efficiently extract meaningful patterns or insights from such big data. One such technique is Association Rule Mining (ARM), which is used to extract strategic information from the data. The crucial step in ARM is Frequent Itemsets Mining (FIM), followed by association rule generation. The FIM process starts with a support threshold parameter tuned by the user to produce the required number of frequent patterns; in practice, the user must rerun the routine by trial and error until the required number of patterns is obtained. The research community has therefore shifted its focus to mining the top-K most frequent patterns without a user-tuned support threshold parameter. Top-K most-frequent-pattern mining is considered a harder task than user-tuned, support-threshold-based FIM. One reason top-K mining techniques are computationally intensive is that they produce a large number of candidate itemsets, and they use no explicit pruning mechanism apart from an internally auto-maintained support threshold. Therefore, we propose an efficient TKIFIs Miner algorithm that uses a depth-first search strategy for top-K identical frequent patterns mining. TKIFIs Miner uses specialized one- and two-itemset-based pruning techniques for topmost patterns mining. Comparative analysis is performed on benchmark datasets, for example, Retail with 16,469 items, and T40I10D100K and T10I4D100K with 1,000 items each.
The evaluation results show that TKIFIs Miner outperforms recently available topmost-patterns mining methods that do not use a support threshold parameter.
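The threshold-free top-K idea can be illustrated in miniature (a toy sketch, not the TKIFIs Miner: it counts only 1- and 2-itemsets by brute force, whereas the real algorithm searches all lengths depth-first with specialized pruning):

```python
# Toy threshold-free top-K itemset mining: count supports of 1- and
# 2-itemsets, then keep the K most frequent. The K-th support effectively
# plays the role of the internally auto-maintained support threshold,
# so the user never tunes a minimum-support parameter.
import heapq
from collections import Counter
from itertools import combinations

def top_k_itemsets(transactions, k):
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))              # deduplicate items per transaction
        for item in items:
            counts[(item,)] += 1
        for pair in combinations(items, 2):
            counts[pair] += 1
    # k highest supports; ties broken lexicographically for determinism
    return heapq.nsmallest(k, counts.items(), key=lambda kv: (-kv[1], kv[0]))

tx = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a"]]
```

On the toy transactions `tx`, the top-3 result is the three single items a, b, c with supports 4, 3, 3; all pairs have support 2 and fall below the auto-maintained cut-off.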
Neethi Deborah Devadason, Senthilkumar S, Rajasekar S
Periodontitis can lead to the loss of hard and soft tissues of the oral cavity. Dental implants have become a reliable treatment modality in recent times, especially with the evolution of digital technology such as CBCT, implant planning software, computer-assisted manufacturing, and guided implant surgery. Documentation of such advancements and their clinical implications would add to the existing knowledge on implant dentistry, encouraging dentists to approach complex implant surgeries confidently. This paper discusses the rehabilitation of missing teeth by applying computer-assisted guided implant placement in two cases with deficient bone volume anteriorly and posteriorly in the maxilla, respectively. Digital planning and careful execution have resulted in precise implant placement and complete osseointegration. In these cases, we could devise treatment plans with both anatomical and prosthetic considerations while being minimally invasive and more predictable, with shorter treatment time and greater patient comfort.
In this paper, we present a data-driven method for crowd simulation with a holonification model. With this extra module, simulation accuracy increases and agents exhibit more realistic behaviors. First, we show how to use the concept of a holon in crowd simulation and how effective it is, using simple rules for holonification. Using real-world data, we model the rules for each agent joining and leaving a holon with random forests, and then use this model in simulation. Because the data come from a specific environment, we also test the model in another environment. The results show that the rules derived from the first environment hold in the second one, confirming the generalization capability of the proposed method.
Recently, there have been significant advances in image super-resolution based on generative adversarial networks (GANs), achieving breakthroughs in generating images with high subjective quality. However, remaining challenges need to be met, such as simultaneously recovering finer texture details at large upscaling factors and mitigating geometric transformation effects. In this paper, we propose a novel robust super-resolution GAN (namely, RSR-GAN) that can simultaneously perform geometric correction and recover finer texture details. Specifically, since the performance of the generator depends on the discriminator, we propose a novel discriminator design that incorporates a spatial transformer module with residual learning to improve the discrimination of fake and true images by removing geometric noise, thereby enhancing the super-resolution of geometrically corrected images. Finally, to further improve perceptual quality, we introduce an additional DCT loss term into the existing loss function. Extensive experiments, measured by both PSNR and SSIM, show that our method achieves a high level of robustness against a number of geometric transformations, including rotation, translation, a combination of rotation and scaling, and a combination of rotation, translation, and scaling. Benchmarked against existing state-of-the-art SR methods, our method delivers superior performance on a wide range of publicly available and widely adopted datasets.
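The general shape of a DCT loss term can be sketched in one dimension (an illustrative sketch only: the paper applies the idea to images inside a GAN objective, and typically the coefficients would be weighted by frequency rather than penalized uniformly as here):

```python
# Illustrative 1-D DCT-domain loss: transform both signals with the
# orthonormal DCT-II and penalize squared coefficient differences.
# Because this DCT is orthonormal, the unweighted version equals the
# plain squared error by Parseval; perceptual variants reweight
# coefficients by frequency.
import math

def dct2(x):
    """Orthonormal DCT-II of a 1-D sequence (norm='ortho' convention)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def dct_loss(pred, target):
    cp, ct = dct2(pred), dct2(target)
    return sum((a - b) ** 2 for a, b in zip(cp, ct))
```

For images one would use a 2-D DCT and a frequency-dependent weighting; the uniform version above is only the starting point of such a term.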
To solve the problem of low recall in object detection with deep reinforcement learning, a dynamic hierarchical search method with region offsets, inspired by the human visual mechanism, is proposed. It incorporates the idea of anchors into the original hierarchical search method by adding a region offset, which avoids the limitations of pure hierarchical search and makes the search more flexible. This paper combines the advantages of Double DQN and Dueling DQN, using a Double Dueling DQN network as the agent's deep reinforcement learning network. Experimental results show that both accuracy and recall are higher than with the original hierarchical search method.