BP Neural Network–Based Kalman Filtering Method Under Multiple Cyberattacks
Zijing Li, Keting Huang, Gang Wang
et al.
This paper proposes a Kalman-gain-driven neural Kalman filtering (KF) defense framework, termed KFDBP, for secure state estimation in cyber–physical systems (CPSs) under denial-of-service (DoS), spoofing, and replay attacks. Unlike end-to-end neural filtering approaches such as KalmanNet, which directly learn state estimators or implicitly approximate the Kalman gain with deep recurrent architectures, the proposed method employs a lightweight back-propagation (BP) neural network to adaptively regulate the Kalman gain online while strictly preserving the classical Kalman filter prediction–correction recursion. By formulating an innovation-oriented Kalman gain learning objective, KFDBP explicitly addresses attack-induced observation uncertainty and non-Gaussian measurement corruption without requiring prior knowledge of attack timing, type, or probability during online estimation. The bounded gain regulation mechanism enhances estimation stability and interpretability, which are critical for safety-sensitive CPS applications, while significantly reducing computational complexity compared with deep neural network–based filters. Extensive Monte Carlo simulations under single and hybrid attack scenarios demonstrate that KFDBP consistently achieves lower estimation error and greater robustness than the conventional Kalman filter and KalmanNet across different attack probabilities, making it suitable for real-time, resource-constrained CPS applications.
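The core mechanism lends itself to a compact sketch: run the standard KF recursion unchanged, but let a small BP network map the innovation to a bounded scale on the Kalman gain. The network size, its inputs, and the sigmoid bounding below are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of a gain-regulated Kalman filter in the spirit of KFDBP.
# The tiny MLP, its input (the raw innovation), and the (0, 1] sigmoid bound
# are assumptions for illustration; the paper's design may differ.
import numpy as np

def mlp_gain_scale(innovation, W1, b1, W2, b2):
    """Tiny BP network: maps the innovation to a bounded scale in (0, 1]."""
    h = np.tanh(W1 @ innovation + b1)      # hidden layer
    logit = (W2 @ h + b2).item()           # scalar output
    return 1.0 / (1.0 + np.exp(-logit))    # sigmoid keeps the scale bounded

def kf_step(x, P, z, F, H, Q, R, net_params):
    # Classical prediction step (unchanged from the standard KF).
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Innovation and the standard Kalman gain.
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Online, bounded regulation of the gain from the innovation:
    # suspicious (large) innovations shrink the gain toward the prediction.
    alpha = mlp_gain_scale(y, *net_params)
    K = alpha * K
    # Classical correction step with the regulated gain.
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```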
PlantCV v2: Image analysis software for high-throughput plant phenotyping
Malia A. Gehan, N. Fahlgren, A. Abbasi
et al.
Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.
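As an illustration of the modular pipeline style PlantCV enables, here is a minimal single-plant segmentation sketch. The calls follow a recent (v4-style) PlantCV API and are assumptions on our part; the v2 API described in this paper used different signatures (for example, a `device` counter threaded through each call).

```python
# Minimal single-plant segmentation sketch with PlantCV.
# NOTE: function names and signatures follow a recent (v4-style) API and
# may differ from the v2 release described above; treat them as assumptions.
from plantcv import plantcv as pcv

img, path, filename = pcv.readimage(filename="plant.png")

# Convert to the LAB green-magenta channel, where plant tissue contrasts
# well with most backgrounds, then threshold and clean the mask.
gray = pcv.rgb2gray_lab(rgb_img=img, channel="a")
mask = pcv.threshold.binary(gray_img=gray, threshold=120, object_type="dark")
mask = pcv.fill(bin_img=mask, size=50)   # drop specks smaller than 50 px

# Each step is an independent, reusable module; full phenotyping pipelines
# are assembled by composing such calls, which is the design goal above.
```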
281 citations
Medicine, Computer Science
Quantum-Cognitive Neural Networks: Assessing Confidence and Uncertainty with Human Decision-Making Simulations
Milan Maksimovic, Ivan S. Maksymov
Contemporary machine learning (ML) systems excel in recognising and classifying images with remarkable accuracy. However, like many computer software systems, they can fail by generating confusing or erroneous outputs or by deferring to human operators to interpret the results and make final decisions. In this paper, we employ the recently proposed quantum tunnelling neural networks (QT-NNs) inspired by human brain processes alongside quantum cognition theory to classify image datasets while emulating human perception and judgment. Our findings suggest that the QT-NN model provides compelling evidence of its potential to replicate human-like decision-making. We also reveal that the QT-NN model can be trained up to 50 times faster than its classical counterpart.
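As a rough illustration of what distinguishes a QT-NN neuron, the sketch below uses the textbook transmission probability through a rectangular potential barrier as an activation function. This specific form, and the unit constants, are our assumptions for illustration; the QT-NN papers derive their own activation from the Schrödinger equation.

```python
# A toy "quantum tunnelling" activation: the textbook transmission
# probability T(E) of a particle through a rectangular barrier of height v0.
# Using this in place of a sigmoid is an illustrative assumption only.
import numpy as np

def tunnelling_activation(energy, v0=1.0, width=1.0, m=1.0, hbar=1.0):
    """Transmission probability T(E) for sub-barrier energies."""
    e = np.clip(energy, 1e-6, v0 - 1e-6)        # keep E below the barrier
    kappa = np.sqrt(2.0 * m * (v0 - e)) / hbar  # decay constant in barrier
    sinh2 = np.sinh(kappa * width) ** 2
    return 1.0 / (1.0 + (v0**2 * sinh2) / (4.0 * e * (v0 - e)))
```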
The Critical Importance of Software for HEP
HEP Software Foundation, Christina Agapopoulou
et al.
Particle physics has an ambitious and broad global experimental programme for the coming decades. Large investments in building new facilities are already underway or under consideration. Scaling the present processing power and data storage needs by the foreseen increase in data rates in the next decade for HL-LHC is not sustainable within the current budgets. As a result, a more efficient usage of computing resources is required in order to realise the physics potential of future experiments. Software and computing are an integral part of experimental design, trigger and data acquisition, simulation, reconstruction, and analysis, as well as related theoretical predictions. A significant investment in computing and software is therefore critical. Advances in software and computing, including artificial intelligence (AI) and machine learning (ML), will be key for solving these challenges. Making better use of new processing hardware such as graphical processing units (GPUs) or ARM chips is a growing trend. This forms part of a computing solution that makes efficient use of facilities and contributes to the reduction of the environmental footprint of HEP computing. The HEP community already provided a roadmap for software and computing for the last EPPSU, and this paper updates that, with a focus on the most resource critical parts of our data processing chain.
hep-ex, physics.comp-ph
Privacy by Design: Aligning GDPR and Software Engineering Specifications with a Requirements Engineering Approach
Oleksandr Kosenkov, Ehsan Zabardast, Davide Fucci
et al.
Context: Consistent requirements and system specifications are essential for the compliance of software systems with the General Data Protection Regulation (GDPR). Both artefacts need to be grounded in the original text and conjointly assure the achievement of privacy by design (PbD). Objectives: There is little understanding of practitioners' perspectives on specification objectives and goals for addressing PbD. Existing approaches do not account for the complex intersection between problem and solution space expressed in GDPR. In this study, we explore the demand for conjoint requirements and system specification for PbD and suggest an approach to address this demand. Methods: We reviewed secondary and related primary studies and conducted interviews with practitioners to (1) investigate the state of practice and (2) understand the underlying specification objectives and goals (e.g., traceability). We developed an approach for requirements and system specification for PbD and evaluated it against the specification objectives. Results: The relationship between problem and solution space, as expressed in GDPR, is instrumental in supporting PbD. We demonstrate how our approach, based on modeling GDPR content with its original legal concepts, contributes to the specification objectives of capturing legal knowledge, supporting specification transparency, and ensuring traceability. Conclusion: GDPR demands need to be addressed at different levels of abstraction across the engineering lifecycle to achieve PbD. Legal knowledge specified in the GDPR text should be captured in specifications to address the demands of different stakeholders and ensure compliance. While our results confirm the suitability of our approach for practical needs, we also revealed specific needs for its future effective operationalization.
Features of Computer Spatial Visualization and Histotopography of Odontogenic Keratocysts of the Jaws in Cases of Difficult Diagnosis
D.S. Avetikov, V.M. Havryliev, D.V. Steblovkyi
et al.
This article establishes the radiological and histological features of odontogenic keratocysts presenting as expansive lesions, using contrast-enhanced computed tomography and panoramic radiography followed by biopsy of the neoplasm. These neoplasms are relatively common, accounting for 10–12% of all jaw cysts, and usually occur in the second and third decades of life. The published literature contains only isolated histological data on the visualization of odontogenic keratocysts without signs of mineralization or calcification inside the lesion, which complicates differential diagnosis with other neoplasms of the jaw bone tissue. Most authors attribute this phenomenon to a high concentration of viscous, dense keratin protein in the lumen of the cyst.
Materials and methods. Computed imaging was performed using the Morita R-100 cone-beam computed tomography system after injection of contrast material, with a scanning step of 0.5 mm. To establish the final clinical diagnosis, a biopsy was performed, followed by histological examination.
Results. We established the following main radiological features of keratocysts: the shell of the neoplasm is often scalloped; the neoplasm expands, especially toward the lingual side, and grows along the body of the mandible; developing teeth are displaced; the roots of erupted teeth are resorbed and erupting teeth are extruded; on panoramic X-ray, the lumen of the neoplasm is transparent in 45.7% of cases and cloudy in 54.3%. On contrast-enhanced CT, high attenuation within an expansive benign lesion of the lower jaw is suggestive of a keratocyst. This high attenuation results from a high concentration of protein in the dense keratin filling the lumen (82.5% of cases); lesions may also contain hemorrhage (10.2%) or calcification (7.3%) that was not detected on histological examination. On histological examination, we divided all odontogenic keratocysts (OKCs) into parakeratotic and orthokeratotic subtypes, according to the characteristics of the mucous membrane and the type of keratin produced.
Conclusions. We confirmed the view of many authors that, compared with the parakeratotic subtype, the orthokeratotic subtype produces keratin more similar to the normal keratin of the skin.
Human-computer interaction in translation and interpreting: software and applications
Felix do Carmo, José Ramos, Carlos S. C. Teixeira
This issue of Revista Tradumàtica explores how technology, including machine translation, AI, and accessibility tools, transforms professional translation. Articles address psychological impacts, productivity, quality, and usability. Highlights include autonomy’s link to job satisfaction, stress from concurrent workflows, and challenges with large language models and remote interpreting platforms. Accessibility studies emphasize user involvement in design. While technology boosts productivity, it introduces stress and uncertainty, underscoring the importance of user-driven development to enhance satisfaction, autonomy, and translation quality.
Translating and interpreting
No Free Lunch: Research Software Testing in Teaching
Michael Dorner, Andreas Bauer, Florian Angermeir
Software is at the core of most scientific discoveries today. The quality of research results therefore depends strongly on the quality of the research software. Rigorous testing, as practised in industrial software engineering, could ensure this quality, but it requires a substantial effort that is rarely rewarded in academia. This research therefore explores the effects on research software of integrating research software testing into teaching. In an in-vivo experiment, we integrated the engineering of a test suite for a large-scale network simulation as group projects into a course on software testing at the Blekinge Institute of Technology, Sweden, and qualitatively measured the effects of this integration on the research software. We found that the research software benefited through substantially improved documentation and fewer hardware and software dependencies. However, the integration was effortful, and although the student teams developed elegant and thoughtful test suites, no student code went directly into the research software, since we could not make contributing back to it obligatory or even remunerative. We strongly believe that integrating research software engineering practices such as testing into teaching is valuable both for the research software itself and for students, the researchers of the next generation, who encounter research software engineering and bleeding-edge research in their field as part of their education. Nevertheless, uncertainty about the intellectual property in students' code substantially limits the potential of integrating research software testing into teaching.
Volume Segmentation of Liver and Liver Tumor with Fusion of Multi-Branch Features
Benchen YANG, Yuhang JIA, Haibo JIN
The overcomplete convolutional structure for biological image and volume segmentation is an effective solution to the problem that traditional encoder–decoder methods cannot accurately segment boundary regions. Although such methods perform well, convolutional operations do not effectively learn global and long-range semantic interactions, a drawback that must be addressed. Accordingly, a new image segmentation network, KTU-Net, is proposed for the medical image segmentation of liver tumors. The network comprises three branches: 1) Kite-Net, an overcomplete convolutional network that learns to capture input details and precise edges; 2) U-Net, which learns high-level features; and 3) a Transformer, which learns sequential representations of the input volumes and efficiently captures global multiscale information. KTU-Net is designed for both early and late fusion, and a hybrid loss function is adopted to guide network training for increased stability. Extensive experiments on the LiTS liver tumor segmentation dataset show that KTU-Net achieves segmentation accuracy comparable to or higher than that of other advanced 3D medical image segmentation methods such as KiU-Net, TransBTS, and UNETR. By fusing the three branches' features, the average Dice scores are improved by 0.7% and 2.1%, yielding higher-quality learned features and more accurate segmentation of liver tumors, thus providing a reliable basis for doctors to perform precise liver tumor assessments and treatment planning.
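The three-branch idea can be sketched compactly: an overcomplete (upsample-first) branch for edges, a plain convolutional branch for high-level features, and a transformer branch on downsampled tokens for global context, fused late. Channel sizes, depths, and the transformer settings below are illustrative assumptions, not the paper's configuration.

```python
# Minimal three-branch encoder with late fusion in the spirit of KTU-Net.
# Assumes input volumes whose D, H, W are divisible by 4.
import torch
import torch.nn as nn

class ThreeBranchFusion(nn.Module):
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        # Kite-Net-style branch: overcomplete (upsample first) for fine edges.
        self.kite = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
            nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2))                        # back to input resolution
        # U-Net-style branch: high-level features (depth truncated here).
        self.unet = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU())
        # Transformer branch on downsampled tokens for global context.
        self.embed = nn.Conv3d(in_ch, feat, kernel_size=4, stride=4)
        self.attn = nn.TransformerEncoderLayer(d_model=feat, nhead=4,
                                               batch_first=True)
        self.up = nn.Upsample(scale_factor=4, mode="trilinear",
                              align_corners=False)
        self.head = nn.Conv3d(3 * feat, 2, 1)       # fuse, predict 2 classes

    def forward(self, x):
        k = self.kite(x)
        u = self.unet(x)
        t = self.embed(x)                            # B, C, D, H, W
        b, c, d, h, w = t.shape
        t = self.attn(t.flatten(2).transpose(1, 2))  # B, DHW, C tokens
        t = self.up(t.transpose(1, 2).reshape(b, c, d, h, w))
        return self.head(torch.cat([k, u, t], dim=1))  # late fusion
        # Training would use a hybrid loss (e.g., Dice + cross-entropy;
        # the paper's exact combination is not given in the abstract).
```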
Computer engineering. Computer hardware, Computer software
Enhanced Deep Learning Model for Classification of Retinal Optical Coherence Tomography Images
Esraa Hassan, Samir Elmougy, Mai R. Ibraheem
et al.
Retinal optical coherence tomography (OCT) imaging is a valuable tool for assessing the condition of the posterior segment of the eye. It strongly affects the specificity of diagnosis, the monitoring of many physiological and pathological processes, and the evaluation of therapeutic effectiveness in various fields of clinical practice, including primary eye diseases and systemic diseases such as diabetes. Precise diagnosis, classification, and automated image analysis models are therefore crucial. In this paper, we propose an enhanced optical coherence tomography (EOCT) model to classify retinal OCT images based on modified ResNet-50 and random forest algorithms, which are used in the proposed study's training strategy to enhance performance. The Adam optimizer is applied during training to increase the efficiency of the ResNet-50 model compared with common pre-trained models such as spatially separable convolutions and VGG-16. The experimental results show that the sensitivity, specificity, precision, negative predictive value, false positive rate, false discovery rate, false negative rate, accuracy, F1 score, and Matthews correlation coefficient are 0.9836, 0.9615, 0.9740, 0.9756, 0.0385, 0.0260, 0.0164, 0.9747, 0.9788, and 0.9474, respectively.
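A short check confirms that the reported error rates and F1 score are arithmetically consistent with the four base quantities (values taken from the abstract):

```python
# Consistency check: the three error rates and F1 follow arithmetically
# from sensitivity, specificity, precision, and NPV reported above.
sens, spec, prec, npv = 0.9836, 0.9615, 0.9740, 0.9756

fpr = 1 - spec                        # 0.0385, matches the reported value
fdr = 1 - prec                        # 0.0260
fnr = 1 - sens                        # 0.0164
f1 = 2 * prec * sens / (prec + sens)  # ~0.9788

print(f"FPR={fpr:.4f} FDR={fdr:.4f} FNR={fnr:.4f} F1={f1:.4f}")
```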
An Intelligent Fuzzy System for Diabetes Disease Detection using Harris Hawks Optimization
Zahra Asghari Varzaneh, Soodeh Hosseini
This paper proposes a fuzzy expert system for diagnosing diabetes. In the proposed method, fuzzy rules are first generated from the Pima Indians Diabetes Database (PIDD), and the fuzzy membership functions are then tuned using Harris Hawks optimization (HHO). The experimental dataset, PIDD records for the 25–30 age group, is initially processed, and the crisp values are converted into fuzzy values in the fuzzification stage. The improved fuzzy expert system increases classification accuracy and outperforms several well-known methods for diabetes diagnosis. The HHO algorithm tunes the fuzzy membership functions to determine their best ranges and to increase the accuracy of fuzzy rule classification. Experimental results in terms of accuracy, sensitivity, and specificity show that the proposed expert system outperforms other data mining models in diagnosing diabetes.
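The tuning target can be sketched as follows: triangular membership functions whose breakpoints an optimizer such as HHO adjusts to maximize rule accuracy. For brevity, random search stands in for the HHO update rules here, and the variable ranges and synthetic data are illustrative, not taken from PIDD.

```python
# Sketch of membership-function tuning; random search stands in for HHO.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership: rises on [a, b], falls on [b, c]."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def rule_accuracy(params, glucose, labels):
    a, b, c = params
    mu_high = tri(glucose, a, b, c)        # "glucose is high" membership
    preds = (mu_high > 0.5).astype(int)    # fire the diabetic rule
    return (preds == labels).mean()

rng = np.random.default_rng(0)
glucose = rng.uniform(60, 200, 200)        # synthetic stand-in data
labels = (glucose > 140).astype(int)

best, best_acc = None, -1.0
for _ in range(500):                        # stand-in for HHO iterations
    a = rng.uniform(60, 140)
    b = rng.uniform(a + 1, 190)
    c = rng.uniform(b + 1, 200)
    acc = rule_accuracy((a, b, c), glucose, labels)
    if acc > best_acc:
        best, best_acc = (a, b, c), acc
print(best, best_acc)
```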
Information technology, Computer software
Field-Aware Click-Through Rate Prediction Model Based on Attention Mechanism
SHEN Xueli, HAN Qianwen
Click-through rate (CTR) prediction is one of the most important tools for ad placement. Predicting the CTR of an ad and making recommendations to users can increase ad revenue. Field-aware CTR prediction models are superior to other CTR prediction models because they consider field information; however, they generate a large amount of redundant information during feature interaction, which lowers prediction accuracy. A Field-aware Attention Embedding Neural Network (FAENN) model is herein proposed. The model uses a self-attention mechanism (SAM) to distribute weights over the input vectors of the embedding layer, which clearly identifies the importance of the field-aware embedded features and speeds up training. The lower-order feature interaction layer focuses on the explicit first-order information of the features and the second-order interaction features, and outputs the effective features to the higher-order interaction layer. The higher-order feature interaction layer combines the learned interaction vectors with a deep neural network to capture higher-order feature interactions and improve prediction accuracy. Experimental results show that the FAENN model achieves higher prediction accuracy than the FM, FFM, and AFM models.
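The attention-over-embeddings step can be sketched compactly: self-attention scores each field embedding before the interaction layers. Dimensions and the downstream MLP below are illustrative assumptions, not FAENN's published configuration.

```python
# Minimal sketch of attention-weighted field embeddings for CTR prediction.
import torch
import torch.nn as nn

class AttentiveFieldEmbedding(nn.Module):
    def __init__(self, field_sizes, dim=16):
        super().__init__()
        self.embeds = nn.ModuleList(nn.Embedding(n, dim) for n in field_sizes)
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim * len(field_sizes), 64),
                                 nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):                  # x: (batch, n_fields) integer ids
        e = torch.stack([emb(x[:, i]) for i, emb in enumerate(self.embeds)],
                        dim=1)             # (batch, n_fields, dim)
        # Self-attention redistributes weight across the field embeddings,
        # down-weighting redundant interactions before the deeper layers.
        a, _ = self.attn(e, e, e)
        logit = self.mlp(a.flatten(1))
        return torch.sigmoid(logit).squeeze(-1)  # predicted CTR

# Usage: model = AttentiveFieldEmbedding([1000, 50, 20]); p = model(ids)
```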
Computer engineering. Computer hardware, Computer software
Small insulator target detection based on multi‐feature fusion
Minan Tang, Kai Liang, Jiandong Qiu
The proportion of insulators in aerial power-line inspection images is small and the background of overhead lines is complex, often leading to incomplete and inaccurate detection of insulators. Therefore, an algorithm for detecting insulator targets based on multi-feature fusion is developed in this study. First, a dynamic-threshold oriented FAST and rotated BRIEF (ORB) algorithm is proposed: a bag-of-words dictionary model determines local shape features of the image, gradient weighting is applied to the global texture feature vector extracted by the histogram of oriented gradients (HOG) algorithm, and radial gradient transformations yield improved HOG features. Second, the feature vectors are fused serially, the learning machine is trained, and the parameters of the support vector machine are optimized using the quantum particle swarm optimization (QPSO) algorithm. Finally, candidate target areas are pre-selected by the selective search algorithm and classified by the learning machine. Experimental results show that the proposed feature extraction method describes image details more accurately than existing methods, and the average accuracy of the classifier reaches 93.7%, which helps overcome the problem of incomplete insulator detection at aerial work sites.
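The HOG + SVM stage of such a pipeline can be sketched with off-the-shelf implementations; the gradient weighting, radial gradient transform, and QPSO parameter search described in the paper are not reproduced here, and the synthetic patches stand in for labelled insulator/background crops.

```python
# Sketch of the HOG feature extraction + SVM classification stage.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(gray_patches):
    """Extract HOG descriptors from fixed-size grayscale patches."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in gray_patches])

# Synthetic stand-in for labelled insulator / background patches.
rng = np.random.default_rng(0)
patches = rng.random((40, 64, 64))
labels = rng.integers(0, 2, 40)

X = hog_features(patches)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # QPSO would tune C and gamma
clf.fit(X, labels)
print(clf.score(X, labels))
```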
Photography, Computer software
Exploring the Need of Accessibility Education in the Software Industry: Insights from a Survey of Software Professionals in India
Parthasarathy P D, Swaroop Joshi
A UserWay study in 2021 indicates that an annual global e-commerce revenue loss of approximately $16 billion can be attributed to inaccessible websites and applications. According to the 2023 WebAIM study, only 3.7% of the world's top one million website homepages are fully accessible. This shows that many software developers use poor coding practices that don't adhere to the Web Content Accessibility Guidelines (WCAG). This research centers on software professionals and their role in addressing accessibility. This work seeks to understand (a) who within the software development community actively practices accessibility, (b) when and how accessibility is considered in the software development lifecycle, (c) the various challenges encountered in building accessible software, and (d) the resources required by software professionals to enhance product accessibility. Our survey of 269 software professionals from India sheds light on the pressing need for accessibility education within the software industry. A substantial majority (69.9%, N=269) of respondents express the need for training materials, workshops, and bootcamps to enhance their accessibility skills. We present a list of actionable recommendations that can be implemented within the industry to promote accessibility awareness and skills. We also open source our raw data for further research, encouraging continued exploration in this domain.
Taxing Collaborative Software Engineering
Michael Dorner, Maximilian Capraro, Oliver Treidler
et al.
The engineering of complex software systems is often the result of a highly collaborative effort. However, collaboration within a multinational enterprise has an overlooked legal implication when developers collaborate across national borders: it is taxable. In this article, we discuss the unsolved problem of taxing collaborative software engineering across borders. We (1) introduce the reader to the basic principles of international taxation, (2) identify three main challenges of taxing collaborative software engineering that make it a software engineering problem, and (3) estimate the industrial significance of cross-border collaboration in modern software engineering by measuring cross-border code reviews at a multinational software company.
What Pakistani Computer Science and Software Engineering Students Think about Software Testing?
Luiz Fernando Capretz, Abdul Rehman Gilal
Software testing is one of the crucial supporting processes of the software life cycle. Unfortunately for the software industry, the role is stigmatized, partly due to misperception and partly due to how the role is treated. The present study analyzes what restricts computer science and software engineering students from taking up a testing career in the software industry. To conduct this study, we surveyed 88 Pakistani students pursuing computer science or software engineering degrees. The results support previous findings on the unpopularity of testing compared with other software life cycle roles. Furthermore, our findings show that the role of tester has become a social role, with as many social connotations as technical implications.
Enhanced gradient learning for deep neural networks
Ming Yan, Jianxi Yang, Cen Chen
et al.
Deep neural networks have achieved great success in both computer vision and natural language processing tasks. Improving gradient flow is crucial for training very deep neural networks. To address this challenge, a gradient enhancement approach is proposed that constructs short-circuit neural connections. The proposed short circuit is a unidirectional neural connection that back-propagates sensitivities rather than gradients from the deep layers to the shallow layers. Moreover, the short circuit is formulated as a gradient truncation operation in its connecting layers, so it can be plugged into backbone models without introducing extra training parameters. Extensive experiments demonstrate that deep neural networks equipped with short-circuit connections improve over the baselines by a large margin on both computer vision and natural language processing tasks. The work provides a promising solution for low-resource scenarios, such as intelligent transport systems in computer vision and question answering in natural language processing.
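One parameter-free way to realize such an extra deep-to-shallow backward path in PyTorch is the detach trick below. This is an illustrative reading of "gradient truncation", not necessarily the paper's exact formulation, and it assumes the two feature tensors have matching shapes.

```python
# Illustrative, parameter-free "short circuit": an extra backward path from
# a deep layer to a shallow layer that leaves the forward value unchanged.
import torch

def short_circuit(deep_feat: torch.Tensor, shallow_feat: torch.Tensor):
    # Forward value equals deep_feat exactly (the added term is zero in
    # value), but gradients reaching this node also flow directly into
    # shallow_feat, bypassing the intermediate layers.
    return deep_feat + shallow_feat - shallow_feat.detach()

# Toy check: the shallow tensor receives gradient through the extra path.
shallow = torch.randn(4, 8, requires_grad=True)
deep = (shallow @ torch.randn(8, 8)).relu()
out = short_circuit(deep, shallow)
out.sum().backward()
print(shallow.grad.norm())  # includes the short-circuit contribution
```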
Photography, Computer software
Rhizomer: Interactive semantic knowledge graphs exploration
Roberto García, Juan-Miguel López-Gil, Rosa Gil
Rhizomer helps researchers and practitioners explore knowledge graphs available as Semantic Web data by supporting the three classical data analysis tasks: overview, zoom and filter, and details-on-demand. Compared with existing approaches, this makes it easier for users to grasp the overall structure and intricacies of a dataset, even without prior knowledge of it. Rhizomer is helpful for data reusers, who want to assess the reuse opportunities of a given dataset, and for knowledge graph creators, who can check whether the generated data matches their expectations. Rhizomer has been applied in many scenarios, from research and commercial projects to teaching.
Temperature- and vacancy-concentration-dependence of heat transport in Li3ClO from multi-method numerical simulations
Paolo Pegolo, Stefano Baroni, Federico Grasselli
Despite governing heat management in any realistic device, the microscopic mechanisms of heat transport in all-solid-state electrolytes are poorly known: existing calculations, all based on simplistic semi-empirical models, are unreliable for superionic conductors and largely overestimate their thermal conductivity. In this work, we deploy a combination of state-of-the-art methods to calculate the thermal conductivity of a prototypical Li-ion conductor, the Li3ClO antiperovskite. By leveraging ab initio, machine learning, and force-field descriptions of interatomic forces, we are able to reveal the massive role of anharmonic interactions and diffusive defects on the thermal conductivity and its temperature dependence, and to eventually embed their effects into a simple rationale which is likely applicable to a wide class of ionic conductors.
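Equilibrium molecular-dynamics estimates of the lattice thermal conductivity of this kind typically rest on the Green–Kubo relation; a standard form is sketched below (whether the authors use this estimator directly or a spectral variant is not stated in the abstract):

```latex
% Green-Kubo relation for the thermal conductivity of an isotropic solid:
% V is the system volume, T the temperature, k_B the Boltzmann constant,
% and J(t) the instantaneous heat current from the molecular dynamics run.
\kappa \;=\; \frac{1}{3 V k_B T^2}
  \int_0^{\infty} \bigl\langle \mathbf{J}(t)\cdot\mathbf{J}(0) \bigr\rangle \,\mathrm{d}t
```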
Materials of engineering and construction. Mechanics of materials, Computer software
Music Genre Recommendations Based on Spectrogram Analysis Using Convolutional Neural Network Algorithm with RESNET-50 and VGG-16 Architecture
Nyoman Purnama
Recommendations are a very useful tool in many industries. They provide the best selection of what the user wants and offer greater satisfaction than ordinary searches. In the music industry, recommendations are used to suggest songs that are similar in genre or theme. There are many genres in music, including pop, classical, reggae, and others; genre makes the difference between songs clearly audible, and it can be analyzed through spectrogram analysis. In this study, spectrogram analysis was developed as the input feature for a convolutional neural network (CNN). The CNN classifies songs and provides recommendations according to what the user wants. In addition, two different CNN architectures were tested, namely VGG-16 and ResNet-50. The best accuracy, 60%, was obtained by the VGG-16 model at 20 epochs, compared with the ResNet-50 model trained for more than 20 epochs. On the test data, the recommendations generated by VGG-16 achieved better similarity values than those of ResNet-50.
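The spectrogram front-end assumed by genre classifiers of this kind can be sketched as follows: a log-scaled mel spectrogram fed to a CNN backbone. The parameter values (sample rate, mel bands, clip length) are common defaults, not the paper's settings.

```python
# Sketch of a log-mel spectrogram front-end for CNN genre classification.
import numpy as np
import librosa

def song_to_spectrogram(path, sr=22050, n_mels=128):
    y, sr = librosa.load(path, sr=sr, duration=30.0)   # 30 s clip
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel, ref=np.max)      # log scale for CNN
    return logmel[np.newaxis, ...]                     # add channel axis

# The resulting (1, n_mels, frames) array can be resized and fed to a
# VGG-16 or ResNet-50 backbone for genre classification.
```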
Information technology, Computer software