Structural Safety Performance Simulation Analysis of a Certain Electric Vehicle Battery Pack Based on Multi-Working-Condition Safety Evaluation
Jinbo Wang, Wei Liao, Weihai Zhang
et al.
This study takes the power battery pack of a pure electric vehicle as its research object, focusing on safety, a core concern widely emphasized in the automotive industry. In practical application scenarios, evaluating the safety of a power battery pack under a single operating condition fails to fully reflect its comprehensive safety performance throughout the vehicle's entire life cycle. To overcome this limitation, a systematic analysis process was established. First, the CATIA geometric modeling software was used to simplify the battery pack structure, and HyperMesh was then employed for mesh generation. Second, three core analyses were conducted: static analysis, modal analysis, and extrusion (crush) condition analysis. On this basis, a multi-condition safety evaluation system for electric vehicle battery packs in simulation analysis was proposed, which evaluates the battery pack along three dimensions: “dynamic stiffness-static strength-extrusion safety”. The results show that the modal analysis reveals that the battery pack's low-order natural frequencies exceed the vehicle's excitation frequency (with the excitation point on the case cover); the static analysis confirms that the pack meets operational requirements; and the extrusion verification proves that its safety complies with the new national standards. The coupling of these multi-dimensional analyses breaks through the limitations of safety evaluation under a single operating condition, more realistically reflects the battery pack's comprehensive safety over its life cycle, and provides a more systematic basis for power battery pack optimization.
Electrical engineering. Electronics. Nuclear engineering, Transportation engineering
An Overview of Vehicle Target Detection on Highway
Li Yilin
This paper provides a comprehensive literature review of highway vehicle target detection, aiming to summarize and analyze the evolution from traditional methods to deep learning-based methods, classify and analyze the standard techniques of highway vehicle target detection, and explore their performance characteristics, applicable scenarios, and future development directions. Vehicle target detection on highways has long been at the core of the intersection of multiple disciplines, and it is of great significance in traffic flow monitoring, intelligent driver assistance systems, and traffic accident prevention; its impact is widely reflected in the social, scientific, and economic fields. First, this paper summarizes and introduces traditional target detection, deep learning-based target detection (including single-stage and two-stage target detection), and other target detection methods. In addition, through an in-depth review and analysis of domestic and international literature, this paper summarizes, analyzes, and compares the relevant data, and puts forward the advantages and shortcomings of each method, as well as the directions and possible trends of future research.
Uncovering multiscale structure-property correlations via active learning in scanning tunneling microscopy
Ganesh Narasimha, Dejia Kong, Paras Regmi
et al.
Atomic arrangements and local sub-structures fundamentally influence emergent material functionalities. These structures are conventionally probed using spatially resolved studies, and the property correlations are deciphered by a researcher based on sequential explorations, thereby limiting the efficiency and scope. Here we demonstrate a multi-scale Bayesian deep-learning based framework that automatically correlates material structure with its electronic properties using scanning tunneling microscopy (STM) measurements in real time. Its predictions are used to autonomously direct exploration toward regions of the sample that optimize a given material property. This method is deployed on a low-temperature ultra-high-vacuum STM to understand the structure-property relationship in a europium-based semimetal, EuZn2As2, a promising candidate relevant to magnetism-driven topological phenomena. The framework employs a sparse-sampling approach to efficiently construct the scalar-property space using minimal measurements, about 1–10% of the data required in standard hyperspectral methods. Moreover, we formulate the problem hierarchically across length scales, implementing an autonomous workflow to locate mesoscopic and atomic structures that correspond to a target material property. The framework offers the flexibility to design scalar properties from the spectroscopic data to steer sample exploration. Our findings reveal correlations of the electronic properties unique to surface terminations, local defect density, and point defects.
Materials of engineering and construction. Mechanics of materials, Computer software
Formalization of Side-Aware DNA Origami Words and Their Rewriting System, and Equivalent Classes
Da-Jung Cho
DNA origami is a powerful technique for constructing nanoscale structures by folding a single-stranded DNA scaffold with short staple strands. While traditional models assume staples bind to a fixed side of the scaffold, we introduce a side-aware DNA origami framework that incorporates the directional binding of staples to either the left or right side. The graphical representation of DNA origami is described using rectangular basic modules of scaffolds and staples, which we refer to as symbols in side-aware DNA origami words. We further define the concatenation of these symbols to represent side-aware DNA origami words. A set of rewriting rules is introduced to define equivalent words that correspond to the same graphical structure. Finally, we compute the number of possible structures by determining the equivalence classes of these words.
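The final counting step described above, grouping words into equivalence classes induced by rewriting rules, can be illustrated generically. The sketch below uses a hypothetical two-letter alphabet and a single invented swap rule; it does not reproduce the paper's actual side-aware symbols or rewriting system, only the closure-and-count idea.

```python
from itertools import product

# Hypothetical rewriting rule set: swapping adjacent L/R symbols is assumed
# to preserve the structure. The paper's real rules are more elaborate.
RULES = [("LR", "RL")]

def rewrites(word):
    """Yield all words reachable from `word` by one rule application (either direction)."""
    for lhs, rhs in RULES:
        for a, b in ((lhs, rhs), (rhs, lhs)):
            start = 0
            while True:
                i = word.find(a, start)
                if i == -1:
                    break
                yield word[:i] + b + word[i + len(a):]
                start = i + 1

def equivalence_classes(words):
    """Group words into classes closed under the rewriting rules (BFS closure)."""
    seen, classes = set(), []
    for w in words:
        if w in seen:
            continue
        cls, frontier = {w}, [w]
        while frontier:
            u = frontier.pop()
            for v in rewrites(u):
                if v not in cls:
                    cls.add(v)
                    frontier.append(v)
        members = cls & set(words)  # count only members of the given word set
        seen |= members
        classes.append(members)
    return classes

words = ["".join(p) for p in product("LR", repeat=3)]  # all 8 length-3 words
classes = equivalence_classes(words)
print(len(classes))
```

Under the single swap rule, two words are equivalent exactly when they contain the same number of L and R symbols, so the 8 length-3 words fall into 4 classes.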
Human-AI Experience in Integrated Development Environments: A Systematic Literature Review
Agnia Sergeyuk, Ilya Zakharov, Ekaterina Koshchenko
et al.
The integration of Artificial Intelligence (AI) into Integrated Development Environments (IDEs) is reshaping software development, fundamentally altering how developers interact with their tools. This shift marks the emergence of Human-AI Experience in Integrated Development Environments (in-IDE HAX), a field that explores the evolving dynamics of Human-Computer Interaction in AI-assisted coding environments. Despite rapid adoption, research on in-IDE HAX remains fragmented, which highlights the need for a unified overview of current practices, challenges, and opportunities. To provide a structured overview of existing research, we conduct a systematic literature review of 90 studies, summarizing current findings and outlining areas for further investigation. We organize key insights from reviewed studies into three aspects: Impact, Design, and Quality of AI-based systems inside IDEs. Impact findings show that AI-assisted coding enhances developer productivity but also introduces challenges, such as verification overhead and over-reliance. Design studies show that effective interfaces surface context, provide explanations and transparency of suggestions, and support user control. Quality studies document risks in correctness, maintainability, and security. For future research, priorities include productivity studies, design of assistance, and audit of AI-generated code. The agenda calls for larger and longer evaluations, stronger audit and verification assets, broader coverage across the software life cycle, and adaptive assistance under user control.
Bridging Quantum Mechanics and Computing: A Primer for Software Engineers
Arvind W Kiwelekar
Quantum mechanics, the fundamental theory that governs the behaviour of matter and energy at microscopic scales, forms the foundation of quantum computing and quantum information science. As quantum technologies progress, software engineers must develop a conceptual understanding of quantum mechanics to grasp its implications for computing. This article focuses on fundamental quantum mechanics principles for software engineers, including wave-particle duality, superposition, entanglement, quantum states, and quantum measurement. Unlike traditional physics-oriented discussions, this article focuses on computational perspectives, assisting software professionals in bridging the gap between classical computing and emerging quantum paradigms.
CoDocBench: A Dataset for Code-Documentation Alignment in Software Maintenance
Kunal Pai, Premkumar Devanbu, Toufique Ahmed
One of the central tasks in software maintenance is being able to understand and develop code changes. Thus, given a natural language description of the desired new operation of a function, an agent (human or AI) might be asked to generate the set of edits to that function to implement the desired new operation; likewise, given a set of edits to a function, an agent might be asked to generate a changed description of that function's new workings. Thus, there is an incentive to train a neural model for change-related tasks. Motivated by this, we offer a new, "natural", large dataset of coupled changes to code and documentation mined from actual high-quality GitHub projects, where each sample represents a single commit where the code and the associated docstring were changed together. We present the methodology for gathering the dataset, and some sample, challenging (but realistic) tasks where our dataset provides opportunities for both learning and evaluation. We find that current models (specifically Llama-3.1 405B, Mixtral 8×22B) do find these maintenance-related tasks challenging.
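The mining criterion sketched above, keeping only commits in which a function's code and its docstring changed together, can be approximated for Python functions with the standard `ast` module. The snippet below is an illustrative check on two versions of one function, not the paper's actual mining pipeline; the example function `f` is invented.

```python
import ast

def split_doc_and_body(src):
    """Return (docstring, normalized-body) for the first function in `src`."""
    fn = ast.parse(src).body[0]
    doc = ast.get_docstring(fn)
    # Skip the docstring statement when comparing bodies, and compare AST dumps
    # so that formatting-only differences do not count as code changes.
    body = fn.body[1:] if doc is not None else fn.body
    return doc, "\n".join(ast.dump(stmt) for stmt in body)

def is_coupled_change(before, after):
    """True iff both the docstring and the code body changed between versions."""
    doc_a, body_a = split_doc_and_body(before)
    doc_b, body_b = split_doc_and_body(after)
    return doc_a != doc_b and body_a != body_b

before = 'def f(x):\n    """Add one."""\n    return x + 1\n'
after  = 'def f(x):\n    """Add two."""\n    return x + 2\n'
print(is_coupled_change(before, after))  # True: docstring and body both changed
```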
Hierarchical Graph Neural Network: A Lightweight Image Matching Model with Enhanced Message Passing of Local and Global Information in Hierarchical Graph Neural Networks
Enoch Opanin Gyamfi, Zhiguang Qin, Juliana Mantebea Danso
et al.
Graph Neural Networks (GNNs) have gained popularity in image matching methods, proving useful for various computer vision tasks like Structure from Motion (SfM) and 3D reconstruction. A well-known example is SuperGlue. Lightweight variants, such as LightGlue, have been developed with a focus on stacking fewer GNN layers compared to SuperGlue. This paper proposes the h-GNN, a lightweight image matching model, with improvements in the two processing modules, the GNN and matching modules. After image features are detected and described as keypoint nodes of a base graph, the GNN module, which primarily aims at increasing the h-GNN's depth, creates successive hierarchies of compressed-size graphs from the base graph through a clustering technique termed SC+PCA. SC+PCA combines Principal Component Analysis (PCA) with Spectral Clustering (SC) to enrich nodes with local and global information during graph clustering. A dual non-contrastive clustering loss is used to optimize graph clustering. Additionally, four message-passing mechanisms have been proposed to only update node representations within a graph cluster at the same hierarchical level or to update node representations across graph clusters at different hierarchical levels. The matching module performs iterative pairwise matching on the enriched node representations to obtain a score matrix. This matrix comprises scores indicating potential correct matches between the image keypoint nodes. The score matrix is refined with a ‘dustbin’ to further suppress unmatched features. A reprojection loss is used to optimize keypoint match positions. The Sinkhorn algorithm generates a final partial assignment from the refined score matrix. Experimental results demonstrate the performance of the proposed h-GNN against competing state-of-the-art (SOTA) GNN-based methods on several image matching tasks under homography estimation, indoor and outdoor camera pose estimation, and 3D reconstruction on multiple datasets.
Experiments also demonstrate improved computational memory and runtime, approximately 38.1% and 26.14% lower than SuperGlue, and an average of about 6.8% and 7.1% lower than LightGlue. Future research will explore the effects of integrating more recent simplicial message-passing mechanisms, which concurrently update both node and edge representations, into our proposed model.
Pansharpening Based on Adaptive High-Frequency Fusion and Injection Coefficients Optimization
Yong Yang, Chenxu Wan, Shuying Huang
et al.
The purpose of pansharpening is to fuse a multispectral (MS) image with a panchromatic (PAN) image to generate a high spatial-resolution multispectral (HRMS) image. However, traditional pansharpening methods do not adequately take into consideration the information of MS images, resulting in inaccurate detail injection and spectral distortion in the pansharpened results. To solve this problem, a new pansharpening approach based on adaptive high-frequency fusion and injection coefficient optimization is proposed, which can obtain an accurate injected high-frequency component (HFC) and accurate injection coefficients. First, we propose a multi-level sharpening model to enhance the spatial information of the MS image, and then extract the HFCs from the sharpened MS image and the PAN image. Next, an adaptive fusion strategy is designed to obtain the accurate injected HFC by calculating the similarity and difference of the extracted HFCs. Regarding the injection coefficients, we propose an injection coefficient optimization scheme based on the spatial and spectral relationship between the MS image and the PAN image. Finally, the HRMS image is obtained by injecting the fused HFC into the upsampled MS image with the injection coefficients. Experiments with simulated and real data are performed on IKONOS and Pléiades datasets. Both subjective and objective results indicate that our method has better performance than state-of-the-art pansharpening approaches.
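The general injection scheme underlying such methods, extracting a high-frequency component as the difference between a signal and its low-pass version, then adding it to the upsampled MS image weighted by per-pixel coefficients, can be sketched in one dimension. This is a generic illustration with a simple moving-average low-pass filter and invented signals; the paper's multi-level sharpening, adaptive fusion, and coefficient optimization are not reproduced.

```python
def lowpass(signal, radius=1):
    """Simple edge-clamped moving-average low-pass filter."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def pansharpen_1d(ms_up, pan, gain):
    """Inject the PAN high-frequency component into the upsampled MS signal.

    ms_up: upsampled multispectral band; pan: panchromatic band (same length);
    gain: per-pixel injection coefficients.
    """
    hfc = [p - l for p, l in zip(pan, lowpass(pan))]  # high-frequency component
    return [m + g * h for m, g, h in zip(ms_up, gain, hfc)]

# Invented toy signals: a flat MS band and a PAN band with one sharp detail.
ms_up = [10.0, 10.0, 10.0, 10.0]
pan   = [10.0, 20.0, 10.0, 10.0]
hrms  = pansharpen_1d(ms_up, pan, gain=[1.0] * 4)
print(hrms)
```

The sharp PAN detail at index 1 is transferred into the flat MS band while regions with no high-frequency content are left unchanged.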
Ocean engineering, Geophysics. Cosmic physics
Mutual Coupling Reduction in Antenna Arrays Using Artificial Intelligence Approach and Inverse Neural Network Surrogates
Saeed Roshani, Slawomir Koziel, Salah I. Yahya
et al.
This paper presents a novel approach to reducing undesirable coupling in antenna arrays using custom-designed resonators and inverse surrogate modeling. To illustrate the concept, two standard patch antenna cells with 0.07λ edge-to-edge distance were designed and fabricated to operate at 2.45 GHz. A stepped-impedance resonator was applied between the antennas to suppress their mutual coupling. For the first time, the optimum values of the resonator geometry parameters were obtained using the proposed inverse artificial neural network (ANN) model, constructed from the sampled EM-simulation data of the system, and trained using the particle swarm optimization (PSO) algorithm. The inverse ANN surrogate directly yields the optimum resonator dimensions based on the target values of its S-parameters being the input parameters of the model. The involvement of surrogate modeling also contributes to the acceleration of the design process, as the array does not need to undergo direct EM-driven optimization. The obtained results indicate a remarkable cancellation of the surface currents between two antennas at their operating frequency, which translates into isolation as high as −46.2 dB at 2.45 GHz, corresponding to over 37 dB improvement as compared to the conventional setup.
Analysis of Automation Testing Using Repeato for Functional Testing of the Yess Nutrition Application Based on Flutter
Ari Rifqi Muhammad, Endang Anjarwani Sri, Hernawan Ari
Automated testing has the advantage of executing test cases faster than manual testing and achieves a higher accuracy rate because it can detect more defects in the application. Automated testing is also effective for regression testing performed when there is a fix or update, to ensure that the fix does not introduce new bugs into the system. Automated testing therefore becomes essential as a replacement for manual testing. It involves the use of testing tools or frameworks that can reduce the time required for the testing process. This paper reviews the Repeato software as an automated testing tool, where Repeato works based on computer vision. Experiments were conducted to examine the test steps and the results for the application display rendering time using the Repeato tool. The advantage of Repeato lies in its ability to automate visual-based testing, which can save the time and effort required in manual testing. However, as with other testing tools, Repeato also has its limitations and drawbacks: it may not be able to recognize visual elements that are very complex or have arbitrary patterns. Repeato can conduct two rounds of tests on the Yess Nutrition application within 216 seconds, which is equivalent to 3.6 minutes.
Rumor Detection Model on Social Media Based on Contrastive Learning with Edge-inference Augmentation
LIU Nan, ZHANG Fengli, YIN Jiaqi, CHEN Xueqin, WANG Ruijin
In recent years, in order to deal with various social problems caused by the wide spread of rumors, researchers have developed many deep learning-based rumor detection methods. Although these methods improve detection performance by learning high-level representations of rumors from their propagation structure, they still suffer from lower reliability and a cumulative error effect, because the uncertainty of edges is ignored when constructing the propagation network. To address this problem, this paper proposes the edge-inference contrastive learning (EICL) model. EICL first constructs a propagation graph based on the timestamps of retweets (comments) for a given message. Then, it augments the event propagation graph to capture the edge uncertainty of the propagation structure with a newly designed edge-weight adjustment strategy. Finally, it employs contrastive learning to alleviate the sparsity problem of the original dataset and improve model generalization. Experimental results show that the accuracy of EICL is improved by 2.0% and 3.0% on Twitter15 and Twitter16, respectively, compared with other state-of-the-art baselines, demonstrating that it can significantly improve the performance of rumor detection on social media.
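The propagation-graph construction and edge-weight augmentation described above can be sketched as follows. The time-decay weighting, jitter magnitude, and drop threshold here are all invented stand-ins for the paper's edge-weight adjustment strategy, and the contrastive-learning stage is omitted.

```python
import math
import random

def build_propagation_graph(posts):
    """posts: list of (post_id, parent_id, timestamp); the root has parent_id None.

    Edge weight decays with the retweet time gap, a stand-in for edge confidence.
    """
    times = {pid: t for pid, _, t in posts}
    edges = {}
    for pid, parent, t in posts:
        if parent is not None:
            edges[(parent, pid)] = math.exp(-(t - times[parent]) / 60.0)
    return edges

def augment(edges, rng, noise=0.1, threshold=0.05):
    """Return one augmented view of the graph: each weight is jittered to model
    edge uncertainty, and very-low-confidence edges are dropped."""
    view = {}
    for e, w in edges.items():
        w2 = max(0.0, min(1.0, w + rng.uniform(-noise, noise)))
        if w2 > threshold:
            view[e] = w2
    return view

# Invented toy cascade: root "r", two direct retweets, one second-level retweet.
posts = [("r", None, 0), ("a", "r", 30), ("b", "r", 600), ("c", "a", 90)]
edges = build_propagation_graph(posts)
view = augment(edges, random.Random(0))
print(len(edges), len(view))
```

Contrastive learning would then pull representations of two such augmented views of the same cascade together while pushing apart views of different cascades.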
Computer software, Technology (General)
DBTRG: De Bruijn Trim rotation graph encoding for reliable DNA storage
Yunzhu Zhao, Ben Cao, Penghao Wang
et al.
DNA is a high-density, long-term stable, and scalable storage medium that can meet the increased demands on storage media resulting from the exponential growth of data. Existing DNA storage encoding schemes tend to achieve high-density storage but do not fully consider the local and global stability of DNA sequences and the read and write accuracy of the stored information. To address these problems, this article presents a graph-based De Bruijn Trim Rotation Graph (DBTRG) encoding scheme. Through an XOR between the proposed dynamic binary sequence and the original binary sequence, k-mers can be divided into the De Bruijn Trim graph, and the stored information can be compressed according to the overlapping relationship. The simulated experimental results show that DBTRG ensures base balance and diversity, reduces the likelihood of undesired motifs, and improves the stability of DNA storage and data recovery. Furthermore, an encoding rate of 1.92 is maintained while storing 510 KB of images, and novel approaches and concepts for DNA storage encoding methods are introduced.
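The XOR masking and k-mer decomposition steps can be illustrated with a toy example. The fixed repeating key and the 2-bit base mapping below are simplifications for illustration; DBTRG's dynamic binary sequence, trim-rotation graph, and compression are more elaborate.

```python
def xor_bits(data, key):
    """XOR a binary string with a repeating key. (The paper derives a dynamic
    binary sequence; a fixed key is an invented stand-in.) XORing again with
    the same key recovers the original data."""
    return "".join(str(int(b) ^ int(key[i % len(key)])) for i, b in enumerate(data))

def to_kmers(bits, k):
    """Split a bit string into overlapping k-mers (step 1), De Bruijn-style."""
    return [bits[i:i + k] for i in range(len(bits) - k + 1)]

# Standard 2-bit nucleotide mapping, not DBTRG's own mapping.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}

def bits_to_dna(bits):
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def gc_content(seq):
    """Fraction of G/C bases; balanced GC content aids sequence stability."""
    return sum(base in "GC" for base in seq) / len(seq)

data = "1100101011110000"
masked = xor_bits(data, "1011")
kmers = to_kmers(masked, 4)
dna = bits_to_dna(masked)
print(dna, round(gc_content(dna), 2))
```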
Text Material Recommendation Method Combining Label Classification and Semantic Query Expansion
MENG Yiyue, PENG Rong, LYU Qibiao
When preparing various planning and research reports, researchers often need to collect and read a large amount of text material according to a proposed catalog or title; the workload is large and the quality cannot be guaranteed. To this end, in the field of digital government planning documentation, a text material recommendation method combining label classification and semantic query expansion is proposed. From the perspective of information retrieval, the titles at all levels of the catalog are regarded as query sentences, and the referenced text materials are used as target documents, so that text materials can be retrieved and recommended. Based on the differential evolution algorithm, the method organically combines word-vector-average-based text material recommendation, semantic query expansion, and label classification, which makes up for the shortcomings of traditional text material recommendation methods and enables retrieval of text materials at paragraph granularity through catalog titles. Experimental verification on 10 datasets shows that the performance of the proposed method is significantly improved. It can greatly reduce the workload of manual material selection and material classification, as well as the difficulty of documentation.
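The word-vector-average retrieval component can be sketched as follows: each catalog title and candidate paragraph is embedded as the mean of its word vectors, and paragraphs are ranked by cosine similarity to the title. The tiny vocabulary and 2-dimensional vectors below are invented for illustration; the differential-evolution weighting, query expansion, and label classification are omitted.

```python
import math

# Invented toy word vectors; a real system would use pretrained embeddings.
VEC = {
    "digital": [1.0, 0.0], "government": [0.9, 0.1],
    "budget": [0.0, 1.0], "plan": [0.5, 0.5],
}

def avg_vec(words):
    """Mean of the known word vectors; zero vector if none are known."""
    vs = [VEC[w] for w in words if w in VEC]
    return [sum(c) / len(vs) for c in zip(*vs)] if vs else [0.0, 0.0]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.hypot(*a), math.hypot(*b)
    return dot / (na * nb) if na and nb else 0.0

def rank_paragraphs(title_words, paragraphs):
    """Return paragraph indices sorted by similarity to the title, best first."""
    q = avg_vec(title_words)
    scored = [(cosine(q, avg_vec(p)), i) for i, p in enumerate(paragraphs)]
    return [i for _, i in sorted(scored, reverse=True)]

paras = [["budget", "plan"], ["digital", "government", "plan"]]
print(rank_paragraphs(["digital", "government"], paras))
```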
Computer software, Technology (General)
Unveiling the Life Cycle of User Feedback: Best Practices from Software Practitioners
Ze Shi Li, Nowshin Nawar Arony, Kezia Devathasan
et al.
User feedback has grown in importance for organizations to improve software products. Prior studies focused primarily on feedback collection and reported a high-level overview of the processes, often overlooking how practitioners reason about, and act upon this feedback through a structured set of activities. In this work, we conducted an exploratory interview study with 40 practitioners from 32 organizations of various sizes and in several domains such as e-commerce, analytics, and gaming. Our findings indicate that organizations leverage many different user feedback sources. Social media emerged as a key category of feedback that is increasingly critical for many organizations. We found that organizations actively engage in a number of non-trivial activities to curate and act on user feedback, depending on its source. We synthesize these activities into a life cycle of managing user feedback. We also report on the best practices for managing user feedback that we distilled from responses of practitioners who felt that their organization effectively understood and addressed their users' feedback. We present actionable empirical results that organizations can leverage to increase their understanding of user perception and behavior for better products thus reducing user attrition.
Moringa oleifera Seed Treated Sanitized Water Effect on Growth and Morpho-physiology of Commonly Consumed Vegetables of Malaysia
Md. Amirul Alam, Suhara B. Alias, Januarius Gobilik
et al.
In this study, a Moringa oleifera seed solution was used to treat municipal wastewater, which then served as the irrigation treatment. Three treatments were used: treated wastewater, normal tap water, and untreated wastewater. The wastewater was collected from the main drainage at Batu 7 (5°52′57.2″N 118°02′39.7″E) and characterized by pH and EC. Data on plant height (cm), number of leaves, leaf length (cm), chlorophyll content, and number of primary branches were taken every week until week 4; root length (cm), fresh weight (g), dry weight (g), and moisture content were measured after harvesting. The data were analyzed using Statistical Analysis Software (SAS) version 9.4 under a Randomized Complete Block Design (RCBD), and the means were separated and compared using Duncan's Multiple Range Test (DMRT) at the 0.05 significance level. Irrigation with M. oleifera seed-treated water exhibited positive outcomes for most of the parameters recorded, although different vegetables responded differently across parameters. The increase in pH from untreated wastewater (6.40) to sanitized/treated wastewater (6.73) and the reduction in EC from untreated wastewater (367.9) to sanitized/treated wastewater (359.1) indicate that nutrients became more available for plant uptake. Overall, the study shows that, based on all the parameters evaluated, M. oleifera seeds are a suitable, cheaper, eco-friendly, and sustainable alternative to chemical coagulants for treating wastewater for agricultural irrigation.
The Implementation of E-justice within the Framework of the Right to a Fair Trial in Ukraine: Problems and Prospects
Maksym Maika
Problems and prospects for the implementation of the concept of e-justice within the framework of the right to a fair trial in Ukraine are especially relevant today due to the digitalisation of state and legal relations. The components of the right to a fair trial and their relationship to the implementation of e-justice; a system of legal regulation, recent legislative changes, current conditions, and prospects for the development of e-justice in Ukraine require further research.
The author used the following methods to solve the relevant tasks: dialectical – problems in the functioning of e-justice in Ukraine; historical analysis – the evolution of the legal regulation and the scientific, legal doctrine of e-justice; analysis and synthesis – analysis of legal regulation, recent legislative changes, the current state of and prospects for the development of e-justice in Ukraine; deduction – allowed the author to move from the general provisions of legal theory to the application of these postulates in the study of e-justice; system analysis – suggesting ways to overcome the problems in the functioning of e-justice in Ukraine; formal and dogmatic – providing an analysis of the norms of current legislation; theoretical modelling – formulating the draft of legislative changes; comparative – a study of foreign experience in the legal regulation of e-governance, taking into account the practice of justice in Ukraine.
The author has identified problems in the functioning of e-justice in Ukraine and normative, legal, material, technical, and organisational problems in realising the principles of the right to a fair trial for citizens of Ukraine, taking into account the concept of e-justice as a component of e-governance. To solve these problems, the following are proposed: normative regulation of the procedure for submission and examination of e-evidence; certification and standardisation of computer equipment and software in the field of e-justice; legal education activities of the state in terms of promoting e-governance; improving the computer literacy of citizens and civil servants.
Automatic Recall of Software Lessons Learned for Software Project Managers
Tamer Mohamed Abdellatif, Luiz Fernando Capretz, Danny Ho
Lessons learned (LL) records constitute the software organization's memory of successes and failures. LL are recorded within the organization repository for future reference to optimize planning, gain experience, and elevate market competitiveness. However, manually searching this repository is a daunting task, so it is often disregarded. This can lead to the repetition of previous mistakes or even missing potential opportunities. This, in turn, can negatively affect the profitability and competitiveness of organizations. We aim to present a novel solution that provides an automatic process to recall relevant LL and to push those LL to project managers. This will dramatically save the time and effort of manually searching the unstructured LL repositories and thus encourage LL exploitation. We exploit existing project artifacts to build the LL search queries on the fly in order to bypass the tedious manual searching. An empirical case study is conducted to build the automatic LL recall solution and evaluate its effectiveness. The study employs three of the most popular information retrieval models to construct the solution. Furthermore, a real-world dataset of 212 LL records from 30 different software projects is used for validation. The well-known top-k and MAP accuracy metrics are used for evaluation. Our case study results confirm the effectiveness of the automatic LL recall solution. The results also prove the success of using existing project artifacts to dynamically build the search query string, supported by an accuracy of about 70% achieved in the top-k evaluation. The automatic LL recall solution is valid with high accuracy. It will eliminate the effort needed to manually search the LL repository. Therefore, this will positively encourage project managers to reuse the available LL knowledge, which will avoid old pitfalls and unleash hidden business opportunities.
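The two accuracy metrics named above, top-k and MAP, are standard in information retrieval and can be computed as follows; the ranked lists and relevance sets in the example are invented.

```python
def top_k_accuracy(ranked_lists, relevant, k=5):
    """Fraction of queries whose top-k retrieved items contain a relevant one."""
    hits = sum(any(doc in relevant[q] for doc in docs[:k])
               for q, docs in enumerate(ranked_lists))
    return hits / len(ranked_lists)

def mean_average_precision(ranked_lists, relevant):
    """MAP: mean over queries of the average precision at each relevant hit."""
    ap_sum = 0.0
    for q, docs in enumerate(ranked_lists):
        rel, hits, precisions = relevant[q], 0, []
        for rank, doc in enumerate(docs, start=1):
            if doc in rel:
                hits += 1
                precisions.append(hits / rank)
        ap_sum += sum(precisions) / len(rel) if rel else 0.0
    return ap_sum / len(ranked_lists)

# Two invented queries: ranked retrieval results and the relevant documents.
ranked = [["a", "b", "c"], ["d", "e", "f"]]
relevant = [{"b"}, {"f", "d"}]
print(top_k_accuracy(ranked, relevant, k=2), mean_average_precision(ranked, relevant))
```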
Women's Participation in Open Source Software: A Survey of the Literature
Bianca Trinkenreich, Igor Wiese, Anita Sarma
et al.
Participation of women in Open Source Software (OSS) is very unbalanced, despite various efforts to improve diversity. This is concerning not only because women do not get the chance of career and skill developments afforded by OSS, but also because OSS projects suffer from a lack of diversity of thoughts because of a lack of diversity in their projects. Studies that characterize women's participation and investigate how to attract and retain women are spread across multiple fields, including information systems, software engineering, and social science. This paper systematically maps, aggregates, and synthesizes the state-of-the-art on women's participation in Open Source Software. It focuses on women's representation and the demographics of women who contribute to OSS, how they contribute, the acceptance rates of their contributions, their motivations and challenges, and strategies employed by communities to attract and retain women. We identified 51 articles (published between 2005 and 2021) that investigate women's participation in OSS. According to the literature, women represent about 9.8% of OSS contributors; most of them are recent contributors, 20-37 years old, devote less than 5h/week to OSS, and make both non-code and code contributions. Only 5% of projects have women as core developers, and women author less than 5% of pull-requests but have similar or even higher rates of merge acceptance than men. Besides learning new skills and altruism, reciprocity and kinship are motivations especially relevant for women, but women may leave if they are not compensated for their contributions. Women's challenges are mainly social, including lack of peer parity and non-inclusive communication from a toxic culture. The literature reports ten strategies, which were mapped to six of the seven challenges. Based on these results, we provide guidelines for future research and practice.
Tumour growth prediction of follow‐up lung cancer via conditional recurrent variational autoencoder
Ning Xiao, Yan Qiang, Zijuan Zhao
et al.
The prediction of lung tumour growth is the key to early treatment of lung cancer. However, the lack of intuitive and clear judgments about the future development of the tumour often leads patients to miss the best treatment opportunities. Combining the characteristics of the variational autoencoder and recurrent neural networks, this study proposes tumour growth prediction via a conditional recurrent variational autoencoder. The proposed model uses a variational autoencoder to reconstruct tumour images at different times. Meanwhile, recurrent units are proposed to infer the relationship between tumour images according to their chronological order. Because tumour development varies between patients, the patient's condition is adopted to achieve personalised prediction. To solve the problem of blurred results, the authors add a total variation regularisation term to the objective function. The proposed method was tested on longitudinal studies, the National Lung Screening Trial and a cooperative hospital dataset, with three time points for lung tumours. The precision, recall, and dice similarity coefficient reach 82.22, 79.89 and 82.49%, respectively. Both quantitative and qualitative experimental results show that the proposed method can produce realistic tumour images.
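The dice similarity coefficient reported above is a standard overlap measure between a predicted and a ground-truth segmentation mask. A minimal computation on invented binary masks:

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks (flattened lists of 0/1):
    2 * |intersection| / (|pred| + |truth|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # two empty masks agree fully

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```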
Photography, Computer software