Explainable artificial intelligence in air traffic control: effects of expertise on workload, acceptance, and usage intentions
Giulia Cartocci, Alexandre Veyrié, Nicola Cavagnetto
et al.
Explainability is crucial for establishing user trust in Artificial Intelligence (AI), particularly within safety-critical domains such as Air Traffic Management (ATM) and Air Traffic Control (ATC). This study empirically investigates the effects of Explainable AI (XAI), specifically HeatMap-based visual explanations, on cognitive workload, user acceptance, and intention to use AI-driven decision-support systems among Air Traffic Control Officers (ATCOs). Despite significant theoretical advancements in the broader XAI domain, empirical evidence addressing the specific impact of visual explanations on human-AI interactions in safety-critical environments like ATC remains limited. To address these critical gaps, an experimental comparison was conducted between explainable (HeatMap) and non-explainable (BlackBox) AI conditions, involving two user groups: expert and student ATCOs. Both objective neurophysiological measures (Electroencephalography) and subjective questionnaires were employed to capture comprehensive user responses. Key findings revealed that the presence of visual explanations significantly reduced cognitive workload and enhanced users' willingness to adopt the AI system, regardless of participants' level of expertise. However, explicit perceptions of AI's impact on work performance were predominantly influenced by expertise, with less experienced controllers reporting a greater perceived impact than their expert counterparts. By combining objective neurometrics with subjective user assessments, this research advances methodological rigor in evaluating human-AI interactions and highlights the importance of tailored, user-centric explanations. These findings directly contribute to practical guidelines for designing cognitively compatible and trustworthy AI tools in ATC, providing nuanced insights for targeted training and deployment strategies based on user expertise.
Computer applications to medicine. Medical informatics, Computer software
Psychological safety in software workplaces: A systematic literature review
Beatriz Santana, Lidivânio Monte, Bianca Santana de Araújo Silva
et al.
Context: Psychological safety (PS) is an important factor influencing team well-being and performance, particularly in collaborative and dynamic domains such as software development. Despite its acknowledged significance, research on PS within the field of software engineering remains limited. The socio-technical complexities and fast-paced nature of software development present challenges to cultivating PS. To the best of our knowledge, no systematic secondary study has synthesized existing knowledge on PS in the context of software engineering. Objective: This study aims to systematically review and synthesize the existing body of knowledge on PS in software engineering. Specifically, it seeks to identify the potential antecedents and consequences associated with the presence or absence of PS among individuals involved in the software development process. Methods: A systematic literature review was conducted, encompassing studies retrieved from four digital libraries. The extracted data were subjected to both quantitative and qualitative analyses. Results: The findings indicate a growing academic interest in PS within software engineering, with the majority of studies grounded in Edmondson's framework. Antecedents of PS were identified at the individual, team, and organizational levels, including team autonomy, agile methodologies, and leadership behaviors. Conclusion: PS fosters innovation, learning, and team performance within software development. However, significant gaps persist in understanding the contextual factors influencing PS, its underlying mechanisms, and effective strategies for its enhancement. Future research should address these gaps by investigating the practical applications of PS within diverse organizational settings in the software engineering domain.
Software Vulnerability Management in the Era of Artificial Intelligence: An Industry Perspective
M. Mehdi Kholoosi, Triet Huynh Minh Le, M. Ali Babar
Artificial Intelligence (AI) has revolutionized software development, particularly by automating repetitive tasks and improving developer productivity. While these advancements are well-documented, the use of AI-powered tools for Software Vulnerability Management (SVM), such as vulnerability detection and repair, remains underexplored in industry settings. To bridge this gap, our study aims to determine the extent of the adoption of AI-powered tools for SVM, identify barriers and facilitators to their use, and gather insights to help improve the tools to better meet industry needs. We conducted a survey study involving 60 practitioners from diverse industry sectors across 27 countries. The survey incorporates both quantitative and qualitative questions to analyze the adoption trends, assess tool strengths, identify practical challenges, and uncover opportunities for improvement. Our findings indicate that AI-powered tools are used throughout the SVM life cycle, with 69% of users reporting satisfaction with their current use. Practitioners value these tools for their speed, coverage, and accessibility. However, concerns about false positives, missing context, and trust issues remain prevalent. We observe a socio-technical adoption pattern in which AI outputs are filtered through human oversight and organizational governance. To support safe and effective use of AI for SVM, we recommend improvements in explainability, contextual awareness, integration workflows, and validation practices. We assert that these findings can offer practical guidance for practitioners, tool developers, and researchers seeking to enhance secure software development through the use of AI.
Creative Problem-Solving: A Study with Blind and Low Vision Software Professionals
Karina Kohl, Yoonha Cha, Victoria Jackson
et al.
Background: Software engineering requires both technical skills and creative problem-solving. Blind and low-vision software professionals (BLVSPs) encounter numerous workplace challenges, including inaccessible tools and collaboration hurdles with sighted colleagues. Objective: This study explores the innovative strategies employed by BLVSPs to overcome these accessibility barriers, focusing on their custom solutions and the importance of supportive communities. Methodology: We conducted semi-structured interviews with 30 BLVSPs and used reflexive thematic analysis to identify key themes. Results: Findings reveal that BLVSPs are motivated to develop creative and adaptive solutions, highlighting the vital role of collaborative communities in fostering shared problem-solving. Conclusion: For BLVSPs, creative problem-solving is essential for navigating inaccessible work environments, in contrast to sighted peers, who pursue optimization. This study enhances understanding of how BLVSPs navigate accessibility challenges through innovation.
Mapping of the system of software-related emissions and shared responsibilities
Laura Partanen, Antti Sipila, Md Sanaul Haque
et al.
The global climate is experiencing a rapid and unprecedented warming trend. The ICT sector is a notable contributor to global greenhouse gas emissions, with its environmental impact continuing to expand. Addressing this issue is vital for achieving the objectives of the Paris Agreement, particularly the goal of limiting global temperature rise to 1.5°C. At the European Union level, regulatory measures such as the CSRD and the CSDD impose obligations on companies, including those within the ICT sector, to recognize and mitigate their environmental footprint. This study provides a comprehensive system mapping aimed at enhancing the awareness and understanding of software-related emissions and the corresponding responsibilities borne by the ICT sector. The mapping identifies the primary sources of carbon emissions and energy consumption within the ICT domain while also outlining the key responsibilities of the stakeholders accountable throughout the software lifecycle.
Embracing Experiential Learning: Hackathons as an Educational Strategy for Shaping Soft Skills in Software Engineering
Allysson Allex Araújo, Marcos Kalinowski, Maria Teresa Baldassarre
In recent years, Software Engineering (SE) scholars and practitioners have emphasized the importance of integrating soft skills into SE education. However, teaching and learning soft skills are complex, as they cannot be acquired passively through raw knowledge acquisition. On the other hand, hackathons have attracted increasing attention due to their experiential, collaborative, and intensive nature, in which certain tasks resemble real-world software development. This paper aims to discuss the idea of hackathons as an educational strategy for shaping SE students' soft skills in practice. Initially, we overview the existing literature on soft skills and hackathons in SE education. Then, we report preliminary empirical evidence from a seven-day hybrid hackathon involving 40 students. We assess how the hackathon experience promoted innovative and creative thinking, collaboration and teamwork, and knowledge application among participants through a structured questionnaire designed to evaluate students' self-awareness. Lastly, our findings and new directions are analyzed through Self-Determination Theory (SDT), which offers a psychological lens for understanding human behavior. This paper contributes to academia by advocating the potential of hackathons in SE education and proposing concrete plans for future research within SDT. For industry, our discussion has implications around developing soft skills in future SE professionals, thereby enhancing their employability and readiness in the software market.
Robust Internet of Things Multidimensional Time Series Data Prediction Method
SHEN Chen, HE Yong, PENG Anlang
In Internet of Things (IoT) scenarios, data are susceptible to noise during collection and transmission, resulting in outliers and missing values. Existing temporal regularized matrix factorization models typically use the squared loss to measure reconstruction errors, ignoring the fact that the quality of the matrix factorization is also a key factor in a model's prediction performance when dealing with multidimensional time series containing anomalous data. Therefore, this paper proposes a Time-Aware Robust Non-negative Matrix Factorization framework for multidimensional temporal prediction (TARNMF) based on the L<sub>2, log</sub> norm. TARNMF models the spatiotemporal correlation of multidimensional time series data through Non-negative Matrix Factorization (NMF) and autoregressive temporal regularization terms with learnable parameters. In the presence of outliers, the data are assumed to obey a Laplace distribution; based on this assumption, the L<sub>2, log</sub> norm is used to estimate the error between the original data and the reconstructed matrices in the robust non-negative matrix factorization, minimizing the interference of anomalous data with the prediction model. The L<sub>2, log</sub> norm is as robust as existing metric functions, avoids the difficulty of approximating the L<sub>1</sub> loss, and limits the effect of outliers on the objective function by compressing their residuals. The paper also proposes a projected gradient descent-based method to optimize the model. Experiments on a high-dimensional Solar dataset show that TARNMF is scalable and robust, reducing the relative mean absolute error by 8.64% compared with the second-best result. Meanwhile, results on noisy data verify that TARNMF can efficiently process and predict IoT time series data in the presence of anomalous data.
Computer engineering. Computer hardware, Computer software
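For readers who want a concrete handle on the robust loss above, here is a minimal NumPy sketch, assuming the common definition ‖E‖<sub>2, log</sub> = Σ<sub>i</sub> log(1 + ‖e<sub>i</sub>‖<sub>2</sub>) over residual rows. It omits the autoregressive temporal regularizer and all TARNMF-specific parameters, so it illustrates the general technique rather than the authors' implementation.

```python
# Minimal sketch (not the authors' TARNMF): robust non-negative matrix
# factorization X ~ W @ H under an L2,log-style loss, optimized by projected
# gradient descent with non-negativity enforced by clipping.
import numpy as np

def l2log_loss(E, eps=1e-12):
    # Assumed definition: sum_i log(1 + ||e_i||_2) over residual rows;
    # large (outlier) rows are compressed by the log.
    return np.sum(np.log1p(np.linalg.norm(E, axis=1) + eps))

def fit_robust_nmf(X, rank=5, lr=1e-3, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.uniform(0.1, 1.0, (m, rank))
    H = rng.uniform(0.1, 1.0, (rank, n))
    for _ in range(iters):
        E = X - W @ H
        r = np.linalg.norm(E, axis=1, keepdims=True) + 1e-12
        # Row weight 1/(1 + ||e_i||): outlier rows contribute less to the
        # gradient than they would under the squared loss.
        G = -(1.0 / (1.0 + r)) * E / r          # dLoss/dE with a minus sign folded in
        W = np.clip(W - lr * (G @ H.T), 0.0, None)   # projection onto W >= 0
        H = np.clip(H - lr * (W.T @ G), 0.0, None)   # projection onto H >= 0
    return W, H

X = np.abs(np.random.default_rng(1).normal(size=(60, 24)))
X[7, :] += 20.0                                  # inject an anomalous row
W, H = fit_robust_nmf(X)
print("L2,log loss:", round(l2log_loss(X - W @ H), 3))
```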
Evaluation criteria of centralization options in the architecture of multicomputer systems with traps and baits
Antonina Kashtalian, Sergii Lysenko, Anatoliy Sachenko
et al.
Autonomous restructuring of the architecture of multicomputer systems during their operation is a complex task, since such systems are distributed. One subtask of this restructuring is changing the architecture of the system centers; a system may otherwise be rebuilt without any change to its center. The specifics of systems for detecting malicious software and computer attacks, however, demand an organization that makes system behavior difficult for attackers to understand. The task addressed in this work is therefore the development of rules for restructuring system centers across different types of architecture. The aim of the work is to develop criteria for evaluating potential centralization options in the architecture of multicomputer systems with traps and decoys. An analysis of known solutions established that the mathematical support for reorganizing system centers during operation is insufficient: given the specifics of such systems, no parameters had been identified that could guide the restructuring of system centers. Prior work establishes the main types of centralization used in system architectures (centralized, partially centralized, partially decentralized, and decentralized) but provides no algorithms or methods for transitioning between these types during operation. Subject. The work defines characteristic properties usable when synthesizing systems. They determine the number of candidate architecture variants to which the system can switch at the next restructuring decision, and this number grows with the number of characteristic properties. Approving a transition variant requires evaluating it against the systems' previous operating experience, so evaluation criteria were developed for scoring potential centralization variants. A distinguishing feature of these criteria is that they account for prior experience with a repeated variant while also allowing newly prepared variants to be scored on first use. Incorporating operational history in this way diversifies the choice of system centers. Methods. An objective function was developed for evaluating the next centralization option in the system architecture. It combines four evaluation criteria, covering operational efficiency, stability, integrity, and security, all focused on evaluating candidate system centers.
New mathematical models were developed for the operational-efficiency, stability, integrity, and security criteria with respect to the system center. Unlike known models for evaluating system centers when selecting subsequent centralization options, they are given as analytical expressions that account for the types of centralization in the system architecture and for the corresponding indicators of operational efficiency, stability, integrity, and security, and they support the construction of an objective function for evaluating centralization options in systems whose distinguishing feature is hiding the components holding the system center from detection by attackers. Results. An experiment with a system prototype was analyzed, and convergence between the experimental results and those obtained theoretically was established. Conclusion. The study introduces mathematical models for evaluating system centers based on operational-efficiency, stability, integrity, and security criteria. Unlike existing models, these are presented as analytical expressions that account for the various centralization types within system architectures and enable the construction of objective functions for evaluating centralization options, with emphasis on concealing system-center components from attackers. Experimental results with the prototype confirm the validity of the theoretical models, showing minimal deviations between the function graphs; the larger deviations observed in specific time intervals are addressed when selecting optimal centralization options.
Computer engineering. Computer hardware, Electronic computers. Computer science
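The abstract names an objective function over four criteria without giving its form. As a purely illustrative placeholder (our assumption, not the authors' analytical expressions), a weighted-sum shape for scoring a candidate centralization option v could look like:

```latex
% Assumed weighted-sum form; E = operational efficiency, S = stability,
% I = integrity, C = security, each normalized for a candidate option v.
F(v) = w_E\,E(v) + w_S\,S(v) + w_I\,I(v) + w_C\,C(v),
\qquad \sum_{k} w_k = 1,
\qquad v^{\ast} = \arg\max_{v \in V} F(v)
```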
Blockchain-based Highly Trusted Query Verification Scheme for Streaming Data
YANG Fan, SUN Yi, LIN Wei, GAO Qi
With the popularization of intelligent IoT applications, IoT devices must continuously collect large amounts of streaming data for real-time processing. Because of their resource constraints, much of this stream data must be outsourced to server storage. Ensuring the integrity of stream data that is strongly real-time and grows without bound is a complex and challenging problem. Although schemes for streaming data integrity verification have been proposed, the correctness and integrity of query results returned by malicious servers in untrustworthy outsourced storage environments are still not guaranteed. The emergence of blockchain technology, built on distributed consensus, brings new ideas and methods to the data integrity verification problem. This paper therefore proposes a highly trustworthy streaming data query verification scheme based on the immutability of blockchain and designs a low-maintenance on-chain data structure, CS-DCAT, which stores only the root node hash value of the authentication tree on the blockchain. The scheme is suitable for processing streaming data of unpredictable volume and supports range query verification over streaming data. A security analysis proves the correctness and security of the scheme, and a performance evaluation shows that it achieves low gas overhead on the blockchain; the computational complexity of range query and verification depends only on the current data volume and introduces little extra computational or communication overhead.
Computer software, Technology (General)
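CS-DCAT's internals are not given in the abstract; the following minimal sketch shows only the general pattern it builds on: keep the authentication tree off-chain, anchor just the root hash on-chain, and verify a queried item against that root with an inclusion proof. All names here are illustrative.

```python
# Minimal Merkle-proof sketch of the root-on-chain verification pattern
# (illustrative only; CS-DCAT itself is a more elaborate authenticated tree).
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_levels(leaves):
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:                         # duplicate last node on odd levels
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def prove(levels, idx):
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[idx ^ 1], idx % 2))  # (sibling hash, 1 if we are right child)
        idx //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sib, is_right in proof:
        node = h(sib + node) if is_right else h(node + sib)
    return node == root

stream = [f"reading-{i}".encode() for i in range(9)]  # outsourced stream items
levels = build_levels(stream)
onchain_root = levels[-1][0]                          # only this goes on-chain
proof = prove(levels, 5)                              # server returns item + proof
print(verify(stream[5], proof, onchain_root))         # client checks integrity: True
```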
Seasonal PM2.5 Concentration Prediction Based on SARIMA-SVM Model
SONG Yinghua, XU Yaan, ZHANG Yuanjin
Air pollution is one of the primary challenges in urban environmental governance, with PM<sub>2.5</sub> being a significant contributor affecting air quality. As traditional time-series prediction models for PM<sub>2.5</sub> often lack seasonal factor analysis and sufficient prediction accuracy, a machine learning fusion model, Seasonal Autoregressive Integrated Moving Average (SARIMA)-Support Vector Machine (SVM), is proposed in this paper. The fusion model is a tandem model that splits the data into linear and nonlinear parts. Building on the Autoregressive Integrated Moving Average (ARIMA) model, the SARIMA model adds seasonal-factor extraction parameters to effectively analyze and predict the future linear seasonal trend of PM<sub>2.5</sub> data. Combined with the SVM model, a sliding-step prediction method is used to determine the optimal prediction step size for the residual series, thereby optimizing the residual sequence of the predicted data. The optimal model parameters are further determined through grid search, enabling long-term prediction of PM<sub>2.5</sub> data and improving overall prediction accuracy. Analysis of PM<sub>2.5</sub> monitoring data in Wuhan over the past five years shows that the prediction accuracy of the fusion model is significantly higher than that of any single model. In the same experimental environment, the accuracy of the fusion model is improved by 99%, 99%, and 98% compared with the ARIMA, Auto ARIMA, and SARIMA models, respectively, and the model is also more stable, providing a new direction for PM<sub>2.5</sub> prediction.
Computer engineering. Computer hardware, Computer software
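To make the tandem structure tangible, here is a minimal sketch with statsmodels and scikit-learn on synthetic data, where SARIMA handles the linear seasonal part and an SVR models the residual series. The orders, window length, and hyperparameters are placeholders; the paper's grid search and sliding-step selection are omitted.

```python
# Hedged sketch of a SARIMA + SVR tandem: SARIMA models the linear seasonal
# component, SVR models the residuals from lagged residual windows.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.arange(400)
y = 50 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, 400)  # fake PM2.5 series

train, horizon = y[:360], 12
sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
resid = train - sarima.fittedvalues                   # nonlinear part of the signal

lag = 6                                               # placeholder window size
X = np.array([resid[i - lag:i] for i in range(lag, len(resid))])
svr = SVR(kernel="rbf", C=10.0).fit(X, resid[lag:])

linear_fc = sarima.forecast(steps=horizon)            # SARIMA handles trend/season
window = list(resid[-lag:])
resid_fc = []
for _ in range(horizon):                              # recursive residual forecast
    r = svr.predict(np.array(window[-lag:])[None, :])[0]
    resid_fc.append(r)
    window.append(r)

combined = linear_fc + np.array(resid_fc)             # tandem fusion forecast
print("MAE:", np.mean(np.abs(combined - y[360:372])))
```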
Lightweight Image Classification Algorithm Based on Domain Generalization
ZHANG Changchang, LÜ Weidong, CAI Zijie, LIU Yankui
To address the lack of sleeping-on-duty datasets, the poor generalization of current classification algorithms, and slow inference speeds, a Sleeping on Duty dataset containing 4 708 images is constructed to verify model recognition accuracy and generalization ability, and a lightweight image classification algorithm based on domain generalization, Stable_MobileNet, is proposed. First, the input images are padded along the shorter edge to preserve the aspect ratio of people within the images, followed by image enhancement and random erasure to expand the dataset. Second, the Efficient Channel Attention (ECA) module is introduced to improve the MobileNetv3_large network. Finally, the stable learning method StableNet is applied to enhance the model's generalization by learning training-sample weights, reducing feature dependency, and allowing the model to focus on person features rather than environmental factors. Experimental results on the Sleeping on Duty dataset indicate that Stable_MobileNet achieves faster average inference than MobileNetv3_large, with a recognition accuracy of 93.56%, which is 2.23% higher than that of MobileNetv3_large. On a test set whose sample distribution differs from that of the training set, Stable_MobileNet likewise improves recognition accuracy by 2.23%.
Computer engineering. Computer hardware, Computer software
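The ECA module cited above is a published, compact attention block; here is a minimal PyTorch rendering of standard ECA (not necessarily the exact variant wired into Stable_MobileNet):

```python
# Efficient Channel Attention (ECA): global average pooling followed by a 1-D
# convolution across channels, producing per-channel gating weights.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # A 1-D conv over the channel dimension captures local cross-channel
        # interaction without the dimensionality reduction used in SE blocks.
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=(k_size - 1) // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        y = self.pool(x).view(b, 1, c)      # (B, 1, C): treat channels as a sequence
        y = self.sigmoid(self.conv(y))      # per-channel attention weights in (0, 1)
        return x * y.view(b, c, 1, 1)       # rescale the feature maps

feat = torch.randn(2, 40, 14, 14)           # e.g., a MobileNet stage output
print(ECA()(feat).shape)                    # torch.Size([2, 40, 14, 14])
```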
Performance Optimization Method for Domestic Cryptographic Algorithm SM9
XIE Zhenjie, LIU Yiming, CAI Ruijie, LUO Youqiang
To address the challenge of computational performance optimization in the domestic cryptographic algorithm SM9, a suite of performance enhancement techniques has been developed and applied. These methods include fixed-point scalar multiplication precomputation on elliptic curves, an improved Miller algorithm with precomputation, an optimized construction for the hard part of the final exponentiation, modular exponentiation within the cyclotomic subgroup, and modular exponentiation employing a Comb-based fixed-base strategy. Through these tailored approaches, significant enhancements have been achieved in the computation of the SM9 algorithm, especially in its time-consuming steps, such as scalar multiplication on elliptic curves, bilinear pairing, and modular exponentiation in the 12th-degree extension field. The seven fundamental SM9 algorithms, encompassing digital signature generation and verification, key exchange, key encapsulation and decapsulation, as well as encryption and decryption, have been effectively implemented in Python. Comprehensive testing reveals that the integration of these optimization techniques yields performance improvements ranging from 32% to 352% for the SM9 algorithms, marking a substantial advance in their computational efficiency.
Computer software, Technology (General)
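Of the techniques listed, the Comb fixed-base strategy is generic enough to sketch in a few lines. Below is a plain-Python version over a toy prime field; SM9 applies the idea in pairing groups and the 12th-degree extension field, so this is an illustration of the method, not the authors' code.

```python
# Fixed-base comb exponentiation (Lim-Lee style) sketch over Z_p: precompute
# g raised to every w-row bit pattern, then scan exponent columns MSB-first.
def comb_precompute(g, p, t, w):
    d = -(-t // w)                                   # ceil(t / w) columns
    row_base = [pow(g, 1 << (i * d), p) for i in range(w)]
    table = [1] * (1 << w)
    for j in range(1, 1 << w):                       # g^(sum of selected row offsets)
        acc = 1
        for i in range(w):
            if (j >> i) & 1:
                acc = acc * row_base[i] % p
        table[j] = acc
    return table, d

def comb_pow(table, d, w, e, p):
    acc = 1
    for col in range(d - 1, -1, -1):                 # one squaring per column
        acc = acc * acc % p
        j = 0
        for i in range(w):                           # gather this column's bits
            if (e >> (i * d + col)) & 1:
                j |= 1 << i
        acc = acc * table[j] % p                     # one table multiply per column
    return acc

p, g, w, t = (1 << 61) - 1, 5, 4, 64                 # toy prime-field parameters
table, d = comb_precompute(g, p, t, w)
e = 0x9E3779B97F4A7C15                               # any exponent of <= t bits
assert comb_pow(table, d, w, e, p) == pow(g, e, p)   # matches plain modexp
print("ok")
```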
Insights Towards Better Case Study Reporting in Software Engineering
Sergio Rico
Case studies are a popular and noteworthy type of research study in software engineering, offering significant potential to impact industry practices by investigating phenomena in their natural contexts. This potential to reach a broad audience beyond the academic community is often undermined by deficiencies in reporting, particularly in the context description, study classification, generalizability, and the handling of validity threats. This paper presents a reflective analysis aiming to share insights that can enhance the quality and impact of case study reporting. We emphasize the need for adherence to established guidelines, accurate classification, and detailed context descriptions in case studies. Additionally, particular focus is placed on articulating generalizable findings and thoroughly discussing generalizability threats. We aim to encourage researchers to adopt more rigorous and communicative strategies, ensuring that case studies are methodologically sound and that they resonate with, and apply to, software engineering practitioners and the broader academic community. The reflections and recommendations offered in this paper aim to ensure that insights from case studies are transparent, understandable, and tailored to meet the needs of both academic researchers and industry practitioners. In doing so, we seek to enhance the real-world applicability of academic research, bridging the gap between theoretical research and practical implementation in industry.
The Potential of Citizen Platforms for Requirements Engineering of Large Socio-Technical Software Systems
Jukka Ruohonen, Kalle Hjerppe
Participatory citizen platforms are innovative solutions for digitally engaging citizens more effectively in policy-making and deliberative democracy in general. Although these platforms have also been used in engineering contexts, thus far there is no existing work connecting the platforms to requirements engineering. The present paper fills this notable gap. In addition to discussing the platforms in conjunction with requirements engineering, the paper elaborates on potential advantages and disadvantages, thus paving the way for a future pilot study in a software engineering context. With these engineering tenets, the paper also contributes to research on large socio-technical software systems in a public-sector context, including their implementation and governance.
Developing and Sustaining a Student-Driven Software Solutions Center -- An Experience Report
Saheed Popoola, Vineela Kunapareddi, Hazem Said
This paper presents an experience report on the establishment and sustenance of a student-driven software solutions center named Information Technology Solutions Center (ITSC), a unit within the School of Information Technology at the University of Cincinnati. A student-driven solution center empowers students to drive the design, development, execution, and maintenance of software solutions for industrial clients. This exposes the students to real-world projects and ensures that students are fully prepared to meet the demands of the ever-changing industrial landscape. The ITSC was established over a decade ago, has trained over 100 students, and executes about 20 projects annually with several industrial partners including Fortune 500 companies, government institutions, and research agencies. This paper discusses the establishment and maintenance of the center with the goal of motivating and providing a clear blueprint for computing programs that want to establish a similar student-driven software solutions center.
Challenges Faced by Women in New Zealand's Construction Industry: Impact of Demographic Factors
Funmilayo Ebun Rotimi, Marcela Brauner, Megan Burfoot
et al.
Diversity and inclusion in the construction workforce are considered fundamental to disrupting the perception of a male-dominated construction industry. Despite efforts to increase diversity and inclusion, the industry continues to record only a slow increase in women's representation, causing it to miss out on significant potential talent. Identifying the challenges women face in their work environment is therefore vital for promoting construction careers. This study examines three categories of challenges (benevolent sexism, hostile sexism, and job conditions) and the influence of demographic factors. The study adopted a quantitative research method, with 65 structured questionnaires completed by women working in the industry. It found that benevolent sexism challenges, such as stereotyping and pressure to prove oneself, and hostile sexism challenges, such as masculine culture, sexual harassment, and lack of respect, are significant for women in construction. A lack of female role models and work overload are two job-condition-related challenges affecting women in the industry. The findings make an important contribution to the existing literature, highlighting the need to consider demographic factors when creating initiatives to address the challenges faced by women in the construction industry.
Engineering economy, Building construction
Trust in Software Supply Chains: Blockchain-Enabled SBOM and the AIBOM Future
Boming Xia, Dawen Zhang, Yue Liu
et al.
The robustness of critical infrastructure systems is contingent upon the integrity and transparency of their software supply chains. A Software Bill of Materials (SBOM) is pivotal in this regard, offering an exhaustive inventory of components and dependencies crucial to software development. However, prevalent challenges in SBOM sharing, such as data tampering risks and vendors' reluctance to fully disclose sensitive information, significantly hinder its effective implementation. These challenges pose a notable threat to the security of critical infrastructure and systems where transparency and trust are paramount, underscoring the need for a more secure and flexible mechanism for SBOM sharing. To bridge the gap, this study introduces a blockchain-empowered architecture for SBOM sharing, leveraging verifiable credentials to allow for selective disclosure. This strategy not only heightens security but also offers flexibility. Furthermore, this paper broadens the remit of SBOM to encompass AI systems, thereby coining the term AI Bill of Materials (AIBOM). The advent of AI and its application in critical infrastructure necessitates a nuanced understanding of AI software components, including their origins and interdependencies. The evaluation of our solution indicates the feasibility and flexibility of the proposed SBOM sharing mechanism, positing a solution for safeguarding (AI) software supply chains, which is essential for the resilience and reliability of modern critical infrastructure systems.
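To make the selective-disclosure idea concrete, here is a deliberately simplified sketch using salted hash commitments. Production systems would use W3C Verifiable Credentials with suitable signature schemes (e.g., BBS+) rather than this bare construction, and all names here are illustrative.

```python
# Hedged sketch of selective disclosure: publish one commitment (e.g., on a
# blockchain), later reveal only chosen SBOM fields plus hashes of the rest.
import hashlib, json, os

def h(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

entry = {"name": "libfoo", "version": "1.4.2", "supplier": "Acme", "license": "MIT"}
salts = {k: os.urandom(16).hex() for k in entry}            # blind the field values
field_hashes = {k: h(f"{salts[k]}|{v}".encode()) for k, v in entry.items()}
commitment = h(json.dumps(field_hashes, sort_keys=True).encode())  # anchored on-chain

# The vendor reveals only name/version (with their salts) and, for the
# sensitive fields, just the field hashes.
disclosed = {k: (entry[k], salts[k]) for k in ("name", "version")}
undisclosed_hashes = {k: v for k, v in field_hashes.items() if k not in disclosed}

# The verifier rebuilds the full hash set and checks the on-chain commitment.
rebuilt = dict(undisclosed_hashes)
for k, (v, s) in disclosed.items():
    rebuilt[k] = h(f"{s}|{v}".encode())
print(commitment == h(json.dumps(rebuilt, sort_keys=True).encode()))  # True
```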
Recursive recurrent neural network: A novel model for manipulator control with different levels of physical constraints
Zhan Li, Shuai Li
Manipulators actuate joints so that end effectors can perform precise path-tracking tasks. Recurrent neural networks, described by dynamic models with parallel processing capability, are a powerful tool for the kinematic control of manipulators. Because of the physical limitations and actuation saturation of manipulator joints, incorporating joint constraints into the kinematic control of manipulators is essential and critical. However, existing manipulator control methods based on recurrent neural networks mainly handle limited levels of joint angular constraints, and to the best of our knowledge, recurrent-neural-network methods for kinematic control of manipulators under higher-order joint constraints have not yet been reported. In this study, for the first time, a novel recursive recurrent network model is proposed to solve the kinematic control problem for manipulators with different levels of physical constraints; the proposed recursive recurrent neural network can be formulated as a new manifold system that keeps the control solution within all joint constraints across different orders. Theoretical analysis shows the stability of the proposed recursive recurrent neural network and its convergence to the solution. Simulation results further demonstrate the effectiveness of the proposed method in end-effector path-tracking control under different levels of joint constraints on the Kuka manipulator system. Comparisons with other methods, such as the pseudoinverse-based method and the conventional recurrent neural network method, substantiate the superiority of the proposed method.
Computational linguistics. Natural language processing, Computer software
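The abstract does not reproduce the model, but recurrent-network kinematic controllers in this family are typically derived from a constrained optimization of roughly the following velocity-level form (a generic formulation, not the paper's recursive manifold system):

```latex
% Generic velocity-level problem commonly solved online by RNN controllers:
\min_{\dot{q}} \; \tfrac{1}{2}\,\dot{q}^{\top}\dot{q}
\quad \text{s.t.} \quad
J(q)\,\dot{q} = \dot{r}_d, \qquad
q^{-} \le q \le q^{+}, \qquad
\dot{q}^{-} \le \dot{q} \le \dot{q}^{+}
```

Here J(q) is the manipulator Jacobian and the right-hand side of the equality is the desired end-effector velocity; the inequality pairs are joint limits at different orders, and the paper's contribution is a recursive network that handles such constraints beyond the angular level.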
Review of Trajectory Prediction Technology in Autonomous Driving Scenes
LI Xuesong, ZHANG Qieshi, SONG Chengqun, KANG Yuhang, CHENG Jun
Trajectory prediction is a key technology in the fields of autonomous driving and intelligent transportation. Accurate prediction of the trajectories of vehicles and moving pedestrians can improve an autonomous driving system's perception of environmental changes, thereby ensuring overall safety. Data-driven trajectory prediction methods accurately capture the interaction characteristics between agents, analyze the historical motion and static environment information of all agents within a scene, and predict the agents' future trajectories. The mathematical models of trajectory prediction are introduced and categorized into traditional and data-driven trajectory prediction methods. Mainstream data-driven trajectory prediction methods face four main challenges: intelligent agent interaction modeling, motion behavior intention prediction, trajectory diversity prediction, and fusion of static environmental information within a scene. Starting from the trajectory prediction datasets used, the performance evaluation indicators, model characteristics, and other aspects of typical data-driven trajectory prediction methods are analyzed and compared. On this basis, the solutions and application scenarios of these methods for addressing the above challenges are summarized, and future development directions of trajectory prediction technology in autonomous driving are proposed.
Computer engineering. Computer hardware, Computer software
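As a concrete reference point for the "traditional" category that the review contrasts with data-driven models, a constant-velocity extrapolator is the usual physics-based baseline (our illustrative choice, not a method from the review):

```python
# Constant-velocity (CV) baseline: the classical physics-based predictor that
# data-driven trajectory models are routinely benchmarked against.
import numpy as np

def constant_velocity_predict(history: np.ndarray, horizon: int, dt: float = 0.1):
    """history: (T, 2) array of observed x-y positions; returns (horizon, 2)."""
    v = (history[-1] - history[-2]) / dt            # last observed velocity
    steps = np.arange(1, horizon + 1)[:, None] * dt
    return history[-1] + steps * v                  # extrapolate linearly

obs = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2], [1.5, 0.3]])  # agent track
print(constant_velocity_predict(obs, horizon=3))
```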
Quantum Machine Learning for Software Supply Chain Attacks: How Far Can We Go?
Mohammad Masum, Mohammad Nazim, Md Jobair Hossain Faruk
et al.
Quantum Computing (QC) has gained immense popularity as a potential solution for dealing with the ever-increasing size of data and its associated challenges, leveraging the concept of quantum random access memory (QRAM). QC promises quadratic or exponential speedups through quantum parallelism and thus offers a huge leap forward in the computation of machine learning algorithms. This paper analyzes the speed-up performance of QC when applied to machine learning algorithms, known as Quantum Machine Learning (QML). We applied QML methods, namely Quantum Support Vector Machine (QSVM) and Quantum Neural Network (QNN), to detect Software Supply Chain (SSC) attacks. Owing to the access limitations of real quantum computers, the QML methods were implemented on open-source quantum simulators such as IBM Qiskit and TensorFlow Quantum. We evaluated the performance of QML in terms of processing speed and accuracy and, finally, compared it with classical counterparts. Interestingly, the experimental results differ from the speed-up promises of QC, demonstrating higher computational time and lower accuracy in comparison to the classical approaches for detecting SSC attacks.
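For orientation, here is a minimal quantum-kernel SVM of the kind the study evaluates, written against the qiskit-machine-learning (>=0.5) API on a simulator; class names and availability vary across Qiskit versions, and the two-feature data is a synthetic stand-in for SSC-attack features.

```python
# Hedged sketch: a quantum-kernel SVM (QSVM-style) for binary classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))                 # 2 features -> 2 qubits
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # toy labels standing in for attack/benign
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

feature_map = ZZFeatureMap(feature_dimension=2, reps=2)   # encodes classical data
qkernel = FidelityQuantumKernel(feature_map=feature_map)  # simulated state fidelities
svc = SVC(kernel=qkernel.evaluate).fit(X_tr, y_tr)        # QSVM = SVC + quantum kernel
print("test accuracy:", svc.score(X_te, y_te))
```

Kernel simulation cost grows quickly with qubit count and sample size, which is consistent with the longer runtimes the study reports relative to classical baselines.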