Results for "Computer software"

Showing 20 of ~8,152,304 results · from CrossRef, DOAJ, Semantic Scholar, arXiv

DOAJ Open Access 2025
Evaluating Educational Game Design Through Human–Machine Pair Inspection: Case Studies in Adaptive Learning Environments

Ioannis Sarlis, Dimitrios Kotsifakos, Christos Douligeris

Educational games often fail to effectively merge game mechanics with educational goals, lacking adaptive feedback and real-time performance monitoring. This study explores how Human–Computer Interaction principles and adaptive feedback can enhance educational game design to improve learning outcomes and user experience. Four educational games were analyzed using a mixed-methods approach and evaluated through established frameworks, such as the Serious Educational Games Evaluation Framework, the Assessment of Learning and Motivation Software, the Learning Object Evaluation Scale for Students, and Universal Design for Learning guidelines. In addition, a novel Human–Machine Pair Inspection protocol was employed to gather real-time data on adaptive feedback, cognitive load, and interactive behavior. Findings suggest that Human–Machine Pair Inspection-based adaptive mechanisms significantly boost personalized learning, knowledge retention, and student motivation by better aligning games with learning objectives. Although the sample size is small, this research provides practical insights for educators and designers, highlighting the effectiveness of adaptive Game-Based Learning. The study proposes the Human–Machine Pair Inspection methodology as a valuable tool for creating educational games that successfully balance user experience with learning goals, warranting further empirical validation with larger groups.

Technology, Science
arXiv Open Access 2025
The EmpathiSEr: Development and Validation of Software Engineering Oriented Empathy Scales

Hashini Gunatilake, John Grundy, Rashina Hoda et al.

Empathy plays a critical role in software engineering (SE), influencing collaboration, communication, and user-centred design. Although SE research has increasingly recognised empathy as a key human aspect, there remains no validated instrument specifically designed to measure it within the unique socio-technical contexts of SE. Existing generic empathy scales, while well-established in psychology and healthcare, often rely on language, scenarios, and assumptions that are not meaningful or interpretable for software practitioners. These scales fail to account for the diverse, role-specific, and domain-bound expressions of empathy in SE, such as understanding a non-technical user's frustrations or another practitioner's technical constraints, which differ substantially from empathy in clinical or everyday contexts. To address this gap, we developed and validated two domain-specific empathy scales: EmpathiSEr-P, assessing empathy among practitioners, and EmpathiSEr-U, capturing practitioner empathy towards users. Grounded in a practitioner-informed conceptual framework, the scales encompass three dimensions of empathy: cognitive empathy, affective empathy, and empathic responses. We followed a rigorous, multi-phase methodology, including expert evaluation, cognitive interviews, and two practitioner surveys. The resulting instruments represent the first psychometrically validated empathy scales tailored to SE, offering researchers and practitioners a tool for assessing empathy and designing empathy-enhancing interventions in software teams and user interactions.

en cs.SE
arXiv Open Access 2025
Calculating Software's Energy Use and Carbon Emissions: A Survey of the State of Art, Challenges, and the Way Ahead

Priyavanshi Pathania, Nikhil Bamby, Rohit Mehra et al.

The proliferation of software and AI comes with a hidden risk: its growing energy and carbon footprint. As concerns regarding environmental sustainability come to the forefront, understanding and optimizing how software impacts the environment becomes paramount. In this paper, we present a state-of-the-art review of methods and tools that enable the measurement of software- and AI-related energy and/or carbon emissions. We introduce a taxonomy to categorize the existing work as Monitoring, Estimation, or Black-Box approaches. We delve deeper into the tools and compare them across different dimensions and granularities: for example, whether their measurement encompasses energy and carbon emissions, and which components are considered (CPU, GPU, RAM, etc.). We present our observations on practical use (a component-wise consolidation of approaches) as well as the challenges we have identified across the current state of the art. As we start an initiative to address these challenges, we emphasize active collaboration across the community in this important field.
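The core arithmetic behind estimation-style tools in this space is simple: convert measured or modelled per-component energy into CO2-equivalent via a grid carbon-intensity factor. A minimal sketch, where the function name, component breakdown, and all numbers are illustrative assumptions rather than details from the survey:

```python
# Sketch of an estimation-style calculation: convert per-component energy
# readings into carbon emissions using a grid carbon-intensity factor.
# All names and numbers here are illustrative assumptions.

def software_carbon_emissions(component_energy_wh, carbon_intensity_g_per_kwh):
    """Return estimated emissions in grams of CO2-equivalent.

    component_energy_wh: mapping of component name (CPU, GPU, RAM, ...) to
    measured or estimated energy in watt-hours for the workload.
    carbon_intensity_g_per_kwh: grid carbon intensity (gCO2e per kWh).
    """
    total_kwh = sum(component_energy_wh.values()) / 1000.0
    return total_kwh * carbon_intensity_g_per_kwh

# Example: a job that used 120 Wh on CPU, 300 Wh on GPU and 30 Wh on RAM,
# on a grid emitting 400 gCO2e/kWh.
usage = {"CPU": 120.0, "GPU": 300.0, "RAM": 30.0}
print(software_carbon_emissions(usage, 400.0))  # 0.45 kWh * 400 -> 180.0 g
```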

en cs.SE, cs.CY
DOAJ Open Access 2024
MCM-VbF: dance hand gesture recognition with vision-based features

Mampi Devi, Sarat Saharia, Dhruba Kumar Bhattacharyya et al.

Abstract Digitizing and preserving cultural heritage in the form of Indian classical dance has become an apparent area of research. Sattriya, a classical dance of North-East India (Assam), is one of the eight Indian classical dance forms and requires immediate preservation. Sattriya consists of 29 Asamyukta hastas (single-hand gestures) and 14 Samyukta hastas (double-hand gestures). Moreover, understanding Samyukta hastas builds on understanding Asamyukta hastas; this paper therefore focuses on classifying the single-hand gestures of Sattriya only. Although a two-level classification method for Sattriya is available in the recent literature, it requires trial and error to select optimized features, and because Asamyukta hastas can appear closely similar to each other, the chance of misclassification is high: the two-level method achieved an accuracy of only 75.45%. To address these issues, a Multilevel Classification Model with Vision-based Features (MCM-VbF) is proposed to classify the Asamyukta hastas of Sattriya. The model uses two types of feature matching, high-level and low-level, with a separate algorithm proposed to extract each kind of feature; features are selected automatically. The proposed MCM-VbF model was also tested on Asamyukta hasta mudras of Bharatanatyam, the classical dance of South India (Tamil Nadu). It obtains accuracies of 94.12% and 87.14% on the Sattriya Single-Hand Gestures (SSHG) and Bharatanatyam Single-Hand Gestures (BHSG) datasets, respectively.
The paper also provides a comparative study of the proposed MCM-VbF model against traditional benchmark classifiers such as Naive Bayes, Decision Tree, and Support Vector Machine (SVM).

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2024
Stroke Lesion Segmentation and Deep Learning: A Comprehensive Review

Mishaim Malik, Benjamin Chong, Justin Fernandez et al.

Stroke is a medical condition that affects around 15 million people annually. Patients and their families can face severe financial and emotional challenges, as it can cause motor, speech, cognitive, and emotional impairments. Stroke lesion segmentation identifies the stroke lesion visually while providing useful anatomical information. Though different computer-aided software packages are available for manual segmentation, state-of-the-art deep learning makes the job much easier. This review paper explores different deep-learning-based lesion segmentation models and the impact of different pre-processing techniques on their performance. It aims to provide a comprehensive overview of the state-of-the-art models, to guide future research, and to contribute to the development of more robust and effective stroke lesion segmentation models.

Technology, Biology (General)
arXiv Open Access 2024
BinaryAI: Binary Software Composition Analysis via Intelligent Binary Source Code Matching

Ling Jiang, Junwen An, Huihui Huang et al.

While third-party libraries are extensively reused to enhance productivity during software development, they can also introduce potential security risks such as vulnerability propagation. Software composition analysis, proposed to identify reused TPLs for reducing such risks, has become an essential procedure within modern DevSecOps. As one of the mainstream SCA techniques, binary-to-source SCA identifies the third-party source projects contained in binary files via binary source code matching, which is a major challenge in reverse engineering since binary and source code exhibit substantial disparities after compilation. The existing binary-to-source SCA techniques leverage basic syntactic features that suffer from redundancy and lack robustness in the large-scale TPL dataset, leading to inevitable false positives and compromised recall. To mitigate these limitations, we introduce BinaryAI, a novel binary-to-source SCA technique with two-phase binary source code matching to capture both syntactic and semantic code features. First, BinaryAI trains a transformer-based model to produce function-level embeddings and obtain similar source functions for each binary function accordingly. Then by applying the link-time locality to facilitate function matching, BinaryAI detects the reused TPLs based on the ratio of matched source functions. Our experimental results demonstrate the superior performance of BinaryAI in terms of binary source code matching and the downstream SCA task. Specifically, our embedding model outperforms the state-of-the-art model CodeCMR, i.e., achieving 22.54% recall@1 and 0.34 MRR compared with 10.75% and 0.17 respectively. Additionally, BinaryAI outperforms all existing binary-to-source SCA tools in TPL detection, increasing the precision from 73.36% to 85.84% and recall from 59.81% to 64.98% compared with the well-recognized commercial SCA product.
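The final step the abstract describes, flagging a TPL once the ratio of its matched source functions is high enough, can be sketched in a few lines. Phase one (embedding retrieval and link-time locality) is abstracted away, and the names and threshold below are illustrative assumptions, not BinaryAI's actual parameters:

```python
# Sketch of TPL detection from function-level matches: report a third-party
# library as reused when the ratio of its matched source functions crosses
# a threshold. Names and the threshold value are illustrative assumptions.

from collections import Counter

def detect_tpls(matches, tpl_function_counts, ratio_threshold=0.1):
    """matches: list of (binary_function, source_function, tpl_name) tuples.
    tpl_function_counts: total number of source functions per TPL.
    Returns TPL names whose matched-function ratio meets the threshold."""
    matched_per_tpl = Counter(tpl for _, _, tpl in matches)
    return sorted(
        tpl for tpl, hits in matched_per_tpl.items()
        if hits / tpl_function_counts[tpl] >= ratio_threshold
    )

matches = [("f1", "zlib_inflate", "zlib"), ("f2", "zlib_deflate", "zlib"),
           ("f3", "png_read", "libpng")]
# zlib: 2/10 matched functions -> reported; libpng: 1/50 -> below threshold.
print(detect_tpls(matches, {"zlib": 10, "libpng": 50}))  # ['zlib']
```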

en cs.SE
arXiv Open Access 2024
Artificial intelligence for context-aware visual change detection in software test automation

Milad Moradi, Ke Yan, David Colwell et al.

Automated software testing is integral to the software development process, streamlining workflows and ensuring product reliability. Visual testing, particularly for user interface (UI) and user experience (UX) validation, plays a vital role in maintaining software quality. However, conventional techniques such as pixel-wise comparison and region-based visual change detection often fail to capture contextual similarities, subtle variations, and spatial relationships between UI elements. In this paper, we propose a novel graph-based approach for context-aware visual change detection in software test automation. Our method leverages a machine learning model (YOLOv5) to detect UI controls from software screenshots and constructs a graph that models their contextual and spatial relationships. This graph structure is then used to identify correspondences between UI elements across software versions and to detect meaningful changes. The proposed method incorporates a recursive similarity computation that combines structural, visual, and textual cues, offering a robust and holistic model of UI changes. We evaluate our approach on a curated dataset of real-world software screenshots and demonstrate that it reliably detects both simple and complex UI changes. Our method significantly outperforms pixel-wise and region-based baselines, especially in scenarios requiring contextual understanding. We also discuss current limitations related to dataset diversity, baseline complexity, and model generalization, and outline planned future improvements. Overall, our work advances the state of the art in visual change detection and provides a practical solution for enhancing the reliability and maintainability of evolving software interfaces.
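The idea of scoring UI-element correspondence by combining cues can be illustrated with a toy two-cue similarity. The weights, the IoU-based spatial term, and the dict representation of a detected control are assumptions for illustration; the paper's actual formulation is recursive and also incorporates structural neighbourhood information:

```python
# Toy similarity between two detected UI controls, combining a spatial cue
# (bounding-box IoU) and a textual cue. Weights and representation are
# illustrative assumptions, not the paper's exact method.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def control_similarity(c1, c2, w_spatial=0.5, w_text=0.5):
    """Each control is {'box': (x1, y1, x2, y2), 'text': str}."""
    spatial = iou(c1["box"], c2["box"])
    text = 1.0 if c1["text"] == c2["text"] else 0.0
    return w_spatial * spatial + w_text * text

old_btn = {"box": (0, 0, 10, 10), "text": "Save"}
new_btn = {"box": (0, 0, 10, 10), "text": "Save"}
print(control_similarity(old_btn, new_btn))  # identical control -> 1.0
```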

en cs.SE, cs.AI
arXiv Open Access 2024
Dirty-Waters: Detecting Software Supply Chain Smells

Raphina Liu, Sofia Bobadilla, Benoit Baudry et al.

Using open-source dependencies is essential in modern software development. However, this practice implies significant trust in third-party code, while there is little support for developers to assess this trust. As a consequence, attacks have been increasingly occurring through third-party dependencies. These are called software supply chain attacks. In this paper, we target the problem of projects that use dependencies while unaware of the potential risks posed by their software supply chain. We define the novel concept of software supply chain smell and present Dirty-Waters, a novel tool for detecting software supply chain smells. We evaluate Dirty-Waters on three JavaScript projects across nine versions and demonstrate the prevalence of all proposed software supply chain smells. Not only are there smells in all projects, but there are many of them, which immediately reveal potential risks and provide clear indicators for developers to act on the security of their supply chain.

en cs.SE, cs.CR
DOAJ Open Access 2023
EfficientRMT-Net—An Efficient ResNet-50 and Vision Transformers Approach for Classifying Potato Plant Leaf Diseases

Kashif Shaheed, Imran Qureshi, Fakhar Abbas et al.

The primary objective of this study is to develop an advanced, automated system for the early detection and classification of leaf diseases in potato plants, which are among the most cultivated vegetable crops worldwide. These diseases, notably early and late blight caused by <i>Alternaria solani</i> and <i>Phytophthora infestans</i>, significantly impact the quantity and quality of global potato production. We hypothesize that the integration of Vision Transformer (ViT) and ResNet-50 architectures in a new model, named EfficientRMT-Net, can effectively and accurately identify various potato leaf diseases. This approach aims to overcome the limitations of traditional methods, which are often labor-intensive, time-consuming, and prone to inaccuracies due to the unpredictability of disease presentation. EfficientRMT-Net leverages the CNN model for distinct feature extraction and employs depth-wise convolution (DWC) to reduce computational demands. A stage block structure is also incorporated to improve scalability and sensitive area detection, enhancing transferability across different datasets. The classification tasks are performed using a global average pooling layer and a fully connected layer. The model was trained, validated, and tested on custom datasets specifically curated for potato leaf disease detection. EfficientRMT-Net’s performance was compared with other deep learning and transfer learning techniques to establish its efficacy. Preliminary results show that EfficientRMT-Net achieves an accuracy of 97.65% on a general image dataset and 99.12% on a specialized Potato leaf image dataset, outperforming existing methods. The model demonstrates a high level of proficiency in correctly classifying and identifying potato leaf diseases, even in cases of distorted samples. 
The EfficientRMT-Net model provides an efficient and accurate solution for classifying potato plant leaf diseases, potentially enabling farmers to enhance crop yield while optimizing resource utilization. This study confirms our hypothesis, showcasing the effectiveness of combining ViT and ResNet-50 architectures in addressing complex agricultural challenges.
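The depthwise-convolution claim above can be quantified with generic parameter counting: a depthwise-separable convolution replaces one k×k standard convolution with a k×k depthwise pass plus a 1×1 pointwise pass, cutting parameters (and roughly the FLOPs) by close to a factor of k² for wide layers. The layer sizes below are generic examples, not EfficientRMT-Net's actual configuration:

```python
# Parameter counts for a standard convolution vs. a depthwise-separable one.
# Generic arithmetic, illustrative layer sizes.

def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights in a k x k depthwise pass plus a 1 x 1 pointwise pass."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 128, 128)                 # 147456
dws = depthwise_separable_params(3, 128, 128)  # 1152 + 16384 = 17536
print(std / dws)  # ~8.4x fewer parameters
```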

Chemical technology
DOAJ Open Access 2023
Lattice QCD Calculation and Optimization on ARM Processors

SUN Wei, BI Yujiang, CHENG Yaodong

Lattice quantum chromodynamics (lattice QCD) is one of the most important applications of large-scale parallel computing in high energy physics. Research in this field usually consumes a large amount of computing resources, and its core is solving large-scale sparse linear equations. Based on the domestic Kunpeng 920 ARM processor, this paper studies the hot spot of lattice QCD calculation, the Dslash operator, which is run on up to 64 nodes (6,144 cores) and shows linear scalability. Based on the roofline performance analysis model, we find that lattice QCD is a typical memory-bound application, and by using a symmetry-based compression of the 3×3 complex unitary matrices in Dslash, we can improve the performance of Dslash by 22%. For solving large-scale sparse linear equations, we also explore the usual Krylov subspace iterative algorithms such as BiCGStab and the newly developed state-of-the-art multigrid algorithm on the same ARM processor, and find that in practical physics calculations the multigrid algorithm is several times to an order of magnitude faster than BiCGStab, even including the multigrid setup time. Moreover, we apply the NEON vectorization instructions on Kunpeng 920, yielding up to a 20% improvement for the multigrid algorithm. Therefore, the use of the multigrid algorithm on ARM processors can speed up physics research tremendously.
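The memory-bound diagnosis follows directly from the roofline model: an application whose arithmetic intensity (flops per byte) falls below the machine's "ridge point" is limited by memory bandwidth, not peak compute. The numbers below are hypothetical, not measured Kunpeng 920 or Dslash figures:

```python
# Roofline-model arithmetic: attainable performance is the lesser of peak
# compute and bandwidth times arithmetic intensity. Illustrative numbers only.

def roofline_gflops(intensity_flops_per_byte, peak_gflops, bandwidth_gbs):
    """Attainable performance (GFLOP/s) under the roofline model."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

peak, bw = 1000.0, 200.0    # hypothetical peak GFLOP/s and memory GB/s
ridge = peak / bw           # intensity at which compute becomes the limit
kernel_intensity = 1.0      # flops/byte, well below the ridge point

print(ridge)                                       # 5.0 flops/byte
print(roofline_gflops(kernel_intensity, peak, bw)) # 200.0 -> memory bound
```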

Computer software, Technology (General)
DOAJ Open Access 2022
Specific absorption rate reduction for sub-6 frequency range using polarization dependent metamaterial with high effective medium ratio

Tayaallen Ramachandran, Mohammad Rashed Iqbal Faruque, Mohammad Tariqul Islam

Abstract This research study introduces a multi-layered square-shaped metamaterial (MSM) structure for electromagnetic (EM) absorption reduction in wireless mobile devices. Wireless devices such as cellular phones emit radiofrequency (RF) energy to their surroundings when in use; fast-growing wireless communication technologies that support cellular data networks have also motivated this study. Hence, the focus of the research was to reduce the Specific Absorption Rate (SAR) for the Sub-6 GHz frequency range by designing a multi-layered, compact, 10 × 10 mm2 metamaterial structure that can be attached inside a mobile phone while avoiding any overlap with existing parts. Overall, six distinct square-shaped metamaterials were constructed on 0.25 mm thick Rogers RO3006 substrate material to reach the target of this investigation. Furthermore, numerical simulations of the proposed metamaterial's electromagnetic properties and SAR reduction values were performed using Computer Simulation Technology (CST) Microwave Studio 2019 software. In these simulations, the proposed MSM structure exhibited multi-band resonance frequencies at 1.200, 1.458, 1.560, and 1.896 GHz (L-band), 2.268, 2.683, 2.940, and 3.580 GHz (S-band), and 5.872 GHz (C-band). The proposed MSM structure was also simulated in the High-Frequency Structure Simulator (HFSS) to validate the numerical simulation data. The comparison of simulation data shows that only the first and last resonance frequencies were reduced, by 0.02 and 0.012 GHz, whereas the remaining frequencies increased by 0.042, 0.030, 0.040, 0.032, 0.107, 0.080, and 0.020 GHz in sequential order. In addition, the introduced MSM structure manifests left-handed behaviour at all resonance frequencies. The highest recorded SAR reductions were 98.136% and 98.283% at 1.560 GHz for 1 g and 10 g of tissue volume, respectively.
In conclusion, the proposed MSM met the objectives of this research study and can be employed in EM absorption reduction applications.

Medicine, Science
DOAJ Open Access 2022
Survey of Directed Acyclic Graph Based Blockchain Technology

WANG Jinsong, YANG Weizheng, ZHAO Zening, WEI Jiajia

Blockchain technology has been widely used in finance, public services, the Internet of Things (IoT), network security, supply chains, and other fields. However, the traditional blockchain with a single-chain structure has some deficiencies in throughput, transaction confirmation speed, and scalability, which make it difficult to apply in short-term, high-concurrency data scenarios. Directed Acyclic Graph (DAG) based blockchain technology has therefore attracted extensive attention and study from scholars because of its advantages, such as concurrent transaction confirmation, high throughput, and strong scalability. By analyzing the development and evolution, evaluation methods, optimization directions, and application scenarios of existing DAG-based blockchains, this paper explores the feasibility of DAG-based blockchains in real-world applications. Tracing the development of mainstream DAG-based blockchains, it compares the advantages and disadvantages of traditional and DAG-based blockchains, analyzes existing blockchain attribute evaluation methods, and summarizes current DAG-based blockchain evaluation results. On this basis, the paper summarizes optimization methods for existing DAG-based blockchains in terms of transaction confirmation speed, system throughput, system security, and storage structure, and surveys applications of DAG-based blockchains in data management, data sharing based on edge computing and federated learning, and data security for access control and privacy protection. Finally, it points out the main problems and challenges in current studies and provides further research directions.

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2022
Integrating Healthcare Services Using Blockchain-Based Telehealth Framework

Narmeen Zakaria Bawany, Tehreem Qamar, Hira Tariq et al.

Blockchain technology (BT) has a wide range of built-in features, such as decentralization, transparency, data provenance, security, and immutability, and has moved beyond the hype to practical applications in the healthcare industry. Telehealth turns out to be the most efficient and effective way of dispensing healthcare services, even in remote areas. Though telehealth has proven potential to improve the quality of healthcare, its implementation and adoption remain far from ideal. There is a need for a telehealth system that not only ensures the privacy and security of its users but also provides authentic services that raise trust to the highest level. Existing applications leverage BT but provide limited telehealth services; because they focus on a few services, they do not cover all aspects of healthcare. To unveil the true potential of telehealth, this paper develops an effective telehealth framework, BlockHeal, which integrates all essential healthcare services under one platform and ensures a full-fledged trusted environment. The methodology includes a survey of existing telehealth systems and identification of their weaknesses and the critical reasons behind their lack of widespread adoption. The proposed framework addresses these limitations and includes all stakeholders of the healthcare system, establishing a consolidated platform to ensure authentic, safe, and timely healthcare facilities. Additionally, as the proposed framework is based on BT, it ensures the provision of secure, fault-tolerant, transparent, and tamper-proof data. Moreover, it offers decentralized storage via Hyperledger Fabric and a collection of decentralized applications (DApps). Finally, the effectiveness of the BlockHeal framework is validated by demonstrating several use cases.

Electrical engineering. Electronics. Nuclear engineering
DOAJ Open Access 2021
Cryptosystem Identification Scheme Combining Feature Selection and Ensemble Learning

WANG Xu, CHEN Yongle, WANG Qingsheng, CHEN Junjie

In ciphertext analysis, identifying the encryption algorithm is the prerequisite for further analysis of the ciphertext. Existing identification schemes are constructed in a single form and thus often fail to cope with the differences between cryptosystems when identifying multiple cryptosystems. To address this problem, this paper studies how different ciphertext features influence the performance of identification schemes, then combines the Relief feature selection algorithm and heterogeneous ensemble learning to propose a dynamic-feature identification scheme that can adapt to the scenario of multiple-cryptosystem identification. Experiments were carried out on ciphertext datasets generated by thirty-six encryption algorithms, and the results show that, compared with existing hierarchical cryptosystem identification schemes based on random forests, the proposed scheme increases identification accuracy by 6.41%, 10.03%, and 11.40%, respectively, in three different cryptosystem identification scenarios.
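The Relief step the abstract mentions scores each feature by how well it separates nearest same-class and different-class neighbours. A pure-Python sketch of that weight update (the ensemble-learning stage is omitted; distances are L1 over already-normalised features, and the dataset is a toy example):

```python
# Pure-Python sketch of the basic Relief weight update for feature selection.
# A feature's weight grows when nearest different-class neighbours differ on
# it and nearest same-class neighbours agree on it.

def relief_weights(X, y):
    """Return one relevance weight per feature (higher = more discriminative)."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for i in range(n):
        # nearest same-class (hit) and different-class (miss) neighbours
        hit = min((j for j in range(n) if j != i and y[j] == y[i]),
                  key=lambda j: sum(abs(X[i][k] - X[j][k]) for k in range(d)))
        miss = min((j for j in range(n) if y[j] != y[i]),
                   key=lambda j: sum(abs(X[i][k] - X[j][k]) for k in range(d)))
        for k in range(d):
            w[k] += (abs(X[i][k] - X[miss][k]) - abs(X[i][k] - X[hit][k])) / n
    return w

# Feature 0 separates the classes; feature 1 is noise, so it should score lower.
X = [[0.0, 0.3], [0.1, 0.7], [0.9, 0.4], [1.0, 0.6]]
y = [0, 0, 1, 1]
w = relief_weights(X, y)
print(w[0] > w[1])  # True
```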

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2021
A Greedy Scheduling Approach for Peripheral Mobile Intelligent Systems

Ghassan Fadlallah, Djamal Rebaine, Hamid Mcheick

Smart, pervasive devices have recently experienced accelerated technological development in the fields of hardware, software, and wireless connections. The promotion of various kinds of collaborative mobile computing requires an upgrade in network connectivity with wireless technologies, as well as enhanced peer-to-peer communication. Mobile computing also requires appropriate scheduling methods to speed up the implementation and processing of various computing applications by better managing network resources. Scheduling techniques are relevant to the modern architectural models that support the IoT paradigm, particularly smart collaborative mobile computing architectures at the network periphery. In this regard, load-balancing techniques have also become necessary to exploit all the available capabilities and thus increase the speed of execution. However, since the scheduling and load-balancing problem we addressed in this study is known to be NP-hard, a heuristic approach is well justified. We thus designed and validated a greedy scheduling and load-balancing algorithm to improve the utilization of resources. We conducted a comparison study with the longest cloudlet fastest processing (LCFP), shortest cloudlet fastest processing (SCFP), and Min-Min heuristic algorithms. The choice of those three algorithms is based on the efficiency and simplicity of their mechanisms, as reported in the literature, for allocating tasks to devices. The simulation we conducted showed the superiority of our approach over those algorithms with respect to the overall completion time criterion.
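A minimal greedy list-scheduling sketch in the spirit of the heuristic above: each task goes to the currently least-loaded device, balancing load and bounding the overall completion time (makespan). The paper's actual algorithm and its peripheral-device details are abstracted away; task times and device count below are toy values:

```python
# Greedy load balancing: assign each task to the least-loaded device.
# Illustrative sketch, not the paper's exact algorithm.

import heapq

def greedy_schedule(task_times, n_devices):
    """Return (makespan, per-device assignment lists)."""
    loads = [(0.0, d) for d in range(n_devices)]   # (current load, device id)
    heapq.heapify(loads)
    assignment = [[] for _ in range(n_devices)]
    # Sorting longest-first (LPT) tightens the makespan bound of plain greedy.
    for t in sorted(task_times, reverse=True):
        load, d = heapq.heappop(loads)
        assignment[d].append(t)
        heapq.heappush(loads, (load + t, d))
    return max(load for load, _ in loads), assignment

makespan, plan = greedy_schedule([7, 5, 4, 3, 2, 2], 2)
print(makespan)  # 12.0 on two devices (total work 23, so 11.5 is a lower bound)
```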

Computer software, Technology
DOAJ Open Access 2021
Searching Deterministic Chaotic Properties in System-Wide Vulnerability Datasets

Ioannis Tsantilis, Thomas K. Dasaklis, Christos Douligeris et al.

Cybersecurity is a never-ending battle against attackers, who try to identify and exploit misconfigurations and software vulnerabilities before they are patched. In this ongoing conflict, it is important to analyse the properties of vulnerability time series to understand when information systems are more vulnerable. We study computer systems’ software vulnerabilities and probe the relevant National Vulnerability Database (NVD) time-series properties. More specifically, we show through an extensive experimental study based on the National Institute of Standards and Technology (NIST) database that the relevant systems software time series present significant chaotic properties. Moreover, by defining some systems based on open and closed source software, we compare their chaotic properties, resulting in statistical conclusions. The contribution of this novel study is focused on the preprocessing stage of vulnerability time-series forecasting. The strong evidence of their chaotic properties derived by this research effort could lead to a deeper analysis to provide additional tools for their forecasting process.

Information technology
DOAJ Open Access 2021
AI applications in robotics, diagnostic image analysis and precision medicine: Current limitations, future trends, guidelines on CAD systems for medicine

Tetiana Habuza, Alramzana Nujum Navaz, Faiza Hashim et al.

Background: AI in medicine has been recognized by both academia and industry as revolutionizing how healthcare services will be offered by providers and perceived by all stakeholders. Objectives: We aim to review recent tendencies in building AI applications for medicine and to foster their further development by outlining obstacles. Sub-objectives: (1) to highlight AI techniques that we have identified as key areas of AI-related research in healthcare; (2) to offer guidelines on building reliable AI-based CAD systems for medicine; and (3) to reveal open research questions, challenges, and directions for future research. Methods: To address these tasks, we performed a systematic review of the references on the main branches of AI applications for medical purposes. We focused primarily on the limitations of the reviewed studies. Conclusions: This study provides a summary of AI-related research in healthcare, discusses the challenges, and proposes open research questions for further research. Robotics has taken huge leaps in improving healthcare services in a variety of medical sectors, including oncology and surgical interventions. In addition, robots are now replacing human assistants as they learn to become more sociable and reliable. However, there are challenges that must still be addressed to enable the use of medical robots in diagnostics and interventions. AI for medical imaging eliminates subjectivity in a visual diagnostic procedure and allows medical imaging to be combined with clinical data, lifestyle risks, and demographics. Disadvantages of AI solutions for radiology include both a lack of transparency and dedication to narrow diagnostic questions. Designing an optimal automatic classifier should incorporate both expert knowledge of a disease and state-of-the-art computer vision techniques. AI in precision medicine and oncology allows for risk stratification based on genomic aberrations discovered in molecular testing.
To summarize, AI cannot substitute for a medical doctor. However, medicine may benefit from robotics, CAD systems, and an AI-based personalized approach.

Computer applications to medicine. Medical informatics
arXiv Open Access 2021
Qualitative Research on Software Development: A Longitudinal Case Study Methodology

Laurie McLeod, Stephen G. MacDonell, Bill Doolin

This paper reports the use of a qualitative methodology for conducting longitudinal case study research on software development. We provide a detailed description and explanation of appropriate methods of qualitative data collection and analysis that can be utilized by other researchers in the software engineering field. Our aim is to illustrate the utility of longitudinal case study research, as a complement to existing methodologies for studying software development, so as to enable the community to develop a fuller and richer understanding of this complex, multi-dimensional phenomenon. We discuss the insights gained and lessons learned from applying a longitudinal qualitative approach to an empirical case study of a software development project in a large multi-national organization. We evaluate the methodology used to emphasize its strengths and to address the criticisms traditionally made of qualitative research.

arXiv Open Access 2021
Proceedings of the 9th International Symposium on Symbolic Computation in Software Science

Temur Kutsia

This volume contains papers presented at the Ninth International Symposium on Symbolic Computation in Software Science, SCSS 2021. Symbolic Computation is the science of computing with symbolic objects (terms, formulae, programs, representations of algebraic objects, etc.). Powerful algorithms have been developed during the past decades for the major subareas of symbolic computation: computer algebra and computational logic. These algorithms and methods are successfully applied in various fields, including software science, which covers a broad range of topics about software construction and analysis. Meanwhile, artificial intelligence methods and machine learning algorithms are widely used nowadays in various domains and, in particular, combined with symbolic computation. Several approaches mix artificial intelligence and symbolic methods and tools deployed over large corpora to create what is known as cognitive systems. Cognitive computing focuses on building systems that interact with humans naturally by reasoning, aiming at learning at scale. The purpose of SCSS is to promote research on theoretical and practical aspects of symbolic computation in software science, combined with modern artificial intelligence techniques. These proceedings contain the keynote paper by Bruno Buchberger and ten contributed papers. Besides, the conference program included three invited talks, nine short and work-in-progress papers, and a special session on computer algebra and computational logic. Due to the COVID-19 pandemic, the symposium was held completely online. It was organized by the Research Institute for Symbolic Computation (RISC) of the Johannes Kepler University Linz on September 8--10, 2021.

en cs.SC, cs.AI

Page 39 of 407,616