Results for "Computer software"

Showing 20 of ~8,152,338 results · from DOAJ, CrossRef, Semantic Scholar, arXiv

DOAJ Open Access 2025
Yacht design in the era of digital transition

Lucica Iconaru, Carmen Gasparotti

The design of ships has changed dramatically since the 1970s. We have shifted from manual drafting to digital tools and computers, mostly because computer technology has greatly improved. Nowadays, with the growth of smart digitalization in Industry 4.0, using modern digital software and tools makes ship design more efficient and enhances its quality throughout a ship's entire lifespan. However, this shift has also made operations more complex and requires users of the software to have more specialized training. Today, technologies like automated optimization, simulation-based design, managing the entire product lifecycle, digital twins, and artificial intelligence are commonly used in the shipping industry. These technologies are applied during both the design and construction phases, as well as in preparing and inspecting ships. This paper reviews major advances in these areas and discusses how the industry can address current and future challenges.

Ocean engineering, Naval architecture. Shipbuilding. Marine engineering
DOAJ Open Access 2025
Information Technology and Its Impact on Modern Classroom Dynamics: A Computer Science Perspective

Kusnadi Kusnadi, Muhammad Hatta, Goenawan Brotosaputro et al.

The integration of Information Technology (IT) has significantly transformed modern education, particularly classroom dynamics, by enhancing accessibility to information and enabling personalized learning experiences. This paper aims to explore both the positive impacts and challenges of IT adoption in the classroom, focusing on the importance of Computer Science in shaping effective teaching practices. The study analyzes tools like Learning Management Systems (LMS), simulation software, and data analysis platforms, which improve engagement between students and teachers but also highlight challenges such as the digital divide and less interactive learning. Understanding fundamental Computer Science concepts, including algorithms, programming, and networking, is key to developing innovative solutions that enhance classroom learning. The results show that while IT has revolutionized education by facilitating online learning and collaboration, it also presents challenges that must be addressed, such as access to resources and the need for more interactive learning experiences. To optimize IT’s impact, the paper recommends continuous teacher training, better integration of technology with curricula, and improved access to devices and internet connectivity, ensuring a more inclusive and innovative learning environment in the digital age.

Industries. Land use. Labor, Commerce
arXiv Open Access 2025
The Road to Hybrid Quantum Programs: Characterizing the Evolution from Classical to Hybrid Quantum Software

Vincenzo De Maio, Ivona Brandic, Ewa Deelman et al.

Quantum computing exhibits the unique capability to natively and efficiently encode various natural phenomena, promising theoretical speedups of several orders of magnitude. However, not all computational tasks can be efficiently executed on quantum machines, giving rise to hybrid systems, where some portions of an application run on classical machines, while others utilize quantum resources. Efforts to identify quantum candidate code fragments that can meaningfully execute on quantum machines primarily rely on static code analysis. Yet, the state-of-the-art in static code analysis for quantum candidates remains in its infancy, with limited applicability to specific frameworks and languages, and a lack of generalizability. Existing methods often involve a trial-and-error approach, relying on the intuition and expertise of computer scientists, resulting in varying identification durations ranging from minutes to days for a single application. This paper aims to systematically formalize the process of identifying quantum candidates and their proper encoding within classical programs. Our work addresses the critical initial step in the development of automated reasoning techniques for code-to-code translation, laying the foundation for more efficient quantum software engineering. Particularly, this study investigates a sociotechnical phenomenon where the starting point is not a problem directly solvable with QC, but rather an existing classical program that addresses the problem. In doing so, it underscores the interdisciplinary nature of QC application development, necessitating collaboration between domain experts, computer scientists, and physicists to harness the potential of quantum computing effectively.

en cs.SE
arXiv Open Access 2025
Manifestations of Empathy in Software Engineering: How, Why, and When It Matters

Hashini Gunatilake, John Grundy, Rashina Hoda et al.

Empathy plays a crucial role in software engineering (SE), influencing collaboration, communication, and decision-making. While prior research has highlighted the importance of empathy in SE, there is limited understanding of how empathy manifests in SE practice, what motivates SE practitioners to demonstrate empathy, and the factors that influence empathy in SE work. Our study explores these aspects through 22 interviews and a large-scale survey with 116 software practitioners. Our findings provide insights into the expression of empathy in SE, the drivers behind empathetic practices, SE activities where empathy is perceived as useful or not, and the other factors that influence empathy. In addition, we offer practical implications for SE practitioners and researchers, offering a deeper understanding of how to effectively integrate empathy into SE processes.

en cs.SE
arXiv Open Access 2025
Software Bills of Materials in Maven Central

Yogya Gamage, Nadia Gonzalez Fernandez, Martin Monperrus et al.

Software Bills of Materials (SBOMs) are essential to ensure the transparency and integrity of the software supply chain. There is a growing body of work that investigates the accuracy of SBOM generation tools and the challenges of producing complete SBOMs. Yet, there is little knowledge about how developers distribute SBOMs. In this work, we mine SBOMs from Maven Central to assess the extent to which developers publish SBOMs along with their artifacts. We develop our work on top of the Goblin framework, which consists of a Maven Central dependency graph and a Weaver that allows augmenting the dependency graph with additional data. For this study, we selected a sample of 10% of the release nodes from the Maven Central dependency graph and collected 14,071 SBOMs from 7,290 package releases. We then augmented the Maven Central dependency graph with the collected SBOMs. We present our methodology to mine SBOMs, as well as novel insights about SBOM publication. Our dataset is the first set of SBOMs collected from a package registry. We make it available as a standalone dataset, which can be used for future research on SBOMs and package distribution.
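SBOMs published on Maven Central typically follow the CycloneDX or SPDX formats. As a rough illustration of what mining one involves (not the paper's actual pipeline), the sketch below parses a minimal CycloneDX-style JSON document with Python's standard library; the sample document and the `component_coordinates` helper are illustrative, not taken from the study's dataset:

```python
import json

# Minimal CycloneDX-style SBOM (illustrative sample, not from the paper's dataset).
sbom_text = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"group": "org.apache.commons", "name": "commons-lang3", "version": "3.12.0"},
    {"group": "com.google.guava", "name": "guava", "version": "31.1-jre"}
  ]
}
"""

def component_coordinates(sbom_json: str) -> list:
    """Return Maven-style group:artifact:version coordinates listed in the SBOM."""
    bom = json.loads(sbom_json)
    return [
        f"{c.get('group', '')}:{c['name']}:{c.get('version', '?')}"
        for c in bom.get("components", [])
    ]

print(component_coordinates(sbom_text))
# ['org.apache.commons:commons-lang3:3.12.0', 'com.google.guava:guava:31.1-jre']
```

At the scale of the study the extraction is driven through the Goblin dependency graph, but the per-document step reduces to field lookups like these.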

arXiv Open Access 2025
Causes and Canonicalization of Unreproducible Builds in Java

Aman Sharma, Benoit Baudry, Martin Monperrus

The increasing complexity of software supply chains and the rise of supply chain attacks have elevated concerns around software integrity. Users and stakeholders face significant challenges in validating that a given software artifact corresponds to its declared source. Reproducible Builds address this challenge by ensuring that independently performed builds from identical source code produce identical binaries. However, achieving reproducibility at scale remains difficult, especially in Java, due to a range of non-deterministic factors and caveats in the build process. In this work, we focus on reproducibility in Java-based software, archetypal of enterprise applications. We introduce a conceptual framework for reproducible builds, we analyze a large dataset from Reproducible Central, and we develop a novel taxonomy of six root causes of unreproducibility. We study actionable mitigations: artifact and bytecode canonicalization using OSS-Rebuild and jNorm, respectively. Finally, we present Chains-Rebuild, a tool that achieves successful canonicalization for 26.60% of 12,803 unreproducible artifacts. To sum up, our contributions are the first large-scale taxonomy of build unreproducibility causes in Java, a publicly available dataset of unreproducible builds, and Chains-Rebuild, a canonicalization tool for mitigating unreproducible builds in Java.
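The core check behind Reproducible Builds fits in a few lines: two independently produced binaries must be bit-identical, which is typically verified by comparing cryptographic digests. A minimal sketch under that definition (the byte strings stand in for real build artifacts; this is not the paper's Chains-Rebuild tool):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def is_reproducible(build_a: bytes, build_b: bytes) -> bool:
    """Two independent builds are reproducible iff their outputs are bit-identical."""
    return sha256_digest(build_a) == sha256_digest(build_b)

# An embedded build timestamp is a classic source of unreproducibility:
artifact_v1 = b"classfile-bytes|built: 2024-01-01T00:00:00Z"
artifact_v2 = b"classfile-bytes|built: 2024-01-02T09:30:15Z"
print(is_reproducible(artifact_v1, artifact_v1))  # True
print(is_reproducible(artifact_v1, artifact_v2))  # False: only the timestamp differs
```

Canonicalization tools normalize exactly such accidental differences (timestamps, file ordering, metadata) before comparison, so that semantically identical builds hash equal.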

DOAJ Open Access 2024
Hybrid Filtering Method for Multisource Point Cloud Data of Maglev Tracks

ZHANG Yuxin, ZHANG Lei, OU Dongxiu

In the simulation data processing of maglev tracks, the filtering and extraction of maglev track point cloud data is an important step. Thus, practical applications should adopt an efficient filtering method suited to the characteristics of the maglev data to be extracted. The point cloud data objects of the maglev track primarily include the image data of the maglev track, obtained by Unmanned Aerial Vehicle (UAV) oblique photography and formed into dense point cloud data after 3D reconstruction, and the laser point cloud data, obtained by handheld lidar scanning of the maglev track. Based on the characteristics of these point clouds and considering the complex scenes around the maglev track, the two types of point clouds are mixed and filtered. First, the octree downsampling method is applied to the laser point cloud data, which effectively reduces the order of magnitude of the point cloud data and saves running time. The Cloth Simulation Filtering (CSF) method is then used on the laser point cloud and dense point cloud data, respectively, to filter out the ground plane point cloud and retain the non-ground point cloud data. A Statistical Outlier Removal (SOR) filtering method is used to screen out large numbers of outliers. Based on the characteristics of the maglev track, point clouds outside the coordinate range are removed through pass-through filtering. Without changing the structure of the maglev track, the experimental results show that the filtering rates of the proposed method are 86.15% and 64.76% for the octree-downsampled laser point cloud data and the dense point cloud data without octree downsampling, respectively. After hybrid filtering, the two point cloud datasets have similar structural ranges and point counts of the same order of magnitude, which supports downstream methods such as feature extraction from maglev track point clouds.
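Of the steps above, Statistical Outlier Removal is the easiest to illustrate compactly. A minimal NumPy sketch of SOR under common assumptions (mean distance to the k nearest neighbours, thresholded at the cloud-wide mean of that statistic plus n_std standard deviations; the random cloud stands in for real maglev track data):

```python
import numpy as np

def sor_filter(points: np.ndarray, k: int = 8, n_std: float = 1.0) -> np.ndarray:
    """Statistical Outlier Removal: drop points whose mean distance to their
    k nearest neighbours exceeds the cloud-wide average of that statistic
    by more than n_std standard deviations."""
    # Full pairwise Euclidean distances (fine for small clouds; use a KD-tree at scale).
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)  # a point is not its own neighbour
    # Mean distance from each point to its k nearest neighbours.
    knn_mean = np.sort(dists, axis=1)[:, :k].mean(axis=1)
    threshold = knn_mean.mean() + n_std * knn_mean.std()
    return points[knn_mean <= threshold]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))              # dense cluster of inliers
outliers = rng.normal(size=(5, 3)) * 20 + 50   # far-away noise points
filtered = sor_filter(np.vstack([cloud, outliers]))
print(len(filtered))  # the 5 distant outliers are removed
```

Production pipelines would compute the neighbour search with a KD-tree (as in PCL or Open3D) rather than the quadratic distance matrix used here for brevity.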

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2024
Nanofluid cooling of a hot rotating circular cylinder employing cross-flow channel cooling on the upper part and multi-jet impingement cooling on the lower part

Fatih Selimefendigil, Samia Larguech, Kaouther Ghachem et al.

This study explores the convective cooling features of a hot rotating cylinder by using the combined utilization of cross-flow on the upper part and multi-jet impingement on the bottom part. The analysis is performed for a range of jet Reynolds number (Re) values (between 100 and 500), cross-flow Re values (between 100 and 1000), rotational Re values (between −1000 and 1000), cylinder size (between 0.25wj and 3wj in radius), and center placement in the y direction (between −1.5wj and 1.5wj). When the cylinder is not rotating, the average Nu increment becomes 102% at the highest jet Re, while it becomes 140.82% at the highest cross-flow Re. When rotation becomes active, the impacts of cross-flow and jet impingement cooling become slight. As compared to a motionless cylinder, at the highest speed of the rotating cylinder, the average Nu rises by about 357% to 391%. For clockwise rotation of the cylinder, a larger cylinder results in an increase in the average Nu by about 86.3%. At the lowest and highest cross-flow and impinging-jet Re value combinations, the cooling performance improvement becomes a factor of 8.1 and 2, respectively. When the size of the cylinder changes, entropy generation becomes significant, while the vertical location of the cylinder has only a slight impact on entropy generation.

arXiv Open Access 2024
Research Artifacts in Software Engineering Publications: Status and Trends

Mugeng Liu, Xiaolong Huang, Wei He et al.

The Software Engineering (SE) community has been embracing the open science policy and encouraging researchers to disclose artifacts in their publications. However, the status and trends of artifact practice and quality remain unclear, lacking insights on further improvement. In this paper, we present an empirical study to characterize the research artifacts in SE publications. Specifically, we manually collect 1,487 artifacts from all 2,196 papers published in top-tier SE conferences (ASE, FSE, ICSE, and ISSTA) from 2017 to 2022. We investigate the common practices (e.g., URL location and format, storage websites), maintenance activities (e.g., last update time and URL validity), popularity (e.g., the number of stars on GitHub and characteristics), and quality (e.g., documentation and code smell) of these artifacts. Based on our analysis, we reveal a rise in publications providing artifacts. The usage of Zenodo for sharing artifacts has significantly increased. However, artifacts stored in GitHub tend to receive few stars, indicating a limited influence on real-world SE applications. We summarize the results and provide suggestions to different stakeholders in conjunction with current guidelines.

en cs.SE
arXiv Open Access 2023
Documentation Practices in Agile Software Development: A Systematic Literature Review

Md Athikul Islam, Rizbanul Hasan, Nasir U. Eisty

Context: Agile development methodologies in the software industry have increased significantly over the past decade. Although one of the main aspects of agile software development (ASD) is less documentation, there have always been conflicting opinions about what to document in ASD. Objective: This study aims to systematically identify what to document in ASD, which documentation tools and methods are in use, and how those tools can overcome documentation challenges. Method: We performed a systematic literature review of the studies published between 2010 and June 2021 that discusses agile documentation. Then, we systematically selected a pool of 74 studies using particular inclusion and exclusion criteria. After that, we conducted a quantitative and qualitative analysis using the data extracted from these studies. Results: We found nine primary vital factors to add to agile documentation from our pool of studies. Our analysis shows that agile practitioners have primarily developed their documentation tools and methods focusing on these factors. The results suggest that the tools and techniques in agile documentation are not in sync, and they separately solve different challenges. Conclusions: Based on our results and discussion, researchers and practitioners will better understand how current agile documentation tools and practices perform. In addition, investigation of the synchronization of these tools will be helpful in future research and development.

en cs.SE
arXiv Open Access 2023
Instance Space Analysis of Search-Based Software Testing

Neelofar Neelofar, Kate Smith-Miles, Mario Andres Munoz et al.

Search-based software testing (SBST) is now a mature area, with numerous techniques developed to tackle the challenging task of software testing. SBST techniques have shown promising results and have been successfully applied in the industry to automatically generate test cases for large and complex software systems. Their effectiveness, however, is problem-dependent. In this paper, we revisit the problem of objective performance evaluation of SBST techniques considering recent methodological advances -- in the form of Instance Space Analysis (ISA) -- enabling the strengths and weaknesses of SBST techniques to be visualized and assessed across the broadest possible space of problem instances (software classes) from common benchmark datasets. We identify features of SBST problems that explain why a particular instance is hard for an SBST technique, reveal areas of hard and easy problems in the instance space of existing benchmark datasets, and identify the strengths and weaknesses of state-of-the-art SBST techniques. In addition, we examine the diversity and quality of common benchmark datasets used in experimental evaluations.

DOAJ Open Access 2022
Hyperspectral Image Classification—Traditional to Deep Models: A Survey for Future Prospects

Muhammad Ahmad, Sidrah Shabbir, Swalpa Kumar Roy et al.

Hyperspectral imaging (HSI) has been extensively utilized in many real-life applications because it benefits from the detailed spectral information contained in each pixel. Notably, the complex characteristics of HSI data, i.e., the nonlinear relation between the captured spectral information and the corresponding object, make accurate classification challenging for traditional methods. In the last few years, deep learning (DL) has been established as a powerful feature extractor that effectively addresses the nonlinear problems appearing in a number of computer vision tasks. This has prompted the deployment of DL for HSI classification (HSIC), which has shown good performance. This survey presents a systematic overview of DL for HSIC and compares state-of-the-art strategies on the topic. First, we encapsulate the main challenges of traditional machine learning (TML) for HSIC and then introduce the advantages of DL in addressing these problems. The article breaks down state-of-the-art DL frameworks into spectral-feature, spatial-feature, and joint spatial-spectral-feature approaches to systematically analyze their achievements, as well as future research directions, for HSIC. Moreover, we consider the fact that DL requires a large number of labeled training examples, whereas acquiring such a number for HSIC is challenging in terms of time and cost. Therefore, this survey discusses strategies to improve the generalization performance of DL approaches, which can provide some future guidelines.

Ocean engineering, Geophysics. Cosmic physics
DOAJ Open Access 2022
People-centeredness and Community Engagement based on "Each Home as a Health Post" initiative to Control COVID-19 in I.R. Iran: The Fourth Phase of National Mobilization against COVID-19

Ardeshir Khosravi, Elham Rashidian, Alireza Raeisi et al.

Background. In December 2019, a new disease was reported in China that spread rapidly worldwide. This disease is called COVID-19, a viral infection of the coronavirus family. COVID-19 has caused health, social, and economic problems around the world. In Iran, the first cases of the disease were reported in February 2020. This article aims to describe the results of the fourth step of the National Mobilization Plan against the COVID-19 pandemic. Methods. The information used in this cross-sectional descriptive study is based on the data recorded in the computer program (Portal) of the Network Management Center of the Ministry of Health and Medical Education. The fourth step was devised to manage and control the COVID-19 pandemic with public participation and coordination between departments. It was formed of four teams: contact tracing, home care, supervisory, and support teams. Excel 2016 software was used to analyze the study's data, and ArcMap software version 10.8 was applied to draw thermal maps. Results. In this study, there were 3.2 members per contact tracing team. This number was 3.2, 2.9, and 3.8 people per team for supportive, home care, and supervisory teams, respectively. On average, the contact tracing teams tracked 135.9 cases per team; the figures were 518.6 visits per team for supervisory teams, 75.3 for home care teams, and 52.2 households for support teams. During the program's implementation, 3,065.3 PCR tests and 3,596.7 rapid tests were taken per 100,000 population, of which 15.5% were positive. The average number of traced contacts among people in close contact with infected individuals was 4.87 per patient with a positive test. Conclusion. The COVID-19 pandemic challenged all political, economic, social, and health policies and exposed the weaknesses of governments. According to the statistics and information, the measures taken to manage and control COVID-19 in Iran have been valuable and effective. Still, the continuation of this process depends on the persistence of policies and protocols by the government and people in society.

Medicine (General)
DOAJ Open Access 2022
Task Migration Policy for Thermal-Aware Dynamic Performance Optimization in Many-Core Systems

Behnaz Pourmohseni, Stefan Wildermann, Fedor Smirnov et al.

The steady downsizing of semiconductor technology nodes in recent years has led to a rapid increase in the density of power consumption on chips which, in turn, renders temperature a major issue for many-core systems, adversely affecting their performance, reliability, leakage, cost, etc. In this context, task migration is a powerful technique that is widely used for controlling the temperature profile of many-core systems under dynamic workloads with the goal to improve their performance, utilization, reliability, etc. In this paper, we present a task migration policy for thermal-aware performance optimization in heterogeneous many-core systems. The proposed policy is developed based on an analytical and thermally safe power-budgeting scheme and uses Dynamic Voltage and Frequency Scaling (DVFS) for power and thermal management of the system. Our migration policy aims at maximizing the system's performance and, at the same time, proactively enforcing thermal safety using DVFS. To that end, it iteratively adapts the distribution of active cores in the system (through proper migration decisions) to maximize the thermally safe power budget of active cores and, thereby, enable them to operate on higher frequencies without violating their safe thermal threshold. Experimental results demonstrate that the proposed policy offers 2× higher performance gain in comparison to existing approaches which aim at greedily reducing the average, variance, or gradient of temperature as an indirect means to enhance performance.

Electrical engineering. Electronics. Nuclear engineering
DOAJ Open Access 2022
Semantic Information Enhanced Network Embedding with Completely Imbalanced Labels

FU Kun, GUO Yun-peng, ZHUO Jia-ming, LI Jia-ning, LIU Qi

The problem of data incompleteness has become intractable for network representation learning (NRL) methods, causing existing NRL algorithms to fall short of the expected results. Although numerous efforts have been made to solve the issue, most previous methods mainly focus on the lack of label information and rarely consider the data imbalance phenomenon, especially the completely imbalanced case in which the labels of a certain class are entirely missing. Learning algorithms for such problems are still being explored; for example, some neighborhood feature aggregation processes prefer to focus on network structure information while disregarding the relationships between attribute features and semantic features, whose utilization could enhance representation results. To address these problems, this paper proposes a semantic information enhanced network embedding with completely imbalanced labels (SECT) method that combines attribute features and structural features. First, SECT introduces an attention mechanism in supervised learning to obtain the semantic information vector while considering the relationship between the attribute space and the semantic space. Second, a variational autoencoder is applied to extract structural features in an unsupervised mode to enhance the robustness of the algorithm. Finally, both semantic and structural information are integrated in the embedding space. Compared with two state-of-the-art algorithms, the node classification results on the public datasets Cora and Citeseer indicate that the network vectors obtained by the SECT algorithm outperform the others, improving Micro-F1 by 0.86% to 1.97%. The node visualization results likewise show that, compared with other algorithms, the vectors obtained by SECT yield larger distances between different-class clusters, more compact same-class clusters, and more obvious class boundaries. All these experimental results demonstrate the effectiveness of SECT, which mainly benefits from a better fusion of semantic information in the low-dimensional embedding space, greatly improving the performance of node classification tasks under completely imbalanced labels.

Computer software, Technology (General)
DOAJ Open Access 2022
Universal Quantum Computing with Twist-Free and Temporally Encoded Lattice Surgery

Christopher Chamberland, Earl T. Campbell

Lattice-surgery protocols allow for the efficient implementation of universal gate sets with two-dimensional topological codes where qubits are constrained to interact with one another locally. In this work, we first introduce a decoder capable of correcting spacelike and timelike errors during lattice-surgery protocols. Subsequently, we compute the logical failure rates of a lattice-surgery protocol for a biased circuit-level noise model. We then provide a protocol for performing twist-free lattice surgery, where we avoid twist defects in the bulk of the lattice. Our twist-free protocol eliminates the extra circuit components and gate-scheduling complexities associated with the measurement of higher weight stabilizers when using twist defects. We also provide a protocol for temporally encoded lattice surgery that can be used to reduce both the run times and the total space-time costs of quantum algorithms. Lastly, we propose a layout for a quantum processor that is more efficient for rectangular surface codes exploiting noise bias and that is compatible with the other techniques mentioned above.

Physics, Computer software
DOAJ Open Access 2022
Complex Network Community Detection Algorithm Based on Node Similarity and Network Embedding

YANG Xu-hua, WANG Lei, YE Lei, ZHANG Duan, ZHOU Yan-bo, LONG Hai-xia

The community detection algorithm is very important for analyzing the topology and hierarchical structure of complex networks and predicting their evolution trends. Traditional community detection algorithms often lack accuracy and ignore the importance of network embedding. Aiming at such problems, a parameter-free community detection algorithm based on node similarity and the Node2Vec network embedding method is proposed. First, the Node2Vec method maps network nodes into data points represented by low-dimensional vectors in Euclidean space; the cosine similarity between these data points is calculated, a preference network is constructed according to the maximum similarity between the corresponding nodes, an initial community division is obtained, and the maximum-degree node of each initial community is used as a candidate node. Then the central nodes are found among the candidate nodes according to the average degree of the network and the average shortest path. Finally, the data points corresponding to the central nodes, and their number, are used as the initial centroids and cluster count; the low-dimensional vectors are clustered with the K-Means algorithm, and the corresponding network nodes are divided into communities. This is a parameter-free community division method that extracts parameters from the network itself, without setting different hyper-parameters for different networks, so it can automatically and quickly identify the community structure of complex networks. On 8 real and artificial networks, numerical simulation experiments comparing the proposed algorithm with 5 well-known community detection algorithms show that it achieves a good community detection effect.
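The final clustering step of the pipeline described above can be sketched with plain Lloyd's K-Means seeded from chosen central nodes. In the sketch below, random two-cluster data stands in for Node2Vec output and the seed indices play the role of the paper's degree-based central nodes (all names and data are illustrative, not from the paper):

```python
import numpy as np

def kmeans(points: np.ndarray, centroids: np.ndarray, iters: int = 50) -> np.ndarray:
    """Lloyd's K-Means: assign each embedded node to its nearest centroid,
    recompute centroids, and repeat until the centroids stop moving."""
    for _ in range(iters):
        # Distance of every point to every current centroid.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        new_centroids = np.array([points[labels == j].mean(axis=0)
                                  for j in range(len(centroids))])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels

rng = np.random.default_rng(1)
# Stand-in for Node2Vec embeddings of two communities of 30 nodes each.
emb = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
# Seeding from one member of each community mimics centroid selection
# from central nodes; the cluster count is the number of seeds.
labels = kmeans(emb, emb[[0, 30]])
print(labels[:30].tolist(), labels[30:].tolist())
```

Seeding the centroids from structurally central nodes, as the paper does, removes K-Means's usual sensitivity to random initialization and fixes the number of clusters without a user-supplied parameter.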

Computer software, Technology (General)
DOAJ Open Access 2022
Defense Method Against Code Reuse Attack Based on Real-time Code Loading and Unloading

HOU Shang-wen, HUANG Jian-jun, LIANG Bin, YOU Wei, SHI Wen-chang

In recent years, code reuse attacks have become a mainstream class of attacks against binary programs. Code reuse attacks such as ROP use the instruction gadgets in the memory space to construct an instruction sequence that realizes specific functions and achieves malicious purposes. Based on the basic principle of code reuse attacks, this paper proposes a defense method based on real-time function loading and unloading. More specifically, the method shrinks the code space through dynamic loading and unloading to reduce the attack surface and defend against code reuse. First, it extracts sufficient function information from the dependent libraries of the target program by static analysis and uses this information in the form of replacement libraries. Second, it introduces real-time loading into the dynamic loader on Linux and proposes an auto-triggerable and auto-restorable loading/unloading scheme. To reduce the high overhead caused by frequent unloading, a randomized batch unloading mechanism is designed. Finally, experiments carried out in a real environment verify the effectiveness of the scheme against code reuse attacks and demonstrate the significance of the randomized unloading strategy.

Computer software, Technology (General)
arXiv Open Access 2022
A Systematic Literature Review of Soft Computing Techniques for Software Maintainability Prediction: State-of-the-Art, Challenges and Future Directions

Gokul Yenduri, Thippa Reddy Gadekallu

Software is changing rapidly with the invention of advanced technologies and methodologies. The ability to rapidly and successfully upgrade software in response to changing business requirements is more vital than ever. For the long-term management of software products, measuring software maintainability is crucial. The use of soft computing techniques for software maintainability prediction has shown immense promise in the software maintenance process by providing accurate predictions of software maintainability. To better understand the role of soft computing techniques in software maintainability prediction, we provide a systematic literature review of soft computing techniques for software maintainability prediction. Firstly, we provide a detailed overview of software maintainability. Following this, we explore the fundamentals of software maintainability and the reasons for adopting soft computing methodologies for predicting software maintainability. Later, we examine the soft computing approaches employed in the process of software maintainability prediction. Furthermore, we discuss the difficulties and potential solutions associated with the use of soft computing techniques to predict software maintainability. Finally, we conclude the review with some promising future directions to drive further research innovations and developments in this promising area.

en cs.SE, cs.AI

Page 42 of 407,617