G. Coulouris, J. Dollimore
Results for "Computer software"
Showing 20 of ~8,152,153 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
M. Baker, G. Cerniglia, Aziza Zaman
J. Brooks
D. Lea
Debashis Saha, A. Mukherjee
Pervasive computing promises to make life simpler via digital environments that sense, adapt, and respond to human needs. Yet we still view computers as machines that run programs in a virtual environment. Pervasive computing presumes a different vision. A device can be a portal into an application-data space, not just a repository of custom software a user must manage. An application is a means by which a user performs a task, not software written to exploit a device's capabilities. And a computing environment is an information-enhanced physical space, not a virtual environment that exists to store and run software. Pervasive computing is close to technical and economic viability.
X. Xia
L. Padgham, M. Winikoff
Doug Johnson
A. Alessio, A. Bemporad
H. Lieberman, F. Paternò, Markus Klann et al.
LI Yongjun, ZHU Yuefei, WU Wei, BAI Lifang
Existing dummy-location selection methods for snapshot location privacy protection in LBS ignore the background-knowledge attacks enabled by the time attributes of locations themselves and treat all sensitive locations equally. To address this, a Multi-Factor Dummy Location Selection algorithm (MFDLS) is proposed, which comprehensively considers the factors that affect privacy leakage, including background knowledge such as the geographical, semantic, and time attributes of locations and their query probability, as well as users' sensitivity preferences. To ensure that the selected dummy locations can not only effectively resist location homogeneity attacks, location semantic attacks, and query-probability distribution attacks, but also withstand threats such as location distribution attacks, sensitive homogeneity attacks, and link attacks, the algorithm selects dummy locations whose query probability is close to that at the initiating time, whose semantics are diverse, whose anonymous region is large, whose time attributes are relatively consistent, and which are non-outliers and central points. Security analysis and simulation results show that, compared with existing dummy-location selection algorithms, the proposed algorithm improves the adversary's error by at least 16% and reduces quality loss by at least 30%, more effectively resisting background-knowledge attacks and meeting users' privacy requirements.
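The paper does not give its scoring formula, but the selection idea it describes can be sketched as a multi-factor ranking of candidate locations. In the sketch below, the field names, the equal weighting of factors, and the distance cap are all illustrative assumptions, not the MFDLS algorithm itself:

```python
import math

def score_candidate(cand, real):
    """Score a dummy-location candidate on factors like those MFDLS weighs.
    All field names and the equal weighting are illustrative assumptions."""
    # Query probability should be close to the real location's at query time.
    prob_closeness = 1.0 - abs(cand["query_prob"] - real["query_prob"])
    # Semantic diversity: prefer candidates whose semantic tag differs.
    semantic_div = 0.0 if cand["semantic"] == real["semantic"] else 1.0
    # Spatial spread: distance from the real location enlarges the anonymous
    # region (capped so extreme outliers are not over-rewarded).
    spread = min(math.hypot(cand["x"] - real["x"], cand["y"] - real["y"]) / 10.0, 1.0)
    return prob_closeness + semantic_div + spread

def select_dummies(real, candidates, k):
    """Pick the k highest-scoring dummy locations."""
    return sorted(candidates, key=lambda c: score_candidate(c, real), reverse=True)[:k]
```

A real implementation would add the time-attribute and outlier/centrality checks the abstract lists as further scored factors.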
Jiandan Zhong, Lingfeng Liu, Fei Song et al.
Ship orientation detection is essential for maritime navigation, traffic monitoring, and defense, yet existing methods face challenges with rotational invariance in large-angle scenarios, difficulties in multi-scale feature fusion, and the limitations of traditional IoU when detecting oriented objects and predicting objects’ orientation. In this article, we propose a novel ship orientation detection (RACR-ShipDet) network based on rotation-adaptive ConvNeXt and Enhanced RepBiFPAN in remote sensing images. To equip the model with rotational invariance, ConvNeXt is first improved so that it can dynamically adjust the rotation angle and convolution kernel through adaptive rotation convolution, namely, ARRConv, forming a new architecture called RotConvNeXt. Subsequently, the RepBiFPAN, enhanced with the Weighted Feature Aggregation module, is employed to prioritize informative features by dynamically assigning adaptive weights, effectively reducing the influence of redundant or irrelevant features and improving feature representation. Moreover, a more stable version of KFIoU is proposed, named SCKFIoU, which improves the accuracy and stability of overlap calculation by introducing a small perturbation term and utilizing Cholesky decomposition for efficient matrix inversion and determinant calculation. Evaluations using the DOTA-ORShip dataset demonstrate that RACR-ShipDet outperforms current state-of-the-art models, achieving an mAP of 95.3%, representing an improvement of 5.3% over PSC (90.0%) and of 1.9% over HDDet (93.4%). Furthermore, it demonstrates a superior orientation accuracy of 96.9%, exceeding HDDet by a margin of 5.0%, establishing itself as a robust solution for ship orientation detection in complex environments.
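The abstract's SCKFIoU stabilizes overlap computation by adding a small perturbation and using Cholesky decomposition for matrix inversion and determinant calculation. The sketch below shows only that numerical idea on a 2×2 covariance; the epsilon value is an assumption, and the full IoU loss is not reproduced:

```python
import numpy as np

def stable_inv_and_logdet(cov, eps=1e-6):
    """Invert a (possibly near-singular) covariance and get its log-determinant
    via Cholesky, after adding a small diagonal perturbation for stability.
    The eps value is an illustrative assumption."""
    cov = np.asarray(cov, dtype=float)
    jittered = cov + eps * np.eye(cov.shape[0])
    L = np.linalg.cholesky(jittered)              # jittered = L @ L.T
    # log|cov| = 2 * sum(log(diag(L))): cheaper and better-conditioned
    # than calling np.linalg.det on the original matrix.
    logdet = 2.0 * np.log(np.diag(L)).sum()
    # Two triangular solves give the inverse without explicit cofactors.
    inv = np.linalg.solve(L.T, np.linalg.solve(L, np.eye(cov.shape[0])))
    return inv, logdet
```

The jitter is what keeps the factorization defined when a degenerate (rank-deficient) covariance would otherwise make a plain inverse blow up.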
Carin Strydom, Stephan van der Merwe
Orientation: In her 2024 study, the researcher, C.S., applied a process of manual qualitative data analysis to data from different scientific fields and chose not to take the risk of using artificial intelligence (AI) or computer-assisted qualitative data analysis systems (CAQDAS) to analyse this complex data, as human intervention was required to fully comprehend all nuances of the reasons for survival. Research purpose: The study aimed to ascertain whether a manual method of data analysis incorporating the techniques and methods of well-known scholars was still feasible and would yield usable results. Motivation for the study: The study had to find an alternative way to analyse data, contrary to the current popular trend of AI analysis, as businesses surviving the COVID-19 pandemic had to be analysed from a human perspective. The data to be analysed spanned various scientific fields, facts and emotions. Research design, approach and method: The empirical part of this qualitative exploratory study consisted of 16 face-to-face semi-structured interviews with successful small-, medium- and micro-enterprise (SMME) owners from the Western Cape in South Africa, recruited through snowball sampling. Main findings: A framework for SMME survival was developed using this manual data analysis method. Practical/managerial implications: This study indicated that it is still possible to utilise a manual method for complex data analysis when a human perspective is required. Contribution/value-add: It was shown that CAQDAS programmes and AI-generated software are not the only solutions for analysing complex qualitative data.
Xiaoyu Guo, Shinobu Saito, Jianjun Zhao
This paper introduces QuanUML, an extension of the Unified Modeling Language (UML) tailored for quantum software systems. QuanUML integrates quantum-specific constructs, such as qubits and quantum gates, into the UML framework, enabling the modeling of both quantum and hybrid quantum-classical systems. We apply QuanUML to Efficient Long-Range Entanglement using Dynamic Circuits and Shor's Algorithm, demonstrating its utility in designing and visualizing quantum algorithms. Our approach supports model-driven development of quantum software and offers a structured framework for quantum software design. We also highlight its advantages over existing methods and discuss future improvements.
Tobias Eisenreich, Nicholas Friedlaender, Stefan Wagner
Use case modeling employs user-centered scenarios to outline system requirements. These help to achieve consensus among relevant stakeholders. Because the manual creation of use case models is demanding and time-consuming, it is often skipped in practice. This study explores the potential of Large Language Models (LLMs) to assist in this tedious process. The proposed method integrates an open-weight LLM to systematically extract actors and use cases from software requirements with advanced prompt engineering techniques. The method is evaluated using an exploratory study conducted with five professional software engineers, which compares traditional manual modeling to the proposed LLM-based approach. The results show a substantial acceleration, reducing the modeling time by 60%. At the same time, the model quality remains on par. Besides improving the modeling efficiency, the participants indicated that the method provided valuable guidance in the process.
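The study's actual prompts are not given in the abstract; the sketch below only illustrates the general shape of such an extraction pipeline, with an assumed prompt wording and an assumed line-delimited reply format:

```python
def build_extraction_prompt(requirements: str) -> str:
    """Compose a prompt asking an LLM to extract actors and use cases.
    The instruction wording and output format are illustrative assumptions,
    not the prompts used in the study."""
    return (
        "You are a requirements analyst. From the software requirements below, "
        "list each actor and use case on its own line as 'ACTOR: <name>' or "
        "'USECASE: <name>'.\n\nRequirements:\n" + requirements
    )

def parse_model_reply(reply: str):
    """Split a line-delimited model reply into actor and use-case lists,
    ignoring any lines that do not match the expected prefixes."""
    actors, use_cases = [], []
    for line in reply.splitlines():
        line = line.strip()
        if line.startswith("ACTOR:"):
            actors.append(line[len("ACTOR:"):].strip())
        elif line.startswith("USECASE:"):
            use_cases.append(line[len("USECASE:"):].strip())
    return actors, use_cases
```

Constraining the model to a machine-parseable reply format is what lets the extracted actors and use cases feed directly into a use case diagram.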
YE Zhiqi, ZHANG Guobao, ZHU Hongwei
Eliminating the interference of dynamic pedestrians in real-time mapping is a core challenge in laser Simultaneous Localization And Mapping (SLAM) algorithms, particularly in complex indoor environments. Most existing SLAM algorithms focus primarily on static scenes and overlook the presence of moving objects. However, in indoor environments, the frequent appearance of moving pedestrians significantly degrades the quality of the global point-cloud map and increases uncertainty in subsequent localization and navigation tasks. To address this issue, this study proposes a tightly coupled laser SLAM algorithm specifically designed for dynamic pedestrian scenarios in indoor environments, with the aim of better adapting to such complex scenarios. In addition to the traditional SLAM framework, this study introduces a pre-processing module based on point-cloud clustering and segmentation to accurately eliminate dynamic pedestrian point clouds. The algorithm first applies an enhanced two-stage clustering algorithm based on the Euclidean distance to cluster and segment point clouds. Subsequently, multidimensional slice and intensity features are extracted from the clustering results and combined with the classification results of a Support Vector Machine (SVM) to identify pedestrian instances in the scene. Meanwhile, the algorithm utilizes the static point cloud to estimate ego motion in real time and constructs a high-resolution point-cloud map. To evaluate the performance of the algorithm, assessments are performed on both the Hilti public dataset and real-world scenario data, focusing specifically on the effectiveness of dynamic point-cloud removal and real-time capability. Experimental results demonstrate that the algorithm significantly improves point-cloud map construction quality and remarkably reduces the proportion of dynamic pedestrian points compared to state-of-the-art laser SLAM algorithms such as Removert and Dynablox. The processing time of the system for a single frame does not exceed 100 ms, meeting real-time requirements.
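The Euclidean clustering step the abstract describes can be sketched as region growing over a distance threshold. This is a simplified, single-stage stand-in for the paper's enhanced two-stage algorithm, shown in 2-D with an assumed radius parameter:

```python
import math

def euclidean_cluster(points, radius):
    """Region-growing Euclidean clustering of 2-D points: any two points
    connected by a chain of neighbors within `radius` share a cluster.
    A simplified stand-in for the paper's two-stage clustering."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        frontier = [unvisited.pop()]      # seed a new cluster
        cluster = []
        while frontier:
            i = frontier.pop()
            cluster.append(i)
            # Pull every unvisited point within radius into the cluster.
            near = {j for j in unvisited if math.dist(points[i], points[j]) <= radius}
            unvisited -= near
            frontier.extend(near)
        clusters.append(sorted(cluster))
    return sorted(clusters)
```

In the paper's pipeline, each resulting cluster would then be described by slice and intensity features and classified by the SVM as pedestrian or not.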
TANG Zhikang, WU Yuqi, LI Chunying, TANG Yong
Aiming at the random sampling and neighborhood selection that may lead to unstable recommendation results in existing Knowledge Graph Convolutional Network (KGCN) models, this study constructs a sampling model based on Structural Holes and Common Neighbors (SHCN) importance ranking. SHCN leverages the advantages of KGCN in processing higher-dimensional heterogeneous data. This study proposes a KGCN recommendation model based on SHCN, named KGCN-SHCN. First, the SHCN sampling method is used to rank the receptive field of each entity in a Knowledge Graph (KG). Then, the entity information and the information collected from the entity's neighborhood are aggregated using a Graph Convolutional Network (GCN) to obtain the feature representation of the learning resources. Finally, the feature representations of learners and learning resources are passed to a prediction function to obtain the interaction probabilities. Experiments are conducted on three datasets, and the results show that the proposed model, especially when using sum aggregation, yields better results in terms of the AUC and ACC evaluation indexes than KGCN, RippleNet, and other KG-based recommendation models. These results demonstrate the superiority of the proposed model.
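The sum aggregation the abstract highlights can be sketched as adding the entity vector to a pooled neighborhood vector before a linear layer. The mean pooling, ReLU, and shapes below follow the common KGCN formulation as an assumption; the SHCN neighbor ranking itself is not reproduced:

```python
import numpy as np

def sum_aggregate(entity_vec, neighbor_vecs, weight, bias):
    """KGCN-style 'sum' aggregation: add the entity representation to the
    mean of its sampled neighborhood, then apply a linear layer + ReLU.
    An illustrative sketch, not the exact KGCN-SHCN layer."""
    neighborhood = np.mean(neighbor_vecs, axis=0)
    combined = entity_vec + neighborhood              # the 'sum' aggregator
    return np.maximum(0.0, weight @ combined + bias)  # ReLU activation
```

In KGCN-SHCN, the `neighbor_vecs` would be the SHCN-ranked (rather than randomly sampled) neighbors, which is what the paper credits for the more stable recommendations.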
Lucas Franke, Huayu Liang, Sahar Farzanehpour et al.
Background: Governments worldwide are considering data privacy regulations. These laws, e.g. the European Union's General Data Protection Regulation (GDPR), require software developers to meet privacy-related requirements when interacting with users' data. Prior research describes the impact of such laws on software development, but only for commercial software. Open-source software is commonly integrated into regulated software, and thus must be engineered or adapted for compliance. We do not know how such laws impact open-source software development. Aims: To understand how data privacy laws affect open-source software development. We studied the European Union's GDPR, the most prominent such law. We investigated how GDPR compliance activities influence OSS developer activity (RQ1), how OSS developers perceive fulfilling GDPR requirements (RQ2), the most challenging GDPR requirements to implement (RQ3), and how OSS developers assess GDPR compliance (RQ4). Method: We distributed an online survey to explore perceptions of GDPR implementations from open-source developers (N=56). We further conducted a repository mining study to analyze development metrics on pull requests (N=31462) submitted to open-source GitHub repositories. Results: GDPR policies complicate open-source development processes and introduce challenges for developers, primarily regarding the management of users' data, implementation costs and time, and assessments of compliance. Moreover, we observed negative perceptions of GDPR from open-source developers and significant increases in development activity, in particular metrics related to coding and reviewing activity, on GitHub pull requests related to GDPR compliance. Conclusions: Our findings motivate policy-related resources and automated tools to support data privacy regulation implementation and compliance efforts in open-source software.
Chinenye Okafor, Taylor R. Schorlemmer, Santiago Torres-Arias et al.
This paper systematizes knowledge about secure software supply chain patterns. It identifies four stages of a software supply chain attack and proposes three security properties crucial for a secured supply chain: transparency, validity, and separation. The paper describes current security approaches and maps them to the proposed security properties, including research ideas and case studies of supply chains in practice. It discusses the strengths and weaknesses of current approaches relative to known attacks and details the various security frameworks proposed to ensure the security of the software supply chain. Finally, the paper highlights potential gaps in actor- and operation-centered supply chain security techniques.
Yousef Abuseta
The complexity of IoT, owing to the inherent distributed and dynamic nature of such systems, brings more challenges to the software development process. A vast number of devices with different communication protocols and data formats are involved and need to be connected and exchange data with each other in a seamless manner. Traditional software architectures fall short of addressing the requirements of IoT systems, and therefore a new approach to software architecture is required. This paper presents an attempt to lay the foundation for a quality-attribute-driven software architecture for the development of IoT systems. This architecture accommodates the architectural styles and design patterns necessary for the development of a robust IoT system, including edge computing, microservices, and event-driven architectures. The proposed architecture treats IoT systems as autonomic systems, which require a closed control loop to regulate and orchestrate the operational aspects of the IoT system.
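The event-driven style the abstract names decouples device producers from consumers through a broker. A minimal in-process publish/subscribe sketch, with topic names and the API being illustrative assumptions rather than anything from the paper:

```python
class EventBus:
    """Minimal publish/subscribe bus of the kind an event-driven IoT
    architecture relies on. Illustrative only; a real deployment would
    use a network broker (e.g. MQTT) rather than in-process dispatch."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        """Register a callback for messages on `topic`."""
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        """Deliver `payload` to every handler subscribed to `topic`."""
        for handler in self._subscribers.get(topic, []):
            handler(payload)
```

Because publishers never reference subscribers directly, devices with different protocols can be bridged onto the bus by thin adapters, which is the seamless data exchange the paper calls for.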
Page 26 of 407,608