K. Schittkowski
Results for "Computer software"
Showing 20 of ~8152160 results · from DOAJ, arXiv, Semantic Scholar, CrossRef
A. Tanenbaum
D. Straub
N. Jennings, M. Wooldridge
Krzysztof Końca, A. Lankoff, A. Banasik et al.
Dedalo Marchetti, Daniele Bailo, Giuseppe Falcone et al.
The study of earthquake preparation phases often relies on fragmented approaches, limiting reproducibility and comparison between methods. To address this, we developed a Virtual Research Environment (VRE) for multiparametric and multidisciplinary earthquake investigations. Built as a Jupyter Notebook with MATLAB and Python kernels, the VRE integrates seismic, geodetic, atmospheric, and ionospheric data into a unified and automated workflow. Users can define spatial, temporal, and other parameters to retrieve and process data across layers. Its effectiveness is demonstrated through the analysis of the 2016 Central Italy and 2025 Marmara earthquakes, where the tool proved capable of easily reproducing cross-domain results.
Martin Obaidi, Marc Herrmann, Jendrik Martensen et al.
Communication is a crucial social factor in the success of software projects, as positively or negatively perceived statements can influence how recipients feel and affect team collaboration through emotional contagion. Whether a developer perceives a written message as positive, negative, or neutral is likely shaped by multiple factors. In this paper, we investigate how mood traits and states, life circumstances, project phases, and group dynamics relate to the perception of text-based messages in software development. We conducted a four-round survey study with 81 students in team-based software projects. Across rounds, participants reported these factors and labeled 30 decontextualized statements for sentiment, including meta-data on labeling rationale and uncertainty. Our results show: (1) Sentiment perception is only moderately stable within individuals, and label changes concentrate on ambiguity-prone statements; (2) Correlation-level signals are small and do not survive global multiple-testing correction; (3) In statement-level repeated-measures models (GEE), higher mood trait and reactivity are associated with more positive (and less neutral) labeling, while predictors of negative labeling are weaker and at most trend-level (e.g., task conflict); (4) We find no clear evidence of systematic project-phase effects. Overall, sentiment perception varies within persons and is strongly statement-dependent. Although our study was conducted in an academic setting, the observed variability and ambiguity effects suggest caution when interpreting sentiment analysis outputs and motivate future work with contextualized, in-project communication.
Jiamao Yu, Hexuan Hu
Crowd counting aims to estimate the number, density, and distribution of crowds in an image. While CNN-based crowd counting methods have been effective, head-scale variation and complex background remain two major challenges for crowd counting. Therefore, we propose a multiscale region calibration network called MRCNet to effectively address these challenges. To address the former challenge, we design a multiscale aware module that utilizes multi-branch dilated convolutional parallelism to obtain multiscale receptive fields and cope with drastic changes in head size. For the latter challenge, we design a regional calibration module that calibrates the attention weights of each region after obtaining the attention map to effectively handle challenges in complex contexts. Additionally, we improve the loss function by combining L2 loss and binary cross-entropy loss to help MRCNet achieve excellent results. Extensive experiments were conducted on three mainstream datasets to demonstrate the robustness and competitiveness of our approach.
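The improved loss named above combines an L2 term with binary cross-entropy. A minimal pure-Python sketch of that idea, where the weighting factor `alpha` and the thresholded crowd/background mask are assumptions rather than the paper's exact formulation:

```python
import math

def combined_loss(pred, target, alpha=0.5):
    """Combine an L2 (density regression) loss with binary cross-entropy,
    as the abstract describes. `alpha` is an assumed weighting factor."""
    # L2 loss over predicted vs. ground-truth density values
    l2 = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    # BCE over an implied foreground mask: a pixel counts as "crowd"
    # if its ground-truth density is positive (an illustrative choice)
    eps = 1e-7
    bce = 0.0
    for p, t in zip(pred, target):
        q = min(max(p, eps), 1 - eps)   # clamp prediction into (0, 1)
        m = 1.0 if t > 0 else 0.0       # binary crowd/background label
        bce += -(m * math.log(q) + (1 - m) * math.log(1 - q))
    bce /= len(pred)
    return alpha * l2 + (1 - alpha) * bce
```

With `alpha` the two terms can be traded off; a real implementation would operate on density maps (2-D tensors) rather than flat lists.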
LI Hanqiao, ZHAO Yuanjun
With the rapid growth of graph computing, modern graph platforms routinely execute a large number of concurrent graph analytics tasks to extract the latent value in massive datasets. Consequently, concurrent graph processing has been widely adopted in domains including intelligent education, public administration, and news media. However, most existing graph processing systems are originally designed for single-task execution and suffer from excessive redundant data accesses when handling concurrent workloads. Although prior studies have observed significant redundancy in in-memory graph data across concurrent tasks and have attempted to exploit temporal and spatial locality to share underlying graph data, they largely overlook the data locality in private state updates. This limitation leads to low cache utilization and, ultimately, degraded system throughput. To address this challenge, this paper proposes CCG, a locality-aware cache management strategy for concurrent graph analysis, which fully exploits both temporal and spatial locality across tasks to reduce redundant data accesses and synchronization overhead. Specifically, CCG efficiently buffers and incrementally merges redundant updates, leveraging data locality to perform high-throughput batch updates in memory. This design minimizes access costs, mitigates cache thrashing, and significantly improves concurrency performance. Moreover, CCG employs a multi-level cache hierarchy to enable layered buffering and merging, thereby eliminating synchronization and locking overhead during private state updates. Experimental results show that CCG improves system throughput by 2.3× to 7.8× over GRASP.
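The buffer-and-merge idea can be sketched in a few lines; the `UpdateBuffer` name and the sum-based merge policy are illustrative assumptions, not CCG's actual design:

```python
class UpdateBuffer:
    """Coalesce redundant per-vertex updates in a local buffer before
    applying them to shared graph state in one batch — a toy sketch of
    the locality-aware buffering-and-merging strategy described above."""

    def __init__(self, merge=lambda a, b: a + b):
        self.pending = {}    # vertex id -> merged pending update
        self.merge = merge

    def push(self, vertex, value):
        # Incrementally merge updates to the same vertex instead of
        # writing each one through to shared state.
        if vertex in self.pending:
            self.pending[vertex] = self.merge(self.pending[vertex], value)
        else:
            self.pending[vertex] = value

    def flush(self, state):
        # One batched pass over shared state: fewer synchronization
        # points and better spatial locality than per-update writes.
        for vertex, value in self.pending.items():
            state[vertex] = self.merge(state.get(vertex, 0), value)
        self.pending.clear()
```

In a multi-level hierarchy, each level would flush into the buffer above it, with only the top level touching the shared (locked) state.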
Mohamed Abdel Hamed
This study examines the evolution of advanced computer software in its various versions, stages, techniques, parametric design, algorithms, and BIM technologies, as well as their positive impacts, advantages, and the challenges they entail in enhancing interior design quality and, consequently, the quality of life within residential buildings. The study focuses on three-dimensional modeling technology and modern design and drafting software, highlighting their role in enhancing precision and detail in the design of interior spaces. Additionally, it reviews the benefits of employing virtual reality techniques and virtual tours to foster user engagement and facilitate design exploration before implementation, thereby reducing time and costs associated with modifications during the early stages of a project. The study concludes that advancements in computer software represent a fundamental shift in the field of interior design, contributing to increased work efficiency and improved experiences for both users and designers, which positively influences housing quality and enriches the residents' living experience. The study aims to promote the adoption of advanced digital tools in the design process while emphasizing the need to bridge the gap between traditional methods and modern software-based approaches. It also highlights the importance of integrating architects and all related professionals to acquire the necessary expertise for achieving sound design, stressing the imperative of involving architects in software development processes to ensure effective harmony between design and technical aspects.
S.V. Korzun
The article comprehensively examines the formation and genesis of the conceptual and categorical apparatus of the state criminal-law policy of countering cybercrime. Given the growing global threat of cybercrime, criminal-law means of combating it are the most effective, which makes clarification of the conceptual and categorical apparatus highly relevant. Having analyzed the scientific achievements of previous years, we arrive at an understanding of the concept of «cybercrime» from multiple perspectives, using various approaches and considering both international and national legislation. Special attention is devoted to identifying the features of cybercrimes and their classification; in particular, classifications proposed by various scholars are given. The types of cybercrime contained in the Council of Europe Convention on Cybercrime are comprehensively characterized. Based on the processed material, we develop our own multi-level approaches to the classification of cybercrime for the purposes of state criminal-law policy. Regarding the identified approaches to the classification, features, and content of cybercrime, two positions on these crimes are highlighted: first, as a socially dangerous criminal act consisting in the manufacture, financing, use, sale, exchange, and distribution of malicious software products; second, as a socially dangerous criminal act committed with the use of information and computer technologies. The presented study analyzes the conceptual and categorical apparatus of the state criminal-law policy to combat cybercrime, forming a set of theoretical provisions that serve as a basis for the formation and implementation of that policy.
YI Peng, YANG Ye, YAN Shijia
To address the challenge of inter-individual variability and improve the universality of gesture recognition technology, this study proposes a transfer learning strategy based on a Multi-Parallel Convolutional Neural Network (MPCNN), which aims to achieve efficient gesture recognition from surface electromyogram (sEMG) signals through a parallel architecture and an optimized transfer learning mechanism. With this design, MPCNN handles physiological differences between individuals more effectively than previous CNN transfer learning frameworks, improving the model's adaptability to new users and its recognition accuracy. In addition, MPCNN significantly enhances the practicality of the system by reducing model training time and improving generalization ability. Through multiple sets of experiments, including cross-validation, ablation experiments, and robustness tests, this study validates the effectiveness of the proposed strategy in several respects. The experimental results demonstrate that MPCNN significantly improves gesture recognition accuracy compared to traditional CNN models: the proposed transfer learning strategy achieves a recognition rate of 94.95% on Ninapro DB7, an improvement of 4.38 percentage points over previous CNN transfer learning frameworks, while reducing training time by more than 50%. These experiments validate the advantages of the MPCNN transfer model in reducing the training burden, enhancing generalization, and improving interference resistance. Finally, human-computer interaction capability is validated with an experimental prototype, confirming its promising potential for myoelectric control applications.
Christoph Treude, Marco A. Gerosa
Artificial intelligence (AI), including large language models and generative AI, is emerging as a significant force in software development, offering developers powerful tools that span the entire development lifecycle. Although software engineering research has extensively studied AI tools in software development, the specific types of interactions between developers and these AI-powered tools have only recently begun to receive attention. Understanding and improving these interactions has the potential to enhance productivity, trust, and efficiency in AI-driven workflows. In this paper, we propose a taxonomy of interaction types between developers and AI tools, identifying eleven distinct interaction types, such as auto-complete code suggestions, command-driven actions, and conversational assistance. Building on this taxonomy, we outline a research agenda focused on optimizing AI interactions, improving developer control, and addressing trust and usability challenges in AI-assisted development. By establishing a structured foundation for studying developer-AI interactions, this paper aims to stimulate research on creating more effective, adaptive AI tools for software development.
Maja Franz, Lukas Schmidbauer, Joshua Ammermann et al.
Quantum simulation is a leading candidate for demonstrating practical quantum advantage over classical computation, as it is believed to provide exponentially more compute power than any classical system. It offers new means of studying the behaviour of complex physical systems, for which conventionally software-intensive simulation codes based on numerical high-performance computing are used. Instead, quantum simulations map properties and characteristics of subject systems, for instance chemical molecules, onto quantum devices that then mimic the system under study. Currently, the use of these techniques is largely limited to fundamental science, as the overall approach remains tailored for specific problems: We lack infrastructure and modelling abstractions that are provided by the software engineering community for other computational domains. In this paper, we identify critical gaps in the quantum simulation software stack, particularly the absence of general-purpose frameworks for model specification, Hamiltonian construction, and hardware-aware mappings. We advocate for a modular model-driven engineering (MDE) approach that supports different types of quantum simulation (digital and analogue), and facilitates automation, performance evaluation, and reusability. Through an example from high-energy physics, we outline a vision for a quantum simulation framework capable of supporting scalable, cross-platform simulation workflows.
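The Hamiltonian-construction step named above can be made concrete with the transverse-field Ising model, a standard quantum-simulation benchmark (an illustrative example, not taken from the paper):

```latex
% Transverse-field Ising Hamiltonian (illustrative benchmark, not from the paper)
H = -J \sum_{i} Z_i Z_{i+1} - h \sum_{i} X_i
```

Here \(Z_i\) and \(X_i\) are Pauli operators on qubit \(i\), \(J\) the coupling strength, and \(h\) the transverse field; a general-purpose framework of the kind the authors call for would generate such operator sums from a model specification and map them onto digital gate sequences or analogue device controls.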
T. E. Hutchinson, K. White, W. Martin et al.
Danissa V Rodriguez, Katharine Lawrence, Javier Gonzalez et al.
Background: Generative artificial intelligence has the potential to revolutionize health technology product development by improving coding quality, efficiency, documentation, quality assessment and review, and troubleshooting. Objective: This paper explores the application of a commercially available generative artificial intelligence tool (ChatGPT) to the development of a digital health behavior change intervention designed to support patient engagement in a commercial digital diabetes prevention program. Methods: We examined the capacity, advantages, and limitations of ChatGPT to support digital product idea conceptualization, intervention content development, and the software engineering process, including software requirement generation, software design, and code production. In total, 11 evaluators, each with at least 10 years of experience in fields of study ranging from medicine and implementation science to computer science, participated in the output review process (ChatGPT vs human-generated output). All had familiarity or prior exposure to the original personalized automatic messaging system intervention. The evaluators rated the ChatGPT-produced outputs in terms of understandability, usability, novelty, relevance, completeness, and efficiency. Results: Most metrics received positive scores. We identified that ChatGPT can (1) support developers to achieve high-quality products faster and (2) facilitate nontechnical communication and system understanding between technical and nontechnical team members around the development goal of rapid and easy-to-build computational solutions for medical technologies. Conclusions: ChatGPT can serve as a usable facilitator for researchers engaging in the software development life cycle, from product conceptualization to feature identification and user story development to code generation. Trial Registration: ClinicalTrials.gov NCT04049500; https://clinicaltrials.gov/ct2/show/NCT04049500
Milin Zhang, Mohammad Abdi, Venkat R. Dasari et al.
Semantic Edge Computing (SEC) and Semantic Communications (SemComs) have been proposed as viable approaches to achieve real-time edge-enabled intelligence in sixth-generation (6G) wireless networks. On one hand, SemCom leverages the strength of Deep Neural Networks (DNNs) to encode and communicate the semantic information only, while making it robust to channel distortions by compensating for wireless effects. Ultimately, this leads to an improvement in the communication efficiency. On the other hand, SEC has leveraged distributed DNNs to divide the computation of a DNN across different devices based on their computational and networking constraints. Although significant progress has been made in both fields, the literature lacks a systematic view to connect both fields. In this work, we fill this gap by unifying the SEC and SemCom fields. We summarize the research problems in these two fields and provide a comprehensive review of the state of the art with a focus on their technical strengths and challenges.
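The SEC idea of dividing a DNN across devices by their computational constraints can be illustrated with a toy greedy partitioner; the function and its policy are assumptions for illustration, not a method from the survey:

```python
def partition_layers(layer_costs, device_budgets):
    """Greedily assign consecutive DNN layers to devices so that each
    device's compute stays within its budget (the last device absorbs
    any remainder). A toy sketch of distributed DNN splitting; real
    SEC systems also weigh networking constraints between devices."""
    assignment = []          # assignment[i] = device index for layer i
    device, used = 0, 0.0
    for cost in layer_costs:
        # Move to the next device when this one's budget is exhausted.
        if used + cost > device_budgets[device] and device < len(device_budgets) - 1:
            device, used = device + 1, 0.0
        assignment.append(device)
        used += cost
    return assignment
```

Keeping layers consecutive matters because only the activation tensor at the cut point must cross the network between devices.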
Ahmed Fawzy, Amjed Tahir, Matthias Galster et al.
Context: Managing data related to a software product and its development poses significant challenges for software projects and agile development teams. These include integrating data from diverse sources and ensuring data quality amidst continuous change and adaptation. Objective: The paper systematically explores data management challenges and potential solutions in agile projects, aiming to provide insights for both researchers and practitioners. Method: We employed a mixed-methods approach: a systematic literature review (SLR) to understand the state of research, followed by a survey with practitioners to reflect the state of practice. The SLR reviewed 45 studies, identifying and categorizing data management aspects along with their associated challenges and solutions. The practitioner survey captured practical experiences and solutions from 32 industry practitioners who were significantly involved in data management, complementing the findings from the SLR. Results: Our findings identified major data management challenges in practice, such as managing data integration processes, capturing diverse data, automating data collection, and meeting real-time analysis requirements. To address these challenges, solutions such as automation tools, decentralized data management practices, and ontology-based approaches have been identified. These solutions enhance data integration, improve data quality, and enable real-time decision-making by providing flexible frameworks tailored to agile project needs. Conclusion: The study pinpointed significant challenges and actionable solutions in data management for agile software development. Our findings provide practical implications for practitioners and researchers, emphasizing the development of effective data management practices and tools to address those challenges and improve project success.
J. Cooper
Muntadher H. Al-Hadaad, Rasha Thabit, Khamis A. Zidan
Recently, researchers have focused on face image manipulation detection and localization techniques because of their importance in image security applications. Previous research has not addressed recovery of the face region after manipulation detection. This paper presents a new face region recovery algorithm (FRRA) to be included in face image manipulation detection (FIMD) algorithms. The proposed FRRA consists of two main algorithms: a face data generation algorithm and a face region restoration algorithm. Both start by detecting the face region using a Multi-task Cascaded Convolutional Neural Network, followed by a face window selection process. In the face data generation algorithm, the recovery information is generated from the shrunk face window using the bicubic interpolation technique. In the face region restoration algorithm, the face region is zoomed using the bicubic interpolation technique. The proposed FRRA has been tested and compared with previous recovery methods on different color face images, and the results show that the FRRA can recover the face region with better visual quality at the same data length compared to previous methods. The main contributions of this research are (a) the suggestion of including a face region recovery algorithm in FIMD, (b) the study of previous recovery data generation algorithms for color face images, and (c) a new algorithm for generating the recovery data based on bicubic interpolation. In the future, the proposed algorithm can be included in recent FIMD algorithms to recover the face region, which can be very useful in practical applications, especially in data forensics systems.
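The shrink-then-zoom recovery flow described above can be sketched as follows; for brevity, nearest-neighbour resampling stands in for the paper's bicubic interpolation, and the function names are illustrative assumptions:

```python
def shrink(window, factor):
    """Downscale a 2-D face window by an integer factor (nearest-
    neighbour stand-in for the paper's bicubic interpolation); the
    result is the compact recovery data the detection scheme embeds."""
    return [row[::factor] for row in window[::factor]]

def restore(recovery, factor):
    """Zoom the recovery data back to the original window size, again
    with nearest-neighbour resampling as a simple stand-in; in the
    paper this step uses bicubic interpolation for better quality."""
    out = []
    for row in recovery:
        wide = [v for v in row for _ in range(factor)]   # widen columns
        out.extend([list(wide) for _ in range(factor)])  # repeat rows
    return out
```

`restore(shrink(w, f), f)` then yields an approximation of the original face window whose fidelity depends on the interpolation used.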
Page 27 of 407608