J. F. Kelley, T. J. Watson
Results for "Computer software"
Showing 20 of ~8,152,180 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
K. Konolige
Yunus Emre Esen, Berk Kaan Çetincan, Kübra Yayan et al.
XR4MCR is a mixed reality training platform for collaborative industrial robot maintenance. It enables multi-user, no-code scenario creation through a node-based visual editor in a mixed reality environment. Built with Unity and deployed on the VIROO platform, XR4MCR supports synchronized sessions with virtual tools anchored in physical space. Scenarios use interactive nodes, serialized in XML. The platform employs a modular Model-View-Presenter architecture implemented via the Zenject dependency injection framework. It enhances safety, reduces costs, and improves procedural learning. XR4MCR enables hands-on training without real robots or coding, making it an effective platform for early-stage training sessions, supporting vocational education, industrial onboarding, and collaborative XR-based research.
GAO Hongkui, MA Ruixiang, BAO Qihao, XIA Shaojie, QU Chongxiao
At the vanguard of knowledge retrieval, particularly in scenarios involving large language models (LLMs), research emphasis has shifted toward pure vector retrieval techniques for efficiently capturing pertinent information, which is then fed into LLMs for distillation and summarization. However, this approach may fail to fully capture the intricacies of retrieval through vector representations alone, and it lacks effective ranking mechanisms. This often leads to an overabundance of irrelevant information, diluting the alignment between the final response and the user's actual needs. To address this problem, this paper introduces a hybrid retrieval-augmented dual-tower model. The model integrates a multi-path recall strategy, ensuring that retrieval results are both comprehensive and highly relevant through complementary recall mechanisms. Architecturally, it adopts a dual-layer structure combining bidirectional recurrent neural networks with text convolutional neural networks, allowing the model to perform multi-level ranking optimization on retrieval results and significantly enhancing the relevance and precision of top-ranked outcomes. The efficiently ranked, high-quality information is then integrated with the original query and fed into a large language model, exploiting the model's deep analytical capabilities to generate more accurate and credible responses. Experimental findings confirm that the proposed method improves retrieval accuracy and overall system performance, markedly enhancing the precision and practicality of large language models in real-world applications.
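The multi-path recall and reranking pipeline described above can be sketched in miniature. The scoring functions below are toy stand-ins: bag-of-words cosine for the dense tower, term overlap for the lexical path, and a fixed weighted sum in place of the BiGRU/TextCNN ranker; none of the names or weights come from the paper.

```python
from collections import Counter
import math

def vector_score(query, doc):
    """Cosine similarity over bag-of-words counts (stand-in for dense embeddings)."""
    q, d = Counter(query.split()), Counter(doc.split())
    dot = sum(q[t] * d[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def keyword_score(query, doc):
    """Fraction of query terms present in the document (lexical recall path)."""
    q_terms = set(query.split())
    return len(q_terms & set(doc.split())) / len(q_terms)

def multi_path_retrieve(query, corpus, k=2):
    """Recall candidates via two complementary paths, union them, then rerank
    with a weighted combination of both scores."""
    candidates = set()
    for path in (vector_score, keyword_score):
        ranked = sorted(corpus, key=lambda d: path(query, d), reverse=True)
        candidates.update(ranked[:k])

    def rerank(d):
        return 0.5 * vector_score(query, d) + 0.5 * keyword_score(query, d)

    return sorted(candidates, key=rerank, reverse=True)

docs = [
    "vector retrieval for large language models",
    "cooking pasta at home",
    "ranking mechanisms improve retrieval relevance",
]
top = multi_path_retrieve("vector retrieval ranking", docs)
```

In the full system, `top` would be concatenated with the original query and passed to the LLM for answer generation.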
Hanif Al Fatta, Zulisman Maksom, Mohd Hafiz Zakaria
This study aimed to establish a model for assessing the pedagogical quality of mobile game-based learning (GBL), which seeks to convey educational content to users. Evaluating the educational effectiveness of GBL necessitates a robust model tailored for this purpose. Current models can be improved to better address various educational challenges associated with mobile GBL. The LECOPELESE (LEarning COntent, PEdagogy and LEarning StyLE) model was developed by integrating relevant constructs identified in existing literature. To validate this model, a quantitative research approach was employed, drawing a sample of 270 undergraduate students. The analysis utilized Structural Equation Modeling (SEM) and resulted in a final model based on rigorous factor analysis. The findings indicated that the proposed model effectively measures educational quality in game-based learning. This new model includes more comprehensive constructs and items, addressing the educational aspects of game-based learning. Specifically, the model introduces a pedagogy construct to evaluate game-based learning quality, reflecting criteria for outstanding educational content and delivery through mobile applications. It assesses how effectively GBL provides real-world learning experiences. Additionally, the research highlights that the quality of pedagogy is influenced by two key factors: the GBL's ability to accommodate learners' unique characteristics (learning styles) and the quality of the learning content that adapts to learners' needs. Ultimately, the study demonstrates that both learning content and style significantly impact the pedagogy construct, suggesting that enhancing these areas can improve the overall pedagogical quality of game-based learning.
O. Shuba, M. Shuba, M. Sahura
The global IT services market plays an important role in the development of Ukraine's economy: it increases GDP, contributes to job creation, attracts investment, and strengthens integration into the global economy. At the same time, the Ukrainian IT sector faces challenges such as economic instability, brain drain, and cybersecurity threats. This article examines the leading companies in the global IT services market: Accenture, Tata Consultancy Services, Infosys, IBM, and Capgemini. These companies specialize in cloud solutions, software development, automation, artificial intelligence implementation, business digitalization, and IT consulting. Key global trends have been identified, including rising demand for cloud technologies, generative AI, and cybersecurity services. The export and import dynamics of Ukraine’s IT services for 2019–2023 are analyzed. Before the full-scale invasion, Ukraine’s IT exports showed growth in line with global trends. After 2022, some companies reduced their operations. The majority of provided services consisted of software development and consulting. The import structure is similar, dominated by computer, telecommunication, and information services. About 70% of IT service exports are concentrated in ten countries. Key domestic trends include an increase in the number of market participants (until 2023), predominance of individual entrepreneurs, rapid development of AI companies (117% growth during 2013–2023, with a slowdown after 2022), and substantial growth in product companies (+273%). Ukraine ranks second in Central and Eastern Europe in terms of the number of IT companies, behind only Poland.
XU Ying, FU Ziwei, ZHANG Wei, CHEN Yunfang
In current deep learning-based smart contract vulnerability detection solutions, the direct use of bytecode or source code as a textual sequence feature representation lacks a comprehensive understanding of program semantics. Smart contract vulnerability detection based on Abstract Syntax Tree (AST) embedding fully considers the syntactic and semantic features needed for contract vectorization, along with an appropriate processing granularity, enabling more accurate capture of smart contract vulnerability features. First, a smart-contract vectorization method based on AST embedding is designed using Solidity syntax tree parsing: node types are partitioned recursively at the statement level to generate sequences of statement trees. A recursive neural network then encodes each statement tree from the bottom up, transforming the intricate AST structure into statement-level feature vectors. On this foundation, a Bidirectional Gated Recurrent neural network model with an Attention mechanism (BiGRU-ATT) is constructed to learn features from the statement-tree sequences and to detect and categorize five typical vulnerabilities: re-entrancy, unchecked return values, timestamp dependency, access control, and denial-of-service attacks. Experimental results demonstrate that the proposed method improves the micro-F1 and macro-F1 metrics by 13 and 10 percentage points, respectively, compared to direct vectorization of source code as a text sequence. In the timestamp dependency, access control, and denial-of-service vulnerability classification tasks, the BiGRU-ATT model achieves an F1 score of over 88%.
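The statement-level encoding idea can be illustrated with Python's own `ast` module standing in for the Solidity parser; the sum-pooled hashed node-type vectors below are a crude stand-in for the recursive neural network encoder described in the abstract, shown only to make the "statement tree to fixed-length vector" step concrete.

```python
import ast

def encode_node(node, dim=8):
    """Bottom-up encoding: hash each AST node type into one slot of a small
    vector and sum the encodings of its children (a crude stand-in for a
    recursive neural network over the statement tree)."""
    vec = [0.0] * dim
    vec[hash(type(node).__name__) % dim] += 1.0
    for child in ast.iter_child_nodes(node):
        child_vec = encode_node(child, dim)
        vec = [a + b for a, b in zip(vec, child_vec)]
    return vec

def statement_vectors(source, dim=8):
    """Partition a program at statement granularity and encode each top-level
    statement subtree, yielding the sequence a BiGRU-style model would consume."""
    tree = ast.parse(source)
    return [encode_node(stmt, dim) for stmt in tree.body]

src = "x = 1\nif x:\n    y = x + 1\n"
vecs = statement_vectors(src)  # one vector per top-level statement
```

Each vector's total mass equals the node count of its statement subtree, so larger statements produce heavier vectors.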
Júlia Rocha Fortunato, Luana Ribeiro Soares, Gabriela Silva Alves et al.
Context: Women face many challenges in their lives, which affect their daily experiences and influence major life decisions, starting before they enroll in bachelor's programs, setting a difficult path for those aspiring to enter the software development industry. Goal: To explore the challenges that women face across three different life stages, beginning as high school students, continuing as university undergraduates, and extending into their professional lives, as well as potential solutions to address these challenges. Research Method: We conducted a literature review followed by workshops to understand the perspectives of high school women, undergraduates, and practitioners regarding the same set of challenges and solutions identified in the literature. Results: Regardless of the life stage, women feel discouraged in a toxic environment often characterized by a lack of inclusion, harassment, and the exhausting need to prove themselves. We also discovered that some challenges are specific to certain life stages; for example, issues related to maternity were mentioned only by practitioners. Conclusions: Gender-related challenges arise before women enter the software development field when the proportion of men and women is still similar. While the need to prove themselves is mentioned at all three stages, high school women's challenges are more often directed toward convincing their parents that they are mature enough to handle their responsibilities. As they progress, the emphasis shifts to proving their competence in managing responsibilities for which they have received training. Increasing the inclusion of women in the field should, therefore, start earlier, and profound societal changes may be necessary to boost women's participation.
Hugo Villamizar, Jannik Fischbach, Alexander Korn et al.
Developers now routinely interact with large language models (LLMs) to support a range of software engineering (SE) tasks. This prominent role positions prompts as potential SE artifacts that, like other artifacts, may require systematic development, documentation, and maintenance. However, little is known about how prompts are actually used and managed in LLM-integrated workflows, what challenges practitioners face, and whether the benefits of systematic prompt management outweigh the associated effort. To address this gap, we propose a research programme that (a) characterizes current prompt practices, challenges, and influencing factors in SE; (b) analyzes prompts as software artifacts, examining their evolution, traceability, reuse, and the trade-offs of systematic management; and (c) develops and empirically evaluates evidence-based guidelines for managing prompts in LLM-integrated workflows. As a first step, we conducted an exploratory survey with 74 software professionals from six countries to investigate current prompt practices and challenges. The findings reveal that prompt usage in SE is largely ad-hoc: prompts are often refined through trial-and-error, rarely reused, and shaped more by individual heuristics than standardized practices. These insights not only highlight the need for more systematic approaches to prompt management but also provide the empirical foundation for the subsequent stages of our research programme.
Ziyuan Zhou, Long Chen, Yekang Zhao et al.
The proliferation of Internet of Things (IoT) technology has significantly enhanced smart healthcare systems, enabling the collection and processing of vast healthcare datasets such as electronic medical records (EMRs) and remote health monitoring (RHM) data. However, this rapid expansion has also introduced critical challenges related to data security, privacy, and system reliability. To address these challenges, we propose a retrieval integrity verification and multi-system data interoperability mechanism for a Blockchain Oracle in smart healthcare with IoT integration (RIVMD-BO). The mechanism uses cuckoo filter technology to reduce computational complexity and ensures the authenticity and integrity of data transmission and use through data retrieval integrity verification. Experimental results and security analysis show that the proposed method improves system performance while ensuring security.
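A minimal cuckoo filter, the data structure the mechanism relies on for cheap membership checks, might look like the following sketch. The fingerprint width, table size, and record identifiers are illustrative choices, not those of RIVMD-BO.

```python
import hashlib
import random

class CuckooFilter:
    """Minimal cuckoo filter sketch: each item is reduced to a short
    fingerprint stored in one of two candidate buckets, giving O(1)
    membership tests with a small false-positive rate and no false
    negatives for inserted items. Bucket count must be a power of two."""

    def __init__(self, num_buckets=64, bucket_size=4, max_kicks=200):
        self.buckets = [[] for _ in range(num_buckets)]
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks

    def _fingerprint(self, item):
        return int(hashlib.sha256(item.encode()).hexdigest()[:4], 16)

    def _index(self, item):
        return int(hashlib.md5(item.encode()).hexdigest(), 16) % len(self.buckets)

    def _alt_index(self, index, fp):
        # Partial-key cuckoo hashing: XOR with the masked fingerprint is its
        # own inverse, so either bucket can recover the other.
        return index ^ (fp & (len(self.buckets) - 1))

    def insert(self, item):
        fp = self._fingerprint(item)
        i1 = self._index(item)
        i2 = self._alt_index(i1, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        # Both buckets full: evict a resident fingerprint and relocate it.
        i = random.choice((i1, i2))
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = self._alt_index(i, fp)
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False  # table is considered full

    def contains(self, item):
        fp = self._fingerprint(item)
        i1 = self._index(item)
        return fp in self.buckets[i1] or fp in self.buckets[self._alt_index(i1, fp)]

cf = CuckooFilter()
records = ["emr-001", "emr-002", "rhm-017"]  # hypothetical record identifiers
for r in records:
    cf.insert(r)
```

Unlike a Bloom filter, a cuckoo filter also supports deletion (remove the fingerprint from either candidate bucket), which matters when healthcare records are revoked.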
A. R. Teplyakova, R. V. Shershnev, S. O. Starkov et al.
With the increasing routine workload on radiologists associated with the need to analyze large numbers of images, there is a need to automate part of the analysis process. Sarcopenia is a condition involving a loss of muscle mass. It is most often diagnosed with computed tomography, from whose images the volume of muscle tissue can be assessed. The first stage of the analysis is contouring, which is performed manually, takes a long time, and is not always performed with sufficient quality, affecting the accuracy of the estimates and, as a result, the patient's treatment plan. The subject of the study is the use of computer vision approaches for accurate segmentation of muscle tissue in computed tomography images for the purpose of sarcometry. The purpose of the study is to develop an approach to segmenting the collected and annotated images. The presented approach includes stages of image pre-processing, segmentation using neural networks of the U-Net family, and post-processing. In total, 63 configurations of the approach are considered, differing in the data supplied to the models and in the model architectures. The influence of the proposed method of post-processing the resulting binary masks on segmentation accuracy is also evaluated. The configuration that combines pre-processing with table masking and anisotropic diffusion filtering, segmentation with an Inception U-Net model, and post-processing based on contour analysis achieves a Dice similarity coefficient of 0.9379 and an Intersection over Union of 0.8824. Nine other configurations, whose experimental results are reported in the article, also demonstrated high values of these metrics (in the ranges 0.9356–0.9374 and 0.8794–0.8822, respectively). The proposed approach based on preprocessed three-channel images achieves metrics of 0.9364 and 0.8802, respectively, using the lightweight U-Net segmentation model. A software module implementing the described approach was developed in Python. The results confirm the feasibility of using computer vision to assess muscle tissue parameters, and the developed module can be used to reduce the routine workload on radiologists.
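The two reported metrics are straightforward to compute from binary masks; a minimal sketch with a hand-checked example (the masks are illustrative, not from the study's data):

```python
def dice_and_iou(pred, target):
    """Dice similarity coefficient and Intersection over Union for binary
    masks given as flat 0/1 sequences of equal length."""
    inter = sum(p & t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
dice, iou = dice_and_iou(pred, target)  # intersection 2, sums 3+3, union 4
```

Dice always dominates IoU for the same masks (Dice = 2·IoU/(1+IoU)), which is why the paper's paired values track each other so closely.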
Yablokova Alena, Kovalev Igor, Kovalev Dmitry et al.
The paper examines aspects of developing and formalizing the task of applying computer vision methods and algorithms using OpenCV (implemented in Python 3.13) for automatic detection and classification of objects in decision support systems. A software implementation of a modular example is provided that automatically detects and classifies plant diseases based on their external characteristics for decision support systems in agriculture. This approach facilitates a prompt response to plant diseases and the implementation of the measures necessary to treat them.
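The paper's OpenCV pipeline is not reproduced here, but the detect-and-classify idea can be sketched dependency-free with a hue-band heuristic. The green hue band and the 30% threshold are illustrative assumptions; a real pipeline would typically threshold an HSV image with OpenCV's `cv2.inRange` instead of per-pixel `colorsys` calls.

```python
import colorsys

def leaf_disease_ratio(pixels):
    """Fraction of leaf pixels falling outside the green hue band.
    Healthy leaf tissue is green, while many lesions appear brown or
    yellow; pixels are (r, g, b) tuples in 0-255."""
    diseased = 0
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if not (0.20 < h < 0.45):  # outside the assumed green hue band
            diseased += 1
    return diseased / len(pixels)

def classify(pixels, threshold=0.3):
    """Label a leaf 'diseased' if too many pixels leave the green band."""
    return "diseased" if leaf_disease_ratio(pixels) > threshold else "healthy"

healthy_leaf  = [(30, 160, 40)] * 9 + [(150, 120, 60)]      # mostly green
diseased_leaf = [(150, 120, 60)] * 7 + [(30, 160, 40)] * 3  # mostly brown
```

The modular structure mirrors the decision-support framing: a detection stage producing a measurement, and a classification stage mapping it to an actionable label.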
Ismaila Temitayo Sanusi, Friday Joseph Agbo, Oluwaseun Alexander Dada et al.
The integration of artificial intelligence (AI) as a subject into K-12 education worldwide is still in its early stages and undoubtedly needs further investigation. There is limited effort on understanding policymakers, teachers and students’ viewpoints on AI learning within the school system. This study gathered the thoughts of key stakeholders, including policymakers, higher education and K-12 teachers, and students in Nigeria, to understand their conceptions, concerns, and dispositions, with the aim of aiding the implementation of AI in schools. We further explored the needs of the diverse stakeholders, how they can be supported and juxtaposed their views to identify their priorities and how their opinions combined could give a holistic approach to the effective implementation of AI education. This research employed a qualitative methodology using semi-structured interviews as the means of data collection. The thematic analysis of the interview data from the 21 participants indicates their conceptions, what they considered the priorities for including AI in the school system, concerns and support needed to implement AI in schools. The findings of this study contribute to the ongoing conversation on how to effectively integrate AI into school curriculum.
Yoonha Cha, Victoria Jackson, Isabela Figueira et al.
Context: Scholars in the software engineering (SE) research community have investigated career advancement in the software industry. Research topics have included how individual and external factors can impact career mobility of software professionals, and how gender affects career advancement. However, the community has yet to look at career mobility from the lens of accessibility. Specifically, there is a pressing need to illuminate the factors that hinder the career mobility of blind and low vision software professionals (BLVSPs). Objective: This study aims to understand aspects of the workplace that impact career mobility for BLVSPs. Methods: We interviewed 26 BLVSPs with different roles, years of experience, and industry sectors. Thematic analysis was used to identify common factors related to career mobility. Results: We found four factors that impacted the career mobility of BLVSPs: (1) technical challenges, (2) colleagues' perceptions of BLVSPs, (3) BLVSPs' own perceptions on managerial progression, and (4) BLVSPs' investment in accessibility at the workplace. Conclusion: We suggest implications for tool designers, organizations, and researchers towards fostering more accessible workplaces to support the career mobility of BLVSPs.
Kaylea Champion, Benjamin Mako Hill
Because open source software relies on individuals who select their own tasks, it is often underproduced -- a term used by software engineering researchers to describe when a piece of software's relative quality is lower than its relative importance. We examine the social and technical factors associated with underproduction through a comparison of software packaged by the Debian GNU/Linux community. We test a series of hypotheses developed from a reading of prior research in software engineering. Although we find that software age and programming language age offer a partial explanation for variation in underproduction, we were surprised to find that the association between underproduction and package age is weaker at high levels of programming language age. With respect to maintenance efforts, we find that additional resources are not always tied to better outcomes. In particular, having higher numbers of contributors is associated with higher underproduction risk. Also, contrary to our expectations, maintainer turnover and maintenance by a declared team are not associated with lower rates of underproduction. Finally, we find that the people working on bugs in underproduced packages tend to be those who are more central to the community's collaboration network structure, although contributors' betweenness centrality (often associated with brokerage in social networks) is not associated with underproduction.
Samuel Abedu, SayedHassan Khatoonabadi, Emad Shihab
Software repositories contain valuable information for understanding the development process. However, extracting insights from repository data is time-consuming and requires technical expertise. While software engineering chatbots support natural language interactions with repositories, chatbots struggle to understand questions beyond their trained intents and to accurately retrieve the relevant data. This study aims to improve the accuracy of LLM-based chatbots in answering repository-related questions by augmenting them with knowledge graphs. We use a two-step approach: constructing a knowledge graph from repository data, and synergizing the knowledge graph with an LLM to handle natural language questions and answers. We curated 150 questions of varying complexity and evaluated the approach on five popular open-source projects. Our initial results revealed the limitations of the approach, with most errors due to the reasoning ability of the LLM. We therefore applied few-shot chain-of-thought prompting, which improved accuracy to 84%. We also compared against baselines (MSRBot and GPT-4o-search-preview), and our approach performed significantly better. In a task-based user study with 20 participants, users completed more tasks correctly and in less time with our approach, and they reported that it was useful. Our findings demonstrate that LLMs and knowledge graphs are a viable solution for making repository data accessible.
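The knowledge-graph side of such a chatbot can be sketched with a toy triple store and a hand-written two-hop traversal. The triples, entity names, and question template below are hypothetical; in the actual system the graph is paired with an LLM that plans queries, rather than fixed traversal functions.

```python
from collections import defaultdict

# Hypothetical triples extracted from repository data (commits, issues, files).
TRIPLES = [
    ("alice", "authored", "commit-42"),
    ("commit-42", "modified", "parser.py"),
    ("commit-42", "fixes", "issue-7"),
    ("bob", "authored", "commit-43"),
    ("commit-43", "modified", "lexer.py"),
]

def build_graph(triples):
    """Adjacency-list knowledge graph: subject -> [(predicate, object)]."""
    graph = defaultdict(list)
    for subj, pred, obj in triples:
        graph[subj].append((pred, obj))
    return graph

def who_modified(graph, filename):
    """Answer 'who modified <file>?' with a two-hop traversal:
    author -authored-> commit -modified-> file."""
    authors = set()
    for subj, edges in graph.items():
        for pred, obj in edges:
            if pred == "authored" and ("modified", filename) in graph.get(obj, []):
                authors.add(subj)
    return authors

graph = build_graph(TRIPLES)
```

Grounding the answer in explicit graph edges, instead of asking the LLM to recall repository facts, is what reduces the reasoning errors the study observed.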
Aleksander Ogonowski, Michał Żebrowski, Arkadiusz Ćwiek et al.
Most intrusion detection methods in computer networks are based on traffic flow characteristics. However, this approach may not fully exploit the potential of deep learning algorithms to extract features and patterns directly from raw packets. It also impedes real-time monitoring, since the processing pipeline must complete first, and introduces dependencies on additional software components. In this paper, we investigate deep learning methodologies capable of detecting attacks in real time directly from raw packet data within network traffic. We propose a novel approach in which packets are stacked into windows and recognised separately, using a 2D image representation suitable for processing with computer vision models. Our investigation uses the CIC IDS-2017 dataset, which includes both benign traffic and prevalent real-world attacks, providing a comprehensive foundation for our research.
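The packet-to-image windowing step can be sketched as follows; the row width and window depth are illustrative parameters, not the paper's configuration.

```python
def packets_to_window(packets, width=16, window=4):
    """Stack raw packets into a fixed-size 2D window: each packet becomes a
    row of byte values (0-255), truncated or zero-padded to `width`, and the
    window is padded with all-zero rows up to `window` packets. The result
    is a single-channel image grid a vision model can consume."""
    rows = []
    for pkt in packets[:window]:
        row = list(pkt[:width]) + [0] * max(0, width - len(pkt))
        rows.append(row)
    while len(rows) < window:
        rows.append([0] * width)
    return rows

# A short IPv4-header prefix and a 20-byte payload, stacked into a 3x8 window.
img = packets_to_window([b"\x45\x00\x00\x28", bytes(range(20))], width=8, window=3)
```

Fixing the grid shape up front is what lets the detector run per window in real time, rather than waiting for a flow to finish before computing aggregate statistics.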
K. Lange
Asif Imran
Software security requirements have traditionally been considered a non-functional attribute of software. However, as more software has come to provide services online, existing mechanisms for securing software with firewalls and other hardware have lost their applicability. At the same time, under current world circumstances, cyber-attacks on software are ever increasing. As a result, it is important to consider the security requirements of software during its design. To design security into software, it is important to obtain the views of its developers and managers, and to evaluate whether their viewpoints on security match or differ. Conducting this communication through a specific model will enable developers and managers to eliminate doubts about security design and adopt an effective strategy for building security into the software. In this paper, we analyze the viewpoints of developers and managers on security design. We interviewed a team of 7 developers and 2 managers who worked in two teams to build a real-life software product that was recently compromised by a cyber-attack. We obtained their views on the reasons the malware attack succeeded and collected their recommendations on the important aspects to consider regarding security. Based on their feedback, we coded their open-ended responses into 4 codes, which we recommend using for other real-life software as well.
Page 30 of 407,609