Matheus Alexandre de Vasconcelos, Júlia Audrem Gomes de Oliveira Fadul, Vitor da Silva de Souza
et al.
This study investigated the fabrication and characterization of biodegradable monofilament suture threads based on cellulose acetate (CA), plasticized with triethyl citrate (TC) and reinforced with poly(lactic acid) (PLA), produced by twin-screw micro-extrusion for potential dental surgical applications. The effects of TC content (25, 30, and 35 wt%) and PLA incorporation (15 wt%) on processability, thermal behavior, morphology, mechanical properties, and biocompatibility were systematically evaluated. The sutures were characterized by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), scanning electron microscopy (SEM), tensile testing, and in vitro biological assays using RAW 264.7 macrophages. The results showed that TC effectively reduced the glass transition temperature of CA, enhancing flexibility and processability, while PLA incorporation influenced thermal stability and promoted morphological changes, including increased porosity and elasticity. SEM analyses revealed smooth and homogeneous surfaces for CA/TC monofilaments and slightly rougher, more porous morphologies for CA/PLA/TC blends, which may favor tissue interaction. In vitro assays confirmed cytocompatibility, with no cytotoxic effects or nitric oxide induction observed over a 7-day exposure period. Although most formulations did not fully meet the tensile strength requirements specified by NBR 13904/2003, the formulation containing 25 wt% TC and 15 wt% PLA (A4) exhibited the best mechanical performance, approaching normative values while maintaining suitable handling characteristics. Overall, the results indicate that CA/TC/PLA-based sutures represent a promising, sustainable, and biocompatible alternative for absorbable surgical sutures, although further optimization is required to fully comply with mechanical regulatory standards.
Empirical research in reverse engineering (RE) and software protection is crucial for evaluating the efficacy of methods designed to protect software against unauthorized access and tampering. However, conducting such studies with professional reverse engineers presents significant challenges, including access to professionals and affordability. This paper explores the use of students as participants in empirical reverse engineering experiments, examining their suitability and the necessary training; the design of appropriate challenges; strategies for ensuring the rigor and validity of the research and its results; ways to maintain students' privacy, motivation, and voluntary participation; and data collection methods. We present a systematic literature review of existing reverse engineering experiments and user studies, a discussion of related work from the broader domain of software engineering that applies to reverse engineering experiments, an extensive discussion of our own experience running experiments in the context of a master-level software hacking and protection course, and recommendations based on this experience. Our findings aim to guide future empirical studies in RE, balancing practical constraints with the need for meaningful, reproducible results.
Esteban Parra, Sonia Haiduc, Preetha Chatterjee
et al.
Peer review is the main mechanism by which the software engineering community assesses the quality of scientific results. However, the rapid growth of paper submissions in software engineering venues has outpaced the availability of qualified reviewers, creating a growing imbalance that risks constraining and negatively impacting the long-term growth of the Software Engineering (SE) research community. Our vision of the Future of the SE research landscape involves a more scalable, inclusive, and resilient peer review process that incorporates additional mechanisms for: 1) attracting and training newcomers to serve as high-quality reviewers, 2) incentivizing more community members to serve as peer reviewers, and 3) cautiously integrating AI tools to support a high-quality review process.
The discussion around AI-Engineering, that is, Software Engineering (SE) for AI-enabled Systems, cannot ignore a crucial class of software systems that are increasingly becoming AI-enhanced: Those used to enable or support the SE process, such as Computer-Aided SE (CASE) tools and Integrated Development Environments (IDEs). In this paper, we study the energy efficiency of these systems. As AI becomes seamlessly available in these tools and, in many cases, is active by default, we are entering a new era with significant implications for energy consumption patterns throughout the Software Development Lifecycle (SDLC). We focus on advanced Machine Learning (ML) capabilities provided by Large Language Models (LLMs). Our proposed approach combines Retrieval-Augmented Generation (RAG) with Prompt Engineering Techniques (PETs) to enhance both the quality and energy efficiency of LLM-based code generation. We present a comprehensive framework that measures real-time energy consumption and inference time across diverse model architectures ranging from 125M to 7B parameters, including GPT-2, CodeLlama, Qwen 2.5, and DeepSeek Coder. These LLMs, chosen for practical reasons, are sufficient to validate the core ideas and provide a proof of concept for more in-depth future analysis.
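The abstract above describes a framework that measures real-time energy consumption and inference time across model architectures. The paper does not give implementation details, so the following is only a minimal sketch of the kind of per-request bookkeeping such a framework needs; the `ASSUMED_AVG_POWER_W` figure and the dummy `generate()` stand-in are assumptions for illustration (a real setup would read hardware counters such as RAPL or NVML instead of assuming a constant draw).

```python
import time

# Assumed constant average power draw during inference (illustrative only;
# real measurements would come from RAPL/NVML hardware counters).
ASSUMED_AVG_POWER_W = 50.0

def generate(prompt):
    """Stand-in for an LLM forward pass (hypothetical placeholder)."""
    time.sleep(0.01)
    return prompt.upper()

def measured_generate(prompt):
    """Wrap a generation call with latency and coarse energy bookkeeping."""
    t0 = time.perf_counter()
    out = generate(prompt)
    dt = time.perf_counter() - t0          # inference time in seconds
    energy_j = ASSUMED_AVG_POWER_W * dt    # E = P * t, coarse estimate
    return out, dt, energy_j

out, dt, energy = measured_generate("def add(a, b):")
print(f"latency {dt * 1000:.1f} ms, ~{energy:.2f} J")
```

Aggregating these per-request figures across prompts and models is what allows the latency/energy trade-offs of different architectures (here, 125M to 7B parameters) to be compared.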
Heterogeneity has historically existed in the organization of software engineering (SE) research, namely the coexistence of the funded research model and the hands-on model, and it has helped software engineering become a thriving interdisciplinary field over the last 50 years. Recently, however, the funded research model has become dominant in SE research, indicating that this heterogeneity is seriously and systematically threatened. In this essay, we first explain why heterogeneity is needed in the organization of SE research, then present the current trend in SE research, as well as its consequences and potential futures. The choice is in our hands, and we urge our community to seriously consider maintaining the heterogeneity in the organization of software engineering research.
Large language model-specific inference engines (\emph{LLM inference engines} for short) have become a fundamental component of modern AI infrastructure, enabling the deployment of LLM-powered applications (LLM apps) across cloud and local devices. Despite their critical role, LLM inference engines are prone to bugs due to the immense resource demands of LLMs and the complexities of cross-platform compatibility. However, a systematic understanding of these bugs remains lacking. To bridge this gap, we present the first empirical study on bugs in LLM inference engines. We mine the official repositories of 5 widely adopted LLM inference engines, constructing a comprehensive dataset of 929 real-world bugs. Through a rigorous open coding process, we analyze these bugs to uncover their symptoms, root causes, commonality, fix effort, fix strategies, and temporal evolution. Our findings reveal six bug symptom types and a taxonomy of 28 root causes, shedding light on the key challenges in bug detection and localization within LLM inference engines. Based on these insights, we propose a series of actionable implications for researchers, inference engine vendors, and LLM app developers, along with general guidelines for developing LLM inference engines.
Large Language Models (LLMs) are increasingly integrated into software applications, giving rise to a broad class of prompt-enabled systems, in which prompts serve as the primary 'programming' interface for guiding system behavior. Building on this trend, a new software paradigm, promptware, has emerged, which treats natural language prompts as first-class software artifacts for interacting with LLMs. Unlike traditional software, which relies on formal programming languages and deterministic runtime environments, promptware is based on ambiguous, unstructured, and context-dependent natural language and operates on LLMs as runtime environments, which are probabilistic and non-deterministic. These fundamental differences introduce unique challenges in prompt development. In practice, prompt development remains largely ad hoc and relies heavily on time-consuming trial-and-error, a challenge we term the promptware crisis. To address this, we propose promptware engineering, a new methodology that adapts established Software Engineering (SE) principles to prompt development. Drawing on decades of success in traditional SE, we envision a systematic framework encompassing prompt requirements engineering, design, implementation, testing, debugging, evolution, deployment, and monitoring. Our framework re-contextualizes emerging prompt-related challenges within the SE lifecycle, providing principled guidance beyond ad-hoc practices. Without the SE discipline, prompt development is likely to remain mired in trial-and-error. This paper outlines a comprehensive roadmap for promptware engineering, identifying key research directions and offering actionable insights to advance the development of prompt-enabled systems.
The rapid emergence of generative AI models like Large Language Models (LLMs) has demonstrated their utility across various activities, including within Requirements Engineering (RE). Ensuring the quality and accuracy of LLM-generated output is critical, with prompt engineering serving as a key technique to guide model responses. However, existing literature provides limited guidance on how prompt engineering can be leveraged specifically for RE activities. The objective of this study is to explore the applicability of existing prompt engineering guidelines for the effective usage of LLMs within RE. To achieve this goal, we began by conducting a systematic review of primary literature to compile a non-exhaustive list of prompt engineering guidelines. Then, we conducted interviews with RE experts to present the extracted guidelines and gain insights on the advantages and limitations of their application within RE. Our literature review indicates a shortage of prompt engineering guidelines for domain-specific activities, specifically for RE. Our proposed mapping contributes to addressing this shortage. We conclude our study by identifying an important future line of research within this field.
Workers and vehicles are the key objects of coal yard safety management, and obtaining their high-precision positioning information in real time is of great significance for avoiding safety accidents. This paper studies a detection and positioning method for personnel and vehicles based on the fusion of ultra-wideband (UWB) and vision. According to the scene characteristics of an enclosed coal yard, a positioning method based on time of flight and time difference of arrival (TOF/TDOA) is proposed to realize high-precision positioning of tagged targets in the coal yard. To address the interruption of UWB positioning caused by occlusion, a target detection and identification method based on Transformer and YOLOv7 is proposed to estimate target positions in the image, and a Kalman filter is used to realize fused positioning. The method was tested in the coal yard of the Ningxia Lingwu Power Plant of National Energy. A dataset of operators and vehicles was built by collecting images of operation scenes in the coal yard. The average precision of the target detection model, mAP@0.5, is 0.925, and the model deployed on site achieves an image detection speed of 30 frames per second. The experimental results show that the model detects the main moving targets in the coal storage yard with high accuracy and speed, meeting the real-time requirements of the coal yard application. Comparative experiments show that the proposed detection model improves mAP@0.5 by 1.4% over the original YOLOv7 without sacrificing the processing frame rate. The positioning accuracy in the coal yard ranges from 0.5 to 0.9 m, and the positioning refresh rate is 10 updates per second, which effectively solves the problem of UWB positioning interruption caused by occlusion.
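The abstract above describes fusing accurate UWB fixes with noisier vision-based position estimates via a Kalman filter when the tag is occluded. The paper's filter parameters are not given, so the following is only a one-dimensional sketch of that fusion idea; the noise values, the random-walk motion model, and the 7-in-10 availability pattern are all illustrative assumptions.

```python
import random

class Fusion1D:
    """Minimal 1-D Kalman filter: fuses position fixes from two sources
    with different measurement noise (all parameters are assumptions)."""

    def __init__(self):
        self.x = 0.0   # estimated position along one axis (m)
        self.p = 1.0   # estimate variance
        self.q = 0.01  # process noise per step (random-walk model, assumed)

    def step(self, z, r):
        """Predict one step, then fuse measurement z with variance r."""
        self.p += self.q               # predict: uncertainty grows
        k = self.p / (self.p + r)      # Kalman gain
        self.x += k * (z - self.x)     # correct toward the measurement
        self.p *= (1.0 - k)
        return self.x

random.seed(0)
kf = Fusion1D()
true_pos = 5.0  # stationary tag at 5 m (toy scenario)
for step in range(100):
    if step % 10 < 7:   # UWB available: accurate fix, ~0.1 m noise (assumed)
        kf.step(true_pos + random.gauss(0, 0.1), 0.1 ** 2)
    else:               # UWB occluded: noisier vision estimate, ~0.4 m
        kf.step(true_pos + random.gauss(0, 0.4), 0.4 ** 2)
```

Because each measurement carries its own variance, the filter automatically weights UWB fixes more heavily than vision estimates, which is what keeps the track continuous through occlusions.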
Lack of data on which to perform experimentation is a recurring issue in many areas of research, particularly in machine learning. Most automated data mining techniques cannot be generalized to all types of data because they inherently depend on the specific types they were designed for, which renders them ineffective against anything even slightly different. Meta-heuristics are algorithms that attempt to optimize some solution independently of the type of data used, whereas classifiers and neural networks focus on feature extrapolation and dimensionality reduction to fit some model onto data arranged in a particular way. These two algorithmic fields encompass a set of characteristics that, when combined, are seemingly capable of achieving data mining regardless of how the data is arranged. To this end, this work proposes a means of combining meta-heuristic methods with conventional classifiers and neural networks in order to perform automated data mining. Experiments were performed on the MNIST dataset for handwritten digit recognition, and it was empirically observed that the validation accuracy on a ground-truth labeled dataset is inadequate for correcting the labels of other, previously unseen data instances.
Dialogical Argument Mining (DialAM) is an important branch of Argument Mining (AM). DialAM-2024 is a shared task focusing on dialogical argument mining, which requires participants to identify argumentative relations and illocutionary relations among proposition nodes and locution nodes. To accomplish this, we propose a two-stage pipeline, which includes the Two-Step S-Node Prediction Model in Stage 1 and the YA-Node Prediction Model in Stage 2. We also augment the training data in both stages and introduce context in Stage 2. We successfully completed the task and achieved good results. Our team Pokemon ranked 1st in the ARI Focused score and 4th in the Global Focused score.