Results for "Low temperature engineering. Cryogenic engineering. Refrigeration"

Showing 20 of ~8,474,229 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

S2 Open Access 2023
An Empirical Study of the Non-Determinism of ChatGPT in Code Generation

Shuyin Ouyang, J Zhang, M. Harman et al.

There has been a recent explosion of research on Large Language Models (LLMs) for software engineering tasks, in particular code generation. However, results from LLMs can be highly unstable, non-deterministically returning very different code for the same prompt. Such non-determinism affects the correctness and consistency of the generated code, undermines developers' trust in LLMs, and yields low reproducibility in LLM-based papers. Nevertheless, no prior work has investigated how serious this non-determinism threat is. To fill this gap, this article conducts an empirical study of the non-determinism of ChatGPT in code generation. We chose to study ChatGPT because it is already highly prevalent in the code generation research literature. We report results from a study of 829 code generation problems across three code generation benchmarks (CodeContests, APPS and HumanEval) along three aspects of code similarity: semantic, syntactic, and structural. Our results reveal that ChatGPT exhibits a high degree of non-determinism under the default setting: the ratio of coding tasks with zero equal test output across different requests is 75.76%, 51.00% and 47.56% for the three code generation datasets (CodeContests, APPS and HumanEval), respectively. In addition, we find that setting the temperature to 0 does not guarantee determinism in code generation, although it does yield less non-determinism than the default configuration (temperature = 1). In order to put LLM-based research on firmer scientific foundations, researchers need to take non-determinism into account when drawing their conclusions.

248 citations en Computer Science
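The study's headline metric (the share of tasks where no two requests agree on test output) is straightforward to reproduce on one's own generations. A minimal sketch, with toy strings standing in for real model outputs (the helper names are illustrative, not the paper's code):

```python
from itertools import combinations

def all_outputs_differ(outputs):
    """True when no two generations produced the same test output
    (the paper's 'zero equal test output' condition for one task)."""
    return all(a != b for a, b in combinations(outputs, 2))

def zero_equal_ratio(tasks_outputs):
    """Fraction of tasks whose repeated generations never agree."""
    flagged = sum(all_outputs_differ(outs) for outs in tasks_outputs)
    return flagged / len(tasks_outputs)

# Toy data: three tasks, three generations per task.
tasks = [
    ["42", "41", "40"],   # every pair of runs disagrees
    ["ok", "ok", "ok"],   # all runs agree
    ["a", "b", "a"],      # two runs agree
]
print(zero_equal_ratio(tasks))  # 1 of 3 tasks has zero equal outputs
```

Running the same comparison once with the default temperature and once with temperature 0 makes the paper's second finding measurable on any model.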
arXiv Open Access 2026
Mining the YARA Ecosystem: From Ad-Hoc Sharing to Data-Driven Threat Intelligence

Dectot--Le Monnier de Gouville Esteban, Mohammad Hamdaqa, Moataz Chouchen

YARA has established itself as the de facto standard for "Detection as Code," enabling analysts and DevSecOps practitioners to define signatures for malware identification across the software supply chain. Despite its pervasive use, the open-source YARA ecosystem remains characterized by ad-hoc sharing and opaque quality. Practitioners currently rely on public repositories without empirical evidence regarding the ecosystem's structural characteristics, maintenance and diffusion dynamics, or operational reliability. We conducted a large-scale mixed-method study of 8.4 million rules mined from 1,853 GitHub repositories. Our pipeline integrates repository mining to map supply chain dynamics, static analysis to assess syntactic quality, and dynamic benchmarking against 4,026 malware and 2,000 goodware samples to measure operational effectiveness. We reveal a highly centralized structure where 10 authors drive 80% of rule adoption. The ecosystem functions as a "static supply chain": repositories show a median inactivity of 782 days and a median technical lag of 4.2 years. While static quality scores appear high (mean = 99.4/100), operational benchmarking uncovers significant noise (false positives) and low recall. Furthermore, coverage is heavily biased toward legacy threats (Ransomware), leaving modern initial access vectors (Loaders, Stealers) severely underrepresented. These findings expose a systemic "double penalty": defenders incur high performance overhead for decayed intelligence. We argue that public repositories function as raw data dumps rather than curated feeds, necessitating a paradigm shift from ad-hoc collection to rigorous rule engineering. We release our dataset and pipeline to support future data-driven curation tools.

en cs.SE, cs.CR
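The static-analysis stage of a pipeline like the one described can be approximated with stdlib tooling alone. A naive sketch that splits a YARA source file into rules and flags those whose meta section carries no date, one toy proxy for the maintenance decay the study measures (the sample rules and helper functions are illustrative, not the authors' pipeline):

```python
import re

def rule_blocks(text):
    """Split a YARA source file into (name, body) pairs.
    Naive: assumes top-level rules whose headers start a line."""
    parts = re.split(r"(?m)^rule\s+(\w+)", text)[1:]
    return list(zip(parts[0::2], parts[1::2]))

def undated_rules(text):
    """Names of rules with no meta date field."""
    return [name for name, body in rule_blocks(text)
            if not re.search(r"date\s*=", body)]

sample = """
rule Legacy_Ransom_Sig {
    meta:
        date = "2017-05-12"
    strings:
        $a = "WannaDecryptor"
    condition:
        $a
}

rule Undated_Loader_Sig {
    strings:
        $mz = { 4D 5A }
    condition:
        $mz at 0
}
"""

print(undated_rules(sample))  # ['Undated_Loader_Sig']
```

A real curation tool would use a proper YARA parser (e.g. the `yara` bindings) rather than regexes, but the reuse-vs-decay signal is the same idea.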
arXiv Open Access 2026
Evaluating and Improving Automated Repository-Level Rust Issue Resolution with LLM-based Agents

Jiahong Xiang, Wenxiao He, Xihua Wang et al.

The Rust programming language presents a steep learning curve and significant coding challenges, making the automation of issue resolution essential for its broader adoption. Recently, LLM-powered code agents have shown remarkable success in resolving complex software engineering tasks, yet their application to Rust has been limited by the absence of a large-scale, repository-level benchmark. To bridge this gap, we introduce Rust-SWE-bench, a benchmark comprising 500 real-world, repository-level software engineering tasks from 34 diverse and popular Rust repositories. We then perform a comprehensive study on Rust-SWE-bench with four representative agents and four state-of-the-art LLMs to establish a foundational understanding of their capabilities and limitations in the Rust ecosystem. Our extensive study reveals that while ReAct-style agents are promising, i.e., resolving up to 21.2% of issues, they are limited by two primary challenges: comprehending repository-wide code structure and complying with Rust's strict type and trait semantics. We also find that issue reproduction is rather critical for task resolution. Inspired by these findings, we propose RUSTFORGER, a novel agentic approach that integrates an automated test environment setup with a Rust metaprogramming-driven dynamic tracing strategy to facilitate reliable issue reproduction and dynamic analysis. The evaluation shows that RUSTFORGER using Claude-Sonnet-3.7 significantly outperforms all baselines, resolving 28.6% of tasks on Rust-SWE-bench, i.e., a 34.9% improvement over the strongest baseline, and, in aggregate, uniquely solves 46 tasks that no other agent could solve across all adopted advanced LLMs.

arXiv Open Access 2025
A Multi-Stage Hybrid Framework for Automated Interpretation of Multi-View Engineering Drawings Using Vision Language Model

Muhammad Tayyab Khan, Zane Yong, Lequn Chen et al.

Engineering drawings are fundamental to manufacturing communication, serving as the primary medium for conveying design intent, tolerances, and production details. However, interpreting complex multi-view drawings with dense annotations remains challenging using manual methods, generic optical character recognition (OCR) systems, or traditional deep learning approaches, due to varied layouts, orientations, and mixed symbolic-textual content. To address these challenges, this paper proposes a three-stage hybrid framework for the automated interpretation of 2D multi-view engineering drawings using modern detection and vision language models (VLMs). In the first stage, YOLOv11-det performs layout segmentation to localize key regions such as views, title blocks, and notes. The second stage uses YOLOv11-obb for orientation-aware, fine-grained detection of annotations, including measures, GD&T symbols, and surface roughness indicators. The third stage employs two Donut-based, OCR-free VLMs for semantic content parsing: the Alphabetical VLM extracts textual and categorical information from title blocks and notes, while the Numerical VLM interprets quantitative data such as measures, GD&T frames, and surface roughness. Two specialized datasets were developed to ensure robustness and generalization: 1,000 drawings for layout detection and 1,406 for annotation-level training. The Alphabetical VLM achieved an overall F1 score of 0.672, while the Numerical VLM reached 0.963, demonstrating strong performance in textual and quantitative interpretation, respectively. The unified JSON output enables seamless integration with CAD and manufacturing databases, providing a scalable solution for intelligent engineering drawing analysis.

en cs.CV, cs.AI
arXiv Open Access 2025
Reasonable Experiments in Model-Based Systems Engineering

Johan Cederbladh, Loek Cleophas, Eduard Kamburjan et al.

With the current trend in Model-Based Systems Engineering towards Digital Engineering and early Validation & Verification, experiments are increasingly used to estimate system parameters and explore design decisions. Managing such experimental configuration metadata and results is of utmost importance in accelerating overall design effort. In particular, we observe it is important to 'intelligently' reuse experiment-related data to save time and effort by not performing potentially superfluous, time-consuming, and resource-intensive experiments. In this work, we present a framework for managing experiments on digital and/or physical assets, centered on case-based reasoning with domain knowledge to reuse experimental data efficiently: the framework decides whether an already-performed experiment (or its associated answer) can be reused to answer a new (potentially different) question from the engineer/user, avoiding the setup and execution of a new experiment. We provide the general architecture for such an experiment manager and validate our approach using an industrial vehicular energy system-design case study.

en cs.SE, eess.SY
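The reuse-before-rerun idea at the core of such an experiment manager can be sketched as a case base keyed on experiment configuration (the `ExperimentManager` class and its flat-dict config format are hypothetical illustrations, not the paper's architecture, which additionally reasons over domain knowledge to match non-identical cases):

```python
class ExperimentManager:
    """Toy sketch of reuse-before-rerun, assuming an experiment is
    fully described by a dict of hashable configuration parameters."""

    def __init__(self, runner):
        self._runner = runner   # the expensive experiment function
        self._cases = {}        # frozen config -> stored answer
        self.runs = 0           # how many real experiments we paid for

    def _key(self, config):
        # Order-insensitive key: the same case in any parameter order.
        return frozenset(config.items())

    def answer(self, config):
        key = self._key(config)
        if key in self._cases:          # reuse a past experiment
            return self._cases[key]
        self.runs += 1                  # only now run a real experiment
        result = self._runner(config)
        self._cases[key] = result
        return result

# Hypothetical energy-system experiment: power from voltage and current.
mgr = ExperimentManager(lambda cfg: cfg["voltage"] * cfg["current"])
mgr.answer({"voltage": 48, "current": 10})   # performs the experiment
mgr.answer({"current": 10, "voltage": 48})   # same case, answer reused
```

The paper's contribution lies precisely in relaxing the exact-match lookup shown here: case-based reasoning with domain knowledge lets a stored experiment answer a related but non-identical question.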
arXiv Open Access 2025
Design for Sensing and Digitalisation (DSD): A Modern Approach to Engineering Design

Daniel N. Wilke

This paper introduces Design for Sensing and Digitalisation (DSD), a new engineering design paradigm that integrates sensor technology for digitisation and digitalisation from the earliest stages of the design process. Unlike traditional methodologies that treat sensing as an afterthought, DSD emphasises sensor integration, signal path optimisation, and real-time data utilisation as core design principles. The paper outlines DSD's key principles, discusses its role in enabling digital twin technology, and argues for its importance in modern engineering education. By adopting DSD, engineers can create more intelligent and adaptable systems that leverage real-time data for continuous design iteration, operational optimisation and data-driven predictive maintenance.

en eess.SY, cs.CE
arXiv Open Access 2025
Automated Parsing of Engineering Drawings for Structured Information Extraction Using a Fine-tuned Document Understanding Transformer

Muhammad Tayyab Khan, Zane Yong, Lequn Chen et al.

Accurate extraction of key information from 2D engineering drawings is crucial for high-precision manufacturing. Manual extraction is slow and labor-intensive, while traditional Optical Character Recognition (OCR) techniques often struggle with complex layouts and overlapping symbols, resulting in unstructured outputs. To address these challenges, this paper proposes a novel hybrid deep learning framework for structured information extraction by integrating an Oriented Bounding Box (OBB) detection model with a transformer-based document parsing model (Donut). An in-house annotated dataset is used to train YOLOv11 for detecting nine key categories: Geometric Dimensioning and Tolerancing (GD&T), General Tolerances, Measures, Materials, Notes, Radii, Surface Roughness, Threads, and Title Blocks. Detected OBBs are cropped into images and labeled to fine-tune Donut for structured JSON output. Fine-tuning strategies include a single model trained across all categories and category-specific models. Results show that the single model consistently outperforms category-specific ones across all evaluation metrics, achieving higher precision (94.77% for GD&T), recall (100% for most categories), and F1 score (97.3%), while reducing hallucinations (5.23%). The proposed framework improves accuracy, reduces manual effort, and supports scalable deployment in precision-driven industries.

en cs.CV, cs.AI
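The reported GD&T numbers are internally consistent: an F1 score of 97.3% follows from 94.77% precision and 100% recall via the standard harmonic mean, which can be checked directly:

```python
def f1(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# GD&T category: precision 94.77%, recall 100% (from the abstract).
score = f1(0.9477, 1.0)
print(round(score * 100, 1))  # 97.3, matching the reported F1
```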
arXiv Open Access 2025
Combining TSL and LLM to Automate REST API Testing: A Comparative Study

Thiago Barradas, Aline Paes, Vânia de Oliveira Neves

The effective execution of tests for REST APIs remains a considerable challenge for development teams, driven by the inherent complexity of distributed systems, the multitude of possible scenarios, and the limited time available for test design. Exhaustive testing of all input combinations is impractical, often resulting in undetected failures, high manual effort, and limited test coverage. To address these issues, we introduce RestTSLLM, an approach that uses Test Specification Language (TSL) in conjunction with Large Language Models (LLMs) to automate the generation of test cases for REST APIs. The approach targets two core challenges: the creation of test scenarios and the definition of appropriate input data. The proposed solution integrates prompt engineering techniques with an automated pipeline to evaluate various LLMs on their ability to generate tests from OpenAPI specifications. The evaluation focused on metrics such as success rate, test coverage, and mutation score, enabling a systematic comparison of model performance. The results indicate that the best-performing LLMs - Claude 3.5 Sonnet (Anthropic), Deepseek R1 (Deepseek), Qwen 2.5 32b (Alibaba), and Sabia 3 (Maritaca) - consistently produced robust and contextually coherent REST API tests. Among them, Claude 3.5 Sonnet outperformed all other models across every metric, emerging in this study as the most suitable model for this task. These findings highlight the potential of LLMs to automate the generation of tests based on API specifications.

en cs.SE, cs.AI
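The scenario-enumeration step such a pipeline starts from can be sketched directly against an OpenAPI document: one frame per path-and-method operation, which a TSL template or LLM prompt then fills with input data (the `spec` fragment and `scenarios` helper are illustrative assumptions, not RestTSLLM's actual format):

```python
# Minimal stand-in for a parsed OpenAPI document.
spec = {
    "paths": {
        "/users": {"get": {}, "post": {}},
        "/users/{id}": {"get": {}, "delete": {}},
    }
}

def scenarios(openapi):
    """Enumerate one happy-path scenario frame per operation --
    the starting point a TSL-style generator expands with
    valid/invalid input choices."""
    return sorted(f"{method.upper()} {path}"
                  for path, ops in openapi["paths"].items()
                  for method in ops)

print(scenarios(spec))
# ['DELETE /users/{id}', 'GET /users', 'GET /users/{id}', 'POST /users']
```

Metrics like mutation score then measure how much of the API's behavior the expanded scenarios actually exercise.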
arXiv Open Access 2024
Looking back and forward: A retrospective and future directions on Software Engineering for systems-of-systems

Everton Cavalcante, Thais Batista, Flavio Oquendo

Modern systems are increasingly connected and more integrated with other existing systems, giving rise to systems-of-systems (SoS). An SoS consists of a set of independent, heterogeneous systems that interact to provide new functionalities and accomplish global missions through emergent behavior manifested at runtime. The distinctive characteristics of SoS, when contrasted to traditional systems, pose significant research challenges within Software Engineering. These challenges motivate the need for a paradigm shift and the exploration of novel approaches for designing, developing, deploying, and evolving these systems. The International Workshop on Software Engineering for Systems-of-Systems (SESoS) series started in 2013 to fill a gap in scientific forums addressing SoS from the Software Engineering perspective, becoming the first venue for this purpose. This article presents a study aimed at outlining the evolution and future trajectory of Software Engineering for SoS based on the examination of 57 papers spanning the 11 editions of the SESoS workshop (2013-2023). The study combined scoping review and scientometric analysis methods to categorize and analyze the research contributions concerning temporal and geographic distribution, topics of interest, research methodologies employed, application domains, and research impact. Based on such a comprehensive overview, this article discusses current and future directions in Software Engineering for SoS.

en cs.SE, eess.SY
arXiv Open Access 2024
Federated Learning in Chemical Engineering: A Tutorial on a Framework for Privacy-Preserving Collaboration Across Distributed Data Sources

Siddhant Dutta, Iago Leal de Freitas, Pedro Maciel Xavier et al.

Federated Learning (FL) is a decentralized machine learning approach that has gained attention for its potential to enable collaborative model training across clients while protecting data privacy, making it an attractive solution for the chemical industry. This work aims to provide the chemical engineering community with an accessible introduction to the discipline. Supported by a hands-on tutorial and a comprehensive collection of examples, it explores the application of FL in tasks such as manufacturing optimization, multimodal data integration, and drug discovery while addressing the unique challenges of protecting proprietary information and managing distributed datasets. The tutorial was built using key frameworks such as Flower and TensorFlow Federated and was designed to provide chemical engineers with the right tools to adopt FL in their specific needs. We compare the performance of FL against centralized learning across three different datasets relevant to chemical engineering applications, demonstrating that FL will often maintain or improve classification performance, particularly for complex and heterogeneous data. We conclude with an outlook on the open challenges in federated learning to be tackled and current approaches designed to remediate and improve this framework.

en cs.LG, cs.DC
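The server-side step common to the frameworks the tutorial covers is federated averaging (FedAvg): a weighted mean of client parameter vectors, weighted by local dataset size. A minimal stdlib sketch of that one step (frameworks like Flower implement it for you; the flat-list weight format here is a simplification):

```python
def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: average client parameter
    vectors, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients: one trained on 300 samples, one on 100. The larger
# client pulls the global model three times as hard.
clients = [[1.0, 2.0], [5.0, 6.0]]
sizes = [300, 100]
print(fedavg(clients, sizes))  # [2.0, 3.0]
```

The raw data never leaves the clients; only these parameter vectors are exchanged, which is the privacy property the chemical industry use cases rely on.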
S2 Open Access 2021
Interface engineering: PSS-PPy wrapping amorphous Ni-Co-P for enhancing neutral-pH hydrogen evolution reaction performance

Fenyang Tian, S. Geng, Lin He et al.

Abstract Transition metal phosphides have shown Pt-like hydrogen evolution reaction (HER) activity in acid and alkaline solution, while their HER performance in neutral-pH electrolytes is poor owing to high resistance and inefficient mass transfer. In this work, the well-arranged core/shell PSS-PPy/Ni-Co-P grown on Cu foil, in which Ni-Co-P is the core and a mixed polymer of pyrrole and sodium polystyrene sulfonate is the shell, has been demonstrated to be a highly efficient electrocatalyst for HER in neutral-pH electrolyte. The PSS-PPy/Ni-Co-P/CF self-supporting electrode was easily synthesized by electrodeposition and chemical deposition at room temperature. According to the experimental and DFT calculation results, PSS-PPy not only improves the conductivity and hydrophilicity of Ni-Co-P, but also optimizes its electronic structure. Thus, the PSS-PPy/Ni-Co-P shows remarkable HER activity, with overpotentials of only 106 and 67 mV at a current density of 10 mA cm⁻² in neutral-pH and alkaline electrolytes, respectively. Furthermore, PSS-PPy/Ni-Co-P has excellent stability in neutral-pH and alkaline electrolytes, and even in seawater. This surface modification method points the way to designing more low-cost, high-efficiency and stable electrocatalysts for HER in neutral-pH electrolyte.

89 sitasi en Materials Science
S2 Open Access 2023
Improved snow ablation optimizer with heat transfer and condensation strategy for global optimization problem

Heming Jia, Fangkai You, Di Wu et al.

The Snow Ablation Optimizer (SAO) is a new metaheuristic algorithm proposed in April 2023. It simulates the sublimation and melting of snow in nature and achieves a good optimization effect. SAO introduces a new two-population mechanism and uses Brownian motion to simulate the random motion of gas molecules in space. However, as the temperature factor changes, most water molecules are converted into water vapor, which breaks the balance between exploration and exploitation and reduces the optimization ability of the algorithm in later stages; on high-dimensional problems in particular, it easily falls into local optima. To improve the efficiency of the algorithm, this paper proposes an improved Snow Ablation Optimizer with a Heat Transfer and Condensation Strategy (SAOHTC). First, a heat transfer strategy is proposed: gas molecules transfer heat from high- to low-temperature regions and move from low- to high-temperature positions, causing individuals with lower fitness in the population to move toward individuals with higher fitness and thereby improving the optimization efficiency of the original algorithm. Second, a condensation strategy is proposed that transforms water vapor into water by simulating condensation in nature, remedying the deficiency of the original two-population mechanism and improving convergence speed. Finally, to verify the performance of SAOHTC, two benchmark suites (IEEE CEC2014 and IEEE CEC2017) and five engineering problems are used to test its superior performance.

15 sitasi en Computer Science
CrossRef Open Access 2023
Cryogenic technologies of exposure on human body tissues

Antonina V. Butorina

Various types of physical exposure (laser, electromagnetic, and cryogenic) on living tissues and human organs are used to suppress pathology or for tissue destruction. The use of physical exposures is most often empirical. Nevertheless, experience confirms the promise of their use, and at the same time points to the need to study the characteristics of living tissues and to develop comprehensive technologies for the application of physical effects. The paper describes the experience of their application in practice. They have a number of advantages over traditional methods of treatment, including painlessness, absence of bleeding and of a general systemic reaction of the body, and a high functional effect. Local cryoablation with hand-held portable cryosurgical devices that use liquid nitrogen as a refrigerant (−196 °C) is a practically applicable method for treating simple hemangiomas and allows good functional and cosmetic results. All types of simple small hemangiomas of any localization are amenable to treatment, regardless of the age of the child. The presented method makes it possible to completely or partially avoid complex surgical interventions, especially for small hemangiomas on the face, neck and ear area, and to obtain good results (98%). Preliminary microwave irradiation and the use of a laser make it possible to increase the reach of cryotherapy by 4-6 times along the depth of the hemangioma compared with simple cryoablation. This method retains all the useful features of cryoablation. It is promising for the treatment of cavernous and combined hemangiomas, which have a pronounced subcutaneous part and often complex localization. This approach is potentially useful in other cryogenic applications.

arXiv Open Access 2023
Sustainability is Stratified: Toward a Better Theory of Sustainable Software Engineering

Sean McGuire, Erin Shultz, Bimpe Ayoola et al.

Background: Sustainable software engineering (SSE) means creating software in a way that meets present needs without undermining our collective capacity to meet our future needs. It is typically conceptualized as several intersecting dimensions or "pillars": environmental, social, economic, technical and individual. However, these pillars are theoretically underdeveloped and require refinement. Objectives: The objective of this paper is to generate a better theory of SSE. Method: First, a scoping review was conducted to understand the state of research on SSE and identify existing models thereof. Next, a meta-synthesis of qualitative research on SSE was conducted to critique and improve the existing models identified. Results: 961 potentially relevant articles were extracted from five article databases. These articles were de-duplicated and then screened independently by two screeners, leaving 243 articles to examine. Of these, 109 were non-empirical, the most common empirical method was systematic review, and no randomized controlled experiments were found. Most papers focus on ecological sustainability (158) and the sustainability of software products (148) rather than processes. A meta-synthesis of 36 qualitative studies produced several key propositions, most notably, that sustainability is stratified (has different meanings at different levels of abstraction) and multisystemic (emerges from interactions among multiple social, technical, and sociotechnical systems). Conclusion: The academic literature on SSE is surprisingly non-empirical. More empirical evaluations of specific sustainability interventions are needed. The sustainability of software development products and processes should be conceptualized as multisystemic and stratified, and assessed accordingly.

Page 25 of 423,712