Results for "Engineering machinery, tools, and implements"

Showing 20 of ~6,541,603 results · from CrossRef, DOAJ, arXiv

arXiv Open Access 2026
Revisiting Software Engineering Education in the Era of Large Language Models: A Curriculum Adaptation and Academic Integrity Framework

Mustafa Degerli

The integration of Large Language Models (LLMs), such as ChatGPT and GitHub Copilot, into professional workflows is increasingly reshaping software engineering practices. These tools have lowered the cost of code generation, explanation, and testing, while introducing new forms of automation into routine development tasks. In contrast, most software engineering and computer engineering curricula remain closely aligned with pedagogical models that equate manual syntax production with technical competence. This growing misalignment raises concerns regarding assessment validity, learning outcomes, and the development of foundational skills. Adopting a conceptual research approach, this paper proposes a theoretical framework for analyzing how generative AI alters core software engineering competencies and introduces a pedagogical design model for LLM-integrated education. Attention is given to computer engineering programs in Turkey, where centralized regulation, large class sizes, and exam-oriented assessment practices amplify these challenges. The framework delineates how problem analysis, design, implementation, and testing increasingly shift from construction toward critique, validation, and human-AI stewardship. In addition, the paper argues that traditional plagiarism-centric integrity mechanisms are becoming insufficient, motivating a transition toward a process transparency model. While this work provides a structured proposal for curriculum adaptation, it remains a theoretical contribution; the paper concludes by outlining the need for longitudinal empirical studies to evaluate these interventions and their long-term impacts on learning.

en cs.SE, cs.AI
arXiv Open Access 2026
InFusionLayer: a CFA-based ensemble tool to generate new classifiers for learning and modeling

Eric Roginek, Jingyan Xu, D. Frank. Hsu

Ensemble learning is a well-established body of machine learning methods that enhance predictive performance by combining multiple algorithms/models. Combinatorial Fusion Analysis (CFA) provides methods and practice for combining multiple scoring systems using the rank-score characteristic (RSC) function and cognitive diversity (CD), including ensemble methods and model fusion. However, no general-purpose Python tool is available that incorporates these techniques. In this paper we introduce InFusionLayer, a machine learning architecture inspired by CFA at the system fusion level that uses a moderate set of base models to optimize unsupervised and supervised multiclass classification problems. We demonstrate InFusionLayer's ease of use in PyTorch, TensorFlow, and Scikit-learn workflows by validating its performance on various computer vision datasets. Our results highlight the practical advantages of incorporating the distinctive features of the RSC function and CD, paving the way for more sophisticated ensemble learning applications in machine learning. We have open-sourced our code on GitHub to encourage continued development and community access to CFA: https://github.com/ewroginek/Infusion
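The score-versus-rank fusion idea behind CFA can be sketched in a few lines of plain Python. This is an illustrative toy, not the InFusionLayer API: the model scores and function names are invented for the example.

```python
# Toy CFA-style fusion: two base models score three classes; we
# combine them by averaging scores and by averaging ranks.

def ranks_from_scores(scores):
    # Rank 1 = highest score (ties broken by list order).
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def score_fusion(score_lists):
    # Average the (assumed comparable) scores across models.
    n = len(score_lists)
    return [sum(s[i] for s in score_lists) / n
            for i in range(len(score_lists[0]))]

def rank_fusion(score_lists):
    # Average the ranks across models; lower combined rank wins.
    rank_lists = [ranks_from_scores(s) for s in score_lists]
    n = len(rank_lists)
    return [sum(r[i] for r in rank_lists) / n
            for i in range(len(rank_lists[0]))]

model_a = [0.9, 0.5, 0.1]   # hypothetical per-class scores
model_b = [0.4, 0.8, 0.2]

fused_scores = score_fusion([model_a, model_b])
fused_ranks = rank_fusion([model_a, model_b])
```

Rank fusion is insensitive to the absolute scale of each model's scores, which is one reason the RSC function compares the two views.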

en cs.LG, cs.AI
arXiv Open Access 2026
An AI Teaching Assistant for Motion Picture Engineering

Deirdre O'Regan, Anil C. Kokaram

The rapid rise of LLMs over the last few years has promoted growing experimentation with LLM-driven AI tutors. However, the details of implementation, as well as the benefit in a teaching environment, are still in the early days of exploration. This article addresses these issues in the context of implementation of an AI Teaching Assistant (AI-TA) using Retrieval Augmented Generation (RAG) for Trinity College Dublin's Master's Motion Picture Engineering (MPE) course. We provide details of our implementation (including the prompt to the LLM, and code), and highlight how we designed and tuned our RAG pipeline to meet course needs. We describe our survey instrument and report on the impact of the AI-TA through a number of quantitative metrics. The scale of our experiment (43 students, 296 sessions, 1,889 queries over 7 weeks) was sufficient to have confidence in our findings. Unlike previous studies, we experimented with allowing the use of the AI-TA in open-book examinations. Statistical analysis across three exams showed no performance differences regardless of AI-TA access (p > 0.05), demonstrating that thoughtfully designed assessments can maintain academic validity. Student feedback revealed that the AI-TA was beneficial (mean = 4.22/5), while students had mixed feelings about preferring it over human tutoring (mean = 2.78/5).
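The overall shape of a RAG pipeline like the AI-TA's can be sketched as retrieve-then-prompt. This toy uses word-overlap retrieval in place of an embedding store, the course notes are invented, and the final LLM call is omitted; it is not the paper's implementation.

```python
# Minimal RAG shape: retrieve the most relevant notes, then assemble
# a grounded prompt for the LLM (the LLM call itself is omitted).

COURSE_NOTES = [
    "Gamma correction maps linear light to a perceptual code value.",
    "Chroma subsampling stores colour at lower resolution than luma.",
    "A codec trades bitrate against reconstruction quality.",
]

def retrieve(query, docs, k=2):
    # Rank documents by word overlap with the query (a stand-in for
    # vector similarity search).
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, docs):
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (f"Answer using only the course context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}")

prompt = build_prompt("What does chroma subsampling do?", COURSE_NOTES)
```

Tuning a real pipeline mostly means adjusting what `retrieve` returns and how the context is framed in the prompt.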

en eess.IV, cs.AI
DOAJ Open Access 2025
Analysis of Gluten Protein After Replacing Some of the Wheat Flour with Amaranth Flour in Muffins

Vesna Gojković Cvjetković, Dragana Škuletić, Željka Marjanović-Balaban et al.

Amaranth belongs to the pseudocereal group. This pseudocereal does not contain gluten, and is suitable for a gluten-free diet. This paper aimed to examine how the partial replacement of wheat flour with amaranth in muffins at different ratios and with different storage times affects gluten proteins. Gluten protein separation was performed by reverse-phase high-pressure liquid chromatography (RP-HPLC). Based on the obtained results, the greatest total quantity of gliadin protein was obtained from muffin samples made from 100% wheat flour and stored for 4 weeks (Xav = 20.33), and the least from muffins made from 50% wheat flour and 50% amaranth and stored for 0 weeks (Xav = 12.00). The greatest total quantity of glutenin protein was obtained from muffin samples made from 100% wheat flour and stored for 4 weeks (Xav = 26.67), and the least from 25% wheat flour and 75% amaranth and stored for 0 weeks (Xav = 17.33).

Engineering machinery, tools, and implements
DOAJ Open Access 2025
Physicochemical Properties of Jet-A/n-Heptane/Alcohol Blends for Turboengine Applications

Sibel Osman, Laurentiu Ceatra, Grigore Cican et al.

This work investigated the physical properties of Jet-A blended with n-heptane and various n-alcohols. The mixtures contained 10%, 20%, and 30% n-alcohols, including n-propanol, n-butanol, n-pentanol, n-hexanol, n-heptanol, and n-octanol. These alcohols are either derived from biomass or have significant potential for bio-based production. The blends were assessed against American Society for Testing and Materials (ASTM) D1655 standards for Jet-A in terms of density, viscosity, and flash point. Additionally, the refractive index and Fourier Transform Infrared Spectroscopy (FTIR) analysis were employed to gain insights into the blends' chemical composition. Density measurements for the blends fell within the ASTM specifications (0.7939 to 0.8075 g·cm<sup>−3</sup>). Viscosity measurements at −20 °C were not directly conducted due to technical limitations. However, extrapolating viscosity–temperature data suggests that the blends would meet the ASTM standard. Flash point measurements revealed that all mixtures exhibited values below the ASTM specification of 38 °C. Regression equations were developed to estimate the density, kinematic viscosity, and refractive index of the studied mixtures as a function of alcohol volume. Furthermore, a correlation study was conducted to estimate density and viscosity from refractive index measurements, given their simplicity and minimal sample volume requirements. The R<sup>2</sup> values for these correlations exceeded 0.99, indicating a strong relationship between the refractive index and the other properties.
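The refractive-index-to-density correlation amounts to a least-squares fit, which can be reproduced in a few lines. The data points below are invented for illustration; they are not the study's measurements.

```python
# Ordinary least-squares fit of density against refractive index,
# the kind of correlation the paper reports (data here is made up).

def linfit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b  # intercept, slope

n_d     = [1.440, 1.435, 1.430, 1.425]      # hypothetical refractive indices
density = [0.8075, 0.8030, 0.7985, 0.7940]  # hypothetical g/cm^3

a, b = linfit(n_d, density)
est = a + b * 1.432  # predicted density at n_d = 1.432
```

With a fit like this, a refractometer reading (fast, tiny sample volume) substitutes for a direct density measurement.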

Engineering machinery, tools, and implements, Technological innovations. Automation
DOAJ Open Access 2025
Deep Learning Approach to Cassava Disease Detection Using EfficientNetB0 and Image Augmentation

Jazon Andrei G. Alejandro, James Harvey M. Mausisa, Charmaine C. Paglinawan

Cassava, a vital crop in the Philippines and other tropical regions, is highly susceptible to various diseases that drastically reduce its yield. Traditional inspection methods for detecting these diseases are manual, time-consuming, expensive, and prone to inaccuracies. While recent advances enable improved detection, many approaches focus primarily on leaves and stems, overlooking tubers—one of the most critical parts of the plant. Since tubers are the harvested portion of the cassava and a direct source of food and income, early disease detection in this part is crucial for preventing severe yield losses. Furthermore, symptoms often manifest in the tubers before becoming visible in other parts, making their monitoring essential for timely intervention. To address these challenges and improve accuracy, we employed EfficientNetB0 and data augmentation techniques to enhance disease detection across multiple parts of the cassava plant. The developed system integrates a Raspberry Pi 4B with a camera module and LCD screen enclosed in a 3D-printed casing for ease of use by farmers, and achieved detection accuracies of 94% for leaves, 90% for stems, and 92% for tubers. The system’s reliability was validated with <i>p</i>-values at a 0.05 significance level. By reducing the need for expensive manual inspections, the system offers a robust solution for early disease detection, particularly in the tubers, to mitigate yield losses. Its proven accuracy and practical design support better disease management practices, thereby improving crop health while enhancing food security and supporting the livelihoods of cassava farmers.
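The augmentation step can be pictured with nested lists standing in for pixel arrays; flips and rotations are among the usual augmentations, though the paper's exact augmentation set is not listed in this abstract.

```python
# Two common geometric augmentations on a toy 2x3 "image"
# (nested lists standing in for a pixel array).
img = [[1, 2, 3],
       [4, 5, 6]]

# Horizontal flip: reverse each row.
hflip = [row[::-1] for row in img]

# 90-degree clockwise rotation: reverse the rows, then transpose.
rot90 = [list(col) for col in zip(*img[::-1])]
```

Applying a handful of such transforms to every training image multiplies the effective dataset size without new field photography.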

Engineering machinery, tools, and implements
arXiv Open Access 2025
Dialogue Systems Engineering: A Survey and Future Directions

Mikio Nakano, Hironori Takeuchi, Sadahiro Yoshikawa et al.

This paper proposes to refer to the field of software engineering related to the life cycle of dialogue systems as Dialogue Systems Engineering, and surveys this field while also discussing its future directions. With the advancement of large language models, the core technologies underlying dialogue systems have significantly progressed. As a result, dialogue system technology is now expected to be applied to solving various societal issues and in business contexts. To achieve this, it is important to build, operate, and continuously improve dialogue systems correctly and efficiently. Accordingly, in addition to applying existing software engineering knowledge, it is becoming increasingly important to evolve software engineering tailored specifically to dialogue systems. In this paper, we enumerate the knowledge areas of dialogue systems engineering based on those of software engineering, as defined in the Software Engineering Body of Knowledge (SWEBOK) Version 4.0, and survey each area. Based on this survey, we identify unexplored topics in each area and discuss the future direction of dialogue systems engineering.

en cs.SE, cs.AI
arXiv Open Access 2025
Tether: A Personalized Support Assistant for Software Engineers with ADHD

Aarsh Shah, Cleyton Magalhaes, Kiev Gama et al.

Equity, diversity, and inclusion in software engineering often overlook neurodiversity, particularly the experiences of developers with Attention Deficit Hyperactivity Disorder (ADHD). Despite growing awareness of this population in SE, few tools are designed to support their cognitive challenges (e.g., sustained attention, task initiation, self-regulation) within development workflows. We present Tether, an LLM-powered desktop application designed to support software engineers with ADHD by delivering adaptive, context-aware assistance. Drawing from engineering research methodology, Tether combines local activity monitoring, retrieval-augmented generation (RAG), and gamification to offer real-time focus support and personalized dialogue. The system integrates operating-system-level activity tracking to prompt engagement, and its chatbot leverages ADHD-specific resources to offer relevant responses. Preliminary validation through self-use revealed improved contextual accuracy following iterative prompt refinements and RAG enhancements. Tether differentiates itself from generic tools by being adaptable and aligned with software-specific workflows and ADHD-related challenges. While not yet evaluated by target users, this work lays the foundation for future neurodiversity-aware tools in SE and highlights the potential of LLMs as personalized support systems for underrepresented cognitive needs.

en cs.SE
DOAJ Open Access 2024
Assessment of greenhouse gas reduction through the utilization of remanufactured automotive parts, and consideration of the guidelines for product designs (Case studies of AC compressors, starters, and alternators)

Shuho YAMADA, Fumiya HORIUCHI, Masato INOUE et al.

This study investigated the remanufacturing process of automotive air conditioning compressors, starters, and alternators in Japan, and modeled the remanufacturing process using a functional modeling approach. By studying and observing the inputs and outputs of each remanufacturing process, we collected the data necessary for life cycle assessment and calculated the amount of greenhouse gas (GHG) emissions generated by each process and the associated costs. By estimating the GHG emissions generated during the manufacture of new parts and subtracting the emissions from the manufacture of remanufactured parts, it was determined that the use of remanufactured parts would reduce the expected GHG emissions by approximately 42-85%. Furthermore, by identifying the processes that account for a high percentage of the remanufacturing emissions, it was found that manufacturing repair parts to replace worn parts accounted for the largest share. In terms of costs, in addition to the manufacturing of replacement parts, the processes associated with cleaning account for a high percentage of the remanufacturing process. To enhance the GHG reduction achieved by using remanufactured parts, it would be effective to reduce both the number of replacement parts and the amount of cleaning by having the original manufacturer and the remanufacturer cooperate, share information on parts whose durability should be improved, and review the design of the parts. It was also found that efficient use of the equipment employed in the cleaning, testing, and drying processes would be effective in enhancing the GHG reduction effect.

Mechanical engineering and machinery, Engineering machinery, tools, and implements
DOAJ Open Access 2024
Particle Number Concentration and SEM-EDX Analyses of an Auxiliary Heating Device in Operation with Different Fossil and Renewable Fuel

Péter Nagy, Ádám István Szabó, Ibolya Zsoldos et al.

Pollution from road vehicles enters the air from many sources, one of which can be an auxiliary heater fitted to the vehicle. Such heaters can be classified according to whether they run on diesel or gasoline and whether they heat water or air. The subject of our research series is an auxiliary heating system that heats air and originally runs on gasoline. The device was installed in a modern engine test bench where the environmental parameters can be controlled. The tested fuels were E10, E30, E100, and B7, and a 30-min operating period in the device's NORMAL operating mode was chosen as the test cycle. The focus of the tests was particle number concentration and soot composition. The particle number concentration results showed that renewable fuel content significantly reduces the number concentration of the emitted particles (9.56 × 10<sup>8</sup> #/cycle for E10 vs. 1.65 × 10<sup>8</sup> #/cycle for E100), while B7 causes significantly higher emissions than E10 (3.92 × 10<sup>10</sup> #/cycle for B7). Based on the elemental analysis, most deposits are elemental carbon, but non-organic compounds are also present. Carbon (92.18 m/m% for E10), oxygen (6.34 m/m% for E10), fluorine (0.64 m/m% for E10), and zinc (0.56 m/m% for E10) were found in the largest quantities in deposits taken from the combustion chamber.

Engineering machinery, tools, and implements, Technological innovations. Automation
DOAJ Open Access 2024
Research on Text Information Extraction and Analysis of Civil Transport Aircraft Accidents Based on Large Language Model

Jianzhong Yang, Tao Su, Xiyuan Chen

Civil aviation safety is crucial to the airline transportation industry, and the effective prevention and analysis of accidents are essential. This paper delves into the mining of unstructured textual information within accident reports, tracing the evolution from manual rules to machine learning and then to advanced deep learning techniques. We particularly highlight the advantages of text extraction methods that leverage large language models. We propose an innovative approach that integrates TF-IDF keyword extraction with large language model prompted filtering to scrutinize the causes of accidents involving civil transport aircraft. By analyzing the keywords before and after filtering, this method significantly enhances the efficiency of information extraction, minimizes the need for manual annotation, and thus improves the overall effectiveness of accident prevention and analysis. This research is not only pivotal in preventing similar incidents in the future but also introduces new perspectives for conducting aviation accident investigations and promotes the sustainable development of the civil aviation industry.
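The first stage of the pipeline, TF-IDF keyword scoring, can be shown on toy data; the report snippets below are invented, and the LLM-prompted filtering stage is omitted.

```python
# TF-IDF keyword scoring over invented accident-report snippets,
# sketching the extraction stage before LLM-based filtering.
import math

reports = [
    "engine failure during climb after bird strike",
    "hydraulic failure forced a return to the airport",
    "crew fatigue contributed to the unstable approach",
]

def tf_idf(term, doc, docs):
    words = doc.split()
    tf = words.count(term) / len(words)          # term frequency
    df = sum(1 for d in docs if term in d.split())  # document frequency
    return tf * math.log(len(docs) / df)         # rarer terms score higher

# Highest-scoring keyword of the first report.
top = max(reports[0].split(), key=lambda w: tf_idf(w, reports[0], reports))
```

Note how "failure", which appears in two reports, scores below terms unique to one report; the LLM filter then prunes keywords that are frequent but not causally relevant.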

Engineering machinery, tools, and implements
arXiv Open Access 2024
AutoTRIZ: Automating Engineering Innovation with TRIZ and Large Language Models

Shuo Jiang, Weifeng Li, Yuping Qian et al.

Various ideation methods, such as morphological analysis and design-by-analogy, have been developed to aid creative problem-solving and innovation. Among them, the Theory of Inventive Problem Solving (TRIZ) stands out as one of the best-known methods. However, the complexity of TRIZ and its reliance on users' knowledge, experience, and reasoning capabilities limit its practicality. To address this, we introduce AutoTRIZ, an artificial ideation system that integrates Large Language Models (LLMs) to automate and enhance the TRIZ methodology. By leveraging LLMs' vast pre-trained knowledge and advanced reasoning capabilities, AutoTRIZ offers a novel, generative, and interpretable approach to engineering innovation. AutoTRIZ takes a problem statement from the user as its initial input, automatically conducts the TRIZ reasoning process, and generates a structured solution report. We demonstrate and evaluate the effectiveness of AutoTRIZ through comparative experiments with textbook cases and a real-world application in the design of a Battery Thermal Management System (BTMS). Moreover, the proposed LLM-based framework holds the potential for extension to automate other knowledge-based ideation methods, such as SCAMPER, Design Heuristics, and Design-by-Analogy, paving the way for a new era of AI-driven innovation tools.

en cs.HC, cs.AI
arXiv Open Access 2024
Hybrid Active Teaching Methodology for Learning Development: A Self-assessment Case Study Report in Computer Engineering

Renan Lima Baima, Tiago Miguel Barao Caetano, Ana Carolina Oliveira Lima et al.

The primary objective is to emphasize the merits of active methodologies and cross-disciplinary curricula in Requirement Engineering. This direction promises a holistic and applied trajectory for Computer Engineering education, supported by the outcomes of our case study, where artifact-centric learning proved effective, with 73% of students achieving the highest grade. Self-assessments further corroborated academic excellence, emphasizing students' engagement in skill enhancement and knowledge acquisition.

en cs.SE, cs.CE
arXiv Open Access 2024
A Novel Refactoring and Semantic Aware Abstract Syntax Tree Differencing Tool and a Benchmark for Evaluating the Accuracy of Diff Tools

Pouria Alikhanifard, Nikolaos Tsantalis

Software undergoes constant changes to support new requirements, address bugs, enhance performance, and ensure maintainability. Thus, developers spend a great portion of their workday trying to understand and review the code changes of their teammates. Abstract Syntax Tree (AST) diff tools were developed to overcome the limitations of line-based diff tools, which are used by the majority of developers. Despite the notable improvements brought by AST diff tools in understanding complex changes, they still suffer from serious limitations, such as (1) lacking multi-mapping support, (2) matching semantically incompatible AST nodes, (3) ignoring language clues to guide the matching process, (4) lacking refactoring awareness, and (5) lacking commit-level diff support. We propose a novel AST diff tool based on RefactoringMiner that resolves all aforementioned limitations. First, we improved RefactoringMiner to increase its statement mapping accuracy, and then we developed an algorithm that generates AST diff for a given commit or pull request based on the refactoring instances and pairs of matched program element declarations provided by RefactoringMiner. To evaluate the accuracy of our tool and compare it with the state-of-the-art tools, we created the first benchmark of AST node mappings, including 800 bug-fixing commits and 188 refactoring commits. Our evaluation showed that our tool achieved a considerably higher precision and recall, especially for refactoring commits, with an execution time that is comparable with that of the faster tools.
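The line-based versus AST-based contrast can be seen with Python's stdlib `ast` module; this is only an analogy, since the paper's tool targets Java via RefactoringMiner.

```python
# A variable rename changes the text diff, but the AST node shapes
# stay comparable -- the insight behind AST-based diff tools.
import ast

before = "total = price * qty\n"
after_ = "amount = price * qty\n"

def shape(src):
    # The sequence of node types, ignoring identifier spellings.
    return [type(n).__name__ for n in ast.walk(ast.parse(src))]

text_changed = before != after_            # line diff sees a change
same_shape = shape(before) == shape(after_)  # AST shape is unchanged
```

A refactoring-aware matcher extends this idea by mapping nodes across renames, moves, and extractions rather than treating them as delete-plus-insert.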

en cs.SE
DOAJ Open Access 2023
A Survey of Deep Learning Techniques Based on Computed Tomography Images for Detection of Pneumonia

Sharon Quispe, Ingrid Arellano, Pedro Shiguihara

A cluster of cases caused by the virus SARS-CoV-2 was detected in Wuhan, China, in December 2019. The disease derived from that virus was named Coronavirus disease (COVID-19), which was officially recognized as a pandemic by the World Health Organization in March 2020. Since COVID-19 can cause serious pneumonia, early diagnosis is crucial for adequate treatment and for reducing health system overload. Therefore, deep learning algorithms to detect pneumonia have been developed using computed tomography (CT) scans, as they provide more detailed information about the disease because of their three-dimensionality and good visibility. This information, analyzed by specialists, could support the confirmation of pneumonia. To find out the accuracy levels of various classifiers, we evaluated the baseline models utilized by researchers. We found that the majority of CT classification algorithms achieve strong accuracy compared with other CT-based approaches, but have not exceeded 98%. According to the systematic literature survey, the low accuracy levels were attributed to inconsistent handling of medical images: instead of common formats such as PNG or JPG, these images use more complex formats such as DICOM and NIfTI in order to store more information about the disease and the patient. Moreover, some studies found that environmental conditions and lung movement could affect image quality, and an unclear pneumonia area may also reduce the efficiency of deep learning algorithms for detecting pneumonia. Therefore, the objective of this survey is to identify, gather data on, and build a catalog of deep learning techniques for detecting pneumonia abnormalities and annotating CT images from the literature, reflecting a better understanding of the classification of pneumonia using CT images.

Engineering machinery, tools, and implements
DOAJ Open Access 2023
Development of a New Lubricant Degradation Monitoring Technique Using Terahertz Electromagnetic Waves

Hiroki Kawano, Daiki Shiozawa, Tomohiro Ooyagi et al.

Condition monitoring of lubricating oil is an effective method for early detection of abnormalities in rotating machinery in plants. In this research, a new monitoring technique for lubricant degradation using terahertz waves, which are electromagnetic waves located in the boundary region between light and radio waves, is developed based on the correlation between lubricant degradation and the transmission characteristics of terahertz waves. It is found that there is a correlation between the transmission characteristics of terahertz waves, such as transmittance and refractive index, and typical lubricant degradation, such as base oil degradation, water contamination, and metallic wear debris contamination. The results suggest that a new lubricant degradation monitoring technique using terahertz waves is possible by using these transmission characteristics.

Engineering machinery, tools, and implements
arXiv Open Access 2023
A ML-LLM pairing for better code comment classification

Hanna Abi Akl

The "Information Retrieval in Software Engineering (IRSE)" shared task at FIRE 2023 introduces code comment classification, a challenging task that pairs a code snippet with a comment to be evaluated as either useful or not useful to the understanding of the relevant code. We answer the code comment classification shared task challenge by providing a two-fold evaluation: from an algorithmic perspective, we compare the performance of classical machine learning systems, and we complement our evaluations from a data-driven perspective by generating additional data with the help of large language model (LLM) prompting to measure the potential increase in performance. Our best model, which took second place in the shared task, is a Neural Network with a Macro-F1 score of 88.401% on the provided seed data and a 1.5% overall increase in performance on the data generated by the LLM.

en cs.SE, cs.AI
arXiv Open Access 2023
On Using Information Retrieval to Recommend Machine Learning Good Practices for Software Engineers

Laura Cabra-Acela, Anamaria Mojica-Hanke, Mario Linares-Vásquez et al.

Machine learning (ML) is nowadays widely used for different purposes and in several disciplines. From self-driving cars to automated medical diagnosis, machine learning models extensively support users' daily activities, and software engineering tasks are no exception. Not embracing good ML practices may lead to pitfalls that hinder the performance of an ML system and potentially lead to unexpected results. Despite the existence of documentation and literature about ML best practices, many non-ML experts turn towards gray literature like blogs and Q&A systems when looking for help and guidance when implementing ML systems. To better aid users in distilling relevant knowledge from such sources, we propose a recommender system that recommends ML practices based on the user's context. As a first step in creating a recommender system for machine learning practices, we implemented Idaka, a tool that provides two different approaches for retrieving/generating ML best practices: i) an information retrieval (IR) engine and ii) a large language model. The IR engine uses BM25 as its retrieval algorithm, while the generative approach relies on a large language model, in our case Alpaca. The platform has been designed to allow comparative studies of best practices retrieval tools. Idaka is publicly available on GitHub: https://bit.ly/idaka. Video: https://youtu.be/cEb-AhIPxnM.
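BM25, the ranking function the IR engine relies on, fits in a short function; the practice snippets below are invented examples, not Idaka's corpus.

```python
# A compact BM25 scorer (k1 and b are the standard free parameters).
import math

def bm25(query, docs, k1=1.5, b=0.75):
    N = len(docs)
    toks = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in toks) / N  # average document length
    scores = []
    for t in toks:
        s = 0.0
        for q in set(query.lower().split()):
            f = t.count(q)                 # term frequency in this doc
            if f == 0:
                continue
            df = sum(1 for d in toks if q in d)
            idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
            # Saturating tf weight with length normalisation.
            s += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(t) / avgdl))
        scores.append(s)
    return scores

docs = [
    "version your training data alongside the model code",
    "validate input data before training the model",
    "log every experiment configuration for reproducibility",
]
scores = bm25("how to version training data", docs)
best = docs[scores.index(max(scores))]
```

A context-aware recommender would build the query from the user's code or task description before handing it to this scorer.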

en cs.SE
arXiv Open Access 2023
Cloud Native Software Engineering

Brian S. Mitchell

Cloud compute adoption has been growing since its inception in the early 2000s, with estimates that the size of this market in terms of worldwide spend will increase from $700 billion in 2021 to $1.3 trillion in 2025. While there is significant research activity in many areas of cloud computing technologies, we see little attention being paid to advancing the software engineering practices needed to support the current and next generation of cloud native applications. By cloud native, we mean software that is designed and built specifically for deployment to a modern cloud platform. This paper frames the landscape of Cloud Native Software Engineering from a practitioner's standpoint and identifies several software engineering research opportunities that should be investigated. We cover specific engineering challenges associated with software architectures commonly used in cloud applications, along with incremental challenges that are expected with emerging IoT/Edge computing use cases.

en cs.SE
arXiv Open Access 2022
Search Budget in Multi-Objective Refactoring Optimization: a Model-Based Empirical Study

Daniele Di Pompeo, Michele Tucci

Software model optimization is the task of automatically generating design alternatives, usually to improve quantifiable quality aspects of software such as performance and reliability. In this context, multi-objective optimization techniques have been applied to help the designer find suitable trade-offs among several non-functional properties. In this process, design alternatives can be generated through automated model refactoring and evaluated on non-functional models. Due to their complexity, these optimization tasks require considerable time and resources, often limiting their application in software engineering processes. In this paper, we investigate the effects of imposing a search budget, specifically a time limit, on the search for new solutions. We performed experiments to quantify the impact that a change in the search budget may have on the quality of solutions. Furthermore, we analyzed how different genetic algorithms (i.e., NSGA-II, SPEA2, and PESA2) perform under different budgets. We experimented on two case studies differing in size, complexity, and domain. We observed that imposing a search budget considerably deteriorates the quality of the generated solutions, but the specific algorithm chosen seems to play a crucial role. In our experiments, NSGA-II was the fastest algorithm, while PESA2 generated the solutions with the highest quality. By contrast, SPEA2 was the slowest algorithm and produced the solutions with the lowest quality.
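The comparison at the core of NSGA-II/SPEA2/PESA2-style search is Pareto dominance, sketched below; the candidate solutions (response time, failure probability) are illustrative values, not the study's data.

```python
# Pareto dominance and front extraction for minimisation objectives,
# the building block of multi-objective genetic algorithms.

def dominates(a, b):
    # a dominates b if it is no worse in every objective and
    # strictly better in at least one (all objectives minimised).
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    # Keep the solutions no other solution dominates.
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Hypothetical design alternatives: (response time ms, failure prob).
candidates = [(120, 0.02), (100, 0.05), (150, 0.01), (130, 0.04)]
front = pareto_front(candidates)
```

A search budget caps how many generations of refactored candidates get scored this way, which is why tight budgets degrade the final front.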

Page 40 of 327,081