Nidhal Selmi, Jean-michel Bruel, Sébastien Mosser
et al.
Decision-making is a core engineering design activity that conveys the engineer's knowledge and translates it into courses of action. Capturing this form of knowledge can yield tangible benefits for engineering teams and enhance development efficiency. Despite its clear value, traditional decision capture often requires significant effort and still falls short of recording the context needed for reuse. Model-based systems engineering (MBSE) offers a promising way to address these challenges by embedding decisions directly within system models, which can reduce the capture workload while maintaining explicit links to requirements, behaviors, and architectural elements. This article discusses a lightweight framework for integrating decision capture into MBSE workflows by representing decision alternatives as system model slices. Using a simplified industry example from aircraft architecture, we discuss the main challenges associated with decision capture and propose preliminary solutions to address them.
Abstract The main objective of this study is to present the practical application of the Theory of Constraints (TOC) and its Thinking Process Tools (TPT) to identify, analyze, and eliminate constraints in the production department of a machinery company specializing in the production of bearing cages. A case study was used as the research method. Interviews, direct observations, and performance indicator analysis were used to identify the most critical constraints in the production department of the studied company. The research included the application of five TOC-TPT tools: Goal Tree (GT), Current Reality Tree (CRT), Evaporating Cloud (EC), Future Reality Tree (FRT), and Prerequisite Tree (PRT). These tools were used to identify organizational goals, diagnose the root causes of key constraints, resolve internal conflicts, design future solutions, and outline the path for implementing solutions. The study focused on an in-depth analysis of the key constraint: the lack of work instructions at each stage of production. The analysis revealed that the root causes of this limitation were insufficient investment in training and management development. Establishing a dedicated budget and resources for a structured, ongoing training and development program that aligns with the company’s strategic goals and develops both technical and interpersonal skills at all staff levels was identified as the critical solution to this problem. The advantages and limitations of using TOC-TPT were also analyzed in response to comments from the working group participating in the study. The results showed the effectiveness of TOC-TPT in solving complex operational problems in the studied company, as well as the technical and organizational problems associated with using TOC thinking tools. The study indicated that the strategic commitment of top management, cross-functional teamwork, and targeted training are key to the successful implementation of TOC tools.
The results have practical implications for manufacturing companies seeking to improve the effectiveness and efficiency of their production systems through comprehensive constraint management.
Retinal disorders, such as diabetic retinopathy, cataract, and glaucoma, are among the leading causes of vision loss and blindness worldwide. The use of normal data in diagnostic studies provides a basis for distinguishing between pathological and healthy conditions. Complete and accurate diagnosis of these conditions is essential for effective treatment and prevention of recurrence. This study applies the VGG19 model with transfer learning to classify retinal conditions as normal, diabetic retinopathy, cataract, or glaucoma. A publicly available dataset from Kaggle consisting of labeled retinal images is used for training and evaluation. The dataset comprises 400 retinal images, 100 per class, across four classes: normal eyes, cataract, diabetic retinopathy, and glaucoma. After 50 epochs of training with the Adam optimizer and softmax activation, model performance measured via the confusion matrix (accuracy, precision, recall, and F1 score) reaches an accuracy of 0.91 on the 320 training images and 0.88 on the 80 validation images. The loss is 0.18 on the training data and 0.31 on the validation data. On the test data, the cataract class scores 0.94 precision, 0.80 recall, and 0.86 F1. The diabetic retinopathy class scores 0.91 precision, 1.00 recall, and 0.95 F1. For glaucoma, the scores are 0.74 precision, 0.85 recall, and 0.79 F1. The normal class scores 1.00 precision, 0.90 recall, and 0.95 F1. Given these results, VGG19 provides reasonably good performance for diagnosing retinal disease. Future research can extend this work by combining additional datasets and exploring other neural network architectures to improve diagnostic performance.
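The per-class figures reported above are internally consistent with the standard definition F1 = 2PR/(P+R); a quick arithmetic check in Python (values taken directly from the abstract):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 score)."""
    return 2 * precision * recall / (precision + recall)

# Diabetic retinopathy: precision 0.91, recall 1.00 -> reported F1 0.95
print(round(f1(0.91, 1.00), 2))  # 0.95
# Cataract: precision 0.94, recall 0.80 -> reported F1 0.86
print(round(f1(0.94, 0.80), 2))  # 0.86
# Glaucoma: precision 0.74, recall 0.85 -> reported F1 0.79
print(round(f1(0.74, 0.85), 2))  # 0.79
```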
Traffic measurement systems are an essential part of intelligent transportation systems (ITS). These are specialized transport infrastructures where traffic data is collected and analyzed in order to optimize the use of road systems, improve transport safety, and implement future transport plans. The rapid development of transportation systems, urbanization, and industrialization have led to a global problem of air pollution. This has raised the topical issue of measuring and monitoring environmental parameters at traffic monitoring stations in ITS. In this paper, we present a wireless environmental monitoring system, which is a subsystem of a traffic monitoring station. Along with measuring traffic parameters, the station also collects useful meteorological information. A novel hybrid, dual-band IoT system based on LoRa and LoRaWAN for environmental parameters monitoring is presented. The hardware realization of a developed hybrid LoRaWAN end device, together with the sensors used for the measurement of air parameters, is described. Initial results from real test monitoring of environmental parameters on the road in urban environments are presented as a proof of concept. The presented wireless environmental monitoring system can also be used for indoor or outdoor air pollution monitoring, serving as a useful complement to intelligent transport systems.
Rabiea Ashowen Ahmoda, Andrea Pirković, Milena Milošević
et al.
The objective of the present study was to investigate the influence of high temperature on the extraction of polyphenols and flavonoids from <i>Fumaria officinalis</i>. The polyphenol yield varied from 16.56 to 18.33 mg gallic acid equivalent/g of dried plant material, achieving the highest value in the extract prepared using heat-assisted extraction (HAE) for 30 min. The same trend was noticed for the flavonoid concentration in the extracts (7.14–8.48 mg catechin equivalent/g of dried plant material): macerate after 60 min ≤ macerate after 90 min ≤ HAE extract after 15 min ≤ HAE extract after 30 min. Compared to maceration and taking into consideration the industrial requirements such as high extraction yield for a shorter time, HAE could be recommended as a convenient technique for polyphenol and flavonoid extraction from fumitory.
From its early foundations in the 1970s, empirical software engineering (ESE) has evolved into a mature research discipline that embraces a plethora of different topics, methodologies, and industrial practices. Despite its remarkable progress, the ESE research field still needs to keep evolving, as new impediments, shortcomings, and technologies emerge. Research reproducibility, limited external validity, subjectivity of reviews, and porting research results to industrial practices are just some examples of the drivers for improvements to ESE research. Additionally, several facets of ESE research are not documented very explicitly, which makes it difficult for newcomers to pick them up. With this new regular ACM SIGSOFT SEN column (SEN-ESE), we introduce a venue for discussing meta-aspects of ESE research, ranging from general topics such as the nature and best practices for replication packages, to more nuanced themes such as statistical methods, interview transcription tools, and publishing interdisciplinary research. Our aim for the column is to be a place where we can regularly spark conversations on ESE topics that might not often be touched upon or are left implicit. Contributions to this column will be grounded in expert interviews, focus groups, surveys, and position pieces, with the goal of encouraging reflection and improvement in how we conduct, communicate, teach, and ultimately improve ESE research. Finally, we invite feedback from the ESE community on challenging, controversial, or underexplored topics, as well as suggestions for voices you would like to hear from. While we cannot promise to act on every idea, we aim to shape this column around the community's interests and are grateful for all contributions.
Chaos Engineering (CE) is an engineering technique aimed at improving the resilience of distributed systems. It involves intentionally injecting faults into a system to test its resilience, uncover weaknesses, and address them before they cause failures in production. Recent CE tools automate the execution of predefined CE experiments. However, planning such experiments and improving the system based on the experimental results still remain manual. These processes are labor-intensive and require multi-domain expertise. To address these challenges and enable anyone to build resilient systems at low cost, this paper proposes ChaosEater, a system that automates the entire CE cycle with Large Language Models (LLMs). It predefines an agentic workflow according to a systematic CE cycle and assigns subdivided processes within the workflow to LLMs. ChaosEater targets CE for software systems built on Kubernetes. Therefore, the LLMs in ChaosEater complete CE cycles through software engineering tasks, including requirement definition, code generation, testing, and debugging. We evaluate ChaosEater through case studies on small- and large-scale Kubernetes systems. The results demonstrate that it consistently completes reasonable CE cycles with significantly low time and monetary costs. Its cycles are also qualitatively validated by human engineers and LLMs.
Kota TAKASHIMA, Naofumi TSUJI, Daisuke KONO
et al.
Recently, there has been a growing interest in utilizing surface texture to enhance the tribological properties of sliding components. Particularly noteworthy is the application of tool vibration at ultrasonic frequencies for efficiently generating surface textures. This study focuses on generating surface texture on the end surface of a stainless steel disk through ultrasonic assisted turning. The mathematical expression of the theoretical texture configuration, derived from the tool trajectory, is closely aligned with the actual machined surface. A novel geometric analysis was conducted to address the challenge of interference between the finished surface and the flank surface, which results in a reduction in texture height. This analysis revealed that the texture height error from the theoretical value was limited to within 10%. Ball-on-disk tribological experiments were also performed on the textured surface to assess starting friction phenomena. The findings indicated that textured surfaces exhibited a smaller fluctuation in the starting friction coefficient compared to untextured ones. In summary, this paper explores the efficient generation of surface texture on stainless steel disks using ultrasonic assisted turning. Theoretical configurations were mathematically expressed and aligned well with actual machined surfaces. The study also introduced a novel geometric analysis to address interference-related texture height reduction. Moreover, tribological experiments demonstrated that textured surfaces exhibited a more stable starting friction coefficient, highlighting the potential of surface texturing for improving tribological properties in sliding components.
Engineering machinery, tools, and implements, Mechanical engineering and machinery
B. B. V. L. Deepak, M. V. A. Raju Bahubalendruni, Dayal Parhi
et al.
The 5th International Conference on Innovative Product Design and Intelligent Manufacturing Systems (ICIPDIMS’23) was held at the National Institute of Technology, Rourkela, India during 6–7 December 2023 [...]
Michael Danner, Elena Brake, Gabriela Kosel
et al.
This paper introduces an AI-assisted pattern generator aimed at simplifying garment design by automating pattern flattening from 3D scans for users without knowledge of conventional pattern construction. This garment tool plug-in converts 3D scans of persons into 3D shell surface meshes, which are automatically unwrapped into 2D patterns, streamlining the traditionally complex aspects of garment design for novices. The process uses advanced AI algorithms to facilitate the conversion of 3D scans into usable patterns. Machine learning adapts to different garment styles (close-fitting, regular fit, and loose-fitting), ensuring broad applicability, while customization options allow precise adaptation to individual body measurements. This AI-assisted tool enables a wider audience to create customized garments.
Textile bleaching, dyeing, printing, etc., Engineering machinery, tools, and implements
As the number of elderly people increases, so does the demand for electric wheelchairs, among them the joystick-type 6-wheel electric wheelchair. In side view it has three wheels, of which the central wheel is the drive wheel. This structure allows the left and right drive wheels to rotate in opposite directions, giving an extremely small turning radius compared to other types. On the other hand, with six wheels it is difficult to ensure the grounding performance of the wheels, including the drive wheels, under changing outdoor road surfaces, so it is mainly intended for indoor use. In this paper, I studied how to improve the grounding performance of the drive wheels and the other wheels using a newly devised passive link. I devised a parallel double rocker link mechanism that connects the six wheels with four sets of rocker links and found that it improves grounding performance. As a result, it was confirmed that the prototype vehicle using the parallel double rocker link satisfies the JIS requirements.
Mechanical engineering and machinery, Engineering machinery, tools, and implements
Software engineering (SE) is full of abstract concepts that are crucial for both researchers and practitioners, such as programming experience, team productivity, code comprehension, and system security. Secondary studies aimed at summarizing research on the influences and consequences of such concepts would therefore be of great value. However, the inability to measure abstract concepts directly poses a challenge for secondary studies: primary studies in SE can operationalize such concepts in many ways. Standardized measurement instruments are rarely available, and even if they are, many researchers do not use them or do not even provide a definition for the studied concept. SE researchers conducting secondary studies therefore have to decide a) which primary studies intended to measure the same construct, and b) how to compare and aggregate vastly different measurements for the same construct. In this experience report, we discuss the challenge of study selection in SE secondary research on latent variables. We report on two instances where we found it particularly challenging to decide which primary studies should be included for comparison and synthesis, so as not to end up comparing apples with oranges. Our report aims to spark a conversation about developing strategies to address this issue systematically and pave the way for more efficient and rigorous secondary studies in software engineering.
In this practice paper, we propose a framework for integrating AI into disciplinary engineering courses and curricula. The use of AI within engineering is an emerging but growing area and the knowledge, skills, and abilities (KSAs) associated with it are novel and dynamic. This makes it challenging for faculty who are looking to incorporate AI within their courses to create a mental map of how to tackle this challenge. In this paper, we advance a role-based conception of competencies to assist disciplinary faculty with identifying and implementing AI competencies within engineering curricula. We draw on prior work related to AI literacy and competencies and on emerging research on the use of AI in engineering. To illustrate the use of the framework, we provide two exemplary cases. We discuss the challenges in implementing the framework and emphasize the need for an embedded approach where AI concerns are integrated across multiple courses throughout the degree program, especially for teaching responsible and ethical AI development and use.
To understand the impacts of AI-driven coding tools on engineers' workflow and work environment, we utilize the Jellyfish platform to analyze indicators of change. Key indicators are derived from Allocations, Coding Fraction vs. PR Fraction, Lifecycle Phases, Cycle Time, Jira ticket size, PR pickup time, PR comments, PR comment count, interactions, and coding languages. Significant changes were observed in coding time fractions among Copilot users, with an average decrease of 3% and individual decreases as large as 15%. Ticket sizes decreased by an average of 16% across four companies, accompanied by an 8% decrease in cycle times, whereas the control group showed no change. Additionally, the PR process evolved with Copilot usage, featuring longer and more comprehensive comments, despite the weekly number of PRs reviewed remaining constant. Not all hypothesized changes were observed across all participating companies. However, some companies experienced a decrease in PR pickup times by up to 33%, indicating reduced workflow bottlenecks, and one company experienced a shift of up to 17% of effort from maintenance and support work towards product growth initiatives. This study is the first to utilize data from more than one company and goes beyond simple productivity and satisfaction measures, considering real-world engineering settings instead. By doing so, we highlight that some companies seem to benefit more than others from the use of Copilot and that changes can be subtle when investigating aggregates rather than specific aspects of engineering work and workflows, something that will be further investigated in the future.
Elicitation interviews are the most common requirements elicitation technique, and proficiency in conducting these interviews is crucial for requirements elicitation. Traditional training methods, typically limited to textbook learning, may not sufficiently address the practical complexities of interviewing techniques. Practical training with various interview scenarios is important for understanding how to apply theoretical knowledge in real-world contexts. However, there is a shortage of educational interview material, as creating interview scripts requires both technical expertise and creativity. To address this issue, we develop a specialized GPT agent for auto-generating interview scripts. The GPT agent is equipped with a dedicated knowledge base tailored to the guidelines and best practices of requirements elicitation interview procedures. We employ a prompt chaining approach to mitigate the output length constraint of GPT to be able to generate thorough and detailed interview scripts. This involves dividing the interview into sections and crafting distinct prompts for each, allowing for the generation of complete content for each section. The generated scripts are assessed through standard natural language generation evaluation metrics and an expert judgment study, confirming their applicability in requirements engineering training.
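The prompt-chaining idea described above (dividing the interview into sections and crafting a distinct prompt for each, so every call stays within the model's output-length limit) can be sketched as follows. The section names and the `call_llm` stub are illustrative assumptions, not the authors' actual prompts or agent configuration:

```python
# Sketch of prompt chaining for interview-script generation: each section
# is generated by its own LLM call, and the script produced so far is fed
# back as context so the sections stay coherent with one another.
SECTIONS = ["opening", "background questions", "elicitation questions", "closing"]

def call_llm(prompt: str) -> str:
    # Placeholder for a real chat-completion API call; returns a stub
    # built from the prompt's first line so the sketch runs standalone.
    return f"[generated: {prompt.splitlines()[0]}]"

def generate_script(scenario: str) -> str:
    """Generate a full interview script section by section."""
    script_so_far = ""
    for section in SECTIONS:
        prompt = (
            f"Write the '{section}' section of a requirements elicitation "
            f"interview script.\nScenario: {scenario}\n"
            f"Script so far:\n{script_so_far}"
        )
        script_so_far += call_llm(prompt) + "\n"
    return script_so_far
```

Chaining trades one long completion for several short ones, so the length of the final script is bounded only by the number of sections, not by a single call's output limit.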
Hsiao-Ching Huang, I-Hsien Liu, Meng-Huan Lee
et al.
The Internet of Things (IoT) has revolutionized technologies in society, including in households, offices, factories, and health centers. Among these, the Healthcare Internet of Things (HIoT) significantly transforms medical assistance for patients. By using wearable devices with remote network connections, caregivers monitor patients’ physiological data to gain valuable insights into their health conditions. Despite the many benefits of the HIoT, several security vulnerabilities still exist. Hackers can exploit the internet connection to steal or modify patients' credential information, violating the integrity and confidentiality of the security policy. Moreover, they can launch cyberattacks on hospitals or critical life-support systems, further endangering patients’ lives. Consequently, it is crucial to implement robust cybersecurity measures to enhance the security of healthcare services. Therefore, we propose an anomaly detection method based on network traffic for the HIoT, adopting Markov models. Owing to their simplicity, interpretability, and well-developed theory, Markov models have been applied to network traffic prediction and modeling, making them a viable approach for our needs. We evaluated the proposed method using the public dataset ToN_IoT and analyzed the results.
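Markov-model traffic anomaly detection of the kind described above can be sketched in a few lines: learn first-order transition probabilities from benign event sequences, then flag sequences whose average transition log-likelihood falls below a threshold. The toy event alphabet and the smoothing floor are illustrative assumptions; the paper's actual features come from the ToN_IoT dataset:

```python
import math
from collections import defaultdict

def train(sequences):
    """Estimate first-order transition probabilities from benign sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def score(model, seq, floor=1e-6):
    """Average transition log-likelihood; strongly negative = anomalous."""
    logps = [math.log(model.get(a, {}).get(b, floor))
             for a, b in zip(seq, seq[1:])]
    return sum(logps) / len(logps)

benign = [["syn", "ack", "data", "fin"]] * 20
model = train(benign)
print(score(model, ["syn", "ack", "data", "fin"]))  # near 0: seen in training
print(score(model, ["syn", "fin", "fin", "syn"]))   # strongly negative: unseen transitions
```

The smoothing floor keeps unseen transitions from producing log(0); picking the decision threshold (and a higher-order model) is where the real design work lies.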
This article presents a performance comparison of two well-known public key cryptography techniques, the RSA (Rivest–Shamir–Adleman) and El-Gamal algorithms, for encrypting/decrypting speech signals transferred over open networks. Specifically, this work is divided into two stages. The first stage enciphers and deciphers the input speech file using the RSA method. The second stage enciphers and deciphers the same input speech file using the El-Gamal method. Then, a comparative analysis is performed to test the performance of both cryptosystems using diverse experimental and statistical analyses of the ciphering and deciphering procedures, including known speech quality measures: histogram, spectrogram, correlation, differential, speed performance, and noise effect analyses. The analysis outcomes reveal that the RSA and El-Gamal approaches are efficient and adequate for providing a high degree of security, confidentiality, and reliability. Additionally, the outcomes indicate that the RSA speech cryptosystem outperforms its El-Gamal counterpart in most ciphering/deciphering speech performance metrics.
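The two cryptosystems being compared can be illustrated on a single speech sample value. The parameters below are deliberately tiny textbook numbers, not the key sizes a real speech cryptosystem would use, and a real system would process the whole sample stream block by block (requires Python 3.8+ for `pow(x, -1, m)`):

```python
# RSA: n = p*q, public key (e, n), private exponent d = e^-1 mod phi(n).
p, q, e = 61, 53, 17
n = p * q                           # modulus, 3233
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse of e
sample = 1234                       # one illustrative speech sample value
rsa_ct = pow(sample, e, n)          # encrypt: m^e mod n
assert pow(rsa_ct, d, n) == sample  # decrypt: c^d mod n

# El-Gamal over the multiplicative group mod a prime P with generator g.
P, g = 2579, 2
x = 765                             # private key
h = pow(g, x, P)                    # public key g^x mod P
k = 853                             # ephemeral key (must be random per message)
c1, c2 = pow(g, k, P), (sample * pow(h, k, P)) % P
recovered = (c2 * pow(pow(c1, x, P), -1, P)) % P
assert recovered == sample
```

The sketch also hints at the performance difference the article measures: El-Gamal needs a fresh ephemeral exponentiation per message and produces a two-part ciphertext, roughly doubling the ciphertext size relative to RSA.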
Engineering machinery, tools, and implements, Mechanics of engineering. Applied mechanics
[Context] Systematic Literature Review (SLR) has been a major type of study published in Software Engineering (SE) venues for about two decades. However, there is a lack of understanding of whether an SLR is really needed in comparison to a more conventional literature review. Very often, SE researchers embark on an SLR with such doubts. We aspire to provide more understanding of when an SLR in SE should be conducted. [Objective] The first step of our investigation was focused on the dataset, i.e., the reviewed papers, in an SLR, which indicates the development of a research topic or area. The objective of this step is to provide a better understanding of the characteristics of the datasets of SLRs in SE. [Method] A research synthesis was conducted on a sample of 170 SLRs published in top-tier SE journals. We extracted and analysed the quantitative attributes of the datasets of these SLRs. [Results] The findings show that the median size of the datasets in our sample is 57 reviewed papers, and the median review period covered is 14 years. The number of reviewed papers and review period have a very weak and non-significant positive correlation. [Conclusions] The results of our study can be used by SE researchers as an indicator or benchmark to understand whether an SLR is conducted at a good time.
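The "very weak and non-significant positive correlation" reported between dataset size and review period is a standard bivariate correlation; a minimal Pearson sketch on made-up numbers (the paper's 170-SLR data are not reproduced here):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative values only: reviewed-paper counts vs. review periods (years)
papers = [57, 120, 34, 80, 45, 200, 60]
years = [14, 12, 16, 15, 13, 14, 17]
r = pearson(papers, years)
```

A coefficient near zero, as the study found, means dataset size tells you little about how long a topic has been active, which is part of why the authors treat the two attributes as separate indicators.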
Ze Shi Li, Nowshin Nawar Arony, Kezia Devathasan
et al.
Capstone courses in undergraduate software engineering are a critical final milestone for students. These courses allow students to create a software solution and demonstrate the knowledge they accumulated in their degrees. However, a typical capstone project team is small, containing no more than 5 students, and functions independently from other teams. To better reflect real-world software development and meet industry demands, we introduce in this paper our novel capstone course. Each student was assigned to a large-scale, multi-team "company" of up to 20 students to collaboratively build software. Students placed in a company gained first-hand experience with multi-team coordination, integration, communication, agile practices, and teamwork while building a microservices-based project. Furthermore, each company was required to implement plug-and-play so that their services would be compatible with another company's, thereby sharing common APIs. Through developing the product in autonomous sub-teams, the students enhanced not only their technical abilities but also soft skills such as communication and coordination. More importantly, experiencing the challenges that arose from the multi-team project trained students to recognize the pitfalls and advantages of organizational culture. Among the many lessons learned from this course, students learned the critical importance of building team trust. We provide detailed information about our course structure and lessons learned, and propose recommendations for other universities and programs. Our work concerns educators interested in launching similar capstone projects so that students at other institutions can reap the benefits of large-scale, multi-team development.