Recognizing tool-based hand activities from a first-person view is a critical yet challenging task in computer vision, owing to the complexity of hand-object interactions and often subtle, ambiguous motion patterns. In real-world manufacturing scenarios, these challenges are exacerbated by bidirectional action pairs whose visual cues are almost identical, with differences revealed only through subtle motion dynamics. However, existing datasets rarely capture these direction-sensitive interactions at scale, particularly in realistic tool-use contexts, limiting the ability of current models to learn the fine-grained motion dynamics essential for accurate recognition. We introduce Ego-Bi (Egocentric-Bidirectional), a large-scale, real-world egocentric RGB video dataset comprising 1,223 video sequences and 622,737 frames that cover diverse tool-use activities in unconstrained environments. Ego-Bi provides an extended 38-category hand type taxonomy, detailed object–tool labels, and challenging bidirectional action pairs, offering rich semantic and temporal cues for modeling complex hand–object interactions. In addition, to address the ambiguity in motion dynamics, we propose a Bidirectional Motion Prior (BMP) module that derives rotation and directional cues from predicted 3D hand poses to improve the class separability of visually similar actions. Experimental results on Ego-Bi demonstrate that our approach improves bidirectional action recognition accuracy by +8.96% over the baseline, while also yielding consistent gains across general action classes without requiring costly 3D pose annotations. Furthermore, the proposed motion priors generalize effectively to other egocentric benchmarks, underscoring their robustness in handling visually similar, direction-sensitive actions.
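To make the rotation-and-direction idea concrete, here is a minimal, hypothetical sketch (not the paper's BMP module) of how a signed rotation cue can be derived from a predicted hand-pose sequence; the joint choice, 2D projection, and feature are illustrative assumptions:

```python
import numpy as np

def rotation_direction(fingertip_xy, wrist_xy):
    """Signed cumulative rotation (radians) of a fingertip about the wrist,
    projected onto the image plane; positive = counter-clockwise."""
    v = fingertip_xy - wrist_xy                  # (T, 2) relative vectors
    ang = np.arctan2(v[:, 1], v[:, 0])           # angle per frame
    d = np.diff(ang)
    d = (d + np.pi) % (2 * np.pi) - np.pi        # wrap each step to (-pi, pi]
    return float(d.sum())

# Synthetic clockwise half-turn: the angle decreases from 0 to -pi.
t = np.linspace(0.0, -np.pi, 30)
tip = np.stack([np.cos(t), np.sin(t)], axis=1)
wrist = np.zeros_like(tip)
cue = rotation_direction(tip, wrist)
print(cue)   # negative: a clockwise rotation cue
```

The sign of such a cumulative-rotation feature is what can separate, e.g., a clockwise "tighten" from a counter-clockwise "loosen" trajectory even when appearance cues are nearly identical.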
The proliferation of data across the system lifecycle presents both a significant opportunity and a challenge for Engineering Design and Systems Engineering (EDSE). While this "digital thread" has the potential to drive innovation, the fragmented and inaccessible nature of existing datasets hinders method validation, limits reproducibility, and slows research progress. Unlike fields such as computer vision and natural language processing, which benefit from established benchmark ecosystems, engineering design research often relies on small, proprietary, or ad-hoc datasets. This paper addresses this challenge by proposing a systematic framework for a "Map of Datasets in EDSE." The framework is built upon a multi-dimensional taxonomy designed to classify engineering datasets by domain, lifecycle stage, data type, and format, enabling faceted discovery. An architecture for an interactive discovery tool is detailed and demonstrated through a working prototype, employing a knowledge graph data model to capture rich semantic relationships between datasets, tools, and publications. An analysis of the current data landscape reveals underrepresented areas ("data deserts") in early-stage design and system architecture, as well as relatively well-represented areas ("data oases") in predictive maintenance and autonomous systems. The paper identifies key challenges in curation and sustainability and proposes mitigation strategies, laying the groundwork for a dynamic, community-driven resource to accelerate data-centric engineering research.
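As a toy illustration of the faceted-discovery idea enabled by such a taxonomy (the catalog entries and facet values below are invented examples, not drawn from the actual map):

```python
# Invented example catalog; facet keys mirror the taxonomy dimensions
# (domain, lifecycle stage, data type, format).
datasets = [
    {"name": "turbofan-degradation", "domain": "aerospace",
     "stage": "operation/maintenance", "dtype": "time-series", "fmt": "CSV"},
    {"name": "bracket-topologies", "domain": "mechanical",
     "stage": "early design", "dtype": "geometry", "fmt": "STL"},
    {"name": "requirements-corpus", "domain": "systems",
     "stage": "early design", "dtype": "text", "fmt": "JSON"},
]

def faceted_search(catalog, **facets):
    """Return names of datasets matching every requested facet value."""
    return [d["name"] for d in catalog
            if all(d.get(k) == v for k, v in facets.items())]

print(faceted_search(datasets, stage="early design"))
```

Combining facets narrows the result set, and counting hits per facet value is one simple way to surface the "data deserts" and "data oases" the analysis describes.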
This paper gives an overview of mechanical engineering aspects of undulators. It is based mainly on two types used in the SwissFEL facility: the U15, an example of an in-vacuum undulator, and the UE38, an APPLE-X type. It describes the frame, the adjustment of the magnets with flexible keepers, and the adjustment of the whole device with eccentric movers.
Jannatul Bushra, Md Habibor Rahman, Mohammed Shafae
et al.
Reverse engineering can be used to derive a 3D model of an existing physical part when such a model is not readily available. For parts that will be fabricated with subtractive and formative manufacturing processes, existing reverse engineering techniques can be readily applied, but parts produced with additive manufacturing present new challenges due to the high level of process-induced distortions and unique part attributes. This paper introduces an integrated 3D scanning and process simulation data-driven framework to minimize distortions of reverse-engineered additively manufactured components. The framework employs iterative finite element simulations to predict geometric distortions and minimize the errors between the predicted and measured geometric deviations of the part's key dimensional characteristics. The effectiveness of this approach is demonstrated by reverse engineering two Inconel-718 components manufactured using laser powder bed fusion additive manufacturing. The paper presents a remanufacturing process that combines reverse engineering and additive manufacturing, leveraging geometric feature-based part compensation through process simulation. Our approach can generate both compensated STL and parametric CAD models, eliminating laborious experimentation during reverse engineering. We evaluate the merits of the STL-based and CAD-based approaches by quantifying the errors induced at the different steps of the proposed approach and analyzing the influence of varying part geometries. Using the proposed CAD-based method, the average absolute percent error between simulation-predicted distorted dimensions and actual measured dimensions of the manufactured parts was 0.087%, better than the accuracy of the STL-based method.
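The iterative simulate-and-compensate loop at the heart of such a framework can be sketched in a few lines; `simulate()` below is a hypothetical stand-in for the finite element process simulation, and the 1% shrinkage is an illustrative assumption, not a measured value:

```python
def simulate(dim):
    """Hypothetical stand-in for the FE process simulation:
    the as-built dimension shrinks by about 1%."""
    return 0.99 * dim

def compensate(nominal, tol=1e-3, max_iter=20):
    """Iteratively pre-compensate a key dimension until the predicted
    as-built value matches the nominal within tolerance."""
    design = nominal
    for _ in range(max_iter):
        error = nominal - simulate(design)   # predicted deviation
        if abs(error) < tol:
            break
        design += error                      # enlarge the design to offset shrinkage
    return design

comp = compensate(100.0)                     # e.g., a 100 mm key dimension
print(comp, simulate(comp))                  # compensated design, predicted as-built
```

In the actual framework each iteration would be a full process simulation on the compensated geometry, but the fixed-point structure of the loop is the same.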
Allysson Allex Araújo, Marcos Kalinowski, Matheus Paixao
et al.
[Background] Emotional Intelligence (EI) can impact Software Engineering (SE) outcomes through improved team communication, conflict resolution, and stress management. SE workers face increasing pressure to develop both technical and interpersonal skills, as modern software development emphasizes collaborative work and complex team interactions. Despite EI's documented importance in professional practice, SE education continues to prioritize technical knowledge over emotional and social competencies. [Objective] This paper analyzes SE students' self-perceptions of their EI after a two-month cooperative learning project, using Mayer and Salovey's four-ability model to examine how students handle emotions in collaborative development. [Method] We conducted a case study with 29 SE students organized into four squads within a project-based learning course, collecting data through questionnaires and focus groups that included brainwriting and sharing circles, then analyzing the data using descriptive statistics and open coding. [Results] Students demonstrated stronger abilities in managing their own emotions compared to interpreting others' emotional states. Despite limited formal EI training, they developed informal strategies for emotional management, including structured planning and peer support networks, which they connected to improved productivity and conflict resolution. [Conclusion] This study shows how SE students perceive EI in a collaborative learning context and provides evidence-based insights into the important role of emotional competencies in SE education.
This lecture presents an overview of the basic concepts and fundamentals of Engineering Materials within the framework of accelerator applications. After a short introduction, the main concepts relating to the structure of matter are reviewed, such as crystalline structures, defects and dislocations, and phase diagrams and transformations. The microscopic description is correlated with the physical properties of materials, focusing on metallurgical aspects such as deformation and strengthening. The main groups of materials are addressed and described, namely metals and alloys, ceramics, polymers, composite materials, and advanced materials, with brief sketches of tangible applications in particle accelerators and detectors. Deterioration aspects of materials are also presented, such as corrosion in metals and degradation in plastics.
Autonomous driving technology plays a key role in addressing traffic safety issues and relieving traffic congestion by virtue of its capabilities of accurate environmental perception and real-time response. To address the limited computing power of mobile driving platforms, an improved algorithm based on YOLOv8n, LBT-YOLO, is proposed. The algorithm improves on YOLOv8n in three respects. First, part of the traditional convolutional layers are replaced by linear deformable convolutions, and a new C2L module is designed by optimizing the C2F module, reducing the number of model parameters while maintaining detection accuracy. Second, a new neck network structure, BCFPN (Bidirectional Collocated Feature Pyramid Network), is designed based on the weighted bidirectional feature pyramid network; it enhances feature fusion and the interaction of contextual information, improving the detection accuracy of the model. Finally, a new detection head, TADDH (Task Aligned Dynamic Detection Head), is proposed. This detection head reduces the number of parameters by sharing neck network features and performs task-decomposition alignment to achieve high-accuracy target detection using dynamic convolution and dynamic feature selection. With these improvements, LBT-YOLO outperforms YOLOv8n and other detection algorithms on the BDD100K autonomous driving dataset, with an average accuracy improvement of 2.4% while reducing the number of model parameters by 48.2%.
Alexandra COROIAN, Larisa IVASCU, Timea CISMA
et al.
Romania's automotive sector is experiencing an evolution towards sustainable transport, with an increasing interest in incorporating solar power technology into vehicles. This article examines the present state of solar power use in Romania's automobile industry, including difficulties, possibilities, and prospects. The analysis looks at technology improvements, legislative applications, consumer preferences, and the carbon footprint of solar-powered cars in Romania.
Due to the very efficient relaxation of elastic stress on strain-free sidewalls, III–V nanowires offer almost unlimited possibilities for bandgap engineering in nanowire heterostructures by using material combinations that are attainable in epilayers. However, axial nanowire heterostructures grown using the vapor–liquid–solid method often suffer from the reservoir effect in a catalyst droplet. Control over the interfacial abruptness in nanowire heterostructures based on the group V interchange is more difficult than for group-III-based materials, because the low concentrations of highly volatile group V atoms cannot be measured after or during growth. Here, we develop a self-consistent model for calculations of the coordinate-dependent compositional profiles in the solid and liquid phases during the vapor–liquid–solid growth of the axial nanowire heterostructure <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><msub><mrow><mi mathvariant="normal">A</mi></mrow><mrow><msub><mrow><mi mathvariant="normal">x</mi></mrow><mrow><mn>0</mn></mrow></msub></mrow></msub><msub><mrow><mi mathvariant="normal">B</mi></mrow><mrow><mn>1</mn><mo>−</mo><msub><mrow><mi mathvariant="normal">x</mi></mrow><mrow><mn>0</mn></mrow></msub></mrow></msub><mi mathvariant="normal">C</mi><mo>/</mo><msub><mrow><mi mathvariant="normal">A</mi></mrow><mrow><msub><mrow><mi mathvariant="normal">x</mi></mrow><mrow><mn>1</mn></mrow></msub></mrow></msub><msub><mrow><mi mathvariant="normal">B</mi></mrow><mrow><mn>1</mn><mo>−</mo><msub><mrow><mi mathvariant="normal">x</mi></mrow><mrow><mn>1</mn></mrow></msub></mrow></msub><mi mathvariant="normal">C</mi></mrow></semantics></math></inline-formula> with any stationary compositions <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><msub><mrow><mi mathvariant="normal">x</mi></mrow><mrow><mn>0</mn></mrow></msub></mrow></semantics></math></inline-formula> and 
<inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><msub><mrow><mi mathvariant="normal">x</mi></mrow><mrow><mn>1</mn></mrow></msub></mrow></semantics></math></inline-formula>. The only assumption of the model is that the growth rates of both binaries AC and BC are proportional to the concentrations of group V atoms A and B in the catalyst droplet, which requires sufficiently high supersaturations in the liquid phase. The model contains a minimum number of parameters and fits the data on the interfacial abruptness across double heterostructures in GaP/GaAs<sub>x</sub>P<sub>1−x</sub>/GaP nanowires quite well. It can be used for any axial III–V nanowire heterostructure obtained through the vapor–liquid–solid method, and it forms a basis for further developments in modeling the complex growth process and suppressing the interfacial broadening caused by the reservoir effect.
This article investigates the peristaltic flow of a hyperbolic tangent fluid with variable viscosity and thermal conductivity through a vertical asymmetric channel. Consideration is given to the effects of viscous dissipation, chemical reactions, and convective heat transfer at the channel walls. The mathematical modeling incorporates the lubrication approximation. The resulting system of highly non-linear differential equations is non-dimensionalized via suitable quantities. The non-dimensional parameters appearing in the viscosity and thermal conductivity are treated as variables; this treatment is the fundamental recommendation of the current study to avoid obtaining unrealistic results. Using the built-in package ParametricNDSolve in Mathematica, the analysis is performed numerically, and the results for the temperature and concentration profiles, in addition to the trapping phenomenon, are shown through graphs. The major findings show that an increase in temperature is associated with a decrease in viscosity, whereas the opposite (unrealistic) behavior results when the variable parameters are treated as constants. Results also indicate that the thermal conductivity is enhanced at relatively low temperatures, whereas the opposite trend is noted at higher temperatures. Potential applications of the current work include cooling techniques used in medical and industrial settings, food processing, and blood flow in microvessels.
Intrusion Detection Systems are expected to detect and prevent malicious activities in a network, such as a smart grid. However, they are themselves primary targets of cyber-attacks. A number of approaches have been proposed to classify and detect these attacks, including supervised machine learning. However, such models require large labeled datasets for training and testing. Therefore, this paper compares the performance of supervised and unsupervised learning models in detecting cyber-attacks. The CICDDoS2019 benchmark dataset was used to train, test, and validate the models. The supervised models are Gaussian Naïve Bayes, Classification and Regression Decision Tree, Logistic Regression, C-Support Vector Machine, Light Gradient Boosting, and Alex Neural Network. The unsupervised models are Principal Component Analysis, K-means, and Variational Autoencoder. The performance comparison is made in terms of accuracy, probability of detection, probability of misdetection, probability of false alarm, processing time, prediction time, training time per sample, and memory size. The results show that the Alex Neural Network model outperforms the other supervised models, while the Variational Autoencoder performs best among the unsupervised models.
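To illustrate the supervised-versus-unsupervised distinction on a toy problem (the data and the two models below are synthetic stand-ins, not the benchmark or model suite from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 4))     # synthetic benign traffic features
attack = rng.normal(4.0, 1.0, size=(500, 4))     # synthetic attack traffic features
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)

# Supervised: nearest-class-mean classifier, fitted WITH labels.
means = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
pred_sup = ((X[:, None, :] - means) ** 2).sum(-1).argmin(1)
acc_sup = (pred_sup == y).mean()

# Unsupervised: 2-means clustering, labels never used for fitting.
centers = np.stack([X[0], X[-1]])                # toy deterministic init
for _ in range(20):
    assign = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
    centers = np.stack([X[assign == c].mean(axis=0) for c in (0, 1)])
# Cluster ids are arbitrary, so align them with classes before scoring.
acc_unsup = max((assign == y).mean(), (assign != y).mean())
print(f"supervised={acc_sup:.3f} unsupervised={acc_unsup:.3f}")
```

The key contrast is where labels enter: the supervised fit consumes them, while the clustering only needs them afterwards to evaluate the discovered partition.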
As feature sizes decrease, an investigation of pad unevenness caused by pad conditioning and its influence on chemical mechanical polishing becomes necessary. We set up a kinematic model to predict the pad wear profile caused by diamond disk conditioning alone and verify it. This model shows the influences of the different kinematic parameters. To keep the pad surface planar during polishing or during conditioning alone, the sweep mode and range of the conditioner arm can be changed. The kinematic model is suitable for predicting the pad wear profile without considering the influence of mechanical parameters. Furthermore, based on the pad wear profile obtained from a real industrial process, we set up a static model to preliminarily investigate the influence of pad unevenness on the pad–wafer contact stress. The pad–wafer contact status in this static model can be approximated as an instantaneous state of a dynamic model. The model shows that the presence of a retaining ring helps to improve the wafer edge profile, and that pad unevenness can cause stress concentration and increase the difficulty of multi-zone pressure control of the polishing head.
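The dwell-time intuition behind such a kinematic model can be sketched numerically: the wear at each pad radius scales with how long conditioner-disk contact points dwell there during the arm sweep. All geometry and sweep parameters below are illustrative, not taken from the paper:

```python
import numpy as np

steps, n_pts = 2000, 41
pad_radius = 300.0                               # mm, illustrative
disk_radius = 50.0                               # mm, illustrative
# Sinusoidal sweep of the conditioner disk center across the pad radius.
center = 150.0 + 100.0 * np.sin(np.linspace(0.0, 20.0 * np.pi, steps))
# Sample contact points across the disk face at each instant.
offsets = np.linspace(-disk_radius, disk_radius, n_pts)
radii = np.abs(center[:, None] + offsets[None, :]).ravel()
# Accumulated dwell counts per radial bin approximate the wear profile.
wear, edges = np.histogram(radii, bins=60, range=(0.0, pad_radius))
print(wear.sum())          # every sampled contact point falls on the pad
```

Changing the sweep waveform or range reshapes `wear`, which is exactly the lever the abstract describes for keeping the pad surface planar.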
Software testing is one of the crucial supporting processes of the software life cycle. Unfortunately for the software industry, the tester role is stigmatized, partly due to misperception and partly due to the treatment of the role. The present study aims to analyze this situation and explore what restricts computer science and software engineering students from taking up a testing career in the software industry. To conduct this study, we surveyed 88 Pakistani students taking computer science or software engineering degrees. The results support previous work on the unpopularity of testing compared to other software life cycle roles. Furthermore, our findings show that the role of tester has become a social role, with as many social connotations as technical implications.
Software companies have widely used online A/B testing to evaluate the impact of a new technology by offering it to groups of users and comparing it against the unmodified product. However, running online A/B testing requires not only effort in design, implementation, and stakeholder approval before being served in production, but also several weeks to collect data in iterations. To address these issues, a recently emerging topic called "Offline A/B Testing" is gaining increasing attention; it aims to evaluate new technologies offline by estimating their effects from historical logged data. Although this approach is promising due to lower implementation effort, faster turnaround time, and no potential user harm, for it to be effectively prioritized as requirements in practice, several limitations need to be addressed, including its discrepancy with online A/B test results and the lack of systematic updates on varying data and parameters. In response, in this vision paper, I introduce AutoOffAB, an idea to automatically run variants of offline A/B testing against recent logged data and update the offline evaluation results, which are then used to make decisions on requirements more reliably and systematically.
Owing to their outstanding feature extraction capability, convolutional neural networks (CNNs) have been widely applied to hyperspectral image (HSI) classification problems and have achieved impressive performance. However, it is well known that 2D convolution neglects spectral information, while 3D convolution incurs a huge computational cost. In addition, the cost of labeling and the limitation of computing resources make it urgent to improve the generalization performance of models with scarcely labeled samples. To alleviate these issues, we design an end-to-end 3D octave and 2D vanilla mixed CNN, namely Oct-MCNN-HS, based on the typical 3D-2D mixed CNN (MCNN). Notably, two feature fusion operations are deliberately constructed to extract more discriminative features and improve practical performance: 2D vanilla convolution merges the feature maps generated by 3D octave convolutions along the channel direction, and homology shifting aggregates the information of pixels located at the same spatial position. Extensive experiments are conducted on four publicly available HSI datasets to evaluate the effectiveness and robustness of our model, and the results verify the superiority of Oct-MCNN-HS in both efficacy and efficiency.
This work presents an approach for using GitHub Classroom as a shared, structured, and persistent repository to support project-based courses in the Software Engineering undergraduate program at PUC Minas, Brazil. We discuss the needs of the different stakeholders that guided the development of the approach. Results on the perceptions of professors and students show that the approach brings benefits. Beyond the lessons learned, we present insights on improving the education of the next generation of software engineers by employing metrics to monitor skill development, verifying student work portfolios, and employing tooling support in project-based courses.
Issam Jebreen, Robert Wellington, Stephen G. MacDonell
Small- to medium-sized enterprises (SMEs) generally thrive because they have successfully done something unique within a niche market. For this reason, SMEs may seek to protect their competitive advantage by avoiding any standardization encouraged by the use of packaged software (PS). Packaged software implementation at SMEs therefore presents challenges relating to how best to respond to misfits between the functionality offered by the packaged software and each SME's business needs. An important question relates to which processes small software enterprises - or Small to Medium-Sized Software Development Companies (SMSSDCs) - apply in order to identify and then deal with these misfits. To explore the processes of packaged software (PS) implementation, an ethnographic study was conducted to gain in-depth insights into the roles played by analysts in two SMSSDCs. The purpose of the study was to understand PS implementation in terms of requirements engineering (or 'PSIRE'). Data collected during the ethnographic study were analyzed using an inductive approach. Based on our analysis of the cases, we constructed a theoretical model explaining the requirements engineering process for PS implementation, and named it the PSIRE Parallel Star Model. The Parallel Star Model shows that during PSIRE, more than one RE process can be carried out at the same time. The Parallel Star Model has few constraints: not only can processes be carried out in parallel, but they also do not always have to be followed in a particular order. This paper therefore offers a novel investigation and explanation of RE practices for packaged software implementation, approaching the phenomenon from the viewpoint of the analysts, and offers the first extensive study of packaged software implementation RE (PSIRE) in SMSSDCs.
Reliable empirical models such as those used in software effort estimation or defect prediction are inherently dependent on the data from which they are built. As demands for process and product improvement continue to grow, the quality of the data used in measurement and prediction systems warrants increasingly close scrutiny. In this paper we propose a taxonomy of data quality challenges in empirical software engineering, based on an extensive review of prior research. We consider current assessment techniques for each quality issue and, where available, proposed mechanisms for addressing these issues. Our taxonomy classifies data quality issues into three broad areas: first, characteristics of data that mean they are not fit for modeling; second, data set characteristics that lead to concerns about the suitability of applying a given model to another data set; and third, factors that prevent or limit data accessibility and trust. We identify this last area as particularly in need of further research.
Gias Uddin, Yann-Gael Gueheneuc, Foutse Khomh
et al.
Sentiment analysis in software engineering (SE) has shown promise for analyzing and supporting diverse development activities. We report the results of an empirical study that we conducted to determine the feasibility of developing an ensemble engine by combining the polarity labels of stand-alone SE-specific sentiment detectors. Our study has two phases. In the first phase, we pick five SE-specific sentiment detection tools from two recently published papers by Lin et al. [31, 32], who first reported negative results with stand-alone sentiment detectors and then proposed an improved SE-specific sentiment detector, POME [31]. We report results on 17,581 units (sentences/documents) coming from six currently available sentiment benchmarks for SE. We find that the existing tools can be complementary to each other in 85-95% of the cases, i.e., one is wrong but another is right. However, a majority-voting ensemble of those tools fails to improve the accuracy of sentiment detection. We develop Sentisead, a supervised tool that combines the polarity labels and bag-of-words features. Sentisead improves the performance (F1-score) of the individual tools by 4% (over Senti4SD [5]) to 100% (over POME [31]). In the second phase, we compare and improve the Sentisead infrastructure using pre-trained transformer models (PTMs). We find that a Sentisead infrastructure with RoBERTa as the ensemble of the five stand-alone rule-based and shallow-learning SE-specific tools from Lin et al. [31, 32] offers the best F1-score of 0.805 across the six datasets, while a stand-alone RoBERTa shows an F1-score of 0.801.
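A toy sketch of the two ensemble strategies compared above (synthetic detectors stand in for the actual SE-specific tools, and the accuracy-weighted combiner is an illustrative assumption, not Sentisead's design):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.integers(-1, 2, size=400)            # polarity labels: -1, 0, +1

def noisy_detector(t, flip_rate):
    """Corrupt a fraction of the true labels to mimic detector errors."""
    out = t.copy()
    mask = rng.random(t.size) < flip_rate
    out[mask] = rng.integers(-1, 2, size=mask.sum())
    return out

# Three synthetic detectors of decreasing quality.
votes = np.stack([noisy_detector(truth, r) for r in (0.2, 0.4, 0.6)], axis=1)
train, test = slice(0, 300), slice(300, None)

def majority(row):
    vals, counts = np.unique(row, return_counts=True)
    return vals[counts.argmax()]                 # ties resolved arbitrarily

vote_pred = np.array([majority(r) for r in votes[test]])

# Supervised combiner: weight each detector by its held-in accuracy,
# then take the weighted vote per polarity class.
w = (votes[train] == truth[train, None]).mean(axis=0)
classes = np.array([-1, 0, 1])
scores = (votes[test][:, :, None] == classes) * w[None, :, None]
comb_pred = classes[scores.sum(axis=1).argmax(axis=1)]

acc_vote = (vote_pred == truth[test]).mean()
acc_comb = (comb_pred == truth[test]).mean()
print(f"majority={acc_vote:.3f} weighted={acc_comb:.3f}")
```

The point of the contrast is structural: majority voting treats every detector's label equally, whereas a supervised combiner learns how much to trust each one from labeled data, which is the direction the study's Sentisead tool takes further with richer features.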