With the rapid rise of AI coding agents, the fundamental premise of what it means to be a software engineer is in question. In this vision paper, we re-examine what it means for an AI agent to be considered a software engineer and then critically think about what makes such an agent trustworthy. \textit{Grounded} in established definitions of software engineering (SE) and informed by recent research on agentic AI systems, we conceptualise AI software engineers as participants in human-AI SE teams composed of human software engineers and AI models and tools, and we distinguish trustworthiness as a key property of these systems and actors rather than a subjective human attitude. Based on historical perspectives and emerging visions, we identify key dimensions that contribute to the trustworthiness of AI software engineers, spanning technical quality, transparency and accountability, epistemic humility, and societal and ethical alignment. We further discuss how trustworthiness can be evaluated and demonstrated, highlighting a fundamental trust measurement gap: not everything that matters for trust can be easily measured. Finally, we outline implications for the design, evaluation, and governance of AI SE systems, advocating for an ethics-by-design approach to enable appropriate trust in future human-AI SE teams.
The software engineering research community faces a systemic crisis: peer review is failing under growing submissions, misaligned incentives, and reviewer fatigue. Community surveys reveal that researchers perceive the process as "broken." This position paper argues that these dysfunctions are mechanism design failures amenable to computational solutions. We propose modeling the research community as a stochastic multi-agent system and applying multi-agent reinforcement learning to design incentive-compatible protocols. We outline three interventions: a credit-based submission economy, MARL-optimized reviewer assignment, and hybrid verification of review consistency. We present threat models, equity considerations, and phased pilot metrics. This vision charts a research agenda toward sustainable peer review.
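To make the proposed credit-based submission economy concrete, here is a minimal sketch of the accounting it implies; the starting balance, submission cost, and review reward are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

SUBMIT_COST = 1.0    # credits burned per submission (assumed)
REVIEW_REWARD = 0.5  # credits earned per completed review (assumed)

@dataclass
class Researcher:
    credits: float = 2.0  # assumed starting balance for new members

def try_submit(r: Researcher) -> bool:
    """Spend credits to submit a paper; refuse if the balance is too low."""
    if r.credits < SUBMIT_COST:
        return False
    r.credits -= SUBMIT_COST
    return True

def complete_review(r: Researcher) -> None:
    """Reward a completed review with credits."""
    r.credits += REVIEW_REWARD
```

Under these assumed prices, each submission must be funded by roughly two completed reviews, tying submission volume to reviewing capacity, which is the incentive-compatibility property such an economy targets.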
Mohamad Amin Kalateh, Naeime Talebi, Soroush Nekoei
et al.
A comprehensive study on thermomechanical processing of pure Mg was conducted through sequential hot extrusion, hot rolling, and cold drawing operations. Three extrusion ratios (6:1, 25:1, and 39:1) were investigated at 350°C, revealing that the 39:1 ratio produced an optimal bimodal grain structure with beneficial twin morphology. Subsequently, hot rolling experiments were performed at varying linear speeds (26 and 130 mm s⁻¹) and interpass annealing times (2.5 and 10 minutes). Results demonstrated that higher rolling speeds led to a finer microstructure, while longer interpass annealing times resulted in a reduced twin fraction and a more inhomogeneous microstructure. The processed material was then subjected to cold drawing with approximately 12% true strain per pass. Different annealing conditions (275°C and 375°C for 2.5-10 minutes) between drawing passes were evaluated. Analysis showed that annealing at 375°C for 2.5-5 minutes provided optimal softening for subsequent deformation. Fracture analysis revealed mixed ductile-brittle behavior, with twin-matrix interfaces serving as preferred crack propagation paths. This study establishes optimal processing parameters for pure Mg wire production, highlighting the critical role of twin characteristics and restoration processes in determining material formability during multi-step thermomechanical processing.
The reconstruction, management, and optimization of gas pipelines are of significant importance for solving modern engineering problems. This paper presents innovative methodologies aimed at the effective reconstruction of gas pipelines under unstable conditions. The research applies machine learning and optimization algorithms to enhance system reliability and to optimize interventions during emergencies. By comparing the performance of various algorithms, the study presents engineering solutions that address the challenges of real-world applications. Consequently, this work advances cutting-edge approaches in the field of engineering and opens new perspectives for future research. A highly reliable and efficient technological scheme is proposed for managing emergency processes in gas transportation, based on the principles of the reconstruction phase. For complex gas pipeline systems, new approaches to modernizing existing process control and monitoring systems are investigated; these approaches draw on modern achievements in control theory and information technology to select emergency and technological operating modes. A pressing issue is the development of a method that minimizes the transmission time of measured and controlled data on non-stationary flow parameters of gas networks to dispatcher control centers. The resulting schemes for creating a reliable information base for dispatcher centers, using modern methods to efficiently manage the gas-dynamic processes of non-stationary modes, are therefore of particular importance.
In arena-style evaluation of large language models (LLMs), two LLMs respond to a user query, and the user chooses the winning response or deems the "battle" a draw, resulting in an adjustment to the ratings of both models. The prevailing approach for modeling these rating dynamics is to view battles as two-player game matches, as in chess, and apply the Elo rating system and its derivatives. In this paper, we critically examine this paradigm. Specifically, we question whether a draw genuinely means that the two models are equal and hence whether their ratings should be equalized. Instead, we conjecture that draws are more indicative of query difficulty: if the query is too easy, both models are more likely to succeed equally. On three real-world arena datasets, we show that ignoring rating updates for draws yields a 1-3% relative increase in battle outcome prediction accuracy (which includes draws) for all four rating systems studied. Further analyses suggest that draws occur more often for queries rated as very easy and for those rated as highly objective, with risk ratios of 1.37 and 1.35, respectively. We recommend that future rating systems reconsider existing draw semantics and account for query properties in rating updates.
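For concreteness, here is a minimal sketch of the draw-skipping variant of the standard Elo update examined above; the K-factor value and function names are illustrative, not the paper's implementation.

```python
def elo_update(r_a, r_b, outcome, k=32.0, skip_draws=True):
    """One arena battle: outcome is 1.0 (A wins), 0.0 (B wins), 0.5 (draw).

    With skip_draws=True, a draw leaves both ratings untouched instead of
    pulling them toward each other, as the paper's analysis suggests.
    """
    if skip_draws and outcome == 0.5:
        return r_a, r_b
    e_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))  # expected score of A
    r_a += k * (outcome - e_a)
    r_b += k * ((1.0 - outcome) - (1.0 - e_a))
    return r_a, r_b
```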
We introduce ShadowDraw, a framework that transforms ordinary 3D objects into shadow-drawing compositional art. Given a 3D object, our system predicts scene parameters, including object pose and lighting, together with a partial line drawing, such that the cast shadow completes the drawing into a recognizable image. To this end, we optimize scene configurations to reveal meaningful shadows, employ shadow strokes to guide line drawing generation, and adopt automatic evaluation to enforce shadow-drawing coherence and visual quality. Experiments show that ShadowDraw produces compelling results across diverse inputs, from real-world scans and curated datasets to generative assets, and naturally extends to multi-object scenes, animations, and physical deployments. Our work provides a practical pipeline for creating shadow-drawing art and broadens the design space of computational visual art, bridging the gap between algorithmic design and artistic storytelling. Check out our project page https://red-fairy.github.io/ShadowDraw/ for more results and an end-to-end real-world demonstration of our pipeline!
Patrizio Angelini, Carla Binucci, Giuseppe Di Battista
et al.
Unit edge-length drawings, rectilinear drawings (where each edge is either a horizontal or a vertical segment), and rectangular face drawings are among the most studied subjects in Graph Drawing. However, most of the literature on these topics refers to planar graphs and planar drawings. In this paper we study drawings with all the above nice properties but that can have edge crossings; we call them Unit Edge length Rectilinear drawings with Rectangular Faces (UER-RF drawings). We consider crossings as dummy vertices and apply the unit edge-length convention to the edge segments connecting any two (real or dummy) vertices. Note that UER-RF drawings are grid drawings (vertices are placed at distinct integer coordinates), which is another classical requirement of graph visualizations. We present several efficient and easily implementable algorithms for recognizing graphs that admit UER-RF drawings and for constructing such drawings if they exist. We consider restrictions on the degree of the vertices or on the size of the faces. For each type of restriction, we consider both the general unconstrained setting and a setting in which either the external boundary of the drawing is fixed or the rotation system of the graph is fixed as part of the input.
C. Jai Shiva Rao, K. Prasanna Lakshmi, M. Venkata Ramana
et al.
Deep drawing is one of the important metal forming techniques employed in sheet-metal forming processes. The method allows the production of intricate shapes with few flaws. The quality of a deep-drawn product depends on the extent of control the manufacturer exercises over the process parameters of deep drawing; an effective end product with the fewest possible flaws can be manufactured by controlling these parameters effectively. This article brings together a consolidated report of research findings, as reported by researchers across the globe, on recent developments in deep drawing methods, with emphasis on the quality of deep-drawn products. These methods include hydromechanical deep drawing, micro deep drawing, and deep drawing using a magneto-rheological medium. The paper also presents challenges and the scope of future research leading to the commercial implementation of recently developed deep drawing techniques.
This study aims to develop AR technology for simplified representations based on ISO standards and to quantify the efficiency and contribution of the developed AR technology in assisting students learning mechanical drawing. The research proposes a marker-based AR application for teaching simplified representations, named Augmented Reality Penyederhanaan Gambar (ARPeGa), and an experimental study to quantify the user experience (UX) using the User Experience Questionnaire (UEQ). A pilot study involving 38 mechanical engineering students was conducted to evaluate the impact of AR involvement on user experience, and the UEQ data analysis tool version 11 was used. The UEQ results showed that attractiveness was excellent (1.87); efficiency, dependability, stimulation, and novelty were good (1.63, 1.60, 1.63, and 1.22, respectively); and perspicuity was categorized as “above average” (1.51). The study’s outcomes demonstrate that 3D model visualization in the AR application strengthens the user experience in understanding simplified representations. Overall, the application achieves a ‘good’ level in the categories of efficiency, dependability, stimulation, and novelty.
The rapid development of deep learning techniques, improved computational power, and the availability of vast training data have led to significant advancements in pre-trained models and large language models (LLMs). Pre-trained models based on architectures such as BERT and the Transformer, as well as LLMs like ChatGPT, have demonstrated remarkable language capabilities and found applications in software engineering. Software engineering tasks fall into many categories, among which generative tasks attract the most attention from researchers. Pre-trained models and LLMs possess powerful language representation and contextual-awareness capabilities, enabling them to leverage diverse training data and adapt to generative tasks through fine-tuning, transfer learning, and prompt engineering; these advantages make them effective tools for generative tasks, in which they have demonstrated excellent performance. In this paper, we present a comprehensive literature review of generative tasks in SE using pre-trained models and LLMs. We categorize SE generative tasks based on software engineering methodologies and summarize the advanced pre-trained models and LLMs involved, as well as the datasets and evaluation metrics used. Additionally, we identify key strengths, weaknesses, and gaps in existing approaches, and propose potential research directions. This review aims to provide researchers and practitioners with an in-depth analysis of, and guidance on, the application of pre-trained models and LLMs to generative tasks within SE.
[Context and Motivation]: The quality of requirements specifications impacts subsequent, dependent software engineering activities. Requirements quality defects like ambiguous statements can result in incomplete or wrong features and even lead to budget overrun or project failure. [Problem]: Attempts at measuring the impact of requirements quality have been held back by the vast amount of interacting factors. Requirements quality research lacks an understanding of which factors are relevant in practice. [Principal Ideas and Results]: We conduct a case study considering data from both interview transcripts and issue reports to identify relevant factors of requirements quality. The results include 17 factors and 11 interaction effects relevant to the case company. [Contribution]: The results contribute empirical evidence that (1) strengthens existing requirements engineering theories and (2) advances industry-relevant requirements quality research.
Geometric deep learning has sparked rising interest in computer graphics for shape understanding tasks such as shape classification and semantic segmentation. When the input is a polygonal surface, one must cope with the irregular mesh structure. Motivated by geometric spectral theory, we introduce Laplacian2Mesh, a novel and flexible convolutional neural network (CNN) framework for coping with irregular triangle meshes (vertices may have any valence). By mapping the input mesh surface to the multi-dimensional Laplace-Beltrami space, Laplacian2Mesh enables shape analysis tasks to be performed directly with mature CNNs, without the need to deal with the irregular connectivity of the mesh structure. We further define a mesh pooling operation such that the receptive field of the network can be expanded while retaining the original vertex set as well as the connections between them. In addition, we introduce a channel-wise self-attention block to learn the individual importance of feature ingredients. Laplacian2Mesh not only decouples the geometry from the irregular connectivity of the mesh structure but also better captures the global features that are central to shape classification and segmentation. Extensive tests on various datasets demonstrate the effectiveness and efficiency of Laplacian2Mesh, particularly its robustness to noise across various learning tasks.
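As a rough illustration of the spectral mapping described above (not the paper's implementation), the sketch below projects per-vertex features onto the low-frequency eigenvectors of a combinatorial graph Laplacian, a common stand-in for the cotangent Laplace-Beltrami operator; function names and the choice of k are assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def spectral_coefficients(n_vertices, edges, features, k=64):
    """Project per-vertex features onto the first k Laplacian eigenvectors.

    edges: (i, j) vertex-index pairs from the mesh, listed once per pair.
    features: (n_vertices, d) array of per-vertex features.
    k must be smaller than n_vertices.
    """
    rows, cols = zip(*edges)
    data = np.ones(len(rows))
    adj = sp.coo_matrix((data, (rows, cols)), shape=(n_vertices, n_vertices))
    adj = adj + adj.T  # symmetrize the adjacency matrix
    lap = sp.diags(np.asarray(adj.sum(axis=1)).ravel()) - adj
    # shift-invert near zero picks out the low-frequency eigenvectors
    _, basis = eigsh(lap.tocsc(), k=k, sigma=-1e-6)
    return basis.T @ features  # (k, d) spectral coefficients
```

Working in this fixed-size coefficient space is what lets standard CNN machinery operate on meshes of arbitrary connectivity.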
This paper presents a design and implementation for real-time detection within multiple areas that are generated by drawing directly on the on-screen video display. The drawn areas remain overlaid on the video as polylines, and the colors of the outlines change as the stage switches between drawing and detecting. The shape of each drawn area is freely customizable and takes effect in real time. The configuration of the drawn areas can be updated at any time, and the detection areas operate independently. Detection results are shown in a GUI built with Tkinter. The object recognition model is based on YOLOv5 but can be swapped for others, meaning the core design and implementation idea of this paper is model-independent. With PIL, OpenCV, and Tkinter, drawing is real-time and efficient. The design and code of this research are basic and can be extended to numerous monitoring and detection scenarios.
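A small sketch of how drawn polygon areas can gate detections independently, using OpenCV's point-in-polygon test; the function name and the box format (YOLOv5-style xyxy boxes) are assumptions for illustration.

```python
import numpy as np
import cv2

def detections_in_areas(detections, areas):
    """Count detections whose box center falls inside each drawn area.

    detections: (x1, y1, x2, y2) boxes, e.g. parsed from YOLOv5 output.
    areas: user-drawn polygons, each a list of (x, y) screen points.
    Each area is checked independently, so areas may overlap freely.
    """
    contours = [np.array(p, dtype=np.float32).reshape(-1, 1, 2) for p in areas]
    hits = [0] * len(contours)
    for x1, y1, x2, y2 in detections:
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # box center
        for i, contour in enumerate(contours):
            # >= 0 means the center is inside the polygon or on its outline
            if cv2.pointPolygonTest(contour, (cx, cy), False) >= 0:
                hits[i] += 1
    return hits
```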
Graph Drawing techniques have been developed over the last few years with the purpose of producing aesthetically pleasing node-link layouts. Recently, the employment of differentiable loss functions has paved the road to the massive usage of Gradient Descent and related optimization algorithms. In this paper, we propose a novel framework for the development of Graph Neural Drawers (GNDs), machines that rely on neural computation for constructing efficient and complex maps. GNDs are Graph Neural Networks (GNNs) whose learning process can be driven by any provided loss function, such as those commonly employed in Graph Drawing. Moreover, we prove that this mechanism can be guided by loss functions computed by means of Feedforward Neural Networks, on the basis of supervision hints that express beauty properties, like the minimization of edge crossings. In this context, we show that GNNs can be enriched with positional features to deal also with unlabelled vertices. We provide a proof of concept by constructing a loss function for edge crossings and provide quantitative and qualitative comparisons among different GNN models working under the proposed framework.
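The paper's learned edge-crossing loss is not reproduced here; as a flavor of the loss-driven, gradient-descent layout approach it builds on, the following sketch minimizes the classical differentiable stress loss with PyTorch (target distances and hyperparameters are illustrative):

```python
import torch

def stress_loss(pos, dist):
    """Differentiable stress of a 2D layout.

    pos: (n, 2) tensor of node coordinates (requires_grad=True).
    dist: (n, n) tensor of target graph-theoretic distances.
    """
    diff = pos.unsqueeze(0) - pos.unsqueeze(1)          # (n, n, 2)
    eucl = diff.norm(dim=-1) + 1e-9                      # pairwise distances
    mask = ~torch.eye(dist.size(0), dtype=torch.bool)    # skip the diagonal
    return (((eucl - dist) ** 2 / dist.clamp(min=1e-9) ** 2)[mask]).sum()

# gradient-descent layout driven by the loss, as in loss-based Graph Drawing
n = 10
pts = torch.rand(n, 2)
dist = torch.cdist(pts, pts)  # toy target distances (illustrative)
pos = torch.rand(n, 2, requires_grad=True)
opt = torch.optim.Adam([pos], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = stress_loss(pos, dist)
    loss.backward()
    opt.step()
```

In the GND setting, the hand-written loss above would be replaced by one produced by a GNN or a Feedforward Neural Network trained on beauty-property supervision hints.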
Modern processors with many cores and large caches may offer little computational advantage if only serial computing is employed. In this study, several parallel computing approaches, using devices with multiple or many processor cores and graphics processing units, are applied and compared to illustrate potential applications in fluid-film lubrication studies. Two Reynolds equations and an air-bearing optimum design are solved using three parallel computing paradigms, OpenMP, Compute Unified Device Architecture (CUDA), and OpenACC, on standalone shared-memory computers. Newly developed many-integrated-core processors are also exercised with OpenMP to release their computing potential. The results show that OpenACC computing can outperform OpenMP computing for the discretized Reynolds equation on a large gridwork, mainly owing to the larger cache sizes of the tested graphics processing units. The bearing design benefits most when a system with a many-integrated-core processor is used, because such a system can perform computation at the optimization-algorithm level and use the many processor cores effectively. A proper combination of parallel computing devices and programming models can complement efficient numerical methods or optimization algorithms to accelerate many tribological simulations and engineering designs.
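To make concrete what solving the discretized Reynolds equation on a large gridwork involves, here is a serial NumPy sketch of a Jacobi iteration for the steady 2D incompressible form; the function name, boundary conditions, and iteration count are illustrative assumptions, not the paper's code.

```python
import numpy as np

def solve_reynolds_2d(h, dx, mu=1.0, u=1.0, iters=20000):
    """Jacobi solver for the steady 2D incompressible Reynolds equation
        d/dx(h^3 dp/dx) + d/dy(h^3 dp/dy) = 6*mu*u*dh/dx
    on a uniform square grid with p = 0 on the boundary.

    h: (nx, ny) film-thickness field; first axis is the sliding direction.
    """
    h3 = h ** 3
    # face-averaged coefficients for the interior five-point stencil
    a_e = 0.5 * (h3[1:-1, 1:-1] + h3[2:, 1:-1])
    a_w = 0.5 * (h3[1:-1, 1:-1] + h3[:-2, 1:-1])
    a_n = 0.5 * (h3[1:-1, 1:-1] + h3[1:-1, 2:])
    a_s = 0.5 * (h3[1:-1, 1:-1] + h3[1:-1, :-2])
    # central-difference wedge term, scaled by dx^2
    b = 6.0 * mu * u * (h[2:, 1:-1] - h[:-2, 1:-1]) / (2.0 * dx) * dx ** 2
    p = np.zeros_like(h)
    for _ in range(iters):  # the hot loop targeted by OpenMP/CUDA/OpenACC
        p[1:-1, 1:-1] = (a_e * p[2:, 1:-1] + a_w * p[:-2, 1:-1]
                         + a_n * p[1:-1, 2:] + a_s * p[1:-1, :-2]
                         - b) / (a_e + a_w + a_n + a_s)
    return p
```

Each sweep updates every interior node independently of the others, which is why this kernel maps directly onto OpenMP threads, CUDA blocks, or OpenACC parallel loops.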
A. V. Palagin, N. G. Petrenko, V. Yu. Velychko
et al.
This paper considers a generalized model representation of the software system "Instrumental complex for ontological engineering purpose" and presents the complete development process of the software system. Relevant formal models of the system are developed and represented as mathematical expressions and UML diagrams, and the three-tier architecture of the system in a client-server environment is described.
Kai-Kristian Kemell, Anh Nguyen-Duc, Xiaofeng Wang
et al.
Software Engineering as an industry is highly diverse in terms of development methods and practices. Practitioners employ a myriad of methods and tend to further tailor them by, e.g., omitting some practices or rules. This diversity in development methods poses a challenge for software engineering education, creating a gap between education and industry. General theories such as the Essence Theory of Software Engineering can help bridge this gap by presenting software engineering students with higher-level frameworks upon which to build an understanding of software engineering methods and practical project work. In this paper, we study Essence in an educational setting to evaluate its usefulness for software engineering students while also investigating barriers to its adoption in this context. To this end, we observe 102 student teams utilizing Essence in practical software engineering projects during a semester-long, project-based course.
Lukas Barth, Benjamin Niedermann, Ignaz Rutter
et al.
Ortho-radial drawings are a generalization of orthogonal drawings to grids that are formed by concentric circles and straight-line spokes emanating from the circles' center. Such drawings have applications in schematic graph layouts, e.g., for metro maps and destination maps. A plane graph is a planar graph with a fixed planar embedding. We give a combinatorial characterization of the plane graphs that admit a planar ortho-radial drawing without bends. Previously, such a characterization was known only for paths, cycles, and theta graphs, and, in the special case of rectangular drawings, for cubic graphs, where the contour of each face is required to be a rectangle. The characterization is expressed in terms of an ortho-radial representation that, similar to Tamassia's orthogonal representations for orthogonal drawings, describes such a drawing combinatorially in terms of angles around vertices and bends on the edges. In this sense, our characterization can be seen as a first step towards generalizing the Topology-Shape-Metrics framework of Tamassia to ortho-radial drawings.
The context of the reported research is the documentation of software technologies such as object/relational mappers, web-application frameworks, or code generators. We assume that documentation should model a macroscopic view on usage scenarios of technologies in terms of involved artifacts, leveraged software languages, data flows, conformance relationships, and others. In previous work, we referred to such documentation also as 'linguistic architecture'. The corresponding models may also be referred to as 'megamodels' while adopting this term from the technological space of modeling/model-driven engineering. This work is an inquiry into making such documentation less abstract and more effective by means of connecting (mega)models, systems, and developer experience in several ways. To this end, we adopt an approach that is primarily based on prototyping (i.e., implementation of a megamodeling infrastructure with all conceivable connections) and experimentation with showcases (i.e., documentation of concrete software technologies). The knowledge gained by this research is a notion of interconnected linguistic architecture on the grounds of connecting primary model elements, inferred model elements, static and runtime system artifacts, traceability links, system contexts, knowledge resources, plugged interpretations of model elements, and IDE views. A corresponding suite of aspects of interconnected linguistic architecture is systematically described. As to the grounding of this research, we describe a literature survey which tracks scattered occurrences and thus demonstrates the relevance of the identified aspects of interconnected linguistic architecture. Further, we describe the MegaL/Xtext+IDE infrastructure which realizes interconnected linguistic architecture. The importance of this work lies in providing more formal (ontologically rich, navigable, verifiable) documentation of software technologies helping developers to better understand how to use technologies in new systems (prescriptive mode) or how technologies are used in existing systems (descriptive mode).