H. Sinan Bank, Daniel R. Herber, Thomas H. Bradley
Engineering system design -- whether mechatronic, control, or embedded -- often proceeds in an ad hoc manner, with requirements left implicit and traceability from intent to parameters largely absent. Existing specification-driven and systematic design methods mostly target software, and AI-assisted tools tend to enter the workflow at solution generation rather than at problem framing. Human--AI collaboration in the design of physical systems remains underexplored. This paper presents Design-OS, a lightweight, specification-driven workflow for engineering system design organized in five stages: concept definition, literature survey, conceptual design, requirements definition, and design definition. Specifications serve as the shared contract between human designers and AI agents; each stage produces structured artifacts that maintain traceability and support agent-augmented execution. We position Design-OS relative to requirements-driven design, systematic design frameworks, and AI-assisted design pipelines, and demonstrate it on a control systems design case using two rotary inverted pendulum platforms -- an open-source SimpleFOC reaction wheel and a commercial Quanser Furuta pendulum -- showing how the same specification-driven workflow accommodates fundamentally different implementations. A blank template and the full design-case artifacts are shared in a public repository to support reproducibility and reuse. The workflow makes the design process visible and auditable, and extends specification-driven orchestration of AI from software to physical engineering system design.
Creating high-quality anime illustrations presents notable challenges, particularly for beginners, due to the intricate styles and fine details inherent in anime art. To address this issue, we present an interactive drawing guidance system specifically designed for anime illustrations. It offers real-time guidance to help users refine their work and streamline the creative process. Our system is built upon the StreamDiffusion pipeline to deliver real-time drawing assistance. We fine-tune Stable Diffusion with LoRA to synthesize anime-style RGB images from user-provided hand-drawn sketches and prompts. Leveraging the Informative Drawings model, we transform these RGB images into rough sketches, which are further refined into structured guidance sketches using a custom-designed optimizer. The proposed system offers precise, real-time guidance aligned with the creative intent of the user, significantly enhancing both the efficiency and accuracy of the drawing process. To assess the effectiveness of our approach, we conducted a user study, gathering empirical feedback on both system performance and interface usability.
Large Language Model (LLM) agents have shown great potential for solving real-world problems and promise to be a solution for task automation in industry. However, more benchmarks are needed to systematically evaluate automation agents from an industrial perspective, for example, in Civil Engineering. Therefore, we propose DrafterBench for the comprehensive evaluation of LLM agents in the context of technical drawing revision, a representative task in civil engineering. DrafterBench contains twelve types of tasks summarized from real-world drawing files, with 46 customized functions/tools and 1920 tasks in total. DrafterBench is an open-source benchmark to rigorously test AI agents' proficiency in interpreting intricate and long-context instructions, leveraging prior knowledge, and adapting to dynamic instruction quality via implicit policy awareness. The toolkit comprehensively assesses distinct capabilities in structured data comprehension, function execution, instruction following, and critical reasoning. DrafterBench offers detailed analysis of task accuracy and error statistics, aiming to provide deeper insight into agent capabilities and identify improvement targets for integrating LLMs in engineering applications. Our benchmark is available at https://github.com/Eason-Li-AIS/DrafterBench, with the test set hosted at https://huggingface.co/datasets/Eason666/DrafterBench.
Equity, diversity, and inclusion in software engineering (SE) often overlook neurodiversity, particularly the experiences of developers with Attention Deficit Hyperactivity Disorder (ADHD). Despite growing awareness of this population in SE, few tools are designed to support their cognitive challenges (e.g., sustained attention, task initiation, self-regulation) within development workflows. We present Tether, an LLM-powered desktop application designed to support software engineers with ADHD by delivering adaptive, context-aware assistance. Drawing from engineering research methodology, Tether combines local activity monitoring, retrieval-augmented generation (RAG), and gamification to offer real-time focus support and personalized dialogue. The system integrates operating-system-level activity tracking to prompt engagement, and its chatbot leverages ADHD-specific resources to offer relevant responses. Preliminary validation through self-use revealed improved contextual accuracy following iterative prompt refinements and RAG enhancements. Tether differentiates itself from generic tools by being adaptable and aligned with software-specific workflows and ADHD-related challenges. While not yet evaluated by target users, this work lays the foundation for future neurodiversity-aware tools in SE and highlights the potential of LLMs as personalized support systems for underrepresented cognitive needs.
Neural radiance fields (NeRF) based methods have shown amazing performance in synthesizing 3D-consistent photographic images, but fail to generate multi-view portrait drawings. The key is that the basic assumption of these methods -- a surface point is consistent when rendered from different views -- doesn't hold for drawings. In a portrait drawing, the appearance of a facial point may change when viewed from different angles. Besides, portrait drawings usually present little 3D information and suffer from insufficient training data. To combat these challenges, in this paper, we propose a Semantic-Aware GEnerator (SAGE) for synthesizing multi-view portrait drawings. Our motivation is that facial semantic labels are view-consistent and correlate with drawing techniques. We therefore propose to collaboratively synthesize multi-view semantic maps and the corresponding portrait drawings. To facilitate training, we design a semantic-aware domain translator, which generates portrait drawings based on features of photographic faces. In addition, we use data augmentation via synthesis to mitigate collapsed results. We apply SAGE to synthesize multi-view portrait drawings in diverse artistic styles. Experimental results show that SAGE achieves significantly superior or highly competitive performance, compared to existing 3D-aware image synthesis methods. The codes are available at https://github.com/AiArt-HDU/SAGE.
Alexander E. I. Brownlee, James Callan, Karine Even-Mendoza
et al.
Large language models (LLMs) have been successfully applied to software engineering tasks, including program repair. However, their application in search-based techniques such as Genetic Improvement (GI) is still largely unexplored. In this paper, we evaluate the use of LLMs as mutation operators for GI to improve the search process. We expand the Gin Java GI toolkit to call OpenAI's API to generate edits for the JCodec tool. We randomly sample the space of edits using 5 different edit types. We find that the number of patches passing unit tests is up to 75% higher with LLM-based edits than with standard Insert edits. Further, we observe that the patches found with LLMs are generally less diverse compared to standard edits. We ran GI with local search to find runtime improvements. Although many improving patches are found by LLM-enhanced GI, the best improving patch was found by standard GI.
Emilio Vital Brazil, Eduardo Soares, Lucas Villa Real
et al.
Data is a critical element in any discovery process. In the last decades, we observed exponential growth in the volume of available data and the technology to manipulate it. However, data is only practical when one can structure it for a well-defined task. For instance, we need a corpus of text broken into sentences to train a natural language machine-learning model. In this work, we use the term \textit{dataset} to designate a structured set of data built to perform a well-defined task. Moreover, the dataset will mostly be treated as a blueprint of an entity that can, at any moment, be stored as a table. Specifically, in science, each area has unique forms to organize, gather, and handle its datasets. We believe that datasets must be a first-class entity in any knowledge-intensive process, and all workflows should pay exceptional attention to datasets' lifecycle, from their gathering to their uses and evolution. We advocate that science and engineering discovery processes are extreme instances of the need for such organization of datasets, calling for new approaches and tooling. Furthermore, these requirements are more evident when the discovery workflow uses artificial intelligence methods to empower the subject-matter expert. In this work, we discuss an approach to bringing datasets in as a critical entity in the discovery process in science. We illustrate some concepts using material discovery as a use case. We chose this domain because it leverages many significant problems that can be generalized to other science fields.
Prior researchers have identified charter documents as texts that serve an outsize role in stabilizing social reality and mediating work, writing, and network building. While charter documents are typically authoritative and text-only tomes, this article expands the category to include charter graphics, visual texts that serve similarly important genre and network functions. Through retrospective analysis of one charter graphic and its role in a decade-long project by a nonprofit organization, this article demonstrates the potential rhetorical, social, and network functions of charter graphics; distinguishes them from charter documents; and offers suggestions for both practitioners and researchers.
Mohammad Kasra Habib, Stefan Wagner, Daniel Graziotin
Requirements Engineering (RE) is the initial step towards building a software system. The success or failure of a software project is firmly tied to this phase, which is based on communication among stakeholders using natural language. The problem with natural language is that it can easily lead to different understandings if it is not expressed precisely by the stakeholders involved, which results in building a product different from the expected one. Previous work proposed to enhance the quality of software requirements by detecting language errors based on ISO 29148 requirements language criteria. The existing solutions apply classical Natural Language Processing (NLP) to detect them. NLP has some limitations, such as domain dependence, which results in poor generalization capability. Therefore, this work aims to improve on the previous work by creating a manually labeled dataset and using ensemble learning, Deep Learning (DL), and techniques such as word embeddings and transfer learning to overcome the generalization problem tied to classical NLP and to improve precision and recall. The current findings show that the dataset is unbalanced and indicate which classes need more examples. It is tempting to train algorithms even if the dataset is not considerably representative; accordingly, the results show that the models are overfitting. In machine learning this issue is typically addressed by adding more instances to the dataset, improving label quality, removing noise, and reducing the complexity of the learning algorithms, which is planned for this research.
Md Jahidul Haque, Md Mamun Molla, Md Amirul Islam Khan
et al.
In this study, the three-dimensional lattice Boltzmann method (LBM) is combined with large-eddy simulation (LES), a popular turbulence modeling approach, using three non-dynamic sub-grid scale (SGS) models -- Smagorinsky, Vreman, and wall-adapting local eddy-viscosity (WALE) -- to compute the inhomogeneous turbulent airflow patterns inside a model room with a partition. The LES-LBM code is validated against the experimental results of Posner's model, in which the model room has one partition at the bottom, one inlet, and an outlet placed at the top wall. The LBM code is also validated, without any SGS model, against results for lid-driven flow in a cubic cavity. The numerical simulations are performed by GPU-accelerated parallel programs on the CUDA C platform, using a double-precision-capable NVIDIA Tesla K40 card with 2880 CUDA cores. GPUs have gained popularity in recent years as a promising platform for numerical simulation of fluid dynamics; indeed, their faster computational performance is one of the key reasons researchers choose GPUs over conventional CPUs for implementing data-intensive numerical methods such as the LBM. The effects of the SGS model are evaluated in terms of mean velocity profiles, streamlines, and turbulence characteristics, and significant differences are found among the results produced by the different SGS models.
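For context on the non-dynamic SGS closure named above, the classical Smagorinsky model computes a local eddy viscosity from the resolved strain-rate tensor as $\nu_t = (C_s \Delta)^2 |S|$ with $|S| = \sqrt{2 S_{ij} S_{ij}}$. The sketch below illustrates only this standard formula, not the authors' GPU code; the default constant `C_s = 0.17` is an assumption (a commonly used value), and `smagorinsky_nu_t` is a hypothetical helper name.

```python
import math

def smagorinsky_nu_t(S, delta, C_s=0.17):
    """Smagorinsky eddy viscosity nu_t = (C_s * delta)**2 * |S|,
    where |S| = sqrt(2 * sum_ij S_ij * S_ij) for a 3x3 resolved
    strain-rate tensor S, given as a list of lists."""
    S_mag = math.sqrt(2.0 * sum(S[i][j] * S[i][j]
                                for i in range(3) for j in range(3)))
    return (C_s * delta) ** 2 * S_mag

# Example: pure shear with S_12 = S_21 = 0.5 gives |S| = 1,
# so nu_t reduces to (C_s * delta)**2.
S_shear = [[0.0, 0.5, 0.0],
           [0.5, 0.0, 0.0],
           [0.0, 0.0, 0.0]]
nu_t = smagorinsky_nu_t(S_shear, delta=1.0)
```

In an LES-LBM solver this eddy viscosity would be added to the molecular viscosity to form a locally varying relaxation time; the Vreman and WALE models differ only in how the velocity-gradient tensor enters the closure.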
Panagiotis Lionakis, Giorgos Kritikakis, Ioannis G. Tollis
We present algorithms that extend the path-based hierarchical drawing framework and give experimental results. Our algorithms run in $O(km)$ time, where $k$ is the number of paths and $m$ is the number of edges of the graph, and provide better upper bounds than the original path-based framework: e.g., the height of the resulting drawings is equal to the length of the longest path of $G$, instead of $n-1$, where $n$ is the number of nodes. Additionally, we extend this framework by bundling and drawing all the edges of the DAG in $O(m + n \log n)$ time, using minimum extra width per path. We also provide a comparison to the well-known Sugiyama hierarchical drawing framework as a proof of concept. The experimental results show that our algorithms produce drawings that are better in area and number of bends, but worse in crossings for sparse graphs. Hence, our technique offers an interesting alternative for drawing hierarchical graphs. Finally, we present an $O(m + k \log k)$ time algorithm that computes a specific order of the paths in order to reduce the total edge length and the number of crossings and bends.
Giordano Da Lozzo, Anthony D'Angelo, Fabrizio Frati
In this paper we study the area requirements of planar greedy drawings of triconnected planar graphs. Cao, Strelzoff, and Sun exhibited a family $\cal H$ of subdivisions of triconnected plane graphs and claimed that every planar greedy drawing of the graphs in $\mathcal H$ respecting the prescribed plane embedding requires exponential area. However, we show that every $n$-vertex graph in $\cal H$ actually has a planar greedy drawing respecting the prescribed plane embedding on an $O(n)\times O(n)$ grid. This reopens the question whether triconnected planar graphs admit planar greedy drawings on a polynomial-size grid. Further, we provide evidence for a positive answer to the above question by proving that every $n$-vertex Halin graph admits a planar greedy drawing on an $O(n)\times O(n)$ grid. Both such results are obtained by actually constructing drawings that are convex and angle-monotone. Finally, we consider $\alpha$-Schnyder drawings, which are angle-monotone and hence greedy if $\alpha\leq 30^\circ$, and show that there exist planar triangulations for which every $\alpha$-Schnyder drawing with a fixed $\alpha<60^\circ$ requires exponential area for any resolution rule.
Network visualisation techniques are important tools for the exploratory analysis of complex systems. While these methods are regularly applied to visualise data on complex networks, we increasingly have access to time series data that can be modelled as temporal networks or dynamic graphs. In dynamic graphs, the temporal ordering of time-stamped edges determines the causal topology of a system, i.e., which nodes can, directly and indirectly, influence each other via a so-called causal path. This causal topology is crucial to understand dynamical processes, assess the role of nodes, or detect clusters. However, we lack graph drawing techniques that incorporate this information into static visualisations. Addressing this gap, we present a novel dynamic graph visualisation algorithm that utilises higher-order graphical models of causal paths in time series data to compute time-aware static graph visualisations. These visualisations combine the simplicity and interpretability of static graphs with a time-aware layout algorithm that highlights patterns in the causal topology that result from the temporal dynamics of edges.
Walter Didimo, Giuseppe Liotta, Giacomo Ortali
et al.
A planar orthogonal drawing $\Gamma$ of a planar graph $G$ is a geometric representation of $G$ such that the vertices are drawn as distinct points of the plane, the edges are drawn as chains of horizontal and vertical segments, and no two edges intersect except at their common end-points. A bend of $\Gamma$ is a point of an edge where a horizontal and a vertical segment meet. $\Gamma$ is bend-minimum if it has the minimum number of bends over all possible planar orthogonal drawings of $G$. This paper addresses a long-standing, widely studied, open question: Given a planar 3-graph $G$ (i.e., a planar graph with vertex degree at most three), what is the best computational upper bound to compute a bend-minimum planar orthogonal drawing of $G$ in the variable embedding setting? In this setting the algorithm can choose among the exponentially many planar embeddings of $G$ the one that leads to an orthogonal drawing with the minimum number of bends. We answer the question by describing an $O(n)$-time algorithm that computes a bend-minimum planar orthogonal drawing of $G$ with at most one bend per edge, where $n$ is the number of vertices of $G$. The existence of an orthogonal drawing algorithm that simultaneously minimizes the total number of bends and the number of bends per edge was previously unknown.
In this paper we explore the usage of rule engines in a graphical framework for visualising dynamic access control policies. We use the Drools rule engine to dynamically compute permissions, following the Category-Based Access Control metamodel.
Background: Biological data often originate from samples containing mixtures of subpopulations, corresponding e.g. to distinct cellular phenotypes. However, identification of distinct subpopulations may be difficult if biological measurements yield distributions that are not easily separable. Results: We present Multiresolution Correlation Analysis (MCA), a method for visually identifying subpopulations based on the local pairwise correlation between covariates, without needing to define an a priori interaction scale. We demonstrate that MCA facilitates the identification of differentially regulated subpopulations in simulated data from a small gene regulatory network, followed by application to previously published single-cell qPCR data from mouse embryonic stem cells. We show that MCA recovers previously identified subpopulations, provides additional insight into the underlying correlation structure, reveals potentially spurious compartmentalizations, and provides insight into novel subpopulations. Conclusions: MCA is a useful method for the identification of subpopulations in low-dimensional expression data, as emerging from qPCR or FACS measurements. With MCA it is possible to investigate the robustness of covariate correlations with respect to subpopulations, graphically identify outliers, and identify factors contributing to differential regulation between pairs of covariates. MCA thus provides a framework for investigation of expression correlations for genes of interest and biological hypothesis generation.
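The core ingredient of the approach above, local pairwise correlation evaluated at multiple scales, can be illustrated with a simple sketch: compute Pearson correlation in sliding windows over the data and vary the window size to obtain the multiresolution view. This is an illustrative toy, not the published MCA algorithm; the function names `pearson` and `local_correlations` and the window-over-sorted-points scheme are assumptions made for the example.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

def local_correlations(x, y, window):
    """Pearson correlation in sliding windows over points sorted by x.
    Scanning `window` across a range of sizes gives a crude
    multiresolution picture of how locally two covariates co-vary."""
    pts = sorted(zip(x, y))
    return [pearson([p[0] for p in pts[i:i + window]],
                    [p[1] for p in pts[i:i + window]])
            for i in range(len(pts) - window + 1)]

# A subpopulation shows up as a run of windows whose local correlation
# differs markedly from the global correlation of the full sample.
rs = local_correlations(list(range(10)), [2 * v + 1 for v in range(10)], window=4)
```

A run of windows with near-zero local correlation inside a globally correlated dataset is the kind of signature that would flag a differentially regulated subpopulation.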
We consider embeddings of planar graphs in $R^2$ where vertices map to points and edges map to polylines. We refer to such an embedding as a polyline drawing, and ask how few bends are required to form such a drawing for an arbitrary planar graph. It has long been known that even when the vertex locations are completely fixed, a planar graph admits a polyline drawing where edges bend a total of $O(n^2)$ times. Our results show that this number of bends is optimal. In particular, we show that $Ω(n^2)$ total bends are required to form a polyline drawing on any set of fixed vertex locations for almost all planar graphs. This result generalizes all previously known lower bounds, which only applied to convex point sets, and settles two open problems.
Bernardo M. Ábrego, Oswin Aichholzer, Silvia Fernández-Merchant
et al.
The Harary-Hill Conjecture states that the number of crossings in any drawing of the complete graph $ K_n $ in the plane is at least $Z(n):=\frac{1}{4}\left\lfloor \frac{n}{2}\right\rfloor \left\lfloor\frac{n-1}{2}\right\rfloor \left\lfloor \frac{n-2}{2}\right\rfloor\left\lfloor \frac{n-3}{2}\right\rfloor$. In this paper, we settle the Harary-Hill conjecture for {\em shellable drawings}. We say that a drawing $D$ of $ K_n $ is {\em $ s $-shellable} if there exist a subset $ S = \{v_1,v_2,\ldots,v_ s\}$ of the vertices and a region $R$ of $D$ with the following property: For all $1 \leq i < j \leq s$, if $D_{ij}$ is the drawing obtained from $D$ by removing $v_1,v_2,\ldots v_{i-1},v_{j+1},\ldots,v_{s}$, then $v_i$ and $v_j$ are on the boundary of the region of $D_{ij}$ that contains $R$. For $ s\geq n/2 $, we prove that the number of crossings of any $ s $-shellable drawing of $ K_n $ is at least the long-conjectured value $Z(n)$. Furthermore, we prove that all cylindrical, $ x $-bounded, monotone, and 2-page drawings of $ K_n $ are $ s $-shellable for some $ s\geq n/2 $ and thus they all have at least $ Z(n) $ crossings. The techniques developed provide a unified proof of the Harary-Hill conjecture for these classes of drawings.
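The conjectured bound $Z(n)$ is easy to evaluate directly from the abstract's formula; the small helper below (the name `Z` is just an illustrative choice) matches it term by term and reproduces the known small crossing numbers $cr(K_5)=1$ and $cr(K_6)=3$.

```python
def Z(n):
    """Harary-Hill bound
    Z(n) = (1/4) * floor(n/2) * floor((n-1)/2) * floor((n-2)/2) * floor((n-3)/2).
    The product of the four floors is always divisible by 4 (two of the
    factors are consecutive integers, for n even and odd alike), so
    integer division is exact."""
    return (n // 2) * ((n - 1) // 2) * ((n - 2) // 2) * ((n - 3) // 2) // 4

# Z(5) = 1 and Z(6) = 3, matching the known crossing numbers of K_5 and K_6.
values = [Z(n) for n in range(5, 11)]
```

For $n \le 12$ the conjecture is known to hold with equality, which makes such a table a convenient sanity check when experimenting with drawings of $K_n$.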