The Mu2e experiment at Fermilab will search for the charged lepton flavour violating process of coherent neutrinoless muon-to-electron conversion in the presence of an aluminum nucleus. The muons are produced by an 8 GeV proton beam from the Fermilab Booster striking a production target to create hadrons that decay to muons. The production target design space is strongly constrained by a required one-year operating lifetime and the need for radiative cooling in a vacuum. Uncertainties in the lifetime of the existing baseline design - a monolithic, segmented tungsten (WL10) target - are large, particularly due to unknown effects of radiation damage at the very high proton fluences expected in the experiment. We have begun evaluating a new design utilizing Inconel 718. Here, we present an engineering analysis of a prototype modular design, specifically its thermal management, structural stability, fatigue lifetime, and fabrication changes. The results support a promising new target design for the Mu2e experiment.
Elisabeth Halser, Elisabeth Finhold, Neele Leithäuser
et al.
Optimizing a building's energy supply design is a task with multiple competing criteria, where not only monetary but also, for example, an environmental objective shall be taken into account. Moreover, when deciding which storage, heating, and cooling units to purchase (here-and-now decisions), there is uncertainty about future developments of energy prices, e.g. for electricity and gas. This can be accounted for later by operating the units accordingly (wait-and-see decisions), once the uncertainty has revealed itself. Therefore, the problem can be modeled as an adjustable robust optimization problem. We combine adjustable robustness and multicriteria optimization for the case of building energy supply design and solve the resulting problem using a column and constraint generation algorithm in combination with an $\varepsilon$-constraint approach. In the multicriteria adjustable robust problem, we simultaneously minimize worst-case cost regret and carbon emissions. We take into account future price uncertainties and consider the results in the light of information gap decision theory to find a trade-off between security against price fluctuations and over-conservatism. We present the model and a solution strategy, and discuss different application scenarios for a case study building.
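The $\varepsilon$-constraint approach mentioned above turns a bi-objective problem into a family of single-objective ones: minimize cost subject to a sweep of emission caps. A minimal sketch on a toy, made-up set of supply designs (the design names, costs, and emission values are all hypothetical, not from the paper's case study):

```python
# Toy illustration of the epsilon-constraint method for a bi-objective
# design problem: minimize cost subject to emissions <= eps, sweeping eps.
# Each candidate design maps to (cost, emissions) -- made-up numbers.
designs = {
    "gas_boiler":        (100, 90),
    "heat_pump":         (160, 40),
    "heat_pump_storage": (190, 30),
    "pv_heat_pump":      (220, 15),
}

def epsilon_constraint(designs, eps_values):
    """For each emission cap eps, pick the cheapest feasible design.

    Collecting the minimizers over a sweep of eps traces out the
    Pareto front of the (cost, emissions) trade-off.
    """
    pareto = []
    for eps in eps_values:
        feasible = {k: v for k, v in designs.items() if v[1] <= eps}
        if not feasible:
            continue  # cap too tight: no design qualifies
        best = min(feasible, key=lambda k: feasible[k][0])
        point = (best, designs[best])
        if point not in pareto:  # keep each Pareto point once
            pareto.append(point)
    return pareto

front = epsilon_constraint(designs, eps_values=[95, 45, 32, 20])
for name, (cost, emissions) in front:
    print(f"{name}: cost={cost}, emissions={emissions}")
```

In the paper's setting each single-objective subproblem is itself an adjustable robust problem solved by column and constraint generation; the sweep logic around it is the same.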
High-level automation is increasingly critical in AI, driven by rapid advances in large language models (LLMs) and AI agents. However, LLMs, despite their general reasoning power, struggle significantly in specialized, data-sensitive tasks such as designing Graph Neural Networks (GNNs). This difficulty arises from (1) the inherent knowledge gaps in modeling the intricate, varying relationships between graph properties and suitable architectures and (2) the external noise from misleading descriptive inputs, often resulting in generic or even misleading model suggestions. Achieving proficiency in designing data-aware models -- defined as the meta-level capability to systematically accumulate, interpret, and apply data-specific design knowledge -- remains challenging for existing automated approaches, due to their inefficient construction and application of meta-knowledge. To achieve meta-level proficiency, we propose DesiGNN, a knowledge-centered framework that systematically converts past model design experience into structured, fine-grained knowledge priors well-suited for meta-learning with LLMs. To account for the inherent variability and external noise, DesiGNN aligns empirical property filtering from extensive benchmarks with adaptive elicitation of literature insights via LLMs. By constructing a solid bridge of meta-knowledge between the understanding of unseen graphs and known effective architecture patterns, DesiGNN can deliver top-5.77% initial model proposals for unseen datasets within seconds and achieve consistently superior performance with minimal search cost compared to baselines.
Mohammad Reza Kolani, Stavros Nousias, André Borrmann
Utilizing robotic systems in the construction industry is gaining popularity due to gains in build time, precision, and efficiency. In this paper, we introduce a system that allows the coordination of multiple manipulator robots for construction activities. As a case study, we chose robotic brick wall assembly. By utilizing a multi-robot system in which arm manipulators collaborate with each other, the entirety of a potentially long wall can be assembled simultaneously. However, the reduction of overall bricklaying time depends on minimizing the time required by each individual manipulator. In this paper, we run simulations with various placements of the material and the robots' bases, as well as different robot configurations, to determine the optimal positions of the robots and material and the best robot configuration. The simulation results provide users with insights into how to find the best placement of robots and raw materials for brick wall assembly.
Chip design is about to be revolutionized by the integration of large language, multimodal, and circuit models (collectively LxMs). While exploring this exciting frontier with tremendous potential, the community must also carefully consider the related security risks and the need to build trust in the use of LxMs for chip design. First, we review the recent surge of using LxMs for chip design in general. We cover state-of-the-art works for the automation of hardware description language code generation and for scripting and guidance of essential but cumbersome tasks for electronic design automation tools, e.g., design-space exploration, tuning, or designer training. Second, we raise and provide initial answers to novel research questions on critical issues for security and trustworthiness of LxM-powered chip design from both the attack and defense perspectives.
Hardware design workflows rely on Process Design Kits (PDKs) from different fabrication nodes, each containing standard cell libraries optimized for speed, power, or density. Engineers typically navigate between the design and target PDK to make informed decisions, such as selecting gates for area optimization or enhancing the speed of the critical path. However, this process is often manual, time-consuming, and prone to errors. To address this, we present ChipXplore, a multi-agent collaborative framework powered by large language models that enables engineers to query hardware designs and PDKs using natural language. By exploiting the structured nature of PDK and hardware design data, ChipXplore retrieves relevant information through customized text-to-SQL and text-to-Cypher workflows. The framework achieves an execution accuracy of 97.39\% on complex natural language queries and improves productivity by making retrieval 5.63x faster while reducing errors by 5.25x in user studies. Compared to generic workflows, ChipXplore's customized workflow is capable of orchestrating reasoning and planning over multiple databases, improving accuracy by 29.78\%. ChipXplore lays the foundation for building autonomous agents capable of tackling diverse physical design tasks that require PDK and hardware design awareness.
Exploiting the recent advancements in artificial intelligence, showcased by ChatGPT and DALL-E, in real-world applications necessitates vast, domain-specific, and publicly accessible datasets. Unfortunately, the scarcity of such datasets poses a significant challenge for researchers aiming to apply these breakthroughs in engineering design. Synthetic datasets emerge as a viable alternative. However, practitioners are often uncertain about generating high-quality datasets that accurately represent real-world data and are suitable for the intended downstream applications. This study aims to fill this knowledge gap by proposing comprehensive guidelines for generating, annotating, and validating synthetic datasets. The trade-offs and methods associated with each of these aspects are elaborated upon. Further, the practical implications of these guidelines are illustrated through the creation of a turbo-compressors dataset. The study underscores the importance of thoughtful sampling methods to ensure the appropriate size, diversity, utility, and realism of a dataset. It also highlights that design diversity does not equate to performance diversity or realism. By employing test sets that represent uniform, real, or task-specific samples, the influence of sample size and sampling strategy is scrutinized. Overall, this paper offers valuable insights for researchers intending to create and publish synthetic datasets for engineering design, thereby paving the way for more effective applications of AI advancements in the field. The code and data for the dataset and methods are made publicly accessible at https://github.com/cyrilpic/radcomp .
This study explores the potential of generative artificial intelligence (AI) models, specifically OpenAI's generative pre-trained transformer (GPT) series, when integrated with building information modeling (BIM) tools as an interactive design assistant for architectural design. The research involves the development and implementation of three key components: 1) BIM2XML, a component that translates BIM data into extensible markup language (XML) format; 2) Generative AI-enabled Interactive Architectural design (GAIA), a component that refines the input design in XML by identifying designer intent, relevant objects, and their attributes, using pre-trained language models; and 3) XML2BIM, a component that converts AI-generated XML data back into a BIM tool. This study validated the proposed approach through a case study involving design detailing, using the GPT series and Revit. Our findings demonstrate the effectiveness of state-of-the-art language models in facilitating dynamic collaboration between architects and AI systems, highlighting the potential for further advancements.
We present a new high-level synthesis methodology for using large language model tools to generate hardware designs. The methodology uses exclusively open-source tools, excluding the large language model. As a case study, we use our methodology to generate a permuted congruential random number generator design with a Wishbone interface. We verify the functionality and quality of the random number generator design using large language model-generated simulations and the Dieharder randomness test suite. We document all the large language model chat logs, Python scripts, Verilog scripts, and simulation results used in the case study. We believe that our method of hardware design generation coupled with the open-source 130 nm silicon design tools will revolutionize application-specific integrated circuit design. Our methodology significantly lowers the barrier to entry when building domain-specific computing accelerators for the Internet of Things and proof-of-concept prototypes for later fabrication in more modern process nodes.
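The core of the case study is a permuted congruential generator (PCG). As a reference for what such a design computes, here is the public-domain PCG32 recurrence in Python: a 64-bit LCG step followed by an xorshift-and-rotate output permutation. This sketches the algorithm only, not the paper's Verilog or its Wishbone interface:

```python
# Minimal PCG32 sketch: 64-bit LCG state, 32-bit permuted output.
MULT = 6364136223846793005      # standard PCG 64-bit LCG multiplier
MASK64 = (1 << 64) - 1

class PCG32:
    def __init__(self, seed, seq):
        # Seeding follows the PCG reference: odd stream increment,
        # one step, add the seed, one more step.
        self.state = 0
        self.inc = ((seq << 1) | 1) & MASK64
        self.next_u32()
        self.state = (self.state + seed) & MASK64
        self.next_u32()

    def next_u32(self):
        old = self.state
        self.state = (old * MULT + self.inc) & MASK64
        # Output permutation: xorshift-high, then rotate by the top 5 bits.
        xorshifted = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF
        rot = old >> 59
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF

rng = PCG32(seed=42, seq=54)
print([hex(rng.next_u32()) for _ in range(4)])
```

The permutation step is what lets a PCG pass statistical suites like Dieharder despite the weak low bits of a plain LCG.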
Andrei Paleyes, Henry B. Moss, Victor Picheny
et al.
We present HIghly Parallelisable Pareto Optimisation (HIPPO) -- a batch acquisition function that enables multi-objective Bayesian optimisation methods to efficiently exploit parallel processing resources. Multi-Objective Bayesian Optimisation (MOBO) is a very efficient tool for tackling expensive black-box problems. However, most MOBO algorithms are designed as purely sequential strategies, and existing batch approaches are prohibitively expensive for all but the smallest of batch sizes. We show that by encouraging batch diversity through penalising evaluations with similar predicted objective values, HIPPO is able to cheaply build large batches of informative points. Our extensive experimental validation demonstrates that HIPPO is at least as efficient as existing alternatives whilst incurring an order of magnitude lower computational overhead and scaling easily to batch sizes considerably higher than currently supported in the literature. Additionally, we demonstrate the application of HIPPO to a challenging heat exchanger design problem, stressing the real-world utility of our highly parallelisable approach to MOBO.
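The diversity mechanism described above, down-weighting candidates whose predicted objective values are close to already-selected batch members, can be sketched greedily in a few lines of numpy. The `select_batch` function and its Gaussian penalty are illustrative assumptions, not HIPPO's exact acquisition; it assumes a nonnegative base acquisition (e.g., an improvement-style score):

```python
import numpy as np

def select_batch(acq, pred_obj, batch_size, length_scale=1.0):
    """Greedily build a batch, penalising similarity in objective space.

    acq:      nonnegative acquisition value per candidate, shape (n,)
    pred_obj: predicted objective vectors, shape (n, n_objectives)
    Each pick multiplies every candidate's score by a factor that is
    near 0 for candidates predicted to behave like the new pick and
    near 1 for candidates predicted to behave differently.
    """
    acq = np.asarray(acq, dtype=float)
    pred_obj = np.asarray(pred_obj, dtype=float)
    available = np.ones(len(acq), dtype=bool)
    penalty = np.ones(len(acq))
    chosen = []
    for _ in range(batch_size):
        scores = np.where(available, acq * penalty, -np.inf)
        i = int(np.argmax(scores))
        chosen.append(i)
        available[i] = False
        d2 = np.sum((pred_obj - pred_obj[i]) ** 2, axis=1)
        penalty *= 1.0 - np.exp(-d2 / (2 * length_scale**2))
    return chosen

# Candidates 0 and 1 are predicted near-duplicates; 2 is distinct.
acq = np.array([1.0, 0.99, 0.5])
pred = np.array([[0.0, 0.0], [0.01, 0.0], [5.0, 5.0]])
print(select_batch(acq, pred, batch_size=2))
```

Because the penalty only needs model predictions, each additional batch member costs one cheap pass over the candidates, which is the source of the scaling advantage over joint batch acquisitions.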
Within the environmental context, numerical modeling is a promising approach to assessing the energy efficiency of buildings. Resilient buildings need to be designed that are capable of adapting to future extreme heat. Simulations are required assuming a one-dimensional heat transfer problem through walls and a simulation horizon of several years (nearly 30). The computational cost associated with such modeling is quite significant, and model reduction methods are worth investigating. The objective is to propose a reliable reduced-order model for such long-term simulations. For this, an alternative model reduction approach is investigated, assuming a known Proper Orthogonal Decomposition reduced basis for time, and not for space as usual. The model enables computing parametric solutions using basis interpolation on the tangent space of the \textsc{Grassmann} manifold. Three study cases are considered to verify the efficiency of the reduced-order model. Results highlight that the model has a satisfying accuracy of $10^{-3}$ compared to reference solutions. The last case study focuses on the wall energy efficiency design under climate change according to a four-dimensional parameter space. The latter is composed of the load material emissivity, heat capacity, thermal conductivity, and insulation layer thickness. Simulations are carried out over $30$ years considering climate change. The solution minimizing the wall work rate is determined with a computational ratio of $0.1\%$ compared to standard approaches.
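A Proper Orthogonal Decomposition basis is typically extracted from a snapshot matrix via the SVD. The numpy sketch below arranges snapshots with rows indexed by time, so the retained modes are temporal, in the spirit of the abstract's time-basis reduction; the synthetic temperature field (a sum of two decaying spatial modes) is a made-up stand-in for a wall simulation:

```python
import numpy as np

# Synthetic snapshot matrix: rows = time instants, columns = space points.
x = np.linspace(0.0, 1.0, 60)
t = np.linspace(0.0, 1.0, 200)
T, X = np.meshgrid(t, x, indexing="ij")
snapshots = np.exp(-T) * np.sin(np.pi * X) \
    + 0.3 * np.exp(-4.0 * T) * np.sin(3.0 * np.pi * X)

# SVD: with this layout the left singular vectors are temporal modes.
Phi, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 2                        # this toy field has exactly two modes
basis = Phi[:, :r]           # POD reduced basis in time, shape (n_t, r)
coeffs = basis.T @ snapshots # projection onto the temporal basis
recon = basis @ coeffs       # rank-r reconstruction

err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
print(f"relative reconstruction error with {r} temporal modes: {err:.1e}")
```

The singular values `s` indicate how many modes to keep for a target accuracy such as the $10^{-3}$ reported in the abstract; the Grassmann-manifold interpolation step for new parameter values is beyond this sketch.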
Bengisu Cagiltay, Joseph Michaelis, Sarah Sebo
et al.
Research in child-robot interactions suggests that engaging in "care-taking" of a social robot, such as tucking the robot in at night, can strengthen relationships formed between children and robots. In this work, we aim to better understand and explore the design space of caretaking activities with 10 children aged 8--12, from eight families, involving an exploratory design session followed by a preliminary feasibility testing of robot caretaking activities. The design sessions provided insight into children's current caretaking tasks, how they would take care of a social robot, and how these new caretaking activities could be integrated into their daily routines. The feasibility study tested two different types of robot caretaking tasks, which we call connection and utility, and measured their short-term effects on children's perceptions of and closeness to the social robot. We discuss the themes and present interaction design guidelines for robot caretaking activities for children.
We propose a flexible, co-creative framework bringing together multiple machine learning techniques to assist human users to efficiently produce effective creative designs. We demonstrate its potential with a perfume bottle design case study, including human evaluation and quantitative and qualitative analyses.
Roof falls due to geological conditions are major safety hazards in the mining and tunneling industries, causing lost work time, injuries, and fatalities. Several large-opening limestone mines in the Eastern and Midwestern United States have roof fall problems caused by high horizontal stresses. The typical hazard management approach for this type of roof fall hazard relies heavily on visual inspections and expert knowledge. In this study, we propose an artificial intelligence (AI) based system for the detection of roof fall hazards caused by high horizontal stresses. We use images depicting hazardous and non-hazardous roof conditions to develop a convolutional neural network for autonomous detection of hazardous roof conditions. To compensate for limited input data, we utilize a transfer learning approach, in which an already-trained network is used as a starting point for classification in a similar domain. Results confirm that this approach works well for classifying roof conditions as hazardous or safe, achieving a statistical accuracy of 86%. However, accuracy alone is not enough to ensure a reliable hazard management system. System constraints and reliability are improved when the features being used by the network are understood. Therefore, we used a deep learning interpretation technique called integrated gradients to identify the important geologic features in each image for prediction. The analysis of integrated gradients shows that the system mimics expert judgment on roof fall hazard detection. The system developed in this paper demonstrates the potential of deep learning in geological hazard management to complement human experts, and is likely to become an essential part of autonomous tunneling operations in those cases where hazard identification heavily depends on expert knowledge.
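Integrated gradients attributes a model's prediction to its input features by averaging gradients along a straight path from a baseline to the input, then scaling by the input-baseline difference. A minimal numpy sketch on a toy logistic model (the weights and inputs are made up; the CNN in the study is far larger, but the attribution formula is the same):

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate IG_i = (x_i - b_i) * mean gradient along the path b -> x.

    grad_fn returns the model's gradient at a point; the midpoint rule
    approximates the path integral over interpolation coefficients alpha.
    """
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad_fn(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad

# Toy "model": logistic score over three image features, made-up weights.
w = np.array([2.0, -1.0, 0.5])
f = lambda x: 1.0 / (1.0 + np.exp(-w @ x))
grad = lambda x: f(x) * (1.0 - f(x)) * w  # analytic sigmoid gradient

x = np.array([1.0, 0.5, -2.0])   # hypothetical input features
b = np.zeros(3)                  # all-zero baseline
ig = integrated_gradients(grad, x, b)
print("attributions:", ig)
print("sum:", ig.sum(), "vs f(x) - f(b):", f(x) - f(b))
```

The completeness property, attributions summing to the difference between the prediction at the input and at the baseline, is what makes the per-pixel maps in the study interpretable as shares of the hazard score.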