Modern engineering design platforms excel at discipline-specific tasks such as CAD, CAM, and CAE, but often lack native systems engineering frameworks. This creates a disconnect where system-level requirements and architectures are managed separately from detailed component design, hindering holistic development and increasing integration risks. To address this, we present the conceptual framework for the GenAI Workbench, a Model-Based Systems Engineering (MBSE) environment that integrates systems engineering principles into the designer's workflow. Built on an open-source PLM platform, it establishes a unified digital thread by linking semantic data from documents, physical B-rep geometry, and relational system graphs. The workbench facilitates an AI-assisted workflow where a designer can ingest source documents, from which the system automatically extracts requirements and uses vision-language models to generate an initial system architecture, such as a Design Structure Matrix (DSM). This paper presents the conceptual architecture, proposed methodology, and anticipated impact of this work-in-progress framework, which aims to foster a more integrated, data-driven, and informed engineering design methodology.
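The Design Structure Matrix the workbench is described as generating can be encoded very simply. The sketch below, with hypothetical component names and dependencies, illustrates the underlying data structure only; it is not the workbench's actual implementation:

```python
import numpy as np

def build_dsm(components, interactions):
    """Binary Design Structure Matrix: dsm[i, j] = 1 means component i
    depends on (or interacts with) component j."""
    idx = {name: i for i, name in enumerate(components)}
    dsm = np.zeros((len(components), len(components)), dtype=int)
    for src, dst in interactions:
        dsm[idx[src], idx[dst]] = 1
    return dsm

# Hypothetical components and automatically extracted interactions
parts = ["pump", "valve", "controller"]
links = [("pump", "valve"), ("controller", "pump"), ("controller", "valve")]
dsm = build_dsm(parts, links)
```

Row/column ordering is what DSM clustering and sequencing algorithms would later operate on.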
T. Adibi, Seyed Esmail Razavi, Shams Forruque Ahmed
et al.
Noise control in fluid structures is important for improving system performance, reducing noise pollution, and ensuring structural integrity in many engineering applications. Whereas past studies have mainly examined fluid–structure interaction and acoustics separately, the present study fills this gap with a coupled analysis of fluid flow over an elastic baffle, considering the interaction among fluid flow, structural deformation, and acoustic emissions. The effect of the elastic baffle on velocity, pressure, and induced vibration frequency is evaluated to examine how it affects fluid movement and the mitigation of noise. A finite element method and a Galerkin method are used for discretization. Among the most significant findings, reducing the baffle length by 50% yields a 50% decrease in noise transmission, supporting the idea that geometric optimization can be employed to manage noise; varying the baffle height, by contrast, can increase noise transmission by 27%. These insights can serve as a foundation for optimizing baffle layout, particularly in heating, ventilation, and air conditioning (HVAC) systems, automotive exhaust systems, and aerospace components, where sound mitigation is an important consideration. The study also offers a better understanding of fluid–structure–acoustic interactions and a practical route toward quieter, more efficient systems.
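As an illustration of the kind of Galerkin discretization mentioned in the abstract, the sketch below estimates the first bending frequency of a cantilevered elastic baffle with a one-term Galerkin (Rayleigh-quotient) approximation. The cantilever idealization and the trial shape are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def _trap(y, x):
    """Trapezoidal integration on a (possibly nonuniform) grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def first_frequency_galerkin(E, I, rho, A, L, n=100_001):
    """One-term Galerkin / Rayleigh-quotient estimate of the first
    bending frequency of a cantilever, using the trial shape
    phi(x) = 1 - cos(pi x / (2 L)), which satisfies the clamped-end
    conditions phi(0) = phi'(0) = 0."""
    x = np.linspace(0.0, L, n)
    phi = 1.0 - np.cos(np.pi * x / (2.0 * L))
    phi_pp = (np.pi / (2.0 * L)) ** 2 * np.cos(np.pi * x / (2.0 * L))
    num = E * I * _trap(phi_pp ** 2, x)   # strain-energy integral
    den = rho * A * _trap(phi ** 2, x)    # kinetic-energy integral
    return np.sqrt(num / den)             # rad/s

# Nondimensional check: with E*I = rho*A = L = 1, the exact eigenvalue is
# (beta L)^2 = 1.875^2 = 3.516; the one-term estimate lands within ~5%.
omega = first_frequency_galerkin(E=1.0, I=1.0, rho=1.0, A=1.0, L=1.0)
```

A single assumed mode already captures the fundamental frequency to a few percent, which is why such low-order Galerkin models are useful for quick parameter sweeps over baffle geometry.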
Since the creation of the spatially oriented format for acoustics (SOFA, Audio Engineering Society standard AES69), numerous databases of head-related transfer functions (HRTFs) are now available as standardized SOFA files. However, the methodologies for measuring and postprocessing HRTFs vary significantly across laboratories. This leads to objective and perceptual inconsistencies between HRTF databases and makes it challenging to integrate multiple databases into a single repository to facilitate wide-scale research and application. This paper introduces a normalization procedure, applicable to any HRTF data set, aimed at enhancing the consistency across HRTF data sets obtained from different laboratories while preserving the spatial information essential to HRTFs. The proposed approach consists of six processing steps: low-pass filtering, temporal alignment, temporal windowing, diffuse-field equalization, low-frequency extrapolation, and far-field correction. The normalization was evaluated on 17 HRTF data sets of the same dummy head by means of acoustic analyses and auditory simulations and further validated with respect to a database of 54 human subjects. Results show that the proposed normalization improves data set applicability and consistency while maintaining the directional cues within each data set.
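Two of the six steps, temporal alignment and temporal windowing, can be sketched on a head-related impulse response (HRIR) as below. The onset threshold, fixed pre-delay, and window lengths are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

def align_onset(hrir, pre_samples=16, threshold_db=-20.0):
    """Temporal alignment: shift the HRIR so its onset (first sample
    within threshold_db of the peak magnitude) lands at a fixed index."""
    mag = np.abs(hrir)
    thresh = mag.max() * 10.0 ** (threshold_db / 20.0)
    onset = int(np.argmax(mag >= thresh))
    return np.roll(hrir, pre_samples - onset)  # circular shift; fine for a sketch

def window_hrir(hrir, n_keep=128, fade=16):
    """Temporal windowing: truncate the tail and fade it out with a
    half-Hann window to suppress late reflections."""
    out = hrir[:n_keep].astype(float).copy()
    out[-fade:] *= 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, fade)))
    return out
```

Applying identical alignment and windowing to every data set is what removes lab-specific delay and room-reflection differences before the spectral steps.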
Space-folded acoustic metamaterials (SFAMs) offer exceptional low-frequency sound control at the deep sub-wavelength scale, showing significant potential for weak acoustic signal detection. However, lowering the working frequencies of SFAMs may increase the device size, limiting their applications and raising production costs. To address these issues, this study proposes compact SFAM acoustic amplifiers operating at low frequencies, whose working frequencies are flexibly controlled by engineering the effective density without increasing device size. Experiments demonstrate wide-range frequency control of the SFAM through engineering of its effective density. The experimental results indicate that the low-frequency tuning of SFAMs stems from controlling the channel-width gradients, which set the spatial distribution of the metamaterial's effective density. Further studies reveal that, compared with a gradient density distribution, a step density distribution may offer superior low-frequency control of acoustic waves, enabling frequency tuning of the SFAM acoustic amplifier over more than one octave band. This work opens possibilities for low-frequency acoustic amplifiers that are compact, lightweight, and low cost, qualities that are highly desirable for acoustic sensing and noise control applications.
Context: Jupyter Notebook has emerged as a versatile tool that transforms how researchers, developers, and data scientists conduct and communicate their work. As the adoption of Jupyter notebooks continues to rise, so does the interest from the software engineering research community in improving the software engineering practices for Jupyter notebooks. Objective: The purpose of this study is to analyze trends, gaps, and methodologies used in software engineering research on Jupyter notebooks. Method: We selected 146 relevant publications from the DBLP Computer Science Bibliography up to the end of 2024, following established systematic literature review guidelines. We explored publication trends, categorized them based on software engineering topics, and reported findings based on those topics. Results: The most popular venues for publishing software engineering research on Jupyter notebooks are related to human-computer interaction instead of traditional software engineering venues. Researchers have addressed a wide range of software engineering topics on notebooks, such as code reuse, readability, and execution environment. Although reusability is one of the research topics for Jupyter notebooks, only 64 of the 146 studies can be reused based on their provided URLs. Additionally, most replication packages are not hosted on permanent repositories for long-term availability and adherence to open science principles. Conclusion: Solutions specific to notebooks for software engineering issues, including testing, refactoring, and documentation, are underexplored. Future research opportunities exist in automatic testing frameworks, refactoring clones between notebooks, and generating group documentation for coherent code cells.
Bianca Trinkenreich, Fabio Calefato, Geir Hanssen
et al.
The adoption of Large Language Models (LLMs) is not only transforming software engineering (SE) practice but is also poised to fundamentally disrupt how research is conducted in the field. While perspectives on this transformation range from viewing LLMs as mere productivity tools to considering them revolutionary forces, we argue that the SE research community must proactively engage with and shape the integration of LLMs into research practices, emphasizing human agency in this transformation. As LLMs rapidly become integral to SE research - both as tools that support investigations and as subjects of study - a human-centric perspective is essential. Ensuring human oversight and interpretability is necessary for upholding scientific rigor, fostering ethical responsibility, and driving advancements in the field. Drawing from discussions at the 2nd Copenhagen Symposium on Human-Centered AI in SE, this position paper employs McLuhan's Tetrad of Media Laws to analyze the impact of LLMs on SE research. Through this theoretical lens, we examine how LLMs enhance research capabilities through accelerated ideation and automated processes, make some traditional research practices obsolete, retrieve valuable aspects of historical research approaches, and risk reversal effects when taken to extremes. Our analysis reveals opportunities for innovation and potential pitfalls that require careful consideration. We conclude with a call to action for the SE research community to proactively harness the benefits of LLMs while developing frameworks and guidelines to mitigate their risks, to ensure continued rigor and impact of research in an AI-augmented future.
Background: Harnessing advanced computing for scientific discovery and technological innovation demands scientists and engineers well-versed in both domain science and computational science and engineering (CSE). However, few universities provide access to both integrated domain science/CSE cross-training and Top-500 High-Performance Computing (HPC) facilities. National laboratories offer internship opportunities capable of developing these skills. Purpose: This study presents an evaluation of federally funded postgraduate internship outcomes at a national laboratory. It seeks to answer three questions: 1) What computational skills, research skills, and professional skills do students improve through internships at the selected national laboratory? 2) Do students gain knowledge in domain science topics through their internships? 3) Do students' career interests change after these internships? Design/Method: We developed a survey, collected responses from past participants of five federally funded internship programs, and compared participants' ratings of their prior experience to their internship experience. Findings: Our results indicate that participants improve CSE skills and domain science knowledge and become more interested in working at national labs. Participants go on to degree programs and positions in relevant domain science topics after their internships. Conclusions: We show that national laboratory internships give students an opportunity to build CSE skills that may not be available at all institutions. We also show growth in domain science skills during the internships through direct exposure to research topics. The survey instrument and approach may be adapted to other studies measuring the impact of postgraduate internships across disciplines and internship settings.
Good speech intelligibility in university classrooms is crucial to the learning process, ensuring that students can clearly hear all conversations taking place in the classroom. While it is well known that speech intelligibility depends on the geometrical characteristics of a space and the properties of its surfaces, other factors also need to be considered. Among the most important are the heating, ventilation, and air conditioning (HVAC) systems used in classrooms. Fan noise from HVAC systems increases the background noise level (BNL), negatively affecting speech intelligibility. In addition, the air movement these systems cause alters room acoustic variables. Although this dynamic situation is often overlooked in the early design stages, HVAC systems are often active during lectures and influence acoustic variables, especially the speech transmission index (STI). In this study, the impact of HVAC systems on the STI was measured in five different unoccupied classrooms in the Rafet Kayış Faculty of Engineering at Alanya Alaaddin Keykubat University. The results were evaluated against the relevant standards and offer insights for researchers, architects, and engineers working in the field of acoustics.
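The degradation mechanism described here, with reverberation and background noise both reducing modulation depth, can be sketched using the simplified modulation transfer function underlying the STI method of IEC 60268-16. The single-band reduction below ignores octave-band weighting and auditory masking, so it is an illustrative approximation only:

```python
import numpy as np

# The 14 modulation frequencies used by the STI method (Hz)
MOD_FREQS = [0.63, 0.8, 1.0, 1.25, 1.6, 2.0, 2.5,
             3.15, 4.0, 5.0, 6.3, 8.0, 10.0, 12.5]

def sti_single_band(t60, snr_db):
    """Simplified single-band STI from reverberation time T60 (s) and
    speech-to-noise ratio (dB). For each modulation frequency F, the
    modulation index m(F) combines an exponential-decay reverberation
    term with a stationary-noise term."""
    tis = []
    for f in MOD_FREQS:
        m_rev = 1.0 / np.sqrt(1.0 + (2.0 * np.pi * f * t60 / 13.8) ** 2)
        m_noise = 1.0 / (1.0 + 10.0 ** (-snr_db / 10.0))
        m = m_rev * m_noise
        # effective SNR, clipped to +/- 15 dB, then mapped to [0, 1]
        snr_eff = np.clip(10.0 * np.log10(m / (1.0 - m)), -15.0, 15.0)
        tis.append((snr_eff + 15.0) / 30.0)
    return float(np.mean(tis))
```

Lowering the speech-to-noise ratio, as HVAC fan noise does, monotonically lowers this STI estimate even when the reverberation time is unchanged.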
Luíza Rívero Rago, Maria Fernanda de Oliveira, Simone Tavares
et al.
This paper reports on a teaching experience at the School of Civil Engineering, Architecture, and Urban Design at Unicamp. The courses "Acoustics and learning environments: design" and "Acoustics and learning environments: construction" took place in 2024 and involved 25 students. The aim was to provide practical experience in implementing solutions to problems related to the sound environment in learning spaces, integrating the community and the university. The practical activities involved measurements in accordance with ISO 3382, the acoustic design of a school environment, and the executive design and construction of wooden panels. Through an extension project, the students applied their theoretical learning to a real situation: designing and constructing wooden panels for the acoustic conditioning of a school environment.
To address the requirements of bionic covert underwater acoustic communication, this study introduces a methodology for assessing the concealment of sound signals that mimic those produced by marine mammals. In bionic communication, covert effectiveness rests on the similarity between the synthetic, information-bearing sounds and the natural sounds emitted by marine mammals: the higher the fidelity to authentic marine mammal acoustics, the better the stealth of the synthetic signals. Given the stringent requirements for information concealment, it is therefore essential to assess the biomimetic efficacy of synthetic marine mammal sounds. This research devises and implements an evaluation framework for the stealth of bionic signals, leveraging feature engineering and audio fingerprinting applied to bionic signal data derived from an array of marine mammals. The framework quantifies the covert effectiveness of bionic signals corresponding to the acoustics of various marine species, providing a comprehensive measure of biomimetic fidelity and stealth performance. A higher score indicates a smaller discrepancy between the synthetic and original marine mammal sounds, and thus superior biomimetic accuracy and enhanced stealth.
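A crude stand-in for the audio-fingerprinting idea, comparing an averaged log-magnitude spectrum of a synthetic call against a natural one, can be sketched as below. The framing parameters and the cosine-similarity score are illustrative choices, not the paper's actual feature set:

```python
import numpy as np

def spectral_signature(x, n_fft=512, hop=256):
    """Average log-magnitude spectrum over Hann-windowed frames:
    a very crude acoustic 'fingerprint' of a signal."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    mean_mag = np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)
    return np.log1p(mean_mag)

def similarity(x, y):
    """Cosine similarity between two signatures; closer to 1 means the
    signals are spectrally more alike (a better 'biomimetic' match)."""
    a, b = spectral_signature(x), spectral_signature(y)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A real fingerprinting pipeline would use time-frequency landmarks rather than an averaged spectrum, but the scoring idea, higher similarity meaning better concealment, is the same.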
Yabin Jin, D. Torrent, Bahram Djafari-Rouhani
et al.
Over the past three decades, phononic crystals have undergone revolutionary development for understanding and utilizing mechanical waves by exploring the interaction between mechanical waves and structures. With significant advances in manufacturing technologies from the nanoscale to the macroscale, phononic crystals attract researchers from diverse disciplines to study directions such as bandgaps, dispersion engineering, novel modes, reconfigurable control, and efficient design algorithms. The aim of this roadmap is to present the current state of the art, an overview of the properties, functions, and applications of phononic crystals, and opinions on the challenges and opportunities. The perspectives, written by over 40 renowned experts, cover basic properties, homogenization, machine-learning-assisted design, and topological, non-Hermitian, nonreciprocal, nanoscale, chiral, nonlocal, active, spatiotemporal, and hyperuniform properties of phononic crystals, along with applications in underwater acoustics, seismic wave protection, vibration and noise control, thermal transport, sensing, and acoustic tweezers. It is also intended to guide researchers, funding agencies, and industry in identifying new prospects for phononic crystals in the coming years.
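The most basic property listed above, the bandgap, already appears in a 1D two-layer (Rytov-type) lattice, whose Bloch dispersion relation is cos(qa) = cos(k1 d1)cos(k2 d2) − ½(Z1/Z2 + Z2/Z1) sin(k1 d1) sin(k2 d2); frequencies where the right-hand side exceeds 1 in magnitude admit no real Bloch wavenumber and are forbidden. The sketch below uses illustrative (steel-like/polymer-like) layer parameters, not data from the roadmap:

```python
import numpy as np

def bloch_rhs(omega, d1, d2, c1, c2, z1, z2):
    """Right-hand side of the 1D bilayer dispersion relation
    cos(q a) = RHS. |RHS| > 1 means no real Bloch wavenumber q
    exists, i.e. omega (rad/s) lies inside a bandgap."""
    k1, k2 = omega / c1, omega / c2
    return (np.cos(k1 * d1) * np.cos(k2 * d2)
            - 0.5 * (z1 / z2 + z2 / z1) * np.sin(k1 * d1) * np.sin(k2 * d2))

# Illustrative layer thicknesses (m), sound speeds (m/s), impedances (Pa s/m)
layers = dict(d1=0.01, d2=0.01, c1=5000.0, c2=2000.0, z1=40e6, z2=3e6)

def in_gap(omega):
    return abs(bloch_rhs(omega, **layers)) > 1.0
```

The impedance-mismatch term is what opens the gaps: with z1 = z2 the relation collapses to cos(k1 d1 + k2 d2) and every frequency propagates.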
Purpose: The goal of this study was to assess various recording methods, including combinations of high- versus low-cost microphones, recording interfaces, and smartphones, in terms of their ability to produce commonly used time- and spectral-based voice measurements. Method: Twenty-four vowel samples representing a diversity of voice quality deviations and severities from a wide age range of male and female speakers were played via a head-and-thorax model and recorded using a high-cost, research-standard GRAS 40AF (GRAS Sound & Vibration) microphone and amplification system. Additional recordings were made using various combinations of headset microphones (AKG C555 L [AKG Acoustics GmbH], Shure SM35-XLR [Shure Incorporated], AVID AE-36 [AVID Products, Inc.]) and audio interfaces (Focusrite Scarlett 2i2 [Focusrite Audio Engineering Ltd.] and PC, Focusrite and smartphone, smartphone via a TRRS adapter), as well as smartphones used directly (Apple iPhone 13 Pro, Google Pixel 6) with their built-in microphones. The effect of background noise from four different room conditions was also evaluated. Vowel samples were analyzed for measures of fundamental frequency, perturbation, cepstral peak prominence, and spectral tilt (low vs. high spectral ratio). Results: Results show that a wide variety of recording methods, including smartphones with and without a low-cost headset microphone, can effectively track the wide range of acoustic characteristics in a diverse set of typical and disordered voice samples. Although significant differences in acoustic measures of voice may be observed, the presence of extremely strong correlations (rs > .90) with the recording standard implies a strong linear relationship between the results of different methods that may be used to predict and adjust any observed differences in measurement results.
Conclusion: Because handheld smartphone distance and positioning may be highly variable in actual clinical recording situations, a smartphone paired with a low-cost headset microphone is recommended as an affordable recording method that controls mouth-to-microphone distance and positioning while leaving both hands free to manipulate the smartphone device.
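The adjustment implied by those strong linear correlations can be sketched as an ordinary least-squares mapping from a candidate method's measures onto the reference standard. The measurement values below are synthetic, invented purely to illustrate the idea:

```python
import numpy as np

def fit_adjustment(reference, candidate):
    """Least-squares line mapping candidate-method measures onto the
    reference standard; returns slope, intercept, and Pearson r."""
    slope, intercept = np.polyfit(candidate, reference, 1)
    r = float(np.corrcoef(candidate, reference)[0, 1])
    return float(slope), float(intercept), r

def adjust(values, slope, intercept):
    """Apply the fitted mapping to new candidate-method measurements."""
    return slope * np.asarray(values, dtype=float) + intercept

# Synthetic example: a candidate method reads systematically 1.5 dB low
reference = np.array([12.0, 14.0, 16.0, 18.0, 20.0])   # e.g. CPP-like values (dB)
candidate = np.array([10.5, 12.5, 14.5, 16.5, 18.5])
slope, intercept, r = fit_adjustment(reference, candidate)
```

With r near 1, the fitted slope and intercept let clinicians convert readings from an affordable setup onto the research-standard scale.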
A flaky test yields inconsistent results upon repetition, posing a significant challenge to software developers. The presence and characteristics of flaky tests have been studied extensively in classical software but not in quantum software. In this paper, we outline challenges and potential solutions for the automated detection of flaky tests in bug reports of quantum software. We aim to raise awareness of flakiness in quantum software and encourage the software engineering community to work collaboratively on this emerging challenge.
Hungarian acoustics dates back to 1893, when the "Telephone newscaster" service was started. The first radio emission was transmitted from the Postal Experimental Station in 1924. Before WW2 the priority subject of acoustics was also centered around radio: a number of dedicated studios were designed and built, which in turn drew attention to the field of room acoustics. The most significant scientist of the second half of the 20th century was T. Tarnóczy, who was active in many areas. He founded a research group at ELTE University, which became an essential research entity of the Hungarian Academy of Sciences. A similarly important technical center was created at the Technical University under the leadership of Z. Barát. Electroacoustics was clearly the core activity of the Hungarian acoustics industry. A number of ministries also founded R&D institutes. After 1990 most of these industrial and ministerial institutions significantly shrank or disappeared. Their leading staff members often founded small businesses offering engineering services. Research activities became concentrated at certain universities, initially supported by EU-funded projects. Nowadays these represent centers of gravity for various modern acoustic areas.
Matteo Mancinelli, Eduardo Martini, Vincent Jaunet
et al.
Vortex-sheet models of jets are widely used to describe the dynamics of modes such as the Kelvin-Helmholtz instability and guided acoustic waves. However, it is seldom pointed out in the literature that free-stream acoustic modes are absent from the vortex-sheet spectrum, which indicates that free-stream sound waves are not eigensolutions of the parallel jet. This family of modes is important if, for example, one is interested in problems of sound emission or flow-acoustic interaction. In this work we show how a distantly confined jet may be used as a surrogate problem for the free jet, in which free-stream acoustic waves appear as a set of discrete modes. Comparing the modes observed in the free jet with those of the distantly confined jet, we show that, apart from the free-stream acoustic modes, the eigenvectors and eigenvalues converge with wall distance. The proposed surrogate problem thus efficiently reproduces the dynamics of the original problem while also accounting for the dynamics of free-stream acoustic modes.
Alexander Felfernig, Stefan Reiterer, Martin Stettinger
et al.
The knowledge engineering bottleneck is still a major challenge in configurator projects. In this paper, we show how recommender systems can support knowledge base development and maintenance processes. We discuss several scenarios for applying recommender systems in knowledge engineering and report the results of empirical studies that show the importance of user-centered configuration knowledge organization.
As they grow in complexity and extent, large-scale interconnected network systems, e.g., transportation or infrastructure networks, become more vulnerable to external disruptions. Hence, managing potential disruptive events during the design, operating, and recovery phases of an engineered system, and thereby improving the system's resilience, is an important yet challenging task. To ensure system resilience after the occurrence of failure events, this study proposes a mixed-integer linear programming (MILP) based restoration framework using heterogeneous dispatchable agents. A scenario-based stochastic optimization (SO) technique is adopted to deal with the inherent uncertainties that nature imposes on the recovery process. Moreover, unlike conventional SO with deterministic equivalent formulations, an additional risk measure is implemented here because of the temporal sparsity of the decision making in applications such as recovery from extreme events. The resulting restoration framework involves a large-scale MILP problem, so an adequate decomposition technique, a modified Lagrangian dual decomposition, is also employed to achieve tractable computational complexity. Case study results based on the IEEE 37-bus test feeder demonstrate the benefits of the proposed framework for resilience improvement as well as the advantages of adopting SO formulations.
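The risk-measure idea can be sketched in isolation: for a discrete set of recovery-cost scenarios, the mean-CVaR objective commonly used in risk-averse stochastic programs blends the expected cost with the expected cost of the worst (1 − α) probability tail. The scenario data below are invented for illustration, and the paper's actual formulation sits inside a MILP rather than this standalone evaluation:

```python
import numpy as np

def cvar(costs, probs, alpha=0.9):
    """Conditional value-at-risk: expected cost of the worst (1 - alpha)
    probability tail of a discrete scenario distribution."""
    order = np.argsort(costs)
    c = np.asarray(costs, dtype=float)[order]
    p = np.asarray(probs, dtype=float)[order]
    cum = np.cumsum(p)
    w = np.where(cum > alpha, p, 0.0)        # scenarios fully inside the tail
    first = int(np.argmax(cum > alpha))      # scenario straddling the quantile
    w[first] = cum[first] - alpha            # only its partial weight counts
    return float(np.dot(w, c) / (1.0 - alpha))

def mean_cvar_objective(costs, probs, alpha=0.9, lam=0.5):
    """Risk-averse objective: (1 - lam) * E[cost] + lam * CVaR_alpha[cost]."""
    expected = float(np.dot(probs, costs))
    return (1.0 - lam) * expected + lam * cvar(costs, probs, alpha)

# Three invented recovery scenarios: mild, moderate, extreme
costs = [1.0, 2.0, 10.0]
probs = [0.5, 0.4, 0.1]
```

Raising lam pushes restoration decisions toward hedging against the rare extreme-event scenario rather than minimizing the average, which is exactly the trade-off the abstract motivates.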