When Code Becomes Abundant: Redefining Software Engineering Around Orchestration and Verification
Karina Kohl, Luigi Carro
Software Engineering (SE) faces simultaneous pressure from AI automation, which reduces the cost of producing code, and from hardware-energy constraints, which amplify the cost of failures. We argue that SE, as traditionally defined around code construction and process management, is no longer sufficient: the discipline must be redefined around human discernment, that is, intent articulation, architectural control, and systematic verification. This shift introduces accountability collapse as a central risk and moves SE from a production-oriented field to one centered on human judgment under automation, with profound implications for research priorities, educational curricula, and industrial practices.
Antibacterial PEEK-Ag Surfaces: Development and In Vitro Evaluation Against <i>Staphylococcus aureus</i> and <i>Pseudomonas aeruginosa</i>
Flávio Rodrigues, Mariana Fernandes, Filipe Samuel Silva
et al.
In the pursuit of addressing the persistent challenge of bacterial adhesion and biofilm formation in dental care, this study investigates the efficacy of electric current as an alternative strategy, specifically focusing on its application in dental contexts. Polyether ether ketone (PEEK), known for its excellent biocompatibility and resistance to bacterial plaque, was enhanced with conductive properties by incorporating silver (Ag), a well-known antibacterial material. Through systematic in vitro experiments, the effectiveness of alternating current (AC) and direct current (DC) in reducing bacterial proliferation was evaluated. The tests were conducted using two bacterial strains: the Gram-positive <i>Staphylococcus aureus</i> and the Gram-negative <i>Pseudomonas aeruginosa</i>. Various current parameters and two different electrode configurations were assessed to determine their impact on bacterial reduction. A notable finding from this study is that alternating current (AC) demonstrates superior efficacy compared to direct current (DC). The greatest decrease in CFUs/mL for <i>P. aeruginosa</i> with AC was recorded at current levels of 5 mA and 500 nA. In contrast, <i>S. aureus</i> exhibited the greatest reduction at 5 mA and 1 mA. This study highlights the potential of using electric current within specific intensity ranges as an alternative strategy to effectively mitigate bacterial challenges in dental care.
Biotechnology, Medicine (General)
An Empirical Exploration of ChatGPT's Ability to Support Problem Formulation Tasks for Mission Engineering and a Documentation of its Performance Variability
Max Ofsa, Taylan G. Topcu
Systems engineering (SE) is evolving with the availability of generative artificial intelligence (AI) and the demand for a systems-of-systems perspective, formalized under the purview of mission engineering (ME) in the US Department of Defense. Formulating ME problems is challenging because they are open-ended exercises that involve translating ill-defined problems into well-defined ones that are amenable to engineering development. It remains to be seen to what extent AI could assist problem formulation objectives. To that end, this paper explores the quality and consistency of multi-purpose Large Language Models (LLMs) in supporting ME problem formulation tasks, specifically focusing on stakeholder identification. We identify a relevant reference problem, a NASA space mission design challenge, and document ChatGPT-3.5's ability to perform stakeholder identification tasks. We execute multiple parallel attempts and qualitatively evaluate LLM outputs, focusing on both their quality and variability. Our findings portray a nuanced picture. We find that the LLM performs well in identifying human-focused stakeholders but poorly in recognizing external systems and environmental factors, despite explicit efforts to account for these. Additionally, LLMs struggle to preserve the desired level of abstraction and exhibit a tendency to produce solution-specific outputs that are inappropriate for problem formulation. More importantly, we document substantial variability among parallel threads, highlighting that LLM outputs should be used with caution, ideally by adopting a stochastic view of their abilities. Overall, our findings suggest that, while ChatGPT could reduce some expert workload, its lack of consistency and domain understanding may limit its reliability for problem formulation tasks.
LLM-Powered Fully Automated Chaos Engineering: Towards Enabling Anyone to Build Resilient Software Systems at Low Cost
Daisuke Kikuta, Hiroki Ikeuchi, Kengo Tajiri
Chaos Engineering (CE) is an engineering technique aimed at improving the resilience of distributed systems. It involves intentionally injecting faults into a system to test its resilience, uncover weaknesses, and address them before they cause failures in production. Recent CE tools automate the execution of predefined CE experiments. However, planning such experiments and improving the system based on the experimental results still remain manual. These processes are labor-intensive and require multi-domain expertise. To address these challenges and enable anyone to build resilient systems at low cost, this paper proposes ChaosEater, a system that automates the entire CE cycle with Large Language Models (LLMs). It predefines an agentic workflow according to a systematic CE cycle and assigns subdivided processes within the workflow to LLMs. ChaosEater targets CE for software systems built on Kubernetes. Therefore, the LLMs in ChaosEater complete CE cycles through software engineering tasks, including requirement definition, code generation, testing, and debugging. We evaluate ChaosEater through case studies on small- and large-scale Kubernetes systems. The results demonstrate that it consistently completes reasonable CE cycles with significantly low time and monetary costs. Its cycles are also qualitatively validated by human engineers and LLMs.
Knowledge-Based Aerospace Engineering -- A Systematic Literature Review
Tim Wittenborg, Ildar Baimuratov, Ludvig Knöös Franzén
et al.
The aerospace industry operates at the frontier of technological innovation while maintaining high standards regarding safety and reliability. In this environment, with an enormous potential for re-use and adaptation of existing solutions and methods, Knowledge-Based Engineering (KBE) has been applied for decades. The objective of this study is to identify and examine state-of-the-art knowledge management practices in the field of aerospace engineering. Our contributions include: 1) A SWARM-SLR of over 1,000 articles with qualitative analysis of 164 selected articles, supported by two aerospace engineering domain expert surveys. 2) A knowledge graph of over 700 knowledge-based aerospace engineering processes, software, and data, formalized in the interoperable Web Ontology Language (OWL) and mapped to Wikidata entries where possible. The knowledge graph is represented on the Open Research Knowledge Graph (ORKG) and on an aerospace Wikibase, for reuse and continued structuring of aerospace engineering knowledge exchange. 3) Our resulting intermediate and final artifacts of the knowledge synthesis, available as a Zenodo dataset. This review sets a precedent for structured, semantic-based approaches to managing aerospace engineering knowledge. By advancing these principles, research and industry can achieve more efficient design processes, enhanced collaboration, and a stronger commitment to sustainable aviation.
Left shifting analysis of Human-Autonomous Team interactions to analyse risks of autonomy in high-stakes AI systems
Ben Larwood, Oliver J. Sutton, Callum Cockburn
Developing high-stakes autonomous systems that include Artificial Intelligence (AI) components is complex; the consequences of errors can be catastrophic, yet it is challenging to plan for all operational cases. In stressful scenarios for the human operator, such as short decision-making timescales, the risk of failures is exacerbated. A lack of understanding of AI failure modes blocks the robust implementation of AI in smart systems and prevents early risk identification, leading to increased project time, risk, and cost. A key tenet of Systems Engineering and acquisition engineering is a "left-shift" of test and evaluation activities to earlier in the system lifecycle, to allow for "accelerated delivery of [systems] that work". We argue it is therefore essential that this shift include the analysis of AI failure cases as part of the design stages of the system life cycle. Our proposed framework enables the early characterisation of risks emerging from human-autonomy teaming (HAT) in operational contexts. Its cornerstone is a new analysis of AI failure modes, built on the seminal modelling of human-autonomy teams laid out by LaMonica et al., 2022. Using the analysis of the interactions between human and autonomous systems and exploring the failure modes within each aspect, our approach provides a way to systematically identify human-AI interaction risks across the operational domain of the system of interest. Understanding this emergent behaviour increases the robustness of the system, provided the analysis is undertaken over the whole scope of its operational design domain. This approach is illustrated through an example use case of an AI assistant supporting a Command & Control (C2) System.
Automated and Risk-Aware Engine Control Calibration Using Constrained Bayesian Optimization
Maarten Vlaswinkel, Duarte Antunes, Frank Willems
Decarbonization of the transport sector places increasingly strict demands on Internal Combustion Engines to maximize thermal efficiency and minimize greenhouse gas emissions. This has led to complex engines with a surge in the number of corresponding tunable parameters in actuator set points and control settings. Automated calibration is therefore essential to keep development time and costs at acceptable levels. In this work, an innovative self-learning calibration method is presented based on in-cylinder pressure curve shaping. This method combines Principal Component Decomposition with constrained Bayesian Optimization. To realize maximal thermal engine efficiency, the optimization problem aims at minimizing the difference between the actual in-cylinder pressure curve and an Idealized Thermodynamic Cycle. By continuously updating a Gaussian Process Regression model of the pressure's Principal Component weights using measurements of the actual operating conditions, the mean in-cylinder pressure curve as well as its uncertainty bounds are learned. This information drives the optimization of calibration parameters, which are automatically adapted while dealing with the risks and uncertainties associated with operational safety and combustion stability. This data-driven method does not require prior knowledge of the system. The proposed method is successfully demonstrated in simulation using a Reactivity Controlled Compression Ignition engine model. The difference between the Gross Indicated Efficiency of the optimal solution found and the true optimum is 0.017%. For this complex engine, the optimal solution was found after 64.4 s, which is relatively fast compared to conventional calibration methods.
POE-$Δ$: a framework for change engineering
Georgi Markov, Jon G. Hall, Lucia Rapanotti
Many organisational problems are addressed through systemic change and re-engineering of existing Information Systems rather than radical new design. In the face of widespread IT project failure, devising effective ways to tackle this type of change remains an open challenge. This work discusses the motivation, theoretical foundation, characteristics and evaluation of a novel framework - referred to as POE-$Δ$, which is rooted in design and engineering and is aimed at providing systematic support for representing, structuring and exploring change problems of a socio-technical nature, including implementing their solutions when they exist. We generalise an existing framework of greenfield design as problem solving for application to change problems. From a theoretical perspective, POE-$Δ$ is a strict extension to its parent framework, allowing the seamless integration of greenfield and brownfield design to tackle change problems. A Design Science Research methodology was applied over a decade to define and evaluate POE-$Δ$, with significant case study research conducted to evaluate the framework in its application to real-world change problems of varying criticality and complexity. The results show that POE-$Δ$ exhibits desirable characteristics of a design approach to organisational change and can bring tangible benefits when applied in practice as a holistic and systematic approach to change in socio-technical contexts.
From Hazard Identification to Controller Design: Proactive and LLM-Supported Safety Engineering for ML-Powered Systems
Yining Hong, Christopher S. Timperley, Christian Kästner
Machine learning (ML) components are increasingly integrated into software products, yet their complexity and inherent uncertainty often lead to unintended and hazardous consequences, both for individuals and society at large. Despite these risks, practitioners seldom adopt proactive approaches to anticipate and mitigate hazards before they occur. Traditional safety engineering approaches, such as Failure Mode and Effects Analysis (FMEA) and System Theoretic Process Analysis (STPA), offer systematic frameworks for early risk identification but are rarely adopted. This position paper advocates for integrating hazard analysis into the development of any ML-powered software product and calls for greater support to make this process accessible to developers. By using large language models (LLMs) to partially automate a modified STPA process with human oversight at critical steps, we expect to address two key challenges: the heavy dependency on highly experienced safety engineering experts, and the time-consuming, labor-intensive nature of traditional hazard analysis, which often impedes its integration into real-world development workflows. We illustrate our approach with a running example, demonstrating that many seemingly unanticipated issues can, in fact, be anticipated.
Lie similarity analysis of MHD Casson fluid flow with heat source and variable viscosity over a porous stretching sheet
Thenmozhi D, M. Eswara Rao, Ch. Nagalakshmi
et al.
The current study presents a novel examination of heat transfer properties in a magnetohydrodynamic (MHD) flow of Casson fluid across a porous stretching sheet, uniquely incorporating the effects of a heat source and variable viscosity. Unlike previous studies, this research employs the Lie similarity transformation to convert the governing equations into dimensionless form. These transformed equations are then solved using advanced numerical techniques, specifically the fourth-order Runge-Kutta (RK) method combined with the shooting method. The findings reveal that the velocity decreases with the adjustment of significant parameters such as the Casson fluid properties, variable viscosity, heat source, magnetic field, and porosity, leading to an inverse increase in temperature within the convection system. As the Prandtl number increases, the temperature gradient and thermal boundary layer thickness decrease, resulting in reduced heat transfer rates within the convection system. Likewise, an increase in the Schmidt number decreases the concentration gradient and mass transfer rate within the fluid. This novel approach provides new insights into the behavior of Casson fluids, with significant applications in industrial processes, energy systems, environmental engineering, material science, and the aerospace and automotive industries, where understanding heat transfer mechanisms in complex systems can enhance efficiency, performance, and safety.
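The solution strategy above (reduce the governing equations to an ODE system, then use the shooting method with fourth-order Runge-Kutta integration to find the unknown boundary slope) can be illustrated on a toy boundary value problem. This is a minimal sketch, not the paper's MHD boundary-layer system: the equation, boundary values, and step counts below are illustrative assumptions.

```python
# Shooting method with classic RK4, illustrated on the linear BVP
#   y'' = 6x,  y(0) = 0,  y(1) = 1   (exact solution: y = x^3, so y'(0) = 0)
# (a toy stand-in for similarity-reduced boundary-layer equations, which
#  are likewise integrated from one boundary while shooting on the slope)

def rk4_integrate(f, y0, x0, x1, n=100):
    """Integrate the system y' = f(x, y) from x0 to x1 with n RK4 steps."""
    h = (x1 - x0) / n
    x, y = x0, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(x + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

def rhs(x, y):
    # first-order system: y[0]' = y[1],  y[1]' = 6x
    return [y[1], 6*x]

def shoot(slope):
    """Endpoint value y(1) obtained from y(0) = 0 and guessed y'(0) = slope."""
    return rk4_integrate(rhs, [0.0, slope], 0.0, 1.0)[0]

# Bisect on the missing initial slope until the far boundary y(1) = 1 is hit.
lo, hi = -5.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if (shoot(lo) - 1.0) * (shoot(mid) - 1.0) <= 0:
        hi = mid
    else:
        lo = mid
slope = (lo + hi) / 2  # converges to the exact missing slope y'(0) = 0
```

The same loop structure carries over to the similarity-reduced momentum and energy equations, with the bisection replaced by a multi-dimensional root finder when several boundary slopes are unknown.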
Correction: The BCPM method: decoding breast cancer with machine learning
Badar Almarri, Gaurav Gupta, Ravinder Kumar
et al.
Model for dimensioning borehole heat exchanger applied to mixed-integer-linear-problem (MILP) energy system optimization
Tobias Blanke, Holger Born, Bernd Döring
et al.
This paper introduces three novel approaches to size geothermal energy piles in a MILP, offering fresh perspectives and potential solutions. Existing research has overlooked MILP models that incorporate the sizing of a geothermal borefield. Therefore, this paper presents a new model utilizing a g-function model to regulate the power limits. Geothermal energy is an essential renewable source, particularly for heating and cooling. Complex energy systems, with their diverse sources of heating and cooling and intricate interactions, are crucial for a climate-neutral energy system. This work significantly contributes to the integration of geothermal energy as a vital energy source into the modelling of such complex systems. Borehole heat exchangers help generate heat in low-temperature energy systems. However, optimizing these exchangers using mixed-integer linear programming (MILP), which only allows for linear equations, is complex. Current research uses only R-C, reservoir, or g-function models for pre-sized borefields. As a result, borehole heat exchangers are often represented by linear factors such as 50 W/m for extraction or injection limits. A breakthrough in the accuracy of borehole heat exchanger sizing has been achieved with the development of a new model, which has been rigorously compared to two simpler models. The geothermal system was configured for three energy systems with varying ground and borefield parameters. The results were then compared with existing geothermal system tools. The new model provides more accurate depth sizing, with an error of less than 5% compared to errors above 50% for the simpler models, although it requires more calculation time. The new model can lead to more accurate borefield sizing in MILP applications to optimize energy systems and is especially beneficial for large-scale projects that are highly dependent on borefield size.
Renewable energy sources, Geology
Formation of Cluster‐Structured Metallic Filaments in Organic Memristors for Wearable Neuromorphic Systems with Bio‐Mimetic Synaptic Weight Distributions
Uihoon Jung, Miseong Kim, Jaewon Jang
et al.
With increasing demand for wearable electronics capable of processing massive amounts of data, flexible neuromorphic systems mimicking brain functions have been receiving much attention. Despite considerable efforts in developing practical neural networks utilizing several types of flexible artificial synapses, it is still challenging to develop wearable systems for complex computations due to the difficulties in emulating continuous memory states in a synaptic component. In this study, polymer conductivity is analyzed as a crucial factor in determining the growth dynamics of metallic filaments in organic memristors. Moreover, flexible memristors with bio‐mimetic synaptic functions such as linearly tunable weights are demonstrated by engineering the polymer conductivity. In the organic memristor, the cluster‐structured filaments are grown within the polymer medium in response to electric stimuli, resulting in gradual resistive switching and stable synaptic plasticity. Additionally, the device exhibits numerous continuous non‐volatile memory states due to its low leakage current. Furthermore, complex hardware neural networks, including ternary logic operators and a noisy-image recognition system, are successfully implemented utilizing the developed memristor arrays. This promising concept of creating flexible neural networks with bio‐mimetic weight distributions will contribute to the development of a new computing architecture for energy‐efficient wearable smart electronics.
Constructive Safety-Critical Control: Synthesizing Control Barrier Functions for Partially Feedback Linearizable Systems
Max H. Cohen, Ryan K. Cosner, Aaron D. Ames
Certifying the safety of nonlinear systems, through the lens of set invariance and control barrier functions (CBFs), offers a powerful method for controller synthesis, provided a CBF can be constructed. This paper draws connections between partial feedback linearization and CBF synthesis. We illustrate that when a control affine system is input-output linearizable with respect to a smooth output function, then, under mild regularity conditions, one may extend any safety constraint defined on the output to a CBF for the full-order dynamics. These more general results are specialized to robotic systems where the conditions required to synthesize CBFs simplify. The CBFs constructed from our approach are applied and verified in simulation and hardware experiments on a quadrotor.
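The core CBF mechanism the paper builds on (filtering a nominal controller so the barrier condition holds, rendering the safe set forward invariant) can be sketched in its simplest setting. This is a minimal illustration, not the paper's feedback-linearization construction: the single-integrator dynamics, the barrier h(x) = x, and the gain are all illustrative assumptions, chosen so the usual CBF quadratic program collapses to a closed form.

```python
# Minimal CBF safety filter (a sketch, not the paper's construction):
# single-integrator x' = u with safe set S = {x >= 0} and CBF h(x) = x.
# The barrier condition dh/dt >= -alpha * h(x) reduces to u >= -alpha * x,
# so the min-norm QP filter has the closed form u = max(u_nom, -alpha * x).

def cbf_filter(x, u_nom, alpha=5.0):
    """Closest input to u_nom (in the scalar case) satisfying the barrier condition."""
    return max(u_nom, -alpha * x)

def simulate(x0, u_nominal, dt=0.001, steps=3000, alpha=5.0):
    """Forward-Euler rollout of the filtered closed loop; returns the trajectory."""
    x, traj = x0, [x0]
    for k in range(steps):
        u = cbf_filter(x, u_nominal(k * dt, x), alpha)
        x += dt * u
        traj.append(x)
    return traj

# The nominal controller drives the state toward the unsafe target x = -1 ...
traj = simulate(x0=1.0, u_nominal=lambda t, x: -2.0 * (x + 1.0))
# ... but the filtered trajectory never leaves the safe set {x >= 0}.
print(min(traj) >= 0.0)  # True: forward invariance of the safe set
```

In higher dimensions the same filter becomes a quadratic program over the input; the paper's contribution is constructing a valid h for full-order dynamics via partial feedback linearization, which this toy example does not attempt.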
How Mature is Requirements Engineering for AI-based Systems? A Systematic Mapping Study on Practices, Challenges, and Future Research Directions
Umm-e- Habiba, Markus Haug, Justus Bogner
et al.
Artificial intelligence (AI) permeates all fields of life, which has resulted in new challenges in requirements engineering for artificial intelligence (RE4AI), e.g., the difficulty of specifying and validating requirements for AI or of considering new quality requirements due to emerging ethical implications. It is currently unclear if existing RE methods are sufficient or if new ones are needed to address these challenges. Therefore, our goal is to provide a comprehensive overview of RE4AI to researchers and practitioners: what has been achieved so far, i.e., what practices are available, and what research gaps and challenges still need to be addressed? To achieve this, we conducted a systematic mapping study combining query-string search and extensive snowballing. The extracted data was aggregated, and results were synthesized using thematic analysis. Our selection process led to the inclusion of 126 primary studies. Existing RE4AI research focuses mainly on requirements analysis and elicitation, with most practices applied in these areas. Furthermore, we identified requirements specification, explainability, and the gap between machine learning engineers and end-users as the most prevalent challenges, along with a few others. Additionally, we proposed seven potential research directions to address these challenges. Practitioners can use our results to identify and select suitable RE methods for working on their AI-based systems, while researchers can build on the identified gaps and research directions to push the field forward.
Case hacks in action: Examples from a case study on green chemistry in education for sustainable development
Per Fors, Thomas Taro Lennerfors, Jonathan Woodward
This paper aims to outline an approach for case-based chemistry and chemical engineering education for sustainability. Education for Sustainability is assumed to offer a holistic approach to equip students with the knowledge, skills, values, and attitudes needed to contribute to a more sustainable society in their future careers. While Case-Based Education traditionally focuses on disciplinary learning in simulated settings, it can also effectively teach essential sustainability-related skills like integrated problem-solving, critical thinking, and systems thinking. The approach we propose is “case hacking”, which should be understood as utilizing existing business cases while incorporating supplementary resources to align the assignment with intended learning objectives. This expansion of the cases involves, among other things, introducing additional questions and assignments, perspectives from stakeholders previously unexplored in the original case, and the integration of recent research articles from relevant fields. We advocate for the use of case hacking when educators want to harness the educational benefits of Case-Based Education while emphasizing the complexity of sustainability-related challenges faced by industrial companies today. As an illustrative example, we demonstrate the process of hacking a case related to Green Chemistry in the pharmaceutical industry, highlighting specific challenges for chemistry and chemical engineering education. We hope this example will inspire educators in these disciplinary contexts to engage with the case hacking approach as they navigate the complex terrain of sustainability.
Chemical engineering, Information technology
An Automated Image-Based Dietary Assessment System for Mediterranean Foods
Fotios S. Konstantakopoulos, Eleni I. Georga, Dimitrios I. Fotiadis
<italic>Goal:</italic> The modern way of living has significantly influenced the daily diet. The ever-increasing number of people with obesity, diabetes, and cardiovascular diseases stresses the need for tools that could help with the daily intake of the necessary nutrients. <italic>Methods:</italic> In this paper, we present an automated image-based dietary assessment system for Mediterranean food, based on: 1) an image dataset of Mediterranean foods, 2) a pre-trained Convolutional Neural Network (CNN) for food image classification, and 3) stereo vision techniques for volume and nutrition estimation of the food. We use a CNN pre-trained on the Food-101 dataset to train a deep learning classification model employing our dataset, Mediterranean Greek Food (MedGRFood). From the EfficientNet family of CNNs, we use EfficientNetB2 both for the pre-trained model and its weights evaluation, as well as for classifying food images in the MedGRFood dataset. Next, we estimate the volume of the food through 3D food reconstruction from two images taken by a smartphone camera. The proposed volume estimation subsystem uses stereo vision techniques and algorithms, and needs two input food images to reconstruct the point cloud of the food and compute its quantity. <italic>Results:</italic> The classification accuracy where the true class matches the most probable class predicted by the model (Top-1 accuracy) is 83.8%, while the accuracy where the true class matches any of the 5 most probable classes predicted by the model (Top-5 accuracy) is 97.6% for the food classification subsystem. The food volume estimation subsystem achieves an overall mean absolute percentage error of 10.5% for 148 different food dishes. <italic>Conclusions:</italic> The proposed automated image-based dietary assessment system provides the capability of continuous recording of health data in real time.
Computer applications to medicine. Medical informatics, Medical technology
CSNTSteg: Color Spacing Normalization Text Steganography Model to Improve Capacity and Invisibility of Hidden Data
Reema Thabit, Nur Izura Udzir, Sharifah Md Yasin
et al.
The rapid growth of online communication has increased the demand for secure communication. Most government entities, healthcare providers, the legal sector, financial and banking institutions, and other industries are vulnerable to information security issues. Text steganography is one way to protect secure communication by hiding secret messages in cover text. Hiding a large amount of secret information without raising the attacker's suspicion is the main challenge in steganography. This paper proposes the Color and Spacing Normalization stego (CSNTSteg) model to resolve the low-capacity and invisibility problems in text steganography. CSNTSteg consists of two stages. The first, the pre-embedding stage, achieves high capacity by utilizing RGB coding and character spacing: it increases the number of bits per location and the number of usable characters, and it applies the Huffman coding technique to compress the secret message for a further capacity gain. The second stage, color and space normalization, accomplishes high invisibility by normalizing the RGB coding and character spacing of the cover and stego text. CSNTSteg overcomes the color-difference issue between the cover and stego texts regardless of the color of the cover text. To assess the quality of CSNTSteg, the experimental results are compared with existing works. CSNTSteg shows superior capacity over existing studies, with a percentage of 98.85%. CSNTSteg also achieves high invisibility by reducing the color difference, with percentages of 4.7% and 5.07% for black and colored cover text, respectively. Furthermore, CSNTSteg improves robustness by 94.22% by reducing the distortion in the stego text. Overall, the CSNTSteg model embeds a high capacity of secret data while maintaining invisibility and security, offering a new perspective on text steganography to protect against visual and statistical attacks.
Electrical engineering. Electronics. Nuclear engineering
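One concrete sub-step of CSNTSteg's pre-embedding stage, Huffman compression of the secret message, is standard enough to sketch. The snippet below is illustrative only: it shows why compression buys embedding capacity, and makes no claim about the paper's RGB coding or normalization stages. The example message is an assumption.

```python
# Huffman compression of a secret message (the capacity-enhancing
# compression step applied before embedding; a generic sketch, not
# CSNTSteg's full pipeline).
import heapq
from collections import Counter

def huffman_code(text):
    """Build a {char: bitstring} prefix code from symbol frequencies."""
    heap = [(freq, i, {ch: ""}) for i, (ch, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)  # unique counter so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {ch: "0" + bits for ch, bits in c1.items()}
        merged.update({ch: "1" + bits for ch, bits in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def compress(text):
    code = huffman_code(text)
    return "".join(code[ch] for ch in text), code

secret = "attack at dawn at noon"
bits, code = compress(secret)
# Frequent characters get short codewords, so the bitstream is shorter
# than fixed 8-bit ASCII and leaves more room under a fixed embedding budget.
print(len(bits) < 8 * len(secret))  # True
```

The compressed bitstream is what a text-steganography scheme would then map onto its carrier features (here, character colors and spacing).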
A novel chaotic flower pollination algorithm for function optimization and constrained optimal power flow considering renewable energy sources
Fatima Daqaq, Mohammed Ouassaid
et al.
This study presents an improved chaotic flower pollination algorithm (CFPA) to handle the optimal power flow (OPF) problem integrating hybrid wind and solar power, generating the optimal settings of generator power, bus voltages, shunt reactive power, and transformer tap settings. Despite its benefits, FPA, like other evolutionary algorithms, suffers from two problems: entrapment in local optima and slow convergence. To deal with these drawbacks and enhance FPA's search accuracy, this manuscript applies CFPA, a hybrid optimization approach that combines the stochastic FPA, which simulates the pollination process of flowering plants, with chaos methodology. Owing to the various nonlinear constraints in the OPF problem, a constraint-handling technique named superiority of feasible solutions (SF) is embedded into CFPA. To confirm the performance of the chaotic FPA, a set of well-known benchmark functions was employed for ten diverse chaotic maps, and the best map was then tested on IEEE 30-bus and IEEE 57-bus test systems incorporating renewable energy sources (RESs). The obtained results are analyzed statistically using the non-parametric Wilcoxon rank-sum test to evaluate their significance against the outcomes of state-of-the-art meta-heuristic algorithms such as artificial bee colony (ABC), grasshopper optimization algorithm (GOA), and dragonfly algorithm (DA). This study establishes that the suggested CFPA algorithm outperforms its meta-heuristic competitors in most benchmark test cases. Additionally, the experimental results on the OPF problem demonstrate that the integration of RESs decreases the total cost by 12.77% and 33.11% for the two systems, respectively. Combining FPA with chaotic sequences thus accelerates convergence and provides better accuracy in finding optimal solutions. Furthermore, CFPA (especially with the Sinusoidal map) is a promising candidate for solving complex real-world problems.
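The core chaos ingredient, replacing uniform pseudo-random draws with a deterministic chaotic sequence inside a stochastic optimizer, can be sketched as follows. This is a hedged illustration of the idea, not the FPA/CFPA algorithm itself: the logistic map (rather than the paper's best-performing Sinusoidal map), the greedy search loop, and the sphere test function are all illustrative assumptions.

```python
# Chaotic sequences as a drop-in for uniform random draws in a stochastic
# optimizer (a sketch of the chaos ingredient only, not the pollination
# operators of FPA/CFPA).

def logistic_map(x0=0.7, n=10_000, r=4.0):
    """Deterministic, non-repeating sequence in (0, 1)."""
    x, seq = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_search(f, dim=2, iters=2000, step=0.5):
    """Greedy local search whose perturbations come from a chaotic map."""
    chaos = iter(logistic_map(n=iters * dim))
    best = [1.0] * dim
    best_val = f(best)
    for k in range(iters):
        shrink = step * (1.0 - k / iters)  # decaying step size
        # map each chaotic draw from (0, 1) to a perturbation in (-1, 1)
        cand = [b + shrink * (2.0 * next(chaos) - 1.0) for b in best]
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val
    return best_val

sphere = lambda x: sum(xi * xi for xi in x)
result = chaotic_search(sphere)  # improves well below the start value of 2.0
```

Because the chaotic sequence is ergodic but never periodic, it explores the unit interval without the repetition artifacts of a poor pseudo-random generator, which is the motivation for chaotic variants of population-based optimizers.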
Two lower-bounding algorithms for the p-center problem in an area
Yanchao Liu
The p-center location problem in an area is an important yet very difficult problem in location science. The objective is to determine the location of p hubs within a service area so that the distance from any point in the area to its nearest hub is as small as possible. While effective heuristic methods exist for finding good feasible solutions, research work that probes the lower bound of the problem’s objective value is still limited. This paper presents an iterative solution framework along with two optimization-based heuristics for computing and improving the lower bound, which is at the core of the problem’s difficulty. One method obtains the lower bound via solving the discrete version of the Euclidean p-center problem, and the other via solving a relatively easier clustering problem. Both methods have been validated in various test cases, and their performances can serve as a benchmark for future methodological improvements.
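The first lower-bounding idea, solving a discrete version over finitely many demand points, rests on a simple monotonicity argument: the continuous optimum must cover any finite sample of the area, so the sample's own minimax radius is a valid lower bound. A sketch for the p = 1 case, where the discrete problem is just the smallest enclosing circle and is solvable by brute force (the sample and area below are illustrative assumptions):

```python
# Lower bound for the continuous 1-center of an area via demand
# discretization (a p = 1 sketch of the idea; the paper's framework
# handles general p and iteratively improves the bound).
import itertools, math

def circle_from_2(a, b):
    """Circle with segment ab as diameter."""
    cx, cy = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    return (cx, cy), math.dist(a, (cx, cy))

def circle_from_3(a, b, c):
    """Circumcircle of three points (None if collinear)."""
    d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    if abs(d) < 1e-12:
        return None
    ux = ((a[0]**2+a[1]**2)*(b[1]-c[1]) + (b[0]**2+b[1]**2)*(c[1]-a[1])
          + (c[0]**2+c[1]**2)*(a[1]-b[1])) / d
    uy = ((a[0]**2+a[1]**2)*(c[0]-b[0]) + (b[0]**2+b[1]**2)*(a[0]-c[0])
          + (c[0]**2+c[1]**2)*(b[0]-a[0])) / d
    return (ux, uy), math.dist(a, (ux, uy))

def smallest_enclosing_radius(pts):
    """Exact 1-center radius by brute force (O(n^4); fine for small samples)."""
    candidates = [circle_from_2(a, b) for a, b in itertools.combinations(pts, 2)]
    candidates += [c for tri in itertools.combinations(pts, 3)
                   if (c := circle_from_3(*tri)) is not None]
    best = float("inf")
    for center, r in candidates:
        if r < best and all(math.dist(p, center) <= r + 1e-9 for p in pts):
            best = r
    return best

# Sampling the four corners of the unit square: here the bound equals the
# true continuous 1-center radius sqrt(2)/2, so the bound is tight.
sample = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(round(smallest_enclosing_radius(sample), 6))  # 0.707107
```

Enlarging the sample can only increase this radius, which is what makes an iterative refine-the-sample loop a natural way to tighten the bound.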