The influence of station-to-station line orientation on sea current speed observations using Coastal Acoustic Tomography (CAT) was quantitatively investigated. For this purpose, we conducted CAT experiments at five stations in Yeosu Bay, South Korea, estimating sea current speeds along six tomographic observation lines with different orientations and comparing the results with current speeds measured simultaneously by an Acoustic Doppler Current Profiler (ADCP). The comparison showed that agreement between tomography-estimated and ADCP-measured current speeds tended to decrease as the acute angle between the predominant tidal current direction in Yeosu Bay and a tomographic observation line increased. We interpret this tendency as follows: the smaller the difference between the two one-way travel times obtained during tomographic observations, the greater the relative effect of the travel time measurement error, whose magnitude is largely direction-independent. A simple numerical simulation supported this interpretation, and quantitative analysis of the simulation results indicated that a smaller acute angle between the predominant current direction and a tomographic observation line makes sea current speed estimation more robust against travel time measurement errors. These results show that station-to-station lines in CAT should be arranged with the predominant sea current direction of the survey area in mind, which provides an important guideline for selecting station locations.
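The travel-time reasoning above can be sketched numerically. In reciprocal sound transmission over a path of length L, the along-path current is recovered from the two one-way travel times as u = (L/2)(1/t_fwd - 1/t_bwd); a fixed timing error therefore corrupts the full current speed U = u / cos(theta) more strongly as the line turns away from the current direction. All numbers below (path length, sound speed, current speed, timing error) are illustrative assumptions, not values from the experiment:

```python
import math

def current_along_path(t_fwd, t_bwd, L):
    """Path-averaged current from the two reciprocal travel times."""
    return 0.5 * L * (1.0 / t_fwd - 1.0 / t_bwd)

# Illustrative (hypothetical) numbers: 3 km path, c = 1500 m/s sound speed,
# tidal current U = 0.5 m/s, one arrival mis-timed by eps = 20 microseconds.
L, c, U, eps = 3000.0, 1500.0, 0.5, 20e-6
errors = []
for theta_deg in (0, 30, 60, 80):
    u = U * math.cos(math.radians(theta_deg))   # along-path component
    t_fwd = L / (c + u)                         # travel time with the current
    t_bwd = L / (c - u)                         # travel time against it
    u_hat = current_along_path(t_fwd + eps, t_bwd, L)
    U_hat = u_hat / math.cos(math.radians(theta_deg))  # back to full speed
    errors.append(abs(U_hat - U) / U)
    print(f"theta={theta_deg:2d} deg  relative speed error={errors[-1]:.1%}")
```

The timing error produces a roughly constant error in the along-path component, so dividing by cos(theta) inflates the relative error in the recovered speed as the angle grows, mirroring the trend reported in the abstract.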
Keila Lima, Tosin Daniel Oyetoyan, Rogardt Heldal
et al.
The latest surveys estimate an increasing number of connected Internet-of-Things (IoT) devices (around 16 billion) despite the sector's shortage of manufacturers. All these devices deployed into the wild will collect data to guide decision-making performed automatically by other systems, by humans, or by hybrid approaches. In this work, we conduct an initial investigation of benchmark configuration options for IoT platforms that process data ingested by such devices in real time using the MQTT protocol. We identified metrics and related configurable MQTT parameters in the system's component deployment for an MQTT bridge architecture. For this purpose, we benchmark the operational data flow design of a real-world IoT platform for remotely monitoring the surrounding environment. We consider the MQTT broker solution, together with the platform's real-time ingestion and bridge processing portion, to be the system under test. In the benchmark, we investigate two architectural deployment options for the bridge component to gain insights into the latency and reliability of MQTT bridge deployments in which data is provided in a cross-organizational context. Our results indicate that the number of bridge components, MQTT packet sizes, and the topic name can impact quality attributes in IoT architectures using the MQTT protocol.
Floriment Klinaku, Sarah Sophie Stieß, Alireza Hakamian
et al.
The cloud computing model enables the on-demand provisioning of computing resources, reducing manual management, increasing efficiency, and improving environmental impact. Software architects now play a strategic role in designing and deploying elasticity policies for automated resource management. However, creating policies that meet performance and cost objectives is complex. Existing approaches, often relying on formal models like Queueing Theory, require advanced skills and lack specific methods for representing elasticity within architectural models. This paper introduces an architectural view type for modeling and simulating elasticity, supported by the Scaling Policy Definition (SPD) modeling language, a visual notation, and precise simulation semantics. The view type is integrated into the Palladio ecosystem, providing both conceptual and tool-based support. We evaluate the approach through two single-case experiments and a user study. In the first experiment, simulations of elasticity policies demonstrate sufficient accuracy when compared to load tests, showing the utility of simulations for evaluating elasticity. The second experiment confirms feasibility for larger applications, though with increased simulation times. The user study shows that participants completed 90% of tasks, rated the usability at 71%, and achieved an average score of 76% in nearly half the allocated time. However, the empirical evidence suggests that modeling with this architectural view requires more time than modeling control flow, resource environments, or usage profiles, despite its benefits for elasticity policy design and evaluation.
The integration of large language models into software systems is transforming capabilities such as natural language understanding, decision-making, and autonomous task execution. However, the absence of a commonly accepted software reference architecture hinders systematic reasoning about their design and quality attributes. This gap makes it challenging to address critical concerns like privacy, security, modularity, and interoperability, which are increasingly important as these systems grow in complexity and societal impact. In this paper, we describe our emerging results for a preliminary functional reference architecture as a conceptual framework to address these challenges and guide the design, evaluation, and evolution of large language model-integrated systems. We identify key architectural concerns for these systems, informed by current research and practice. We then evaluate how the architecture addresses these concerns and validate its applicability using three open-source large language model-integrated systems in computer vision, text processing, and coding.
Architectural technical debt (ATD) represents trade-offs in software architecture that accelerate initial development but create long-term maintenance challenges. ATD, in particular when self-admitted, impacts the foundational structure of software, making it difficult to detect and resolve. This study investigates the lifecycle of ATD, focusing on how it affects i) the connectivity between classes and ii) the frequency of file modifications. We aim to understand how ATD evolves from introduction to repayment and its implications on software architectures. Our empirical approach was applied to a dataset of SATD items extracted from various software artifacts. We isolated ATD instances, filtered for architectural indicators, and calculated dependencies at different lifecycle stages using FAN-IN and FAN-OUT metrics. Statistical analyses, including the Mann-Whitney U test and Cliff's Delta, were used to assess the significance and effect size of connectivity and dependency changes over time. We observed that ATD repayment increased class connectivity, with FAN-IN increasing by 57.5% on average and FAN-OUT by 26.7%, suggesting a shift toward centralization and increased architectural complexity after repayment. Moreover, ATD files were modified less frequently than Non-ATD files, with changes accumulated in high-dependency portions of the code. Our study shows that resolving ATD improves software quality in the short-term, but can make the architecture more complex by centralizing dependencies. Also, even if dependency metrics (like FAN-IN and FAN-OUT) can help understand the impact of ATD, they should be combined with other measures to capture other effects of ATD on software maintainability.
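For illustration, the dependency metrics and effect-size measure named above can be sketched in a few lines of code. The dependency edges here are hypothetical examples, not data from the study:

```python
from collections import defaultdict

def fan_metrics(edges):
    """FAN-IN / FAN-OUT per class from directed edges (a, b) = 'a depends on b'."""
    fan_in, fan_out = defaultdict(int), defaultdict(int)
    for a, b in edges:
        fan_out[a] += 1
        fan_in[b] += 1
    return dict(fan_in), dict(fan_out)

def cliffs_delta(xs, ys):
    """Cliff's Delta effect size: P(x > y) - P(x < y) over all pairs."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Hypothetical dependency snapshots before and after an ATD item is repaid
before = [("A", "B"), ("C", "B")]
after = [("A", "B"), ("C", "B"), ("D", "B"), ("B", "E")]
fi_before, fo_before = fan_metrics(before)
fi_after, fo_after = fan_metrics(after)
print("FAN-IN of B:", fi_before["B"], "->", fi_after["B"])   # 2 -> 3
print("Cliff's delta:", cliffs_delta([3, 4, 5], [1, 2, 3]))
```

A rise in FAN-IN after repayment, as in this toy graph, is the centralization signal the study measures; Cliff's Delta then quantifies how consistently the after-repayment distribution exceeds the before-repayment one.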
Abstract. Key message: Evergreen and deciduous species in a subtropical urban forest of Eastern China exhibit pronounced differences in leaf traits: evergreen species show a lower photosynthetic rate on a leaf mass basis and lower leaf nutrient contents, but higher leaf mass per area, leaf thickness, leaf carbon content, and leaf carbon-to-nitrogen ratio, whereas deciduous species show the opposite pattern, reflecting distinct resource-use characteristics. In addition, leaf economic and hydraulic traits are coordinated, with higher vein density associated with higher scores along the leaf economics spectrum PCA axis, reflecting resource-acquisitive characteristics and highlighting vein density as a key trait linking water transport capacity to carbon economy. Context: Understanding how leaf economic and hydraulic traits vary and interact among plant growth forms and leaf habits is essential for elucidating plant adaptability. However, the coupling of these two trait dimensions remains unclear in urban forest ecosystems, where environmental conditions differ significantly from natural forests. Aims: This study aimed to investigate variation and coordination between leaf economic and hydraulic traits among woody species in a subtropical urban forest of Eastern China, focusing on differences between leaf habits and growth forms. Methods: We measured 10 leaf economic traits and 4 hydraulic traits across 53 woody species from a subtropical urban forest. Results: Evergreen species exhibited a lower photosynthetic rate on a leaf mass basis and lower leaf nutrient contents, together with higher leaf mass per area, leaf thickness, leaf carbon content, and leaf carbon-to-nitrogen ratio, consistent with resource-conserving characteristics. Deciduous species showed the opposite pattern, indicative of rapid resource acquisition. Shrubs displayed significantly higher leaf phosphorus content than trees. Vein density was positively correlated with the leaf economics spectrum. Conclusion: These findings reveal coordination between leaf hydraulic and economic traits; this coupling highlights the balance between water transport and resource acquisition characteristics.
Segmentation of retinal vessels from fundus images is critical for diagnosing diseases such as diabetes and hypertension. However, the inherent challenges posed by the complex geometries of vessels and the highly imbalanced distribution of thick versus thin vessel pixels demand innovative solutions for robust feature extraction. In this paper, we introduce DAF-UNet, a novel architecture that integrates advanced modules to address these challenges. Specifically, our method leverages a pre-trained deformable convolution (DC) module within the encoder to dynamically adjust the sampling positions of the convolution kernel, thereby adapting the receptive field to capture irregular vessel morphologies more effectively than traditional convolutional approaches. At the network’s bottleneck, an enhanced atrous spatial pyramid pooling (ASPP) module is employed to extract and fuse rich, multi-scale contextual information, significantly improving the model’s capacity to delineate vessels of varying calibers. Furthermore, we propose a hybrid loss function that combines pixel-level and segment-level losses to robustly address the segmentation inconsistencies caused by the disparity in vessel thickness. Experimental evaluations on the DRIVE and CHASE_DB1 datasets demonstrated that DAF-UNet achieved a global accuracy of 0.9572/0.9632 and a Dice score of 0.8298/0.8227, respectively, outperforming state-of-the-art methods. These results underscore the efficacy of our approach in precisely capturing fine vascular details and complex boundaries, marking a significant advancement in retinal vessel segmentation.
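As a sketch of the hybrid-loss idea described above (a common pixel-plus-region combination, not necessarily the paper's exact formulation), binary cross-entropy can be mixed with a soft Dice loss so that thin-vessel pixels, which barely affect the pixel-level term, still drive the region-level term:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|), computed on probabilities."""
    inter = (pred * target).sum()
    return float(1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

def bce_loss(pred, target, eps=1e-7):
    """Pixel-level binary cross-entropy on clipped probabilities."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

def hybrid_loss(pred, target, alpha=0.5):
    """Weighted sum of a pixel-level term (BCE) and a region-level term (Dice)."""
    return alpha * bce_loss(pred, target) + (1 - alpha) * dice_loss(pred, target)

# Tiny 2x2 "vessel mask" example with a good and a bad prediction
target = np.array([[0, 1], [1, 1]], dtype=float)
good = np.array([[0.05, 0.9], [0.9, 0.95]])
bad = np.array([[0.9, 0.2], [0.3, 0.4]])
print(hybrid_loss(good, target), hybrid_loss(bad, target))
```

The Dice term normalizes by the total vessel area, which is what makes a region-level loss less sensitive to the thick-versus-thin pixel imbalance than a purely pixel-averaged loss.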
As AI systems grow increasingly specialized and complex, managing hardware heterogeneity becomes a pressing challenge. How can we efficiently coordinate and synchronize heterogeneous hardware resources to achieve high utilization? How can we minimize the friction of transitioning between diverse computation phases, reducing costly stalls from initialization, pipeline setup, or drain? Our insight is that a network abstraction at the ISA level naturally unifies heterogeneous resource orchestration and phase transitions. This paper presents a Reconfigurable Stream Network Architecture (RSN), a novel ISA abstraction designed for the DNN domain. RSN models the datapath as a circuit-switched network with stateful functional units as nodes and data streaming on the edges. Programming a computation corresponds to triggering a path. Software is explicitly exposed to the compute and communication latency of each functional unit, enabling precise control over data movement for optimizations such as compute-communication overlap and layer fusion. As nodes in a network naturally differ, the RSN abstraction can efficiently virtualize heterogeneous hardware resources by separating control from the data plane, enabling low instruction-level intervention. We build a proof-of-concept design RSN-XNN on VCK190, a heterogeneous platform with FPGA fabric and AI engines. Compared to the SOTA solution on this platform, it reduces latency by 6.1x and improves throughput by 2.4x-3.2x. Compared to the T4 GPU with the same FP32 performance, it matches latency with only 18% of the memory bandwidth. Compared to the A100 GPU at the same 7nm process node, it achieves 2.1x higher energy efficiency in FP32.
Unsupervised representation learning presents new opportunities for advancing Quantum Architecture Search (QAS) on Noisy Intermediate-Scale Quantum (NISQ) devices. QAS is designed to optimize quantum circuits for Variational Quantum Algorithms (VQAs). Most QAS algorithms tightly couple the search space and search algorithm, typically requiring the evaluation of numerous quantum circuits, resulting in high computational costs and limiting scalability to larger quantum circuits. Predictor-based QAS algorithms mitigate this issue by estimating circuit performance based on structure or embedding. However, these methods often demand time-intensive labeling to optimize gate parameters across many circuits, which is crucial for training accurate predictors. Inspired by the classical neural architecture search algorithm Arch2vec, we investigate the potential of unsupervised representation learning for QAS without relying on predictors. Our framework decouples unsupervised architecture representation learning from the search process, enabling the learned representations to be applied across various downstream tasks. Additionally, it integrates an improved quantum circuit graph encoding scheme, addressing the limitations of existing representations and enhancing search efficiency. This predictor-free approach removes the need for large labeled datasets. During the search, we employ REINFORCE and Bayesian Optimization to explore the latent representation space and compare their performance against baseline methods. We further validate our approach by executing the best-discovered MaxCut circuits on IBM's ibm_sherbrooke quantum processor, confirming that the architectures retain optimal performance even under real hardware noise. Our results demonstrate that the framework efficiently identifies high-performing quantum circuits with fewer search iterations.
Honeypots are designed to trap attackers in order to investigate their malicious behavior. Owing to the increasing variety and sophistication of cyber attacks, capturing high-quality attack data has become a challenge in the honeypot area. All-round honeypots, offering significant improvements in sensibility, countermeasures, and stealth, are necessary to tackle this problem. In this paper, we propose a novel honeypot architecture termed HoneyDOC to support all-round honeypot design and implementation. Our HoneyDOC architecture clearly identifies three essential, independent yet collaborative modules: Decoy, Captor, and Orchestrator. Based on this architecture, a Software-Defined Networking (SDN)-enabled honeypot system is designed, which supplies high programmability for technically sustaining the features needed to capture high-quality data. A proof-of-concept system is implemented to validate its feasibility and effectiveness. The experimental results show the benefits of the proposed architecture compared to previous honeypot solutions.
Julia Nerantzia Tzortzi, Maria Stella Lux, Natalia Pardo Delgado
Green infrastructure and nature-based solutions are crucial for the sustainable transformation of cities into more resilient and inclusive places. However, the planning and design of these interventions must be tailored to different urban environments and socioeconomic contexts. Despite being one of the most urbanised areas globally, the Latin American and Caribbean region remains under-researched. In this regard, this contribution provides an analytical and design framework for integrating nature-based solutions in dense urban contexts for microclimate mitigation and improved usability. The framework is constructed by considering the morphological, historical, climatic, and administrative peculiarities of Latin America, and it has been applied and tested in the case study of Bogotá (Colombia). The result is a matrix for constructing design strategies based on three key attributes of outdoor spaces and four design components.
Article info
Received: 18/03/2024; Revised: 22/04/2024; Accepted: 02/05/2024
Carolien Bastiaanssen, Pilar Bobadilla Ugarte, Kijun Kim
et al.
Abstract Argonaute proteins are the central effectors of RNA-guided RNA silencing pathways in eukaryotes, playing crucial roles in gene repression and defense against viruses and transposons. Eukaryotic Argonautes are subdivided into two clades: AGOs generally facilitate miRNA- or siRNA-mediated silencing, while PIWIs generally facilitate piRNA-mediated silencing. It is currently unclear when and how Argonaute-based RNA silencing mechanisms arose and diverged during the emergence and early evolution of eukaryotes. Here, we show that in Asgard archaea, the closest prokaryotic relatives of eukaryotes, an evolutionary expansion of Argonaute proteins took place. In particular, a deep-branching PIWI protein (HrAgo1) encoded by the genome of the Lokiarchaeon ‘Candidatus Harpocratesius repetitus’ shares a common origin with eukaryotic PIWI proteins. Contrasting known prokaryotic Argonautes that use single-stranded DNA as guides and/or targets, HrAgo1 mediates RNA-guided RNA cleavage, and facilitates gene silencing when expressed in human cells and supplied with miRNA precursors. A cryo-EM structure of HrAgo1, combined with quantitative single-molecule experiments, reveals that the protein displays structural features and target-binding modes that are a mix of those of eukaryotic AGO and PIWI proteins. Thus, this deep-branching archaeal PIWI may have retained an ancestral molecular architecture that preceded the functional and mechanistic divergence of eukaryotic AGOs and PIWIs.
Humans have been described as a “forward-looking” species in more than simply physiological terms. We are, it seems, unusually concerned with the future. This essay explores how built environments can be designed to evoke positive anticipation of future events. It suggests that there are three primary means of achieving this: (1) the visible display of valued resources, (2) signs of readiness, and (3) views that encourage mental exploration. It is observed that while resources tend to elicit hope of their future use, readiness and visual prospects seem to evoke a more general sense of optimism. Given the large proportion of our lives that most of us now spend in buildings, it is suggested that these design strategies might be helpful in maintaining and improving occupant morale in the indoor spaces where we live and work, and even more so for those who, for one reason or another, are unable to venture out.
As we advance in the fast-growing era of Machine Learning, new and more complex neural architectures are arising to tackle problems more efficiently. On the one hand, their efficient use requires advanced knowledge and expertise, which is often difficult to find on the labor market. On the other hand, searching for an optimized neural architecture is a time-consuming task when performed manually through trial and error. Hence, methods and tool support are needed to assist users of neural architectures, fueling interest in the field of Automatic Machine Learning (AutoML). For Deep Learning, an important part of AutoML is Neural Architecture Search (NAS). In this paper, we propose a novel cell-based hierarchical search space that is easy to comprehend and manipulate. The objectives of the proposed approach are to optimize search time and to be general enough to handle most state-of-the-art Convolutional Neural Network (CNN) architectures.
Accelerators implementing Deep Neural Networks for image-based object detection operate on large volumes of data, since they must fetch both images and neural network parameters, especially when processing video streams; this drives high power dissipation and high bandwidth requirements. While some solutions exist to mitigate the power and bandwidth demands of data fetching, they are often assessed in limited evaluations at a scale much smaller than that of the target application, which makes it hard to find the best tradeoff in practice. This paper sets up the infrastructure to assess at scale a key power and bandwidth optimization, weight clustering, for You Only Look Once v3 (YOLOv3), a neural-network-based object detection system, using videos of real driving conditions. Our assessment shows that accelerators such as systolic arrays with an Output Stationary architecture are a highly effective solution when combined with weight clustering. In particular, applying weight clustering independently per neural network layer, with between 32 (5-bit) and 256 (8-bit) weights, achieves an accuracy close to that of the original 32-bit YOLOv3 weights. This bit-count reduction shaves bandwidth requirements down to 30%-40% of the original and reduces energy consumption down to 45%. This rests on the facts that (i) the energy of multiply-and-accumulate operations is much smaller than that of DRAM data fetching, and (ii) appropriately designed accelerators can ensure that most fetched data corresponds to neural network weights, where clustering can be applied. Overall, our at-scale assessment provides key results for architecting camera-based object detection accelerators by putting together a real-life application (YOLOv3) and real driving videos in a unified setup, so that the observed trends are reliable.
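A minimal sketch of per-layer weight clustering: 1-D k-means maps each 32-bit weight to one of k shared centroids, so the accelerator fetches a small index per weight plus a per-layer codebook. All numbers here are synthetic stand-ins for a real layer's weights:

```python
import random

def cluster_weights(weights, k=32, iters=20, seed=0):
    """1-D k-means: quantize a layer's weights to k shared centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(weights, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for w in weights:
            j = min(range(k), key=lambda i: abs(w - centroids[i]))
            buckets[j].append(w)
        centroids = [sum(b) / len(b) if b else centroids[i]
                     for i, b in enumerate(buckets)]
    assign = [min(range(k), key=lambda i: abs(w - centroids[i])) for w in weights]
    return centroids, assign

random.seed(1)
layer = [random.gauss(0.0, 0.1) for _ in range(2000)]   # synthetic layer weights
cent, idx = cluster_weights(layer, k=32)
quantized = [cent[i] for i in idx]
mse = sum((w - q) ** 2 for w, q in zip(layer, quantized)) / len(layer)
# Fetched bits for the index stream: 5 bits per weight instead of 32,
# plus a 32-entry codebook of 32-bit centroids per layer.
ratio = (5 * len(layer) + 32 * 32) / (32 * len(layer))
print(f"MSE={mse:.2e}, fetched-bits ratio (weights only)={ratio:.1%}")
```

Note the index-stream ratio here covers only the weight traffic of one synthetic layer; the 30%-40% figure in the abstract is for the accelerator's total bandwidth, which also includes image data.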
Cognitive biases exert a significant influence on human thinking and decision-making. To identify how they influence the occurrence of architectural technical debt, we performed a series of semi-structured interviews with software architects. The results show which classes of architectural technical debt originate from cognitive biases and reveal the antecedents of technical debt items (classes) through biases. In this way, we analysed how and when cognitive biases lead to the creation of technical debt. We also identified a set of debiasing techniques that can be used to prevent the negative influence of cognitive biases. Our observations on the role of organisational culture in avoiding inadvertent technical debt shed new light on that issue.
Large fires in factories cause severe human casualties and property damage; thus, more economical and efficient management strategies for fire prevention can significantly improve fire safety. This study deals with predicting the grade of property damage caused by fire based on simplified building information. The paper's primary objective is to propose and verify a framework for predicting the scale of fire-induced property damage using machine learning (ML). Korean public datasets are collected and preprocessed, and ML algorithms are trained with only 15 input features derived from building-register and fire-scenario information. Four models are used: artificial neural network (ANN), decision tree (DT), k-nearest neighbor (KNN), and random forest (RF). The RF model is the most suitable for this study, with recall and precision of 74.2% and 73.8%, respectively. Structure, floor, cause, and total floor area are the critical factors governing fire size. This study proposes a novel approach that utilizes ML models to accurately and rapidly predict the size of fire damage from basic building information; by analyzing domestic fire incident data and creating fire scenarios, similar ML models can be developed elsewhere.
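The reported recall and precision can be computed per damage grade in a one-vs-rest fashion. A minimal sketch with hypothetical labels (the grade names and predictions below are made up for illustration):

```python
def recall_precision(y_true, y_pred, positive):
    """One-vs-rest recall and precision for one damage grade."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical damage-grade labels for six fire incidents
y_true = ["large", "small", "large", "medium", "large", "small"]
y_pred = ["large", "small", "medium", "medium", "large", "large"]
r, p = recall_precision(y_true, y_pred, positive="large")
print(r, p)  # recall 2/3, precision 2/3
```

Averaging these per-grade values (e.g. macro-averaging across the grades) yields summary figures comparable to the 74.2% recall and 73.8% precision the study reports for its RF model.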
Ice-nucleating particles (INPs) initiate primary ice formation in Arctic mixed-phase clouds (MPCs), altering cloud radiative properties and modulating precipitation. For atmospheric INPs, the complexity of their spatiotemporal variations, heterogeneous sources, and evolution via intricate atmospheric interactions challenges the understanding of their impact on microphysical processes in Arctic MPCs and induces an uncertain representation in climate models. In this work, we performed a comprehensive analysis of atmospheric aerosols at the Arctic coastal site in Ny-Ålesund (Svalbard, Norway) from October to November 2019, including their ice nucleation ability, physicochemical properties, and potential sources. Overall, INP concentrations (N_INP) during the observation season were up to approximately 3 orders of magnitude lower than the global average, with several samples showing degradation of N_INP after heat treatment, implying the presence of proteinaceous INPs. Particle fluorescence was substantially associated with INP concentrations at warmer ice nucleation temperatures, indicating that in the far-reaching Arctic, aerosols of biogenic origin throughout the snow- and ice-free season may serve as important INP sources. In addition, case studies revealed links between elevated N_INP and heat lability, fluorescence, high wind speeds originating from the ocean, augmented concentrations of coarse-mode particles, and abundant organics. Backward trajectory analysis demonstrated a potential connection between high-latitude dust sources and high INP concentrations, while prolonged air mass history over the ice pack was identified for most scant-INP cases. The combination of the above analyses demonstrates that the abundance, physicochemical properties, and potential sources of INPs in the Arctic are highly variable despite its remote location.
Linear-infrastructure Mission Control (LiMiC) is an application for autonomous Unmanned Aerial Vehicle (UAV) infrastructure inspection mission planning, originally developed in a monolithic software architecture. The application calculates routes along the infrastructure based on the users' inputs, the number of UAVs participating in the mission, and the UAVs' locations. LiMiC1.0 is the latest application version, migrated from the monolith to microservices and continuously integrated and deployed using DevOps tools, in order to facilitate future feature development, enable better traffic management, and improve route calculation processing time. Processing time was improved by refactoring the route calculation algorithm into services, scaling them in a Kubernetes cluster, and enabling asynchronous communication between them. In this paper, we discuss the differences between monolithic and microservice architectures to justify our decision to migrate. We describe the methodology for the application's migration and implementation, the technologies we use for continuous integration and deployment, and the microservices' improved performance compared with the monolithic application.
Michel Friedrich, Ezequiel Farrher, Svenja Caspers
et al.
Background: In glioma patients, multimodality therapy and recurrent tumor can lead to structural brain tissue damage characterized by pathologic findings in MR and PET imaging. However, little is known about the impact of different types of damage on the fiber architecture of the affected white matter. Patients and methods: This study included 121 pretreated patients (median age, 52 years; ECOG performance score, 0 in 48%, 1-2 in 51%) with histomolecularly characterized glioma (WHO grade IV glioblastoma, n=81; WHO grade III anaplastic astrocytoma, n=28; WHO grade III anaplastic oligodendroglioma, n=12) who had undergone resection, radiotherapy, alkylating chemotherapy, or combinations thereof. After a median follow-up time of 14 months (range, 1-214 months), anatomic MR and O-(2-[18F]fluoroethyl)-L-tyrosine (FET) PET images were acquired on a 3T hybrid PET/MR scanner. Post-therapeutic findings comprised resection cavities, regions with contrast enhancement or increased FET uptake, and T2/FLAIR hyperintensities. Local fiber density was determined from high angular-resolution diffusion-weighted imaging and advanced tractography methods. A cohort of 121 healthy subjects selected from the 1000BRAINS study, matched for age, gender, and education, served as a control group. Results: Lesion types differed in both affected tissue volumes and relative fiber densities compared to control values (resection cavities: median volume 20.9 mL, fiber density 16% of controls; contrast-enhanced lesions: 7.9 mL, 43%; FET uptake areas: 30.3 mL, 49%; T2/FLAIR hyperintensities: 53.4 mL, 57%; p<0.001). In T2/FLAIR-hyperintense lesions caused by peritumoral edema due to recurrent glioma (n=27), relative fiber density was as low as in lesions associated with radiation-induced gliosis (n=13; 48% vs. 53%, p=0.17). In regions with pathologically increased FET uptake, local fiber density was inversely related (p=0.005) to the extent of uptake. Total fiber loss associated with contrast-enhanced lesions (p=0.006) and T2/FLAIR-hyperintense lesions (p=0.013) had a significant impact on the overall ECOG score. Conclusions: These results suggest that, apart from resection cavities, the reduction in local fiber density is greatest in contrast-enhancing recurrent tumors, but total fiber loss induced by edema or gliosis has an equally detrimental effect on the patients' performance status due to the larger volume affected.