Intensive development of recreational construction has taken place in the Beskid Mountains in Southern Poland over the span of several decades, especially in the villages of Szczyrk, Wisła, and Brenna, due to the proximity of the industrial Silesian agglomeration. These buildings, constructed mostly since the 1970s, are heterogeneous in appearance and often do not reference traditional timber-and-stone sustainable architecture; instead, they replicate the esthetics found in contemporary single-family houses throughout Poland or abroad. Inconsistencies in building regulations have reinforced this approach, leading to a decline in the quality of both architecture and landscape. Although this situation has been widely discussed in public media, publications on this topic remain sporadic. This article therefore applies qualitative research to discuss the role of cultural identity in modern recreational architecture in the Beskid Mountains as it has affected the well-being of the citizens of Silesia since the 1930s. The unique contribution of this paper to Polish architectural and heritage research is threefold: it provides a structured framework for understanding the development of recreational architecture as a process, it explicitly links empirical field observations to theoretical frameworks (Frampton, Norberg-Schulz, Rapoport), and it proposes a general pathway for culturally sustainable design in the region.
This study proposes a systematic policy framework that leverages special tax measures to steer stakeholder behavior toward urban cultural heritage conservation. Integrating comparative policy analysis, microeconomic modeling, systematic policy framework construction, and case studies from China, we design a synergistic system of tax incentives and disincentives across income, consumption, and property taxes. The framework is contextualized within China’s forthcoming Cultural Heritage Conservation Law and demonstrates how fiscal instruments can align individual economic rationality with collective conservation goals. A three-stage decision model – grounded in Multi-Criteria Decision Analysis (MCDA) – is introduced to assess the suitability, necessity, and balancing of tax interventions. Based on the analysis of secondary sources and policy documents, empirical case studies in Suzhou and Tianshui are used to illustrate the framework’s efficacy and limitations. The findings offer a transferable model for sustainable urban governance, with relevance for rapidly urbanizing regions globally.
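The abstract does not spell out the decision model's internals, so the following is only a minimal weighted-sum MCDA sketch; the criteria names, weights, and instrument scores are hypothetical illustrations, not values from the study.

```python
# Minimal weighted-sum MCDA sketch (illustrative only): criteria, weights,
# and scores below are hypothetical, not taken from the paper.
criteria_weights = {"suitability": 0.4, "necessity": 0.35, "balance": 0.25}

# Candidate tax instruments scored 0-1 against each criterion (placeholder values).
instruments = {
    "income_tax_credit":    {"suitability": 0.8, "necessity": 0.6, "balance": 0.7},
    "consumption_tax_cut":  {"suitability": 0.6, "necessity": 0.7, "balance": 0.5},
    "property_tax_penalty": {"suitability": 0.5, "necessity": 0.9, "balance": 0.6},
}

def mcda_score(scores, weights):
    """Aggregate per-criterion scores into a single weighted-sum score."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(instruments.items(),
                key=lambda kv: mcda_score(kv[1], criteria_weights), reverse=True)
for name, scores in ranked:
    print(f"{name}: {mcda_score(scores, criteria_weights):.2f}")
```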
The increase in extreme weather underscores the critical need to combine innovative architectural, urban, and landscape design to render our cities more resilient. Conventional approaches, which rely heavily on energy-consuming, carbon dioxide-producing technology, often falter during extreme events, worsening climate challenges. A project in Melbourne exemplifies a shift towards nature-inspired, distributed designs implementing passive strategies of shading, ventilation, water capture, and evaporative cooling. It transformed underused urban spaces into “climate oases” connected through walkable ecological corridors to mitigate urban heat and flooding while providing social and recreational benefits. Its design combined architectural, urban, and ecological strategies in interconnected city ecologies involving buildings, landscapes, and human activities. Local climate adaptation could similarly inform architectural and urban strategies in other locations across the globe, drawing on the needs of each climate: tropical cities would benefit from embracing cross-ventilation and shade, arid regions from integrating cooling gardens and introverted dense layouts, temperate climates from seasonal strategies alternating rain and sun protection, while cold areas could optimize sun exposure and wind protection. A study of climate design principles across the architectural, urban, and landscape sections demonstrates that approaches tailored to specific climates outperform one-size-fits-all models, combining strategies to drive innovative urban ecologies that prioritize human and environmental well-being. While the Melbourne Cool Lines initiative exemplifies the integration of climate-sensitive urban and ecological approaches within existing urban areas, the typological study ignites discussions on how to carry these ideas into different contexts, transforming cities into resilient ecosystems that can better respond to changing climates.
Oguz Emrah Turgut, Mustafa Asker, Hayrullah Bilgeran Yesiloz
et al.
This theoretical research study proposes a novel hybrid algorithm that integrates an improved quasi-dynamic oppositional learning mutation scheme into the Mountain Gazelle Optimization method, augmented with chaotic sequences, for the thermo-economic design of a shell-and-tube heat exchanger operating with nanofluids. The Mountain Gazelle Optimizer is a recently developed metaheuristic algorithm that simulates the foraging behaviors of mountain gazelles. However, it suffers from premature convergence due to an imbalance between its exploration and exploitation mechanisms. A two-step improvement procedure is implemented to enhance the overall search efficiency of the original algorithm. The first step substitutes uniformly distributed random numbers with chaotic numbers to improve solution quality. The second step develops a novel manipulation equation that integrates different variants of quasi-dynamic oppositional learning search schemes, guided by an intelligently devised adaptive switch mechanism. The efficiency of the proposed algorithm is evaluated using challenging benchmark functions from various CEC competitions. Finally, the thermo-economic design of a shell-and-tube heat exchanger operated with different nanoparticles is solved by the proposed improved metaheuristic algorithm to obtain the optimal design configuration. The predictive results indicate that using water + SiO<sub>2</sub> instead of ordinary water as the working fluid on the tube side of the heat exchanger reduces the total cost by 16.3%, offering the most cost-effective design among the configurations compared. These findings demonstrate how biologically inspired metaheuristic algorithms can be successfully applied to engineering design.
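The exact manipulation equations and the adaptive switch mechanism are defined in the paper itself; the sketch below only illustrates, under assumed forms, the two generic ingredients named above: a logistic chaotic map replacing uniform random numbers, and a quasi-oppositional candidate drawn between the domain midpoint and the opposite point.

```python
import numpy as np

def logistic_map(x, r=4.0):
    """One step of the logistic map; yields chaotic numbers in (0, 1)."""
    return r * x * (1.0 - x)

def quasi_oppositional(x, lb, ub, rng):
    """Quasi-opposite point: uniform between the domain midpoint and the opposite of x."""
    mid = (lb + ub) / 2.0
    opposite = lb + ub - x
    lo, hi = np.minimum(mid, opposite), np.maximum(mid, opposite)
    return rng.uniform(lo, hi)

rng = np.random.default_rng(0)
lb, ub = np.full(5, -10.0), np.full(5, 10.0)
x = rng.uniform(lb, ub)            # a candidate gazelle position
c = rng.uniform(0.1, 0.9, size=5)  # chaotic state per dimension

for _ in range(3):                 # a few mutation steps (illustrative loop, not the full optimizer)
    c = logistic_map(c)            # chaotic numbers stand in for uniform random draws
    step = (ub - lb) * (c - 0.5) * 0.1
    trial = np.clip(x + step, lb, ub)
    qo = quasi_oppositional(trial, lb, ub, rng)
    # keep whichever candidate has the lower objective (sphere function used here as a stand-in)
    x = min((trial, qo, x), key=lambda v: float(np.sum(v**2)))
print(x)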
This study investigates the efficiency of electrocoagulation (EC) in removing Tartrazine Yellow (TY) azo dye from synthetic wastewater using aluminium electrodes. The effects of current density, <i>i</i> (0.008–0.024 A cm<sup>−2</sup>), initial solution pH (3.0–7.0), and treatment time, <i>t</i> (10–50 min) on key process parameters, including pH, temperature (<i>T</i>), TY dye concentration (<i>c</i>) and removal efficiency (<i>R</i>), anode consumption, and sludge characterisation were studied. The experiments were conducted in a batch reactor according to the experimental plan developed in Design-Expert software, which was also used for the evaluation of the obtained results. As the EC process progresses, the removal efficiency of the TY dye increases, while the removal dynamics and the final value of <i>R</i> (ranging from about 28% to 99%) depend on the experimental conditions (<i>i</i>, initial pH, and <i>t</i>). A high <i>R</i>-value is reached faster with the application of higher current densities and lower initial pH. This is associated with a higher proportion of carbon and sulphur in the sludge (from the TY dye) after the EC process. Additionally, a mathematical model was developed to predict the experimental data. A numerical optimisation method using response surface methodology (RSM) was applied to determine the optimal operating conditions for TY dye removal. This resulted in the following conditions: pH = 3.37, <i>t</i> = 18.74 min, and <i>i</i> = 0.016 A cm<sup>−2</sup>, achieving a removal efficiency of ≈70%.
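The paper's response-surface model comes from its own design matrix in Design-Expert; as a rough illustration of the RSM step, the sketch below fits a second-order polynomial in (pH, t, i) to hypothetical placeholder data and maximizes the predicted removal efficiency within the experimental ranges.

```python
import numpy as np
from scipy.optimize import minimize

def quad_features(X):
    """[1, x1, x2, x3, x1^2, x2^2, x3^2, x1x2, x1x3, x2x3] for each row."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])

rng = np.random.default_rng(1)
# Hypothetical (pH, t, i) runs and synthetic responses, NOT the paper's data.
X = rng.uniform([3.0, 10.0, 0.008], [7.0, 50.0, 0.024], size=(20, 3))
R = 70 - 5*(X[:, 0] - 3.4)**2 + 0.3*X[:, 1] + 800*X[:, 2] + rng.normal(0, 1, 20)

beta, *_ = np.linalg.lstsq(quad_features(X), R, rcond=None)   # least-squares fit

# Maximize the predicted removal efficiency within the experimental ranges.
res = minimize(lambda x: -float(quad_features(x[None, :]) @ beta),
               x0=[5.0, 30.0, 0.016],
               bounds=[(3.0, 7.0), (10.0, 50.0), (0.008, 0.024)])
print("optimum (pH, t, i):", res.x, "predicted R:", -res.fun)
```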
Robert W. Heath, Joseph Carlson, Nitish Vikas Deshpande
et al.
We present an evolution of multiple-input multiple-output (MIMO) wireless communications known as the tri-hybrid MIMO architecture. In this framework, the traditional operations of linear precoding at the transmitter are distributed across digital beamforming, analog beamforming, and reconfigurable antennas. Compared with the hybrid MIMO architecture, which combines digital and analog beamforming, the tri-hybrid approach introduces a third layer of electromagnetic beamforming through antenna reconfigurability. This added layer offers a pathway to scale MIMO spatial dimensions, important for 6G systems operating in centimeter-wave bands, where the tension between larger bandwidths and infrastructure reuse necessitates ultra-large antenna arrays. We introduce the key features of the tri-hybrid architecture by (i)~reviewing the benefits and challenges of communicating with reconfigurable antennas, (ii)~examining tradeoffs between spectral and energy efficiency enabled by reconfigurability, and (iii)~exploring configuration challenges across the three layers. Overall, the tri-hybrid MIMO architecture offers a new approach for integrating emerging antenna technologies in the MIMO precoding framework.
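As a toy illustration of the three-layer precoding chain, the numpy sketch below cascades a digital precoder, an analog phase-shifter precoder, and a reconfigurable-antenna (electromagnetic) layer modeled as a diagonal per-antenna response; the dimensions and this diagonal model are assumptions for illustration, not the paper's exact system model.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nrf, Ns, Nr = 64, 8, 4, 4          # transmit antennas, RF chains, streams, receive antennas

H = (rng.standard_normal((Nr, Nt)) + 1j*rng.standard_normal((Nr, Nt))) / np.sqrt(2)

F_bb = rng.standard_normal((Nrf, Ns)) + 1j*rng.standard_normal((Nrf, Ns))   # digital beamforming
F_rf = np.exp(1j * rng.uniform(0, 2*np.pi, (Nt, Nrf))) / np.sqrt(Nt)        # analog phase shifters
F_em = np.diag(np.exp(1j * rng.uniform(0, 2*np.pi, Nt)))                    # reconfigurable antennas (assumed diagonal)

s = (rng.standard_normal(Ns) + 1j*rng.standard_normal(Ns)) / np.sqrt(2)     # data streams
x = F_em @ F_rf @ F_bb @ s             # transmitted signal after all three precoding layers
y = H @ x                              # noiseless received signal
print(y.shape)
```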
Large-scale neuromorphic architectures consist of computing tiles that communicate spikes over a shared interconnect. The communication patterns in such systems are inherently sparse, asynchronous, and localized due to the spiking nature of neural events, and are characterized by temporal sparsity with occasional bursts of traffic. These characteristics necessitate interconnects optimized for handling high-activity bursts while consuming minimal power during idle periods. The dynamic segmented bus has been proposed as a promising interconnect for its simplicity, scalability, and low power consumption. However, deploying spiking neural network (SNN) applications on such buses presents challenges, including substantial inter-cluster traffic, which can lead to network congestion, spike loss, and unnecessary energy expenditure. In this paper, we propose a three-step process to deploy SNN applications on dynamic segmented buses, aiming to reduce spike loss and conserve energy. Firstly, we formulate optimization heuristics to mitigate spike loss and energy consumption based on application connectivity. Secondly, we analyze the application traffic to determine spike schedules that minimize traffic flooding. Lastly, we propose a routing algorithm to minimize spike traffic path crossings. We evaluate our approach using a cycle-accurate network simulator. The simulation results show that our algorithms can eliminate spike loss while keeping energy consumption significantly lower than that of conventional NoCs.
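The paper's connectivity-based heuristics are its own; in their spirit, the sketch below shows a generic greedy mapping that places heavily communicating neuron clusters on the same bus segment to reduce inter-segment spike traffic. The traffic matrix and segment capacity are hypothetical.

```python
import numpy as np

def greedy_map(traffic, n_segments, capacity):
    """traffic[i][j] = spikes/s from cluster i to cluster j; returns cluster -> segment."""
    n = len(traffic)
    placement, load = {}, [0] * n_segments
    # visit clusters in order of total traffic volume (heaviest first)
    for c in sorted(range(n), key=lambda i: -(traffic[i].sum() + traffic[:, i].sum())):
        best, best_cost = None, None
        for seg in range(n_segments):
            if load[seg] >= capacity:
                continue
            # cost = traffic exchanged with clusters already placed on *other* segments
            cost = sum(traffic[c, o] + traffic[o, c]
                       for o, s in placement.items() if s != seg)
            if best_cost is None or cost < best_cost:
                best, best_cost = seg, cost
        placement[c] = best
        load[best] += 1
    return placement

traffic = np.random.default_rng(0).integers(0, 100, size=(8, 8))
print(greedy_map(traffic, n_segments=2, capacity=4))
```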
As GPU architectures rapidly evolve to meet the growing demands of exascale computing and machine learning, the performance implications of architectural innovations remain poorly understood across diverse workloads. NVIDIA Blackwell (B200) introduces significant architectural advances, including fifth-generation tensor cores, tensor memory (TMEM), a decompression engine (DE), and a dual-chip design; however, systematic methodologies for quantifying these improvements lag behind hardware development cycles. We contribute an open-source microbenchmark suite that provides practical insights into optimizing workloads to fully utilize the rich feature sets of modern GPU architectures. This work enables application developers to make informed architectural decisions and guides future GPU design directions. We study Blackwell GPUs and compare them to the H200 generation with respect to the memory subsystem, tensor core pipeline, and floating-point precisions (FP32, FP16, FP8, FP6, FP4). Our systematic evaluation of dense and sparse GEMM, transformer inference, and training workloads shows that B200 tensor core enhancements achieve 1.85x ResNet-50 and 1.55x GPT-1.3B mixed-precision training throughput, with 32 percent better energy efficiency than H200.
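The actual open-source suite is the paper's contribution; purely to illustrate the style of measurement it describes, the sketch below times dense GEMM throughput at two precisions, assuming PyTorch and a CUDA-capable device are available.

```python
import time
import torch

def gemm_tflops(n=8192, dtype=torch.float16, iters=20):
    """Rough GEMM throughput in TFLOP/s for an n x n matrix multiply."""
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.matmul(a, b)                      # warm-up
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        torch.matmul(a, b)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - t0
    return iters * 2 * n**3 / elapsed / 1e12  # 2*n^3 FLOPs per GEMM

for dt in (torch.float32, torch.float16):
    print(dt, f"{gemm_tflops(dtype=dt):.1f} TFLOP/s")
```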
In this paper, we propose LoopLynx, a scalable dataflow architecture for efficient LLM inference that optimizes FPGA usage through a hybrid spatial-temporal design. In this hybrid design, computationally intensive operators are implemented as large dataflow kernels, achieving throughput comparable to a fully spatial architecture, while organizing and reusing these kernels temporally brings the FPGA closer to its peak performance. Furthermore, to overcome the resource limitations of a single device, we provide a multi-FPGA distributed architecture that overlaps and hides all data transfers so that the distributed accelerators are fully utilized. By doing so, LoopLynx can be scaled effectively to multiple devices to further exploit model parallelism for large-scale LLM inference. Evaluation on the GPT-2 model demonstrates that LoopLynx achieves performance comparable to state-of-the-art single-FPGA accelerators. In addition, compared to an Nvidia A100, our accelerator in a dual-FPGA configuration delivers a 2.52x speed-up in inference latency while consuming only 48.1% of the energy.
The end-to-end (E2E) architecture for the sixth generation of mobile networks (6G) necessitates a comprehensive design that considers emerging use cases (UCs), requirements, and key value indicators (KVIs). These UCs collectively impose stringent requirements of extreme connectivity, inclusivity, and flexibility on the architecture and its enablers. Furthermore, the trustworthiness and security of the 6G architecture must be enhanced compared to previous generations, owing to the expected increase in security threats and more complex UCs that may expose new security vulnerabilities. Additionally, sustainability emerges as a critical design consideration in the 6G architecture. In light of this new set of values and requirements for 6G, this paper describes an architecture proposed within Hexa-X, the European 6G flagship project, capable of enabling the above-mentioned 6G vision for the 2030s and beyond.
This paper presents our approach to accelerate computer architecture simulation by leveraging machine learning techniques. Traditional computer architecture simulations are time-consuming, making it challenging to explore different design choices efficiently. Our proposed model utilizes a combination of application features and micro-architectural features to predict the performance of an application. These features are derived from simulations of a small portion of the application. We demonstrate the effectiveness of our approach by building and evaluating a machine learning model that offers significant speedup in architectural exploration. This model demonstrates the ability to predict IPC values for the testing data with a root mean square error of less than 0.1.
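The paper's model and training data are its own; the sketch below only illustrates the general idea of regressing IPC on combined application and micro-architectural features, using synthetic placeholder data and an assumed model family (gradient-boosted trees via scikit-learn).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Synthetic placeholder features: the first two mimic application features,
# the last two mimic micro-architectural parameters.
X = np.column_stack([
    rng.uniform(0, 1, n),      # e.g. branch misprediction rate
    rng.uniform(0, 0.2, n),    # e.g. cache miss rate
    rng.integers(2, 9, n),     # e.g. issue width
    rng.integers(32, 257, n),  # e.g. ROB size
])
ipc = 0.5 * X[:, 2] * (1 - X[:, 0]) * (1 - 2 * X[:, 1]) + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, ipc, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"IPC prediction RMSE: {rmse:.3f}")
```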
The scalability and flexibility of the microservice architecture have led to major changes in cloud-native application architectures. However, the complexity of managing thousands of small services written in different languages, and of handling the exchange of data between them, has caused significant management challenges. A service mesh is a promising solution that can mitigate these problems by introducing an overlay layer on top of the services. In this paper, we first study the architecture and components of a service mesh. Then, we review two important service mesh implementations and discuss how the service mesh could be helpful in other areas, including 5G.
Zeolitic imidazolate framework-8 nanoparticles (ZIF-8 NPs) are typical metal–organic framework (MOF) materials and have been intensively studied for their potential application in drug delivery and environmental remediation. However, knowledge of their potential risks to health and the environment is still limited. Therefore, this study exposed female and male zebrafish to ZIF-8 NPs (0, 9.0, and 90 mg L<sup>−1</sup>) for four days. Subsequently, variations in their behavioral traits and brain oxidative stress levels were investigated. The behavioral assay showed that ZIF-8 NPs at 90 mg L<sup>−1</sup> could significantly decrease the locomotor activity (i.e., hypoactivity) of both sexes. After a ball-falling stimulation, zebrafish exposed to ZIF-8 NPs (9.0 and 90 mg L<sup>−1</sup>) exhibited more freezing states (i.e., temporary cessations of movement), and males were more sensitive than females. Regardless of sex, ZIF-8 NP exposure significantly reduced the SOD, CAT, and GST activities in the zebrafish brain. Correlation analysis revealed that the brain oxidative stress induced by ZIF-8 NP exposure might play an important role in their behavioral toxicity to zebrafish. These findings highlight the necessity for further assessment of the potential risks of MOF nanoparticles to aquatic species and the environment.
From the mid-1970s until the late 1980s, Angola hosted guerrillas fighting for the liberation of other southern African states, as well as Cuban and Soviet military advisors and civilian professionals. As the study of Cold War era liberation struggles has developed from nation-centred narratives towards both global and local perspectives, the international encounters that took place in the ambit of these struggles have attracted attention from several historians. In particular, the military training camps have come to be seen as an environment that nurtured specific kinds of social and political relationships, although little physical evidence of these camps remains. This article is based on photographs taken at Camalundu and Caculama, two sites in the Angolan Malanje province where the remains of camps are still visible. At Camalundu, Portuguese colonial architecture points to the original function of the site, while slogans painted in English and Spanish, variously referencing South African history and global revolutionary movements, bear witness to the presence of Cubans and South Africans, and provide evidence of how they saw their own role within the international politics of the day. At Caculama the secluded and defensive nature of the site and its installations provides evidence of the South African role in relation to Angolan strategic thinking. The photographs complement the existing memoirs and oral testimony about the politics of exile and about life in the camps, providing diverse evidence about the presence of liberation fighters and their relationships with the wider world. They also enable the preservation of a visual and tangible historical record which, in the absence of preservation measures, is in danger of decay beyond recognition.
Anirban Chaudhuri, Graham Pash, David A. Hormuth
et al.
We develop a methodology to create data-driven predictive digital twins for optimal risk-aware clinical decision-making. We illustrate the methodology as an enabler for anticipatory personalized treatment that accounts for uncertainties in the underlying tumor biology in high-grade gliomas, where heterogeneity in the response to standard-of-care (SOC) radiotherapy contributes to sub-optimal patient outcomes. The digital twin is initialized with prior distributions over the parameters of a mechanistic model, derived from population-level clinical data in the literature. The digital twin is then personalized using Bayesian model calibration to assimilate patient-specific magnetic resonance imaging data. The calibrated digital twin is used to propose optimal radiotherapy treatment regimens by solving a multi-objective, risk-based optimization-under-uncertainty problem. The solution yields a suite of patient-specific optimal radiotherapy treatment regimens exhibiting varying levels of trade-off between the two competing clinical objectives: (i) maximizing tumor control (characterized by minimizing the risk of tumor volume growth) and (ii) minimizing the toxicity from radiotherapy. The proposed digital twin framework is illustrated by generating an in silico cohort of 100 patients with high-grade glioma growth and response properties typically observed in the literature. For the same total radiation dose as the SOC, the personalized treatment regimens lead to a median increase in tumor time to progression of around six days. Alternatively, for the same level of tumor control as the SOC, the digital twin provides optimal treatment options that lead to a median reduction in radiation dose of 16.7% (10 Gy) relative to the SOC total dose of 60 Gy. The range of optimal solutions also provides options with increased doses for patients with aggressive cancer, for whom the SOC does not provide sufficient tumor control.
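The paper's risk-based multi-objective formulation relies on its calibrated mechanistic model; purely to illustrate the nature of the trade-off described above, the toy sketch below sweeps candidate total doses, estimates a progression risk over sampled patient-specific parameters and a simple toxicity proxy, and keeps the non-dominated options. The dose-response form, parameter samples, and toxicity proxy are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sensitivity = rng.normal(0.05, 0.01, size=1000)      # posterior-like samples of radiosensitivity (assumed)

def progression_risk(dose, sens):
    """Fraction of sampled patients whose tumor is not controlled at this dose (assumed dose-response)."""
    control_prob = 1.0 - np.exp(-sens * dose)
    return float(np.mean(control_prob < 0.9))

def toxicity(dose):
    return dose / 80.0                                # simple normalized toxicity proxy (assumed)

doses = np.arange(30, 81, 2)
points = [(int(d), progression_risk(d, sensitivity), toxicity(d)) for d in doses]

# keep the Pareto-optimal (progression risk, toxicity) pairs
pareto = [p for p in points
          if not any(q[1] <= p[1] and q[2] <= p[2] and q != p for q in points)]
for d, r, t in pareto:
    print(f"dose {d} Gy: progression risk {r:.2f}, toxicity {t:.2f}")
```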
This paper addresses the design of a partly-parallel cascaded FFT-IFFT architecture that does not require any intermediate buffer. Folding can be used to design partly-parallel architectures for FFT and IFFT. While many cascaded FFT-IFFT architectures can be designed using various folding sets for the FFT and the IFFT, for a specified folded FFT architecture, there exists a unique folding set to design the IFFT architecture that does not require an intermediate buffer. Such a folding set is designed by processing the output of the FFT as soon as possible (ASAP) in the folded IFFT. Elimination of the intermediate buffer reduces latency and saves area. The proposed approach is also extended to interleaved processing of multi-channel time-series. The proposed FFT-IFFT cascade architecture saves about N/2 memory elements and N/4 clock cycles of latency compared to a design with identical folding sets. For the 2-interleaved FFT-IFFT cascade, the memory and latency savings are, respectively, N/2 units and N/2 clock cycles, compared to a design with identical folding sets.
Rainer Liebhart, Mansoor Shafi, Gajan Shivanandan
et al.
Mobile communications have been undergoing a generational change every ten years. Whilst we are just beginning to roll out 5G networks, significant efforts are planned to standardize 6G, which is expected to be commercially introduced by 2030. This paper looks at the use cases for 6G and their impact on the network architecture required to meet the anticipated performance requirements. The new architecture is based on integrating various network functions in virtual cloud environments, leveraging advances in artificial intelligence in all domains, integrating the different sub-networks constituting the 6G system, and enhancing the means of exposing data and services to third parties.
With the vigorous development of space communication technology and the continuous advancement of the space-integrated-ground information network, resource management and control in satellite communication networks is becoming increasingly complex. Owing to the scarcity of satellite resources, the slowness of resource scheduling relative to status refreshing, and the uneven distribution of traffic, managing resources efficiently has become one of the urgent problems in the development of satellite communications. For the heterogeneous network architecture of high- and low-orbit satellites, the challenges facing its network resource management and control were analyzed. Building on the traditional management and control architecture, a collaborative management and control architecture based on group management was introduced. A management strategy for a satellite-network virtual resource pool was described to relieve resource scarcity. Resource scheduling algorithms based on deep reinforcement learning (DRL) were introduced to address the mismatch between traditional scheduling methods and complex environments. Beam-hopping technology was adopted to handle the two-dimensional unevenness of service distribution in time and space.
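The DRL scheduler and beam-hopping scheme are the paper's own; in their spirit, the toy tabular Q-learning sketch below lets an agent pick which cell to illuminate in each slot and rewards it by the backlog it serves under uneven traffic. Cell count, demand pattern, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, capacity = 4, 5.0
arrival_rate = np.array([1.0, 2.0, 3.0, 4.0])         # uneven traffic across cells (assumed)

def discretize(backlog, levels=4, max_q=20.0):
    """Map per-cell backlog to a small discrete state index."""
    bins = np.minimum((backlog / max_q * levels).astype(int), levels - 1)
    return int(np.ravel_multi_index(tuple(bins), (levels,) * len(bins)))

Q = np.zeros((4 ** n_cells, n_cells))
alpha, gamma, eps = 0.1, 0.9, 0.1
backlog = np.zeros(n_cells)

for step in range(20000):
    s = discretize(backlog)
    a = rng.integers(n_cells) if rng.random() < eps else int(Q[s].argmax())
    served = min(backlog[a], capacity)                 # serve the illuminated cell
    backlog[a] -= served
    backlog += rng.poisson(arrival_rate)               # new traffic arrives
    backlog = np.minimum(backlog, 20.0)                # finite buffers
    s2, r = discretize(backlog), served
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

print("cell most often preferred by the learned policy:", int(Q.mean(axis=0).argmax()))
```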