D-Legion: A Scalable Many-Core Architecture for Accelerating Matrix Multiplication in Quantized LLMs
Ahmed J. Abdelmaksoud, Cristian Sestito, Shiwei Wang
et al.
The performance gains obtained by large language models (LLMs) are closely linked to their substantial computational and memory requirements. Extremely quantized LLMs offer significant efficiency advantages, motivating the development of specialized architectures to accelerate their workloads. This paper proposes D-Legion, a novel scalable many-core architecture, built from many adaptive-precision systolic array cores, to accelerate matrix multiplication in quantized LLMs. The proposed architecture consists of a set of Legions, where each Legion contains a group of adaptive-precision systolic arrays. D-Legion supports multiple computation modes, including quantized sparse and dense matrix multiplication. Block-structured sparsity is exploited within fully-sparse or partially-sparse windows. In addition, memory accesses of partial summations (psums) are spatially reduced through parallel accumulators. Furthermore, data reuse is maximized through optimized scheduling techniques that multicast matrix tiles across the Legions. A comprehensive design space exploration is performed in terms of Legion/core granularity to determine the optimal Legion configuration. Moreover, D-Legion is evaluated on attention workloads from two BitNet models, delivering up to 8.2$\times$ lower latency, up to 3.8$\times$ higher memory savings, and up to 3$\times$ higher psum memory savings compared to state-of-the-art work. With eight Legions and 64 total cores, D-Legion achieves a peak throughput of 135.68 TOPS at a frequency of 1 GHz. A scaled version of D-Legion, with 32 Legions, is compared to the Google TPUv4i, achieving up to 2.5$\times$ lower total latency, up to 2.3$\times$ higher total throughput, and up to 2.7$\times$ higher total memory savings.
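The fully-sparse / partially-sparse window idea can be illustrated with a toy block-sparse matrix multiply that simply skips all-zero tiles. This is a hypothetical sketch of the general technique, not D-Legion's adaptive-precision datapath:

```python
import numpy as np

def block_sparse_matmul(A, B, block=4):
    """Compute C = A @ B while skipping 'block' x 'block' tiles of A that
    are entirely zero, mimicking block-structured sparsity exploitation."""
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N))
    for i in range(0, M, block):
        for k in range(0, K, block):
            tile = A[i:i+block, k:k+block]
            if not tile.any():  # fully-sparse window: no MACs, no psum traffic
                continue
            C[i:i+block, :] += tile @ B[k:k+block, :]
    return C
```

Skipping a fully-sparse window removes both the multiply-accumulate work and the psum traffic for that tile, which is the effect the sparsity windows and parallel accumulators target in hardware.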
An FPGA-Based SoC Architecture with a RISC-V Controller for Energy-Efficient Temporal-Coding Spiking Neural Networks
Mohammad Javad Sekonji, Ali Mahani, Maryam Mirsadeghi
et al.
Spiking Neural Networks (SNNs) offer high energy efficiency and event-driven computation, ideal for low-power edge AI. Their hardware implementation on FPGAs, however, faces challenges due to heavy computation, large memory use, and limited flexibility. This paper proposes a compact System-on-Chip (SoC) architecture for temporal-coding SNNs, integrating a RISC-V controller with an event-driven SNN core. It replaces multipliers with bitwise operations using binarized weights, includes a spike-time sorter for active spikes, and skips noninformative events to reduce computation. The architecture runs fully on a Xilinx Artix-7 FPGA, achieving up to 16x memory reduction for weights and lowering computational overhead and latency, with 97.0% accuracy on MNIST and 88.3% on FashionMNIST. This self-contained design provides an efficient, scalable platform for real-time neuromorphic inference at the edge.
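The multiplier-free trick for binarized weights can be sketched in a few lines: with values constrained to {-1, +1} and packed into machine words, a dot product reduces to XNOR plus popcount. This is a generic illustration of the technique, not the paper's hardware design:

```python
def popcount(x: int) -> int:
    """Number of set bits in a nonnegative integer."""
    return bin(x).count("1")

def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two {-1, +1} vectors packed as n-bit integers
    (bit = 1 encodes +1, bit = 0 encodes -1).
    matches = popcount(XNOR(a, w)); dot = 2 * matches - n."""
    matches = popcount(~(a_bits ^ w_bits) & ((1 << n) - 1))
    return 2 * matches - n

# Example: a = [+1, -1, +1], w = [+1, +1, -1]  ->  dot = 1 - 1 - 1 = -1
print(binary_dot(0b101, 0b110, 3))  # -1
```

Replacing each multiply-accumulate with a word-wide XNOR and a popcount is what removes the multipliers from the datapath.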
Optimising Architectural Studio Spaces: Applying a Psychological Needs Framework to Enhance Student Well-Being
Ezgi Bay-Sahin, Nadia Shah
This study investigates how architectural studio environments influence student well-being through the lens of Self-Determination Theory (SDT). According to SDT, the fulfilment of three core psychological needs, autonomy, competence, and relatedness, is essential for motivation and learning. Using a convergent mixed-methods case study, we examined the lived experiences of undergraduate interior architecture design students at Osmaniye Korkut Ata University (Türkiye). Data were collected through an online survey (n = 114), classroom observations, and a design exercise in which students reimagined their ideal studio space. Survey results revealed consistent concerns about spatial inflexibility, inadequate lighting, and insufficient equipment, which students perceived as undermining their autonomy and competence. Observations confirmed these limitations, while design proposals emphasised flexible layouts, individualised workstations, improved lighting, and informal gathering spaces to foster relatedness and collaboration. By triangulating quantitative and qualitative data, the study demonstrates how deficiencies in current studio design hinder learning outcomes while also identifying strategies to create environments that support psychological well-being. The findings provide evidence-based recommendations for aligning architecture studio design with SDT principles, offering practical guidance for institutions seeking to create learning environments that foster student motivation, engagement, and well-being.
History of scholarship and learning. The humanities, Social Sciences
Designing Spatial Architectures for Sparse Attention: STAR Accelerator via Cross-Stage Tiling
Huizheng Wang, Taiquan Wei, Hongbin Wang
et al.
Large language models (LLMs) rely on self-attention for contextual understanding, demanding high-throughput inference and large-scale token parallelism (LTPP). Existing dynamic sparsity accelerators falter under LTPP scenarios due to stage-isolated optimizations. Revisiting the end-to-end sparsity acceleration flow, we identify an overlooked opportunity: cross-stage coordination can substantially reduce redundant computation and memory access. We propose STAR, a cross-stage compute- and memory-efficient algorithm-hardware co-design tailored for Transformer inference under LTPP. STAR introduces a leading-zero-based sparsity prediction using log-domain add-only operations to minimize prediction overhead. It further employs distributed sorting and a sorted updating FlashAttention mechanism, guided by a coordinated tiling strategy that enables fine-grained stage interaction for improved memory efficiency and latency. These optimizations are supported by a dedicated STAR accelerator architecture, achieving up to 9.2$\times$ speedup and 71.2$\times$ energy efficiency over A100, and surpassing SOTA accelerators by up to 16.1$\times$ energy and 27.1$\times$ area efficiency gains. Further, we deploy STAR onto a multi-core spatial architecture, optimizing dataflow and execution orchestration for ultra-long sequence processing. Architectural evaluation shows that, compared to the baseline design, Spatial-STAR achieves a 20.1$\times$ throughput improvement.
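The leading-zero idea can be illustrated in software: the bit position of the leading one approximates log2 of a magnitude, so product magnitudes can be estimated with additions only. A hypothetical sketch of the principle (STAR's actual predictor operates on quantized Q/K tiles in hardware, with different details):

```python
def ilog2(x: int) -> int:
    """Integer log2 from the leading-one position (bit_length - 1);
    zero maps to a large negative sentinel standing in for -inf."""
    return x.bit_length() - 1 if x > 0 else -16

def approx_log_score(q_row, k_row):
    """Add-only, log-domain proxy for the magnitude of dot(q, k):
    log2|q_i * k_i| ~= ilog2(|q_i|) + ilog2(|k_i|), so each multiply
    becomes an add; the dominant term serves as a cheap ranking score."""
    return max(ilog2(abs(q)) + ilog2(abs(k)) for q, k in zip(q_row, k_row))
```

Such a score is only used to rank candidate keys for sparsity prediction; the exact attention scores are still computed for the retained entries.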
FlexSAN: A Flexible Regenerative Satellite Access Network Architecture
Weize Kong, Chaoqun You, Xuming Pei
et al.
The regenerative satellite access network (SAN) architecture deploys next-generation NodeBs (gNBs) on satellites to enable enhanced network management capabilities. It supports two types of regenerative payloads: the on-board gNB and the on-board gNB-Distributed Unit (gNB-DU). Measurement results based on our prototype implementation show that the on-board gNB offers lower latency, while the on-board gNB-DU is more cost-effective, so there is often a trade-off between Quality-of-Service (QoS) and operational expenditure (OPEX) when choosing between the two payload types. However, current SAN configurations are static and inflexible -- either deploying the full on-board gNB or only the on-board gNB-DU. This rigidity can lead to resource waste or poor user experiences. In this paper, we propose Flexible SAN (FlexSAN), an adaptive satellite access network architecture that dynamically configures the optimal regenerative payload based on real-time user demands. FlexSAN selects the lowest-OPEX payload configuration when all user demands are satisfied, and otherwise maximizes the number of admitted users while ensuring QoS for connected users. To address the computational complexity of dynamic payload selection, we design an adaptive greedy heuristic algorithm. Extensive experiments validate FlexSAN's effectiveness, showing a 36.1% average improvement in user admission rates and a 15% OPEX reduction over static SANs.
A practical NbS framework for ecological landscape design: The Pınarbaşı example
Gizem Dinç
Recreation and leisure activities have become vital components for enhancing social welfare and overall quality of life in contemporary urban and rural environments. Within this context, natural areas play a crucial role in strengthening social interaction, maintaining ecological resilience, and supporting public health. The growing interest in nature-based recreation highlights the need for design strategies that integrate ecological preservation with user-oriented functionality. This study develops a landscape design model explicitly grounded in Nature-Based Solutions (NbS), positioning NbS as the core conceptual and methodological framework of the project. Conducted in the Pınarbaşı Public Garden located in the Şarkikaraağaç district of Isparta, Türkiye, the research demonstrates how NbS principles can be operationalized in ecological landscape design to address environmental, social, and functional needs simultaneously. The design process was structured around a comprehensive analysis of topography, vegetation, hydrology, land use, and social dynamics. The proposed design integrates ecological services such as shading, water management, carbon sequestration, and soil protection with multifunctional recreational facilities including picnic areas, bicycle paths, wooden bridges, and local product stands. Sustainable materials, minimal intervention strategies, and universal accessibility standards were prioritized throughout the design process. The findings demonstrate that NbS-based landscape design enhances ecological continuity, supports local identity, and strengthens the interaction between humans and nature. This case study offers a replicable model for developing resilient recreational landscapes that contribute to environmental sustainability and community well-being.
A carbon-centric evaluation framework for building-integrated agriculture: a comparison of three farm types and building standards
Mohamed Imam, Alesandros Glaros, Cheney Chen
et al.
This paper explores the potential of Building-Integrated Agriculture (BIA) as a strategy to align urban agriculture systems with building lifecycle sustainability goals. BIA systems such as indoor vertical farms, rooftop greenhouses, and soil-based urban farms promise to bolster urban food security and resource circularity. However, their environmental impacts can be further optimized via integration with building resources and strategic design, which requires a standardized framework for evaluating life-cycle metrics. This study develops a cross-industry Life Cycle Assessment (LCA) framework that harmonizes agricultural and building performance indicators, using carbon as a unifying metric to evaluate operational and embodied impacts. The research combines a meta-analysis of existing LCA studies, detailed case study evaluations, and novel paired metrics to quantify energy use, water use, and greenhouse gas emissions within a case study. Key findings identify operational carbon hotspots, infrastructure inefficiencies, and embodied carbon challenges while highlighting opportunities for integrating resource recovery strategies, such as greywater reuse and waste heat recovery. The results reveal trade-offs between productivity and environmental impact, with vertical farms demonstrating high yields but significant energy intensity, while soil-based systems excel in resource efficiency but exhibit lower output. This work introduces a structured methodology for cross-industry data integration and offers actionable insights for designers, growers and developers. By redefining system boundaries and incorporating reciprocal benefits between BIA and host buildings, this framework provides a pathway toward more sustainable urban agricultural practices and resilient urban ecosystems.
Nutrition. Foods and food supply, Food processing and manufacture
Thermal and hydrodynamic characteristics of Therminol VP-1 oil flow across perforated conical hollow turbulence promoter in Scheffler dish receiver tube
Anil Kumar, Ram Kunwer, Nikhil Kanojia
et al.
This study examines the thermal and hydrodynamic characteristics of Therminol VP-1 oil flow through perforated conical hollow-type turbulence promoters installed in a solar Scheffler dish collector receiver tube, using computational fluid dynamics (CFD) analysis. The research examines these configurations using the RNG k-ε turbulence model with conventional wall functions. Simulations are conducted at Reynolds numbers ranging from 3000 to 15,000, with the relative perforation ratio of the conical hollow-type turbulence promoters (Per_ID/Per_OD) varying from 2.11 to 2.33, the relative turbulence promoter pitch (P_TP/D_tube) spanning from 2.25 to 3.08, and the relative turbulence promoter diameter (DB_inlet/DB_outlet) held constant at 2.0, to evaluate heat transfer and friction factor characteristics. An experimental analysis has been conducted on a solar Scheffler dish collector receiver using a plain tube with Therminol VP-1 as the heat transfer fluid to validate the CFD results for the current study. Moreover, the CFD results have been verified through a comparison with a conventional-surface solar Scheffler dish collector receiver tube utilizing Therminol VP-1 as the heat transfer fluid. This comparison encompassed theoretical relationships and empirical data pertaining to the Nusselt number and friction factor. The CFD results for the plain-surface solar receiver tube demonstrated close alignment with experimental data and theoretical predictions based on the standard Dittus-Boelter and Blasius equations, exhibiting reasonable deviation throughout the analyzed range. Overall, the CFD results demonstrate that Therminol VP-1, combined with perforated conical hollow-type turbulence promoters, improves thermal efficiency, providing an effective approach for enhancing Scheffler dish receiver tubes while limiting excess pressure losses.
According to the thermal and hydraulic performance data, hollow-type conical turbulence promoters enhanced heat transfer, with the best performance achieved at a Per_ID/Per_OD of 2.25 and a P_TP/D_tube of 2.83.
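The two validation baselines named above are standard closed-form correlations; a minimal sketch (the Reynolds and Prandtl numbers, hence the fluid properties, are assumed inputs):

```python
def dittus_boelter(Re: float, Pr: float, heating: bool = True) -> float:
    """Dittus-Boelter correlation for turbulent tube flow:
    Nu = 0.023 * Re^0.8 * Pr^n, with n = 0.4 for heating, 0.3 for cooling."""
    n = 0.4 if heating else 0.3
    return 0.023 * Re**0.8 * Pr**n

def blasius_friction(Re: float) -> float:
    """Blasius smooth-tube friction factor: f = 0.316 * Re^-0.25."""
    return 0.316 * Re**-0.25
```

For example, at Re = 10,000 the Blasius correlation gives f = 0.316 / 10 = 0.0316, the kind of plain-tube reference value the CFD results are checked against.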
GBsim: A Robust GCN-BERT Approach for Cross-Architecture Binary Code Similarity Analysis
Jiang Du, Qiang Wei, Yisen Wang
et al.
Recent advances in graph neural networks have transformed structural pattern learning in domains ranging from social network analysis to biomolecular modeling. Nevertheless, practical deployments in mission-critical scenarios such as binary code similarity detection face two fundamental obstacles: first, the inherent noise in graph construction, exemplified by incomplete control flow edges during binary function recovery; second, the substantial distribution discrepancies caused by cross-architecture instruction set variations. Conventional GNN architectures suffer severe performance degradation under such low signal-to-noise-ratio conditions and cross-domain operational environments, particularly in security-sensitive vulnerability identification tasks where feature instability or domain shifts could trigger critical false judgments. To address these challenges, we propose GBsim, a novel approach that combines graph neural networks with natural language processing. GBsim employs a cross-architecture language model to transform binary functions into semantic graphs, leverages a multilayer GCN for structural feature extraction, and uses a Transformer layer to integrate semantic information, generating robust cross-architecture embeddings that maintain high performance despite significant distribution shifts. Extensive experiments on a large-scale cross-architecture dataset show that GBsim achieves an MRR of 0.901 and a Recall@1 of 0.831, outperforming state-of-the-art methods. In real-world vulnerability detection tasks, GBsim achieves an average recall of 81.3% on a 1-day vulnerability dataset, outperforming existing methods by 2.1% and demonstrating its practical effectiveness in identifying security threats. This performance advantage stems from GBsim's ability to maximize information preservation across architectural boundaries, enhancing model robustness in the presence of noise and distribution shifts.
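For reference, the multilayer GCN component follows the standard symmetric-normalized propagation rule; a minimal NumPy sketch of one layer (GBsim's actual model, dimensions, and training setup are not reproduced here):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = relu(D^-1/2 (A + I) D^-1/2 H W),
    where A is the adjacency matrix, H the node features, W the weights."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees (with self-loops)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

The symmetric normalization is what keeps repeated propagation numerically stable as layers are stacked to widen the structural receptive field.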
Assessment of Ecological Carrying Capacity and Spatiotemporal Evolution Analysis for Arid Areas Based on the AHP-EW Model: A Case Study of Urumqi, China
Xiaoyan Tang, Funan Liu, Xinling Hu
et al.
Ecological carrying capacity (ECC) is central to assessing the sustainability of ecosystems, aiming to quantify the limits of natural systems to support human activities while maintaining biodiversity and resource regeneration. To assess ECC, earlier studies typically used the analytic hierarchy process (AHP) for modeling. In this study, we developed an AHP-EW method that combines AHP with the entropy weight method, considered important indicators including land use, vegetation, soil, location, topography, climate, and socio-economics, and constructed an ECC evaluation system. The new AHP-EW method was applied to analyze the spatiotemporal ECC patterns in Urumqi from 2000 to 2020. The results showed a general decreasing trend in ECC over the period 2000–2020. In particular, ECC decreased significantly, by 19.05%, from 2000 to 2010. After 2010, the rate of decline in ECC slowed to 14.12% due to ecological conservation policies. In addition, Midong District, Dabancheng District, and Urumqi County exhibited lower ECC. Still, in general, the distribution of ECC across districts and counties showed a trend of decreasing ECC in low-ECC areas and increasing ECC in high-ECC areas. Cluster analysis showed that ECC improved in ecological reserve areas, while some built-up areas showed a decrease in ECC due to economic development and human activities. Driving-factor analysis showed that NDVI, climate change, and land-use conversion are the key factors influencing the change in ECC in Urumqi. This study provides new ideas and technical support for ECC assessment in arid areas, which can help formulate more effective ecological protection strategies and promote the healthy and stable development of regional ecosystems.
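The entropy weight (EW) half of AHP-EW is a standard objective weighting scheme: indicators whose values vary more across samples carry more information and receive larger weights. A minimal sketch, assuming positive, larger-is-better indicator values:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an (m samples, k indicators) matrix X
    of positive values: p_ij = x_ij / sum_i x_ij,
    e_j = -sum_i p_ij ln(p_ij) / ln(m), and w_j proportional to 1 - e_j."""
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    P = X / X.sum(axis=0)                       # column-normalize
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)          # entropy per indicator
    d = 1.0 - e                                 # divergence (information)
    return d / d.sum()                          # normalized weights
```

In AHP-EW, such data-driven weights are typically combined with the subjective AHP weights, for instance by normalizing their product per indicator.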
Optimal Operation Strategy of Cascade Hydro-Wind-Solar-Pumped Storage Complementary System Considering Flexible Regulation Ability
XIA Jinlei, TANG Yijie, WANG Lingling, JIANG Chuanwen, GU Jiu
In the context of “carbon peaking and carbon neutrality”, the large-scale integration and consumption of wind and solar resources is an inevitable trend in future energy development. However, as the capacity of integrated wind and solar power increases, the power system also requires more flexible resources to ensure secure operation. To investigate the flexible regulation of hydropower in the system, this study focuses on the downstream stations of the hydro-wind-solar-pumped storage clean energy base in the Yalong River Basin. Considering its flexible regulation capabilities, the study develops a day-ahead optimized operational strategy for the complementary system. First, to address the challenges of site selection and the high costs associated with independent pumped storage, steady-state models for hybrid pumped storage stations in a cascade hydro-wind-solar-pumped storage system are established. To overcome the limitations of traditional models, such as low predictive accuracy and the subjective selection of long short-term memory (LSTM) hyperparameters, the particle swarm optimization (PSO) algorithm is used to tune the LSTM hyperparameters, and the optimized LSTM model is then used to forecast the output of wind and solar power. Next, to fully harness the flexible regulation potential of the complementary system, a multi-objective optimal dispatching model is developed, considering the economic benefits and flexible regulation margin of the complementary system over the day-ahead horizon. The normal boundary intersection (NBI) method is employed to solve the multi-objective problem, obtaining evenly distributed Pareto-optimal solutions. Finally, case studies are conducted based on the actual conditions of the Yalong River Basin. By analyzing different scenarios, the effectiveness of the proposed model and the supportive role of pumped storage in enhancing system flexibility are validated.
The results demonstrate that the proposed approach not only balances system profits but also fully exploits the flexible regulation potential of the system, ensuring stable operation of the system.
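The PSO-LSTM step can be pictured with a generic particle swarm optimizer searching a box-bounded space; in the paper's setting, `f` would be the LSTM validation loss as a function of its hyperparameters (the quadratic objective in the usage note below is only a stand-in):

```python
import random

def pso(f, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer minimizing f over box bounds
    [(lo, hi), ...]; returns (best position, best value)."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][k] = (w * vel[i][k]
                             + c1 * r1 * (pbest[i][k] - pos[i][k])
                             + c2 * r2 * (gbest[k] - pos[i][k]))
                pos[i][k] = min(max(pos[i][k] + vel[i][k], bounds[k][0]), bounds[k][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For LSTM tuning, each particle coordinate would encode one hyperparameter (e.g. hidden size, learning rate), replacing the manual, subjective selection the abstract criticizes.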
Engineering (General). Civil engineering (General), Chemical engineering
Energy-Efficiency Architectural Enhancements for Sensing-Enabled Mobile Networks
Filipe Conceicao, Filipe B. Teixeira, Luis M. Pessoa
et al.
Sensing will be a key technology in 6G networks, enabling a plethora of new sensing-enabled use cases. Some of these use cases involve deployments over a wide physical area that needs to be sensed by multiple sensing sources at different locations. Efficient management of the sensing resources is pivotal for sustainable sensing-enabled mobile network designs. In this paper, we provide an example of such a use case and show that the energy consumption due to sensing can scale to prohibitive levels. We then propose architectural enhancements to solve this problem and discuss energy-saving and energy-efficiency strategies in sensing that can only be properly quantified and applied with the proposed architectural enhancements.
Accelerating PageRank Algorithmic Tasks with a new Programmable Hardware Architecture
Md Rownak Hossain Chowdhury, Mostafizur Rahman
Addressing the growing demands of artificial intelligence (AI) and data analytics requires new computing approaches. In this paper, we propose a reconfigurable hardware accelerator designed specifically for AI and data-intensive applications. Our architecture features a messaging-based intelligent computing scheme that allows for dynamic programming at runtime using a minimal instruction set. To assess our hardware's effectiveness, we conducted a case study in TSMC 28nm technology node. The simulation-based study involved analyzing a protein network using the computationally demanding PageRank algorithm. The results demonstrate that our hardware can analyze a 5,000-node protein network in just 213.6 milliseconds over 100 iterations. These outcomes signify the potential of our design to achieve cutting-edge performance in next-generation AI applications.
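For reference, the PageRank computation being accelerated is the classic damped power iteration; a plain-Python sketch (the messaging-based hardware scheme itself is not reproduced here):

```python
def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank. adj maps every node to its list of
    out-neighbors (every node must appear as a key)."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}   # teleport term
        for u, outs in adj.items():
            if outs:
                share = d * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:                               # dangling node: spread evenly
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank
```

The per-edge "share" messages map naturally onto a messaging-based architecture, which is why PageRank is a common benchmark for such designs.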
ADAFT: A Storage Architecture for Large-Scale SDN Flow Tables Based on Adaptive Deep Aggregation
XIONG Bing, YUAN Yue, ZHAO Jinyuan
et al.
To address the shortage of ternary content addressable memory (TCAM) resources in the data plane of software-defined networking (SDN), a deep flow table aggregation method based on content entry trees was proposed, and a storage architecture for large-scale SDN flow tables named ADAFT was established. The architecture relaxes the Hamming distance requirement between aggregated flow entries and constructs a content entry tree to aggregate flow entries with different action sets, significantly enhancing the aggregation degree of flow tables. A dynamic limitation mechanism for the height of content entry trees, based on awareness of the TCAM load ratio, was then designed to minimize the lookup overhead of aggregated flow tables. Meanwhile, an adaptive selection strategy for flow entry aggregation was presented in light of the TCAM load ratio, to strike a balance between the aggregation degree and lookup overhead of flow tables. Experimental results indicate that the ADAFT architecture achieves flow table compression ratios of up to 65.74%, much higher than existing methods.
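The classic TCAM aggregation baseline that ADAFT relaxes merges ternary match strings at Hamming distance one into a single wildcarded entry; a minimal sketch of that baseline rule (ADAFT's content entry trees go further by also aggregating entries with different action sets):

```python
def try_aggregate(e1: str, e2: str):
    """Merge two ternary match strings (characters '0', '1', '*') that
    differ in exactly one non-wildcard bit into one wildcarded entry;
    return None if they cannot be merged."""
    if len(e1) != len(e2):
        return None
    diff = [i for i, (a, b) in enumerate(zip(e1, e2)) if a != b]
    if len(diff) != 1 or '*' in (e1[diff[0]], e2[diff[0]]):
        return None
    i = diff[0]
    return e1[:i] + '*' + e1[i+1:]  # one TCAM entry instead of two
```

Each successful merge halves the TCAM footprint of the pair, which is the resource the compression ratio in the abstract measures.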
Deep neural network-based prediction of tsunami wave attenuation by mangrove forests
Didit Adytia, Dede Tarwidi, Deni Saepudin
et al.
The goal of this research is to develop a model employing deep neural networks (DNNs) to predict the effectiveness of mangrove forests in attenuating the impact of tsunami waves. The dataset for the DNN model is obtained by simulating tsunami wave attenuation using the Boussinesq model with a staggered grid approximation. The Boussinesq model for wave attenuation is validated against laboratory experiments, exhibiting a mean absolute error (MAE) ranging from 0.003 to 0.01. We employ over 40,000 data points generated from the Boussinesq numerical simulations to train the DNN. Efforts are made to optimize hyperparameters and determine the neural network architecture to attain optimal performance during the training process. The prediction results of the DNN model exhibit a coefficient of determination (R²) of 0.99560, an MAE of 0.00118, a root mean squared error (RMSE) of 0.00151, and a mean absolute percentage error (MAPE) of 3%. When comparing the DNN model with three alternative machine learning models, namely support vector regression (SVR), multiple linear regression (MLR), and extreme gradient boosting (XGBoost), the performance of DNN is superior to that of SVR and MLR, but similar to that of XGBoost.
• High-accuracy DNN models require hyperparameter optimization and neural network architecture selection.
• The error of DNN models in predicting the attenuation of tsunami waves by mangrove forests is less than 3%.
• DNN can serve as an alternate predictive model to empirical formulas or classical numerical models.
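The error metrics quoted above have standard definitions; minimal reference implementations (MAPE assumes nonzero true values):

```python
import math

def mae(y, p):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def rmse(y, p):
    """Root mean squared error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, p)) / len(y))

def mape(y, p):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - b) / a) for a, b in zip(y, p)) / len(y)
```

RMSE penalizes large outliers more heavily than MAE, which is why both are usually reported together for wave-attenuation predictions.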
Applying a transformer architecture to intraoperative temporal dynamics improves the prediction of postoperative delirium
Niklas Giesa, Maria Sekutowicz, Kerstin Rubarth
et al.
Background: Patients who experience postoperative delirium (POD) are at higher risk of poor outcomes such as dementia or death. Previous machine learning models predicting POD mostly relied on time-aggregated features. We aimed to assess the potential of temporal patterns in clinical parameters during surgeries to predict POD.
Methods: Long short-term memory (LSTM) and transformer models, directly consuming time series, were compared to multi-layer perceptrons (MLPs) trained on time-aggregated features. We also fitted hybrid models, fusing either LSTM or transformer models with MLPs. Univariate Spearman's rank correlations and linear mixed-effect models established the importance of individual features, which we compared to the transformers' attention weights.
Results: The best performance was achieved by a transformer architecture ingesting 30 min of intraoperative parameter sequences. Systolic invasive blood pressure and administered opioids were the most important input variables, in line with the univariate feature importances.
Conclusions: Intraoperative temporal dynamics of clinical parameters, exploited by a transformer architecture named TRAPOD, are critical for the accurate prediction of POD.
AutoML for neuromorphic computing and application-driven co-design: asynchronous, massively parallel optimization of spiking architectures
Angel Yanguas-Gil, Sandeep Madireddy
In this work we extend AutoML-inspired approaches to the exploration and optimization of neuromorphic architectures. By integrating a parallel asynchronous model-based search approach with a framework for simulating spiking architectures, we are able to efficiently explore the configuration space of neuromorphic architectures and identify the subset of conditions leading to the highest performance in a targeted application. We demonstrate this approach on an exemplar real-time, on-chip learning application. Our results indicate that such search approaches can effectively optimize complex architectures, providing a viable pathway towards application-driven codesign.
CHARM: Composing Heterogeneous Accelerators for Matrix Multiply on Versal ACAP Architecture
Jinming Zhuang, Jason Lau, Hanchen Ye
et al.
Dense matrix multiply (MM) serves as one of the most heavily used kernels in deep learning applications. To cope with the high computation demands of these applications, heterogeneous architectures featuring both FPGA and dedicated ASIC accelerators have emerged as promising platforms. For example, the AMD/Xilinx Versal ACAP architecture combines general-purpose CPU cores and programmable logic with AI Engine processors optimized for AI/ML. With 400 AIEs, it provides up to 6.4 TFLOPs performance for 32-bit floating-point data. However, machine learning models often contain both large and small MM operations. While large MM operations can be parallelized efficiently across many cores, small MM operations typically cannot. We observe that executing some small MM layers from the BERT natural language processing model on a large, monolithic MM accelerator in Versal ACAP achieved less than 5% of the theoretical peak performance. Therefore, one key question arises: How can we design accelerators to fully use the abundant computation resources under limited communication bandwidth for applications with multiple MM layers of diverse sizes? We identify the biggest system throughput bottleneck resulting from the mismatch of massive computation resources of one monolithic accelerator and the various MM layers of small sizes in the application. To resolve this problem, we propose the CHARM framework to compose multiple diverse MM accelerator architectures working concurrently towards different layers in one application. We deploy the CHARM framework for four different applications, including BERT, ViT, NCF, MLP, on the AMD Versal ACAP VCK190 evaluation board. Our experiments show that we achieve 1.46 TFLOPs, 1.61 TFLOPs, 1.74 TFLOPs, and 2.94 TFLOPs inference throughput for BERT, ViT, NCF and MLP, which obtain 5.40x, 32.51x, 1.00x and 1.00x throughput gains compared to one monolithic accelerator.
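The under-utilization of a monolithic array on small MM layers can be estimated with a toy occupancy model: a tile occupies the whole array for its duration even when the layer is smaller than the array. A hypothetical sketch (the array and layer sizes below are illustrative, not the paper's measurements):

```python
import math

def mm_utilization(M, K, N, rows, cols):
    """Fraction of PE-cycles doing useful MACs when an M x K x N matmul
    is tiled onto a rows x cols output-stationary array (toy model: each
    output tile occupies the full array for K cycles)."""
    tiles = math.ceil(M / rows) * math.ceil(N / cols)
    useful = M * K * N                      # total MACs in the layer
    provisioned = tiles * rows * cols * K   # PE-cycles the array spends
    return useful / provisioned
```

For example, a 32x64x32 layer mapped onto a 256x256 array fills only 1024 of 65536 output positions, about 1.6% utilization, consistent with the under-5% observation that motivates composing several smaller accelerators in CHARM.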
Development of E-Service Provision System Architecture Based on IoT and WSNs for Monitoring and Management of Freight Intermodal Transportation
Dalė Dzemydienė, Aurelija Burinskienė, Kristina Čižiūnienė
et al.
The development of intelligent service provision systems faces difficulties in representing the dynamic aspects of cargo transportation processes and in integrating heterogeneous ICT components to support the systems' necessary functionality. This research aims to develop the architecture of an e-service provision system that can help in traffic management and in the coordination of works at trans-shipment terminals, and that provides intellectual service support during intermodal transportation cycles. The objectives concern the secure application of Internet of Things (IoT) technology and wireless sensor networks (WSNs) to monitor transport objects and recognize context data. Means for the safe recognition of moving objects, integrated with the IoT and WSN infrastructure, are proposed. The architecture of the e-service provision system is presented, and algorithms for the identification, authentication, and secure connection of moving objects to an IoT platform are developed. The application of blockchain mechanisms for identifying the stages of moving-object identification is described through an analysis of ground transport. The methodology combines a multi-layered analysis of intermodal transportation with extensible mechanisms for object identification and methods for synchronizing interactions between the various components. The adaptable properties of the e-service provision system architecture are validated in experiments with NetSIM network modelling laboratory equipment, which demonstrate their usability.
Prediction of surface roughness based on fused features and ISSA-DBN in milling of die steel P20
Miaoxian Guo, Jin Zhou, Xing Li
et al.
The roughness of the part surface is one of the most crucial standards for evaluating machining quality due to its relationship with service performance. For a better understanding of the evolution of surface roughness, this study proposes a novel surface roughness prediction model based on the fusion of signal features with a deep learning architecture. The force and vibration signals produced in the milling of P20 die steel are collected, and time- and frequency-domain features are extracted from the acquired signals by variational modal decomposition. The GA-MI algorithm is used to select the signal features that are relevant to the surface roughness of the workpiece. The optimal feature subset is analyzed and used as the input of the prediction model. A DBN is adopted to estimate the surface roughness, and the model parameters are optimized by ISSA. The separate force, vibration, and fused signal information are fed into the DBN and ISSA-DBN models for the prediction of surface roughness; the resulting accuracies are 78.1%, 68.8%, and 84.4% for DBN, and 93.8%, 87.5%, and 100% for ISSA-DBN, respectively.