P. Langley, J. Laird, Seth Rogers
Results for "Architecture"
Showing 20 of ~2,886,201 results · from DOAJ, CrossRef, arXiv, Semantic Scholar
I. F. Akyildiz, E. Ekici, Gaofeng Yue
Charles Eckert, Xiaowei Wang, Jingcheng Wang et al.
This paper presents the Neural Cache architecture, which re-purposes cache structures to transform them into massively parallel compute units capable of running inferences for Deep Neural Networks. Techniques for in-situ arithmetic in SRAM arrays, efficient data mapping, and reduced data movement are proposed. The Neural Cache architecture is capable of fully executing convolutional, fully connected, and pooling layers in-cache. The proposed architecture also supports quantization in-cache. Our experimental results show that the proposed architecture can improve inference latency by 8.3× over a state-of-the-art multi-core CPU (Xeon E5) and 7.7× over a server-class GPU (Titan Xp) for the Inception v3 model. Neural Cache improves inference throughput by 12.4× over the CPU (2.2× over the GPU), while reducing power consumption by 50% over the CPU (53% over the GPU).
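The in-situ SRAM arithmetic behind architectures like Neural Cache is typically bit-serial: each bitline column acts as a one-bit ALU, and multi-bit operations are composed from single-bit steps executed across all columns in parallel. A minimal NumPy emulation of the principle (illustrative only, not the paper's actual circuit; the operand layout is a simplifying assumption):

```python
import numpy as np

def bit_serial_add(a_bits, b_bits):
    """Ripple-carry addition performed bit-serially across columns.

    a_bits, b_bits: bool arrays of shape (n_bits, n_cols), row 0 =
    least-significant bit. Every column is summed in the same loop
    iteration, mimicking one adder per bitline.
    """
    n_bits, n_cols = a_bits.shape
    out = np.zeros((n_bits + 1, n_cols), dtype=bool)
    carry = np.zeros(n_cols, dtype=bool)
    for i in range(n_bits):              # one "cycle" per bit position
        a, b = a_bits[i], b_bits[i]
        out[i] = a ^ b ^ carry           # sum bit for all columns at once
        carry = (a & b) | (carry & (a ^ b))
    out[n_bits] = carry                  # final carry-out row
    return out

def to_bits(vals, n_bits):
    vals = np.asarray(vals)
    return np.array([(vals >> i) & 1 for i in range(n_bits)], dtype=bool)

def from_bits(bits):
    return sum(bits[i].astype(int) << i for i in range(bits.shape[0]))

sums = from_bits(bit_serial_add(to_bits([3, 10, 250], 8),
                                to_bits([4, 5, 6], 8)))
# sums -> [7, 15, 256]: 8 bit-steps regardless of how many columns
```

The latency of one addition is fixed by the bit width, while throughput scales with the number of columns, which is where the massive parallelism of cache arrays comes from.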
Andrej Lavrič, Matjaž Vidmar, Boštjan Batagelj
Microwave photonics has recently come to the forefront as a valuable approach to generating, processing, and measuring signals in high-performance domains such as communication, radar, and timing systems. Recent studies have introduced a range of photonics-based phase-noise analyzers (PNAs) that utilize a variety of architectures, including phase detection, frequency discrimination, and hybrid mechanisms that combine optical with electronic processing. This review focuses on microwave photonic techniques for phase-noise measurement based on the fiber-optic delay-line method, by exploring their fundamental principles, system design frameworks, and performance indicators. The fiber-optic delay-line method is examined as the core architecture, due to the exceptionally low loss and wide bandwidth of the optical fiber, which enable long delays and high measurement sensitivity. Through the integration of insights garnered from recent publications, our objective is to deliver a comprehensive understanding of the strengths and limitations associated with fiber-optic delay-line-based PNAs and to pinpoint new and promising areas for advancing research in the field of oscillator metrology.
Junaid Khan, Júlia Marí-Guaita, Joshua D. Forero et al.
Metal-halide perovskites are promising materials for optoelectronic applications due to their strong light absorption, tunable bandgaps, and solution-processability. However, their use in photodetectors is often limited by low carrier mobility and degradation over time as compared to advanced 2D nano-materials. Here, we report ultrasensitive photodetectors based on inkjet-printed nanocrystalline films of mixed-phase raisin-bread CsPbBr₃/Cs₄PbBr₆ perovskite integrated on graphene platforms. The combination of a photoconductive mixed-phase perovskite and a high-mobility 2D graphene channel enables efficient photogating and broadband charge transport. This device architecture achieves exceptional performance with responsivities surpassing 5.7 × 10⁴ A W⁻¹ and detectivities exceeding 10¹⁶ Jones at 312 nm. The enhanced performance arises from the synergistic interplay between charge confinement in the perovskite domains and ultrafast carrier extraction by graphene. Moreover, the fabricated photodetectors exhibit remarkable operational stability, a longevity primarily attributed to the unique composite raisin-bread architecture of the inkjet-printed perovskite films. This work offers a scalable and sustainable strategy for high-performance broadband photodetection.
Myung-Su Yi, Joo-Shin Park
The living quarters (LQ) on jack-up rigs play a critical role in ensuring crew safety and operational functionality under extreme offshore conditions. This study presents a comprehensive structural engineering procedure for the design and analysis of LQ structures, addressing the absence of specific industry standards. The methodology integrates global and local load effects from critical equipment, such as helidecks and lifeboat stations, under harsh environmental conditions during wet towing. A multi-level analysis approach, including finite element analysis (FEA), nonlinear evaluations, and fatigue assessments, was employed to verify structural resilience. The study successfully validates the LQ structures against ultimate limit state (ULS), serviceability limit state (SLS), and accidental limit state (ALS) criteria. The maximum plastic strain observed under green water pressure was 3.8 %, well below the allowable threshold of 15 %, demonstrating adequate safety margins. Fatigue analysis confirmed resistance to vortex-induced vibrations (VIV), ensuring the durability of tubular members. Optimization efforts reduced LQ structural weight by 20 %, enhancing efficiency without compromising safety. The proposed procedure bridges the gap in industry standards, providing a robust framework for designing safer and more reliable LQ structures. This study advances offshore engineering practices by addressing complex loading scenarios and operational challenges, thereby supporting the development of resilient jack-up rigs capable of enduring extreme marine conditions.
Nanhao Liang, Xiaoyuan Yang, Yingwei Xia et al.
Panoptic Scene Graph Generation (PSG) aims to segment objects and predict the relation triplets <subject, relation, object> within an image. Despite the impressive achievements in PSG, current methods still struggle to capture fine-grained visual context, eschewing spatial and situational information in favor of visual features related to object identity. This limitation naturally impedes the model’s ability to distinguish subtle visual differences between relation triplets, such as “cat-on-person” and “cat-lying on-person”. To address this challenge, we propose CVCPSG, a novel DETR-based method that uncovers composite visual clues for PSG. Specifically, drawing inspiration from how humans capture visual context using diverse visual clues, we first construct a composite visual clues bank based on three key aspects: object, spatial, and situational. Then, we introduce a multi-level visual extractor to align visual features from objects, interactions, and image levels with the composite visual clues bank. Additionally, we incorporate a cross-modal learning module with a multitower architecture to seamlessly integrate visual clues into the relation decoder, thereby improving PSG detection. Extensive experiments on two PSG benchmarks confirm the effectiveness and interpretability of CVCPSG.
Suman Lata Yadav
The concept of life skills is related to a way of life that emphasises the mutual exchange of knowledge, attitudes, and interpersonal skills in education. Its objective is to develop diverse skills among students and prepare them to face life’s challenges with determination. The World Health Organization has defined life skills as “the positive behaviours and tendencies that enable a person to adapt in day-to-day life.” Life skills are the abilities that enable a person to adapt and exhibit positive behaviour, allowing them to deal effectively with the problems and challenges of daily life. Life is a unique gift; equipping it with various skills fosters happiness, peace, and prosperity. In this research, with the objectives of the study in mind, an analytical examination of life skills among secondary-level students has been conducted. The study examines the effects of living conditions, gender, and social class on students’ life skills and presents the findings. Future researchers can build upon this work and explore other factors that bear on the topic.
Shady Agwa, Yihan Pan, Georgios Papandroulidakis et al.
Artificial Intelligence models are currently driven by a significant up-scaling of their complexity, with massive matrix multiplication workloads representing the major computational bottleneck. In-memory computing architectures are proposed to avoid the Von Neumann bottleneck. However, both digital/binary-based and analogue in-memory computing architectures suffer from various limitations, which significantly degrade the performance and energy efficiency gains. This work proposes OISMA, a novel in-memory computing architecture that utilizes the computational simplicity of a quasi-stochastic computing domain (Bent-Pyramid system), while keeping the same efficiency, scalability, and productivity of digital memories. OISMA converts normal memory read operations into in-situ stochastic multiplication operations with a negligible cost. An accumulation periphery then accumulates the output multiplication bitstreams, achieving the matrix multiplication functionality. Extensive matrix multiplication benchmarking was conducted to analyze the accuracy of the Bent-Pyramid system, using matrix dimensions ranging from 4×4 to 512×512. The accuracy results show a significant decrease in the average relative Frobenius error, from 9.42% (for 4×4) to 1.81% (for 512×512), compared to the 64-bit double-precision floating-point format. A 1T1R OISMA array of 4 KB capacity was implemented using a commercial 180 nm technology node and in-house RRAM technology. At 50 MHz, OISMA achieves 0.891 TOPS/W and 3.98 GOPS/mm² for energy and area efficiency, respectively, occupying an effective computing area of 0.804241 mm². Scaling OISMA from 180 nm to 22 nm technology shows a significant improvement of two orders of magnitude in energy efficiency and one order of magnitude in area efficiency, compared to dense matrix multiplication in-memory computing architectures.
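The accuracy metric quoted above is the relative Frobenius error of the approximate product against the FP64 reference, i.e. ||C − Ĉ||_F / ||C||_F for C = AB. A minimal sketch of the metric (the noisy "approximate" product below is a stand-in for illustration, not Bent-Pyramid output):

```python
import numpy as np

def rel_frobenius_error(exact, approx):
    """Relative Frobenius error: ||exact - approx||_F / ||exact||_F."""
    return np.linalg.norm(exact - approx) / np.linalg.norm(exact)

rng = np.random.default_rng(0)
a, b = rng.random((64, 64)), rng.random((64, 64))
exact = a @ b                      # FP64 reference product
# Stand-in for an approximate (e.g. stochastic) result: ~1% noise.
approx = exact * (1 + 0.01 * rng.standard_normal(exact.shape))
err = rel_frobenius_error(exact, approx)
```

The paper's observation that the error shrinks with matrix size is characteristic of stochastic computing, where independent per-product errors partially average out during accumulation.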
Ana Rita Ortigoso, Gabriel Vieira, Daniel Fuentes et al.
Events such as catastrophes and disasters are, in most cases, unpredictable. Consequently, reusing existing infrastructures to develop alternative communication strategies after disasters is essential to minimise the impact of these events on the population's ability to communicate and promptly receive alerts from authorities. In this context, the emergence of smart cities, characterised by dense and geographically distributed IoT networks, presents significant potential for such reuse. This work proposes HaLert, a resilient architecture for smart cities based on a Wi-Fi HaLow IEEE 802.11s mesh network, whose resources can be readily reallocated to support an emergency communication system for exchanging messages (including text, location, image, audio, and video) among citizens, among authorities, and between the two parties. To facilitate remote monitoring and configuration of the network, the architecture incorporates the SDN (Software-Defined Networking) paradigm, supported by a LoRa controlled-flooding mesh network. A prototype was developed based on this architecture and tested in a real urban scenario comprising both indoor and outdoor environments. The results demonstrated that, despite the significant impact of obstacles, lack of line-of-sight, and terrain slopes on the latency (average latency between 15 and 54.8 ms) and throughput (upload bitrates between 134 and 726 Kbps and download bitrates between 117 and 682 Kbps) of the Wi-Fi HaLow network, it remained stable and resilient, successfully providing all functionalities associated with the HaLert architecture. The tests conducted on the LoRa network revealed a high average message success rate of 94.96%.
André Coelho, Pedro Ribeiro, Helder Fontes et al.
This position paper presents A4FN, an Agentic Artificial Intelligence (AI) architecture for intent-driven automation in Flying Networks (FNs) using Unmanned Aerial Vehicles (UAVs) as access nodes. A4FN leverages Generative AI and Large Language Models (LLMs) to enable real-time, context-aware network control via a distributed agentic system. It comprises two components: the Perception Agent (PA), which semantically interprets multimodal input -- including imagery, audio, and telemetry data -- from UAV-mounted sensors to derive Service Level Specifications (SLSs); and the Decision-and-Action Agent (DAA), which reconfigures the network based on inferred intents. A4FN embodies key properties of Agentic AI, including autonomy, goal-driven reasoning, and continuous perception-action cycles. Designed for mission-critical, infrastructure-limited scenarios such as disaster response, it supports adaptive reconfiguration, dynamic resource management, and interoperability with emerging wireless technologies. The paper details the A4FN architecture, its core innovations, and open research challenges in multi-agent coordination and Agentic AI integration in next-generation FNs.
Soroush Ahadi, Mehdi Modarressi, Masoud Daneshtalab
Large language models demand massive computational power and memory resources, posing significant challenges for efficient deployment. While quantization has been widely explored to reduce model size and computation, this paper demonstrates an additional benefit: quantization increases parameter locality, creating opportunities for computation reuse. Building on this insight, we propose AxLLM, a hardware accelerator architecture designed for quantized models. AxLLM introduces a novel redundancy elimination technique that caches and reuses multiplication results for repeated weight values, substantially reducing redundant operations. The architecture features dual multiply and reuse pipelines, efficiently supporting both base models and LoRA fine-tuned models without altering parameters, retraining, or requiring offline preprocessing. Experimental results show that AxLLM achieves up to 90% reduction in computations, delivering 28% lower energy consumption and a 1.7× speedup over baseline execution. These results highlight AxLLM as a scalable and efficient solution for accelerating LLMs on specialized hardware.
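The reuse opportunity is that a b-bit quantized weight matrix contains at most 2^b distinct values, so each activation element needs at most 2^b real multiplications no matter how large the output dimension is. A software sketch of the idea (a hypothetical illustration; AxLLM realizes this in hardware with its dual multiply and reuse pipelines):

```python
import numpy as np

def matvec_with_reuse(W_q, x):
    """Compute y = W_q @ x while memoizing products of each input
    element with the distinct quantized weight values it meets.

    Returns the result and the number of real multiplications.
    """
    out_dim, in_dim = W_q.shape
    y = np.zeros(out_dim)
    mults = 0
    for j in range(in_dim):            # one column = one activation element
        cache = {}                     # weight value -> product with x[j]
        for i in range(out_dim):
            w = W_q[i, j]
            if w not in cache:
                cache[w] = w * x[j]    # real multiply (cache miss)
                mults += 1
            y[i] += cache[w]           # cache hit: reuse the product
    return y, mults

# 4-bit quantization: at most 16 distinct weight values per column.
rng = np.random.default_rng(1)
levels = np.linspace(-1.0, 1.0, 16)
W = rng.choice(levels, size=(256, 64))
x = rng.random(64)
y, mults = matvec_with_reuse(W, x)
# mults is at most 16 per column (<= 1024 here) vs 256*64 = 16384 naively
```

For this toy shape the eliminated multiplications already exceed 90%, consistent in spirit with the reduction the paper reports; the exact figure depends on the weight-value distribution.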
Nicola Giuseppe Marchioro, Yannis Velegrakis, Valentine Anantharaj et al.
Ensuring the trustworthiness and long-term verifiability of scientific data is a foundational challenge in the era of data-intensive, collaborative research. Provenance metadata plays a key role in this context, capturing the origin, transformation, and usage of research artifacts. However, existing solutions often fall short when applied to distributed, multi-institutional settings. This paper introduces a modular, domain-agnostic architecture for provenance tracking in federated environments, leveraging permissioned blockchain infrastructure to guarantee integrity, immutability, and auditability. The system supports decentralized interaction, persistent identifiers for artifact traceability, and a provenance versioning model that preserves the history of updates. Designed to interoperate with diverse scientific domains, the architecture promotes transparency, accountability, and reproducibility across organizational boundaries. Ongoing work focuses on validating the system through a distributed prototype and exploring its performance in collaborative settings.
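At its core, blockchain-backed provenance relies on hash-chaining: each record's digest covers the previous record's digest, so any retroactive edit invalidates every later entry. A minimal sketch of that principle (plain Python; the actual system uses a permissioned blockchain, and the record fields here are hypothetical):

```python
import hashlib
import json

GENESIS = "0" * 64

def append_record(chain, artifact_id, action, agent):
    """Append a provenance record whose digest covers the previous
    record's digest, so retroactive edits break the chain."""
    body = {"artifact": artifact_id, "action": action, "agent": agent,
            "prev": chain[-1]["hash"] if chain else GENESIS}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every digest and check the back-links."""
    prev = GENESIS
    for rec in chain:
        body = {k: rec[k] for k in ("artifact", "action", "agent", "prev")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

chain = []
append_record(chain, "dataset-001", "created", "alice")
append_record(chain, "dataset-001", "transformed", "bob")
```

A permissioned ledger distributes and replicates this structure across institutions, which is what upgrades per-record integrity into cross-organizational auditability.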
Fangting Zhou, Ala Arvidsson, Jiaming Wu et al.
In this paper, we develop a profit-sharing-based optimal routing mechanism to incentivize horizontal collaboration among urban goods distributors. The core of this mechanism is based on exchanging goods at meet points, which is optimally planned en route. We propose a Collaborative Electric Vehicle Routing Problem with Meet Points (CoEVRPMP) considering constraints such as time windows, opportunity charging, and meet-point synchronization. The proposed CoEVRPMP is formulated as a mixed-integer nonlinear programming model. We present an exact method via branching and a matheuristic that combines adaptive large neighborhood search with linear programming. The viability and scalability of the collaborative method are demonstrated through numerical case studies, including a real-world case and a large-scale experiment with up to 500 customers. The findings underscore the significance of horizontal collaboration among delivery companies in attaining both higher individual profits and lower total costs. Moreover, collaboration helps to reduce the environmental footprint by decreasing travel distance.
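The abstract does not state which profit-sharing rule the mechanism uses, but a standard choice in horizontal-collaboration studies is the Shapley value, which pays each carrier its average marginal contribution across all orders of coalition formation. A small sketch with made-up coalition profits for three carriers:

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Shapley shares: each player's average marginal contribution
    over every order in which the coalition could form."""
    shares = {p: 0.0 for p in players}
    for order in permutations(players):
        done = frozenset()
        for p in order:
            shares[p] += v[done | {p}] - v[done]
            done = done | {p}
    n = factorial(len(players))
    return {p: s / n for p, s in shares.items()}

# Hypothetical profits: stand-alone vs. coalitions that exchange
# goods at shared meet points.
v = {frozenset(): 0, frozenset("A"): 100, frozenset("B"): 80,
     frozenset("C"): 60, frozenset("AB"): 200, frozenset("AC"): 170,
     frozenset("BC"): 150, frozenset("ABC"): 300}
shares = shapley("ABC", v)
```

With these illustrative numbers each carrier's share exceeds its stand-alone profit, which is exactly the incentive property (higher individual profits under collaboration) the paper's findings point to.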
Pavel Khazov, Vladimir Erofeev, Olga Vediaikina et al.
In recent decades there has been growing interest in the stress-strain state of concrete-filled steel tube (CFST) structures: composite elements consisting of a steel tube shell and a concrete core held in triaxial compression. In this combination, steel and concrete achieve better strength and deformation characteristics than when working separately, which makes it possible to design safe and economical structures. This article presents the results of an experimental study of the deformation of bent CFST elements with small circular cross-sections. It is shown that, in three-point transverse bending of a CFST beam, in addition to the deflection caused by curvature of the member axis, local crushing deformations at the points where concentrated loads are applied significantly affect the vertical displacements. The applicability of the classical Bernoulli bending theory for a hollow steel beam is assessed. Test results and deformation diagrams are compared for hollow steel tubes (analytical and experimental data) and for composite steel-concrete beams, i.e. tubes filled with concrete. A qualitative and quantitative assessment is made of the contribution of the concrete core to the load-bearing capacity and deformability (including local crushing) of the element. The concrete core adds substantial weight to the composite member; however, for eccentrically compressed vertical load-bearing elements of multi-storey buildings, CFST elements prove highly effective.
Christophoros Christophorou, Iacovos Ioannou, Vasos Vassiliou et al.
In the upcoming 6G era, mobile networks must deal with more challenging applications (e.g., holographic telepresence and immersive communication) and meet far more stringent application requirements arising along the edge-cloud continuum. These new applications will create an elevated level of expectations on performance, reliability, ubiquity, trustworthiness, security, openness, and sustainability, pushing the boundaries of innovation and driving transformational change across the architecture of future mobile networks. Towards this end, ADROIT6G proposes a set of disruptive innovations with a clear vision of setting a 6G network architecture that can be tailored to the requirements of innovative applications and match the ambitious KPIs set for 6G networks. More specifically, the key transformations that ADROIT6G considers essential to 6G network evolution are: i) AI/ML-powered optimisations across the network, exploring solutions in the "Distributed Artificial Intelligence (DAI)" domain for high performance and automation; ii) transforming to fully cloud-native network software, which can be implemented across various edge-cloud platforms, with security built integrally into the network user plane; and iii) software-driven, zero-touch operations and, ultimately, automation of every aspect of the network and the services it delivers.
Neelay Fruitwala, Gang Huang, Yilun Xu et al.
Quantum circuits utilizing real-time feedback techniques (such as active reset and mid-circuit measurement) are a powerful tool for NISQ-era quantum computing. Such techniques are crucial for implementing error correction protocols, and can reduce the resource requirements of certain quantum algorithms. Realizing these capabilities requires flexible, low-latency classical control. We have developed a custom FPGA-based processor architecture for QubiC, an open source platform for superconducting qubit control. Our architecture is distributed in nature, and consists of a bank of lightweight cores, each configured to control a small number (1-3) of signal generator channels. Each core is capable of executing parameterized control and readout pulses, as well as performing arbitrary control flow based on mid-circuit measurement results. We have also developed a modular compiler stack and domain-specific intermediate representation for programming the processor. Our representation allows users to specify circuits using both gate- and pulse-level abstractions, and includes high-level control flow constructs (e.g. if-else blocks and loops). The compiler stack is designed to integrate with quantum software tools and programming languages, such as TrueQ, pyGSTi, and OpenQASM3. In this work, we detail the design of both the processor and the compiler stack, and demonstrate their capabilities with a quantum state teleportation experiment using transmon qubits at the LBNL Advanced Quantum Testbed.
Jeong Kuk Kim, Byongug Jeong, Jae-Hyuk Choi et al.
This study aimed to evaluate the environmental impact of using liquefied petroleum gas (LPG) in small fishing vessels by conducting a life cycle assessment (LCA) in Korea. For the first time in the country, LPG engines designed for small fishing ships were utilized in this study. In addition, this research examined the potential benefits of employing Bio LPG, a renewable LPG produced from two distinct raw materials (crude palm oil (CPO) and refined, bleached, and deodorized (RBD) palm oil), instead of conventional LPG. The LCA findings reveal that utilizing LPG fuel in small fishing vessels can reduce greenhouse gas (GHG) emissions by more than 30% over conventional gasoline and diesel fuels. During the life cycle of vessels that use LPG fuel instead of gasoline and diesel fuels, there is a reduction of 2.2 and 1.2 million tons of GHG emissions, respectively. Moreover, substituting conventional fossil fuels with Bio LPG can result in over 65% reduction in GHG emissions. For the life cycle of boats that use Bio LPG fuel in place of gasoline and diesel fuels, the reduction of GHG emissions was 4.9 million tons and 2.5 million tons for CPO and 5.2 million tons and 2.7 million tons for RBD, respectively. This study not only underscores the substantial advantages of using Bio LPG over conventional fossil fuels but also presents conventional LPG as a way to reduce GHG emissions and promote sustainable practices in the fishing industry.
Xun Ji, Guo-Peng Liu, Cheng-Tao Cai
Underwater object detection (UOD) has attracted widespread attention, being of great significance for marine resource management, underwater security and defense, underwater infrastructure inspection, etc. However, high-quality UOD tasks often encounter challenges such as image quality degradation, complex backgrounds, and occlusions between objects at different scales. This paper presents a collaborative framework for UOD via joint image enhancement and super-resolution to address the above problems. Specifically, a joint-oriented framework is constructed incorporating underwater image enhancement and super-resolution techniques. The proposed framework is capable of generating a detection-favoring appearance to provide more visual cues for UOD tasks. Furthermore, a plug-and-play self-attention mechanism, termed multihead blurpooling fusion network (MBFNet), is developed to capture sufficient contextual information by focusing on the dependencies between multiscale feature maps, so that the UOD performance of our proposed framework can be further facilitated. A comparative study on the popular URPC2020 and Brackish datasets demonstrates the superior performance of our proposed collaborative framework, and the ablation study also validates the effectiveness of each component within the framework.
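The "blurpooling" in MBFNet's name refers to anti-aliased downsampling: apply a low-pass blur before strided subsampling so that small input shifts do not drastically change the pooled output. A 1-D sketch of the principle (illustrative only, not the paper's exact module):

```python
import numpy as np

def blurpool_1d(x, stride=2):
    """Anti-aliased downsampling: blur with the binomial [1, 2, 1]/4
    kernel, then subsample with the given stride."""
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0
    padded = np.pad(x, 1, mode="edge")
    blurred = np.convolve(padded, kernel, mode="valid")  # same length as x
    return blurred[::stride]

x = np.array([0.0, 0, 1, 0, 0, 1, 0, 0])
shifted = np.roll(x, 1)
# Naive strided pooling x[::2] moves all the signal between output bins
# when the input shifts by one sample; blur pooling changes far less.
```

This shift robustness is why blur pooling is attractive for degraded underwater imagery, where object positions jitter and high-frequency noise is common.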
Edward Vendrow, Ethan Schonfeld
The image captioning task is increasingly prevalent in artificial intelligence applications for medicine. One important application is clinical report generation from chest radiographs. The clinical writing of unstructured reports is time-consuming and error-prone. An automated system would improve standardization, error reduction, time consumption, and medical accessibility. In this paper, we demonstrate the importance of domain-specific pre-training and propose a modified transformer architecture for the medical image captioning task. To accomplish this, we train a series of modified transformers to generate clinical reports from chest radiograph image input. These modified transformers include: a meshed-memory augmented transformer architecture with a visual extractor using ImageNet pre-trained weights, a meshed-memory augmented transformer architecture with a visual extractor using CheXpert pre-trained weights, and a meshed-memory augmented transformer whose encoder is passed the concatenated embeddings from both ImageNet pre-trained and CheXpert pre-trained weights. We use BLEU (1-4), ROUGE-L, CIDEr, and the clinical CheXbert F1 scores to validate our models and demonstrate scores competitive with state-of-the-art models. We provide evidence that ImageNet pre-training is ill-suited for the medical image captioning task, especially for less frequent conditions (e.g., enlarged cardiomediastinum, lung lesion, pneumothorax). Furthermore, we demonstrate that the double-feature model improves performance for specific medical conditions (edema, consolidation, pneumothorax, support devices) and overall CheXbert F1 score, and should be further developed in future work. Such a double-feature model, combining ImageNet pre-training with domain-specific pre-training, could be used in a wide range of image captioning models in medicine.
Page 38 of 144,311