Personalized ultra-fractionated stereotactic adaptive radiotherapy (PULSAR) is a novel treatment that delivers radiation in pulses separated by protracted intervals. Accurate prediction of gross tumor volume (GTV) changes through regression models has substantial prognostic value. This study aims to develop a multi-omics support vector regression (SVR) model for predicting GTV change. A retrospective cohort of 39 patients with 69 brain metastases was analyzed using radiomics (magnetic resonance images) and dosiomics (dose maps) features. Delta features were computed to capture relative changes between two time points. A feature selection pipeline using the least absolute shrinkage and selection operator (Lasso) algorithm with weight- or frequency-based ranking criteria was implemented. SVR models with various kernels were evaluated using the coefficient of determination (R^2) and relative root mean square error (RRMSE). Five-fold cross-validation with 10 repeats was employed to mitigate the limitation of the small data size. Multi-omics models that integrate radiomics, dosiomics, and their delta counterparts outperform individual-omics models. Delta-radiomics features play a critical role in enhancing prediction accuracy relative to features at single time points. The top-performing model achieves an R^2 of 0.743 and an RRMSE of 0.022. The proposed multi-omics SVR model shows promising performance in predicting continuous change of the GTV. It provides a more quantitative and personalized approach to assist patient selection and treatment adjustment in PULSAR.
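The Lasso-then-SVR pipeline with repeated five-fold cross-validation described above can be sketched with scikit-learn. The synthetic feature matrix, the Lasso alpha, and the RBF kernel choice below are illustrative placeholders, not the study's actual radiomic/dosiomic inputs or tuned hyperparameters.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVR
from sklearn.model_selection import RepeatedKFold, cross_val_score

# Synthetic stand-in for the radiomic/dosiomic feature matrix
# (69 lesions, 100 candidate features).
X, y = make_regression(n_samples=69, n_features=100, n_informative=10,
                       noise=0.1, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(Lasso(alpha=0.05)),  # weight-based Lasso feature selection
    SVR(kernel="rbf"),                   # one of the kernels to compare
)

# Five-fold cross-validation with 10 repeats, scored by R^2.
cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(f"mean R^2 over {len(scores)} folds: {scores.mean():.3f}")
```

In practice the kernel and the Lasso regularization strength would themselves be selected inside the cross-validation loop.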
Federica Restelli, Laura A. Pellegrini, Saif Z. S. Al Ghafri
et al.
Implementing hydrogen supply chains using liquid hydrogen (LH2) as a carrier presents significant challenges, among which is the design of export terminals, owing to the boil-off gas (BOG) generation associated with the storage and handling of a cryogenic fluid. This study investigates BOG management at an export terminal through dynamic simulations conducted with Aspen HYSYS® V12.1. The terminal, operating with an LH2 production rate of 44,000 kg/d, is designed to transport green hydrogen from North Africa to Northern Italy over a harbour-to-harbour distance of 2,500 km. The operational cycle, which includes storage tank filling and LH2 carrier loading, spans approximately 10 days, with all generated BOG potentially recoverable through reliquefaction or high-pressure storage. The levelized cost of hydrogen for terminal storage and shipment is estimated at 3.09 €/kg, highlighting the need for further research to reduce costs and enable the economic feasibility of the LH2 supply chain.
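A levelized cost figure of this kind is conventionally obtained by annualizing capital expenditure with a capital recovery factor, adding yearly operating costs, and dividing by annual throughput. The sketch below uses that standard formula with purely hypothetical cost figures; only the 44,000 kg/d throughput comes from the abstract.

```python
# Levelized cost of hydrogen (LCOH): minimal sketch of the standard
# annuity-based formula. All cost figures are illustrative assumptions,
# not the study's inputs.

def lcoh(capex, opex_per_year, rate, lifetime_years, kg_per_year):
    """Annualize CAPEX with a capital recovery factor (CRF), add yearly
    OPEX, and divide by annual hydrogen throughput (EUR/kg)."""
    crf = rate * (1 + rate) ** lifetime_years / ((1 + rate) ** lifetime_years - 1)
    return (capex * crf + opex_per_year) / kg_per_year

# Throughput from the abstract; CAPEX/OPEX/rate/lifetime are hypothetical.
annual_kg = 44_000 * 365
cost = lcoh(capex=3.0e8, opex_per_year=1.2e7,
            rate=0.08, lifetime_years=25, kg_per_year=annual_kg)
print(f"LCOH: {cost:.2f} EUR/kg")
```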
Chemical engineering, Computer engineering. Computer hardware
Antonio Ramón Gómez García, Francisco Luis Rivas Flor
Occupational accident injuries remain a significant problem in Ecuador's construction sector, especially in provinces with intense urban development and large working populations. This study seeks to identify the causes of the differences in accident incidence between Guayas and Pichincha. Using data from 2014 to 2023, age-standardized incidence rates (ASIR) and incidence rate ratios (IRR) were calculated. In addition, a questionnaire was designed and administered to explore differences among experts (Mann-Whitney U test; Cohen's Kappa index). The results show that Guayas has higher ASIRs and double the IRR compared with Pichincha. Experts from Guayas identified macro-level factors as predominant, whereas those from Pichincha focused on micro-level factors. No significant differences were found at the meso level. The disparities may stem from uneven enforcement of regulations and cultural attitudes toward safety. Strengthening labor inspection in Guayas and conducting nationwide studies for a broader understanding are recommended.
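The two headline statistics here are mechanical to compute: direct age standardization weights each age-specific rate by a standard population's age distribution, and the IRR is the ratio of the two standardized rates. The sketch below uses made-up counts, not the study's data.

```python
# Direct age standardization and incidence rate ratio (IRR).
# All case counts, person-years, and age shares are illustrative.

def asir(cases, person_years, std_weights):
    """Age-standardized incidence rate per 1,000 person-years."""
    rates = [c / py for c, py in zip(cases, person_years)]
    return 1000 * sum(w * r for w, r in zip(std_weights, rates))

std = [0.35, 0.40, 0.25]  # standard population age-group shares

guayas = asir(cases=[120, 200, 80],
              person_years=[40_000, 60_000, 20_000], std_weights=std)
pichincha = asir(cases=[50, 90, 40],
                 person_years=[38_000, 55_000, 21_000], std_weights=std)

irr = guayas / pichincha
print(f"ASIR Guayas={guayas:.2f}, Pichincha={pichincha:.2f}, IRR={irr:.2f}")
```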
Network protocol fuzzing is a critical method for detecting vulnerabilities in network protocol programs. However, traditional selection algorithms used in network protocol fuzzing often fail to select effective states and seeds accurately. To address this limitation, this paper proposes a fuzzing framework called Contextual AFLnet (CAFLnet), which employs a selection algorithm that utilizes enhanced contextual information. The framework introduces key metrics, such as state in-degree, state out-degree, and trace-adjacent call count, to enrich the contextual information. The selection algorithm has two parts: (1) a state selection algorithm based on the linear upper confidence bound, which optimizes the balance between exploration and exploitation using the enhanced contextual information, and (2) a tri-factor seed selection algorithm that uses contextual information such as seed labels, execution information, and session information to evaluate seed value thoroughly and effectively during selection. We evaluated our framework and AFLnet using eleven benchmark programs from ProFuzzBench and real-world software. The results demonstrate that our framework outperformed AFLnet by an average of 6.86% in branch coverage, with a notable increase of 18.79% on PureFTPD. In addition, our framework slightly outperformed AFLnet in state discovery and exhibited superior performance in vulnerability detection, triggering known vulnerabilities earlier and more frequently and successfully exposing a previously unknown vulnerability.
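The exploration-exploitation idea behind the state selection step can be illustrated with a plain UCB-style score. Note this is a simplified UCB1-flavoured sketch, not the paper's linear upper confidence bound; the state features and the small contextual bonus term are illustrative assumptions.

```python
import math

# Toy UCB-style state selection for protocol fuzzing. Each protocol state
# tracks how often it was fuzzed ("pulls"), accumulated reward (e.g., new
# branches found), and contextual features (in-/out-degree).

def ucb_score(state, total_pulls, c=1.4, ctx_weight=0.05):
    if state["pulls"] == 0:
        return float("inf")                # try unvisited states first
    exploit = state["reward"] / state["pulls"]
    # Crude stand-in for a linear contextual term over state features.
    context = ctx_weight * (state["in_deg"] + state["out_deg"])
    explore = c * math.sqrt(math.log(total_pulls) / state["pulls"])
    return exploit + context + explore

states = {
    "INIT": {"pulls": 10, "reward": 2.0, "in_deg": 1, "out_deg": 3},
    "AUTH": {"pulls": 4,  "reward": 3.0, "in_deg": 2, "out_deg": 2},
    "XFER": {"pulls": 0,  "reward": 0.0, "in_deg": 3, "out_deg": 1},
}
total = sum(s["pulls"] for s in states.values())
best = max(states, key=lambda name: ucb_score(states[name], total))
print(best)  # the unvisited state wins via the infinite exploration bonus
```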
Large language models (LLMs) are rapidly pushing the limits of contemporary computing hardware. For example, training GPT-3 has been estimated to consume around 1300 MWh of electricity, and projections suggest future models may require city-scale (gigawatt) power budgets. These demands motivate exploration of computing paradigms beyond conventional von Neumann architectures. This review surveys emerging photonic hardware optimized for next-generation generative AI computing. We discuss integrated photonic neural network architectures (e.g., Mach-Zehnder interferometer meshes, lasers, wavelength-multiplexed microring resonators) that perform ultrafast matrix operations. We also examine promising alternative neuromorphic devices, including spiking neural network circuits and hybrid spintronic-photonic synapses, which combine memory and processing. The integration of two-dimensional materials (graphene, TMDCs) into silicon photonic platforms is reviewed for tunable modulators and on-chip synaptic elements. Transformer-based LLM architectures (self-attention and feed-forward layers) are analyzed in this context, identifying strategies and challenges for mapping dynamic matrix multiplications onto these novel hardware substrates. We then dissect the mechanisms of mainstream LLMs, such as ChatGPT, DeepSeek, and LLaMA, highlighting their architectural similarities and differences. We synthesize state-of-the-art components, algorithms, and integration methods, highlighting key advances and open issues in scaling such systems to very large LLMs. We find that photonic computing systems could potentially surpass electronic processors by orders of magnitude in throughput and energy efficiency, but require breakthroughs in memory, especially for long-context windows and long token sequences, and in storage of ultra-large datasets.
Reconfigurable computing (RC) aims to combine the flexibility of general-purpose processors (GPPs) with the performance of application-specific integrated circuits (ASICs). Numerous RC architectures have been proposed since the 1960s, but all have struggled to become mainstream. The main factor preventing RC from being used in general-purpose CPUs, GPUs, and mobile devices is that it requires extensive knowledge of digital circuit design, which most software programmers lack. In an RC system, a processor cooperates with a reconfigurable hardware accelerator (HA), usually implemented on a field-programmable gate array (FPGA) chip, which can be reconfigured dynamically. The HA implements crucial portions of software (kernels) in hardware to increase overall performance, and its design requires substantial knowledge of digital circuit design. In this paper, a novel RC architecture is proposed that provides the exact same instruction set as a standard general-purpose RISC microprocessor (e.g., ARM Cortex-M0) while automating the generation of a tightly coupled RC component to improve system performance. This approach keeps decades-old assemblers, compilers, debuggers, library components, and programming practices intact while exploiting the advantages of RC. The proposed architecture employs the LLVM compiler infrastructure to translate an algorithm written in a high-level language (e.g., C/C++) to machine code. It then finds the most frequent instruction pairs and generates an equivalent RC circuit called a miniature accelerator (MA). The MA executes these instruction pairs in parallel with consecutive instructions. Several kernel algorithms alongside EEMBC CoreMark are used to assess the performance of the proposed architecture. A performance improvement from 4.09% to 14.17% is recorded when the HA is turned on.
There is a trade-off between core performance and a combination of compilation time, die area, and program startup load time, which includes the time required to partially reconfigure an FPGA chip.
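The profiling step that picks candidates for the miniature accelerator amounts to counting adjacent instruction pairs in a trace. A minimal sketch, with a hypothetical opcode trace standing in for real LLVM-generated machine code:

```python
from collections import Counter

# Count adjacent opcode pairs in an instruction trace to find the most
# frequent pair, the candidate for hardware acceleration. The trace and
# opcode names are illustrative, not output of the actual toolchain.

trace = ["LDR", "ADD", "LDR", "ADD", "MUL", "STR", "LDR", "ADD", "MUL"]

pairs = Counter(zip(trace, trace[1:]))       # sliding window of size 2
(most_common_pair, count), = pairs.most_common(1)
print(most_common_pair, count)               # the pair worth accelerating
```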
Yiwen ZHANG, Manchun CAI, Yonghao CHEN, Yi ZHU, Lifeng YAO
With the rapid advancement of deep learning, deepfake technology has gained significant momentum as a form of image manipulation based on generative models. The proliferation of deepfake videos and images has a detrimental sociopolitical impact, highlighting the increasing significance of deepfake detection techniques. Existing deepfake detection methods based on Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) commonly suffer from challenges such as large model parameter sizes, slow training speeds, susceptibility to overfitting, and limited robustness against video compression and noise. To address these challenges, a multi-scale deepfake detection method that integrates spatial features is proposed herein. First, an Automatic White Balance (AWB) algorithm is employed to adjust the contrast of input images, thereby enhancing the robustness of the model. Subsequently, a Multi-scale ViT (MViT) and a CNN are utilized to extract multi-scale global features and local features of the input images, respectively. These global and local features are then fused using an improved sparse cross-attention mechanism to enhance the recognition performance of the model. Finally, the fused features are classified using a Multi-Layer Perceptron (MLP). According to the experimental results, the proposed model achieves frame-level Area Under the Curve (AUC) scores of 0.986, 0.984, and 0.988 on the Deepfakes, FaceSwap, and Celeb-DF (v2) datasets, respectively, demonstrating strong robustness in cross-compression experiments. Additionally, comparative experiments before and after specific model improvements have validated the gains provided by each module in terms of detection results.
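The global/local fusion step rests on standard cross-attention: one set of tokens queries another. The NumPy sketch below shows the dense formulation only; the paper's sparsity improvement is omitted, and the token counts and dimensions are arbitrary.

```python
import numpy as np

# Minimal cross-attention fusion: global (ViT-like) tokens attend over
# local (CNN-like) feature tokens. Shapes and scaling are the standard
# scaled dot-product formulation.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens, kv_tokens):
    d = q_tokens.shape[-1]
    scores = q_tokens @ kv_tokens.T / np.sqrt(d)     # (Nq, Nk) logits
    return softmax(scores) @ kv_tokens               # weighted local feats

rng = np.random.default_rng(0)
global_feats = rng.normal(size=(4, 16))   # 4 global tokens, dim 16
local_feats = rng.normal(size=(9, 16))    # 9 local tokens, dim 16
fused = cross_attention(global_feats, local_feats)
print(fused.shape)                        # one fused vector per global token
```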
Mariangela Guastaferro, Vincenzo Vaiano, Lucia Baldino
et al.
In recent years, scientific research has faced the numerous problems arising from the presence of active ingredients in surface water and groundwater. Traditional removal methods, such as adsorption and bioremediation, have several disadvantages; thus, in this work, membranes based on cellulose acetate loaded with Fe-N-TiO2 were tested for the photocatalytic degradation of Ceftriaxone Sodium from aqueous solution. Immobilizing the photocatalyst makes it possible to overcome the limits of the photocatalytic process in suspension, which requires expensive and time-consuming post-treatments. Membranes were obtained by a supercritical CO2 phase inversion process and were characterized by EDX, TGA, FT-IR, and Raman spectroscopy; subsequently, they were tested in adsorption tests in the dark and in the presence of visible light to evaluate their photocatalytic activity. Variations in the concentration of the antibiotic during the tests were monitored by HPLC analysis. Samples with 10% and 30% by weight of Fe-N-TiO2 showed relatively low adsorption efficiencies for the target contaminant, equal to 22% and 18% in 180 minutes, respectively, for reasons related both to the morphology of the produced samples, which changes from cellular to finger-like as the photocatalyst load increases, and to the quality of the dispersion. The membrane loaded with 20% by weight of Fe-N-TiO2 achieved a degradation of the model pollutant of 35% in 180 minutes; moreover, the reusability of the membranes was verified. The photocatalytic tests showed that efficiency was highly correlated with the dispersion of the photocatalyst nanoparticles and with their loading in the polymeric membranes.
Chemical engineering, Computer engineering. Computer hardware
Sebastian Siegel, Ming-Jay Yang, John-Paul Strachan
Processing long temporal sequences is a key challenge in deep learning. In recent years, Transformers have become the state of the art for this task but suffer from excessive memory requirements due to the need to store the sequences explicitly. To address this issue, structured state-space sequence (S4) models have recently emerged, offering a fixed-size memory state while still enabling the processing of very long sequence contexts. The recurrent linear update of the state in these models makes them highly efficient on modern graphics processing units (GPUs) by unrolling the recurrence into a convolution. However, this approach demands significant memory and massively parallel computation, which is only available on the latest GPUs. In this work, we aim to bring the power of S4 models to edge hardware by significantly reducing the size and computational demand of an S4D model through quantization-aware training, even achieving ternary weights for a simple real-world task. To this end, we extend conventional quantization-aware training to tailor it to analog in-memory compute hardware. We then demonstrate the deployment of recurrent S4D kernels on memristive crossbar arrays, enabling their computation in an in-memory compute fashion. To our knowledge, this is the first implementation of S4 kernels on in-memory compute hardware.
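The forward pass of ternary quantization-aware training can be sketched in a few lines: weights are snapped to {-scale, 0, +scale} while gradients would flow through a straight-through estimator (omitted here). The threshold heuristic below is a common choice in the ternarization literature, not necessarily the paper's.

```python
import numpy as np

# Ternarize a weight tensor: zero out small weights, map the rest to
# +/- a per-tensor scale. Forward pass only; training would backpropagate
# through this with a straight-through estimator.

def ternarize(w, thresh_ratio=0.7):
    thresh = thresh_ratio * np.abs(w).mean()   # magnitude-based threshold
    mask = np.abs(w) > thresh                  # which weights survive
    scale = np.abs(w[mask]).mean() if mask.any() else 0.0
    return scale * np.sign(w) * mask

w = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
print(ternarize(w))   # small weights become 0, large ones become +/- scale
```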
Waleed El-Geresy, Christos Papavassiliou, Deniz Gündüz
In this paper, we build a general modelling framework for memristors, suitable for the simulation of event-based systems such as hardware spiking neural networks and, more generally, neuromorphic computing systems. The framework is composed of three independent components: i) an event-based modelling approach, extending and generalising an existing general model of memristors, the Generalised Metastable Switch Model (GMSM), eliminating errors associated with discrete-time approximation and offering potential improvements in suitability for neuromorphic memristive system simulations; ii) a volatility state variable that allows for a unified understanding of disparate non-linear and volatile phenomena, including state relaxation, structural disruption, Joule heating, and non-linear drift in different memristive devices; and iii) a readout equation that separates the latent state-variable evolution from explicit variables of interest such as the instantaneous resistance. We exhibit an illustrative implementation of this framework, fit to a resistive drift dataset for titanium dioxide memristors, based on a proposed linear conductance model for resistive drift in the devices. Finally, we highlight the application of the model to neuromorphic computing by demonstrating the contribution of the volatility state variable to switching dynamics, resulting in frequency-dependent switching (for stable memristors acting as programmable synaptic weights) and the generation of action potentials (for unstable memristors acting as spike generators).
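The event-based idea can be illustrated with the core quantity of a metastable-switch model: the probability that a switch flips within the elapsed time since the last event, which can be evaluated exactly rather than via fixed time steps. The exponential form and the parameter values below are illustrative assumptions, not the paper's fitted model.

```python
import math
import random

# Event-driven flip probability for a population of metastable switches:
# given the time dt since the last event, each switch flips with
# probability 1 - exp(-dt / tau), with no discrete-time stepping error.

def switch_probability(dt, tau):
    """Probability that a metastable switch flips within elapsed time dt."""
    return 1.0 - math.exp(-dt / tau)

random.seed(0)
n_switches = 1000
p = switch_probability(dt=1e-3, tau=1e-2)   # one event interval
flipped = sum(random.random() < p for _ in range(n_switches))
frac = flipped / n_switches
print(f"fraction flipped after one event interval: {frac:.2f}")
```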
Abdulaziz A. Alsulami, Qasem Abu Al-Haija, Badraddin Alturki
et al.
Cyber-physical systems (CPSs) are emergent systems that enable effective real-time communication and collaboration (C&C) of physical components such as control systems, sensors, actuators, and the surrounding environment through a cyber communication infrastructure. Autonomous vehicles (AVs) are one of the fields that have significantly adopted the CPS approach to improving people's lives in smart cities by reducing energy consumption and air pollution. Autonomous vehicle cyber-physical systems (AV-CPSs) have therefore attracted enormous investments from major corporations and are projected to be widely used. However, AV-CPSs are vulnerable to cyber and physical threat vectors due to the deep integration of information technology (IT), including cloud computing, with the communication process. Cloud computing is critical in providing the scalable infrastructure required for real-time data processing, storage, and analysis in AV-CPSs, allowing these systems to work seamlessly in smart cities. CPS components such as sensors and control systems connected through the network infrastructure are particularly vulnerable to cyber-attacks launched over the communication system. This paper proposes an intelligent intrusion detection system (IIDS) for AV-CPSs that uses transfer learning to identify cyberattacks launched against connected physical components of AVs through the network infrastructure. First, the AV-CPS was developed by implementing the controller area network (CAN) and integrating it into the AV simulation model. Second, a dataset was generated from the AV-CPS and preprocessed for training and testing with pre-trained convolutional neural networks (CNNs). Third, eight pre-trained networks were implemented: InceptionV3, ResNet-50, ShuffleNet, MobileNetV2, GoogLeNet, ResNet-18, SqueezeNet, and AlexNet. The performance of the implemented models was evaluated. According to the experimental evaluation, GoogLeNet outperformed all other pre-trained networks, achieving an F1-score of 99.47%.
This paper presents a mechanical model of the partitioned-pipe mixer (PPM) for the case where the pipe of the static mixer rotates with a periodically varying angular velocity. Mixing becomes more efficient if the forcing of the fluid mixing process is time-periodic. Chaos in duct flows can be achieved by time modulation or by spatial changes along the duct axis. The values of the Lyapunov exponents for the flow in the PPM are calculated.
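The largest Lyapunov exponent is commonly estimated by averaging the logarithm of the local stretching rate along a trajectory. The sketch below applies this to the chaotic logistic map, which stands in for the PPM flow (the actual flow integration is far more involved); for r = 4 the known value is ln 2 ≈ 0.693.

```python
import math

# Estimate the largest Lyapunov exponent of the logistic map
# f(x) = r*x*(1-x) by averaging log|f'(x)| along a trajectory.

def largest_lyapunov(x0, r=4.0, n=10_000):
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))  # log of |f'(x)|
        x = r * x * (1 - x)
    return total / n

lam = largest_lyapunov(0.3)
print(f"lambda ~ {lam:.3f}")   # theory gives ln 2 ~ 0.693 for r = 4
```

A positive exponent, as obtained here, is the usual quantitative signature of chaotic advection.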
Computer engineering. Computer hardware, Mechanics of engineering. Applied mechanics
The scope of this paper was to find out how students in Computer Science perceive different teaching styles and how teaching style affects their desire to learn and interest in the course. To this end, we designed and implemented an experiment in which the same group of students (86 students) was exposed to different teaching styles, presented by the same teacher with a two-week gap between lectures. We tried to minimize the impact of external factors by carefully selecting close dates, holding the lectures in the same classroom on the same day of the week and at the same hour, and checking that the number and complexity of the introduced topics were comparable. We asked for students' feedback and defined a set of countable body signs indicating their involvement in the course. The results from both metrics (body language and text analysis) agreed: students prefer a more interactive course with a relaxed atmosphere, and are keener to learn under these conditions.
As large language models (LLMs) like ChatGPT have exhibited unprecedented machine intelligence, they have also shown great performance in assisting hardware engineers to realize higher-efficiency logic designs via natural-language interaction. To estimate the potential of an LLM-assisted hardware design process, this work demonstrates an automated design environment that explores LLMs to generate hardware logic designs from natural-language specifications. To realize a more accessible and efficient chip development flow, we present a scalable four-stage zero-code logic design framework based on LLMs, without retraining or fine-tuning. First, the demo, ChipGPT, generates prompts for the LLM, which then produces initial Verilog programs. Second, an output manager corrects and optimizes these programs before collecting them into the final design space. Finally, ChipGPT searches through this space to select the optimal design under the target metrics. The evaluation sheds some light on whether LLMs can generate correct and complete hardware logic designs described by natural language. It is shown that ChipGPT improves programmability and controllability, and exposes a broader design optimization space compared with prior work and native LLMs alone.
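The four-stage flow can be sketched as a pipeline of small functions. The LLM call is stubbed out, and the prompt wording, the correction pass, and the "smallest source as area proxy" metric are all illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of a ChipGPT-style four-stage flow:
# prompt generation -> LLM generation -> correction/collection -> search.

def make_prompt(spec: str) -> str:
    return f"Write synthesizable Verilog for the following module:\n{spec}"

def llm_generate(prompt: str) -> list[str]:
    # Stand-in for an LLM API call returning candidate Verilog programs.
    return ["module adder(input [3:0] a, b, output [4:0] s); "
            "assign s = a + b; endmodule"]

def correct_and_collect(candidates: list[str]) -> list[str]:
    # Output manager: keep only candidates that look like complete modules.
    return [c for c in candidates if c.strip().endswith("endmodule")]

def select_best(designs: list[str]) -> str:
    # Search the design space; here the shortest source proxies for area.
    return min(designs, key=len)

spec = "4-bit adder with carry-out"
best = select_best(correct_and_collect(llm_generate(make_prompt(spec))))
print(best.split("(")[0])
```

In a real deployment the stub would be replaced by an API call, and the selection metric by synthesis results (area, timing, power).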
This dissertation gives an overview of Martin-Löf's dependent type theory, focusing on its computational content and addressing the question of whether a fully canonical and computable semantic presentation is possible.
D. Koblah, R. Acharya, Olivia P. Dizon-Paradis
et al.
Artificial intelligence (AI) and machine learning (ML) techniques have been increasingly used in several fields to improve performance and the level of automation. In recent years, this use has increased exponentially due to the advancement of high-performance computing and the ever-growing size of data. One such field is hardware design, specifically the design of digital and analog integrated circuits, where AI/ML techniques have been extensively used to address ever-increasing design complexity, aggressive time-to-market, and the growing number of ubiquitous interconnected devices. However, the security concerns and issues related to integrated circuit design have been largely overlooked. In this article, we summarize the state of the art in AI/ML for circuit design and optimization, the associated security and engineering challenges, research in security-aware computer-aided design and electronic design automation, and future research directions and needs for using AI/ML in security-aware circuit design.
Learning from computer science can make medical devices fair for all races and sexes. The hardware or software that operates medical devices can be biased. A biased device is one that operates in a manner that disadvantages certain demographic groups and contributes to health inequity. Reducing bias is thus one route to increasing fairness in the operation of a medical device. Initiatives to promote fairness are growing rapidly across a range of technical disciplines, but this growth is not rapid enough in medical engineering. While computer science companies have terminated lucrative but biased facial recognition systems, biased medical devices continue to be sold as commercial products. It is important to address bias in medical devices now. This can be achieved by studying where and how bias arises; understanding both can inform mitigation strategies.
A novel hybrid pulse width modulation (PWM) and pulse amplitude modulation (PAM) (HPP) driving method is proposed for improving the low gray-level expression of a micro light-emitting diode (µLED) display. At the high and middle gray-levels, PWM is adopted in order to suppress the wavelength shift of the µLEDs. At the low gray-levels, PAM is applied, whereby the emission time and current of the µLEDs decrease simultaneously. The HPP driving method is simulated using a simplified p-type low-temperature polycrystalline silicon (LTPS) thin-film transistor (TFT)-based µLED pixel circuit. HPP driving exhibits stable PWM and PAM operations. Furthermore, HPP driving guarantees a data voltage range approximately 14 times larger than that of PWM driving, resulting in robust operation with a maximum error rate of 3.83% under data signal distortion. Consequently, the µLED pixel circuit adopting HPP driving improves low gray-level expression and demonstrates robust circuit operation.
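The hybrid idea can be illustrated numerically: above a gray-level threshold, only the on-time varies (PWM); below it, on-time and current shrink together (PAM-like). The threshold, the sqrt split, and all numbers below are illustrative assumptions, not the paper's circuit values.

```python
# Toy hybrid PWM/PAM gray-level driver. Luminance ~ current * on-time,
# so a sqrt split of the low-gray fraction keeps luminance linear in gray.

I_MAX, T_FRAME, GRAY_MAX, LOW_THRESH = 1.0, 1.0, 255, 32

def drive(gray):
    """Return (current, on_time) for a given 8-bit gray level."""
    if gray >= LOW_THRESH:                   # PWM region: fixed current
        return I_MAX, T_FRAME * gray / GRAY_MAX
    # PAM region: split the remaining fraction between current and time.
    frac = (gray / LOW_THRESH) ** 0.5
    return I_MAX * frac, T_FRAME * (LOW_THRESH / GRAY_MAX) * frac

for g in (255, 64, 8):
    i, t = drive(g)
    print(f"gray={g:3d}  current={i:.3f}  on-time={t:.4f}  lum~{i * t:.4f}")
```

The sqrt split makes the product current × on-time equal g/255 in both regions, so luminance stays continuous across the threshold.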