CFD Analysis of a Crucible Furnace Using Recycled PET as Fuel for Smelting Non-ferrous Metals
Beatriz E. Rubio-Campos, Edilberto Murrieta-Luna, Lázaro Canizalez Dávalos
et al.
The increasing accumulation of waste polyethylene terephthalate (PET) presents serious environmental problems, prompting the scientific community to develop waste management strategies. Despite these efforts, traditional mechanical recycling methods for PET face numerous limitations, leading to the exploration of alternative recycling approaches. Research has explored techniques such as PET combustion, but the process is complex because each plastic reacts differently when exposed to heat. A major advantage of quaternary recycling of PET is that it reduces the solid mass by 70 % and can generate 475.73 kJ/kg of energy. This article presents a numerical study of a crucible furnace for non-ferrous metal smelting, fueled by methane-air and ground PET, with combustion conducted under controlled conditions. A hydrodynamic analysis is performed by examining the pressure contours, velocity contours, and pathlines in the combustion chamber. A thermal and chemical study is also performed, analyzing temperature profiles and predicting flue gas emissions. The results show that the turbulence model used predicted eddy formation. The average temperature in the combustion chamber was 900 K, and species analysis at the furnace outlet indicates that this method is a sustainable and effective solution for waste plastic management.
Chemical engineering, Computer engineering. Computer hardware
Advanced Numerical Evaluation of the Jet Fire Caused by Accidental Releases of Liquid Hydrogen
Gianmaria Pio, Ernesto Salzano, Alessandro Tugnoli
Considering the ongoing energy transition and the role of hydrogen within it, a complete and comprehensive understanding of the safety aspects of hydrogen storage technologies is paramount to guarantee the sustainable development of these alternative solutions at an industrial scale. Among the available storage conditions, the possibility of using cryogenic temperatures to liquefy hydrogen has become more credible, partly thanks to the knowledge gained from the use of liquefied natural gas. Nevertheless, the peculiar properties of hydrogen call for a dedicated analysis of the safety aspects. For these reasons, the current work presents an advanced numerical investigation of the jet fires deriving from the accidental release of liquid hydrogen, performed with the open-source software Open Field Operation and Manipulation (OpenFOAM). Considering the dearth of experimental data at the boundary conditions of interest, preliminary investigations were carried out to assess the suitability of the existing sub-models and parameters for the evaluation of a hydrogen jet fire caused by an accidental release from high pressure and atmospheric temperature. A maximum temperature of ~2300 K was observed within the core of the flame. The locations where the maximum temperatures were observed are in line with experimental data available in the current literature, confirming the validity of the implemented models for the evaluation of near-field fluid dynamics and overall chemistry. In addition, the case of a non-ignited release was analyzed in terms of temporal and spatial profiles of temperature and hydrogen content within the numerical domain. Based on the gathered information, the maximum distance between the release point and the edge of the flammable cloud was obtained as a function of the release conditions.
In conclusion, the availability of robust and validated models for the characterization of the accidental releases of liquid hydrogen paves the way for further development and wider adoption of this technology as well as the optimized design of mitigation systems and safety procedures.
Chemical engineering, Computer engineering. Computer hardware
Not All Water Consumption Is Equal: A Water Stress Weighted Metric for Sustainable Computing
Yanran Wu, Inez Hua, Yi Ding
Water consumption is an increasingly critical dimension of computing sustainability, especially as AI workloads rapidly scale. However, current water impact assessment often overlooks where and when water stress is more severe. To fill in this gap, we present SCARF, the first general framework that evaluates water impact of computing by factoring in both spatial and temporal variations in water stress. SCARF calculates an Adjusted Water Impact (AWI) metric that considers both consumption volume and local water stress over time. Through three case studies on LLM serving, datacenters, and semiconductor fabrication plants, we show the hidden opportunities for reducing water impact by optimizing location and time choices, paving the way for water-sustainable computing. The code is available at https://github.com/jojacola/SCARF.
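The Adjusted Water Impact (AWI) described above weights consumption by local water stress over time. A minimal sketch of such a metric follows; the function name, the simple linear weighting, and the data layout are illustrative assumptions, not SCARF's actual formulation.

```python
# Illustrative sketch of an Adjusted-Water-Impact-style metric:
# weight each interval's water consumption by a local water-stress factor.
# The linear weighting and data layout are assumptions for illustration only.

def adjusted_water_impact(consumption_liters, stress_factors):
    """Sum of per-interval consumption scaled by local water stress.

    consumption_liters: water consumed in each time interval
    stress_factors: dimensionless stress weights (>= 0) for the
        facility's location during each interval
    """
    if len(consumption_liters) != len(stress_factors):
        raise ValueError("series must align in time")
    return sum(c * w for c, w in zip(consumption_liters, stress_factors))

# Same raw volume, different stress profiles: shifting work to low-stress
# hours lowers the adjusted impact even though raw usage is identical.
raw = [100.0, 100.0, 100.0]
peak_hours = adjusted_water_impact(raw, [2.0, 2.0, 2.0])  # 600.0
off_hours = adjusted_water_impact(raw, [0.5, 0.5, 0.5])   # 150.0
```

This captures the paper's core point that where and when water is consumed matters, not just how much.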
Multimodal Programming in Computer Science with Interactive Assistance Powered by Large Language Model
Rajan Das Gupta, Md. Tanzib Hosain, M. F. Mridha
et al.
LLM chatbot interfaces allow students to get instant, interactive assistance with homework, but careless use may not advance educational objectives. In this study, an interactive homework help system based on DeepSeek R1 is developed and first deployed for students enrolled in a large introductory computer science programming course. In addition to an assist button in a well-known code editor, our assistant offers a feedback option in our command-line automatic evaluator. It wraps student work in a personalized prompt that advances our educational objectives without offering answers outright. We have found that our assistant can recognize students' conceptual difficulties and provide ideas, plans, and template code in pedagogically appropriate ways. However, among other mistakes, it occasionally labels correct student code as incorrect or encourages students to use correct-but-lesson-inappropriate approaches, which can lead to long and frustrating journeys for the students. After discussing several development and deployment issues, we present our conclusions and future actions.
Call for Entries for the GI 2025 Teaching Award & Event Dates of the Special Interest Groups
Fachausschuss IBS, FG DDI
Call for entries for the GI 2025 Teaching Award and event dates of the special interest groups; see the PDF version.
Computer engineering. Computer hardware
Optimisation of Crystallisation Recipe for Varied Cloud Points Characteristics in Palm Oil Fractions
John Ting Zhi Zhang, Jeng Shiun Lim
The management of product quality in palm oil crystallisation poses a formidable challenge. Although various model-based optimisation control strategies have been widely applied, their effectiveness hinges on understanding the intricate and highly nonlinear dynamic behaviour of crystallisation. Notably, existing research has predominantly focused on other applications, such as wastewater treatment, sugar cane crystallisation, and the pharmaceutical industry, leaving a notable research gap in the crystallisation processes specific to the palm oil industry. This research attempts to fill this gap by investigating the impact of an optimisation tool that combines an artificial neural network and a genetic algorithm (ANN-GA) to optimise the crystallisation recipe, specifically the cooling segments of palm oil, for three different cloud points of palm olein (CP 6, CP 8, and CP 10). The artificial neural network (ANN), which uses the Levenberg-Marquardt algorithm, serves as an internal model for predicting process output, whereas the genetic algorithm (GA) explores a wide range of recipe combinations to maximise yield. Using MATLAB for optimisation, the ANN-GA approach goes through training, testing, and validation steps with industry-derived datasets. The results show root mean square errors (RMSE) of 0.8411 for CP 6, 0.4317 for CP 8, and 0.4105 for CP 10, indicating that the ANN is sensitive to dataset volumes. The GA then generates optimal input variables for industrial validation. Validation results reveal an enhanced yield of 63 % for CP 6 palm olein, 74 % for CP 8 palm olein, which is within the industrial range (66-76 %), and 77.26 % for CP 10 palm olein, which is within the range of 76-79 %. Overall, the ANN-GA technique is effective in predicting complicated systems such as the palm olein and palm stearin crystallisation processes.
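The ANN-GA coupling above pairs a surrogate model with an evolutionary search over recipe parameters. A minimal sketch of that pattern follows; the three-segment encoding, the quadratic toy surrogate (standing in for the trained ANN), and all numeric choices are illustrative assumptions, not the paper's actual setup.

```python
import random

# Sketch of surrogate-assisted recipe search: a genetic algorithm proposes
# cooling-segment temperatures and a surrogate model scores predicted yield.
# The quadratic toy surrogate and 3-segment encoding are assumptions
# standing in for the trained ANN of the study.

random.seed(1)
TARGET = [30.0, 22.0, 16.0]  # hypothetical ideal segment temperatures (degC)

def surrogate_yield(recipe):
    # Peaks when the recipe matches TARGET; the trained ANN plays this role.
    return 80.0 - sum((r - t) ** 2 for r, t in zip(recipe, TARGET))

def evolve(pop_size=40, generations=60):
    pop = [[random.uniform(10, 40) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate_yield, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection: top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # mean crossover plus small Gaussian mutation
            children.append([(x + y) / 2 + random.gauss(0, 0.5)
                             for x, y in zip(a, b)])
        pop = parents + children
    return max(pop, key=surrogate_yield)

best = evolve()  # recipe close to TARGET under the toy surrogate
```

Because the top half of each generation survives unchanged, the best fitness found never decreases, mirroring how the GA in the study converges on a recipe the ANN predicts to maximise yield.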
Chemical engineering, Computer engineering. Computer hardware
Stochastic gradient descent with random label noises: doubly stochastic models and inference stabilizer
Haoyi Xiong, Xuhong Li, Boyang Yu
et al.
Random label noise (or observational noise) widely exists in practical machine learning settings. While previous studies primarily focused on the effects of label noise on the performance of learning, our work investigates the implicit regularization effects of label noise under the mini-batch sampling settings of stochastic gradient descent (SGD), with the assumption that the label noise is unbiased. Specifically, we analyze the learning dynamics of SGD over the quadratic loss with unbiased label noise (ULN), where we model the dynamics of SGD as a stochastic differential equation with two diffusion terms (namely, a doubly stochastic model). While the first diffusion term is caused by mini-batch sampling over the (label-noiseless) loss gradients, as in many other works on SGD (Zhu et al 2019 ICML 7654–63; Wu et al 2020 Int. Conf. on Machine Learning (PMLR) pp 10367–76), our model investigates the second noise term of the SGD dynamics, which is caused by mini-batch sampling over the label noise, as an implicit regularizer. Our theoretical analysis finds that such an implicit regularizer favors convergence points that stabilize model outputs against perturbations of parameters (namely, inference stability). Though similar phenomena have been investigated by Blanc et al (2020 Conf. on Learning Theory (PMLR) pp 483–513), our work does not assume SGD to be an Ornstein–Uhlenbeck-like process and achieves a more generalizable result, with convergence of the approximation proved. To validate our analysis, we design two sets of empirical studies to analyze the implicit regularizer of SGD with unbiased random label noise: deep neural network training and linear regression. Our first experiment studies noisy self-distillation tricks for deep learning, where student networks are trained using the outputs of well-trained teachers with additive unbiased random label noise.
Our experiment shows that the implicit regularizer caused by the label noise tends to select models with improved inference stability. We also carry out experiments on SGD-based linear regression with ULN, where we plot the trajectories of parameters learned in every step and visualize the effects of implicit regularization. The results back up our theoretical findings.
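The linear-regression setting described above can be sketched in a few lines: mini-batch SGD on a quadratic loss where every sampled label carries fresh, zero-mean noise. The dimensions, learning rate, and noise scale below are illustrative assumptions, not the paper's experimental configuration.

```python
import random

# Minimal sketch of mini-batch SGD with unbiased label noise (ULN) on a
# 1-D quadratic loss. All numeric choices are illustrative assumptions.
random.seed(0)

w_true = 2.0
data = [(x / 50.0, w_true * (x / 50.0)) for x in range(100)]  # noiseless pairs

w = 0.0
lr, batch, sigma = 0.1, 8, 0.5
for step in range(2000):
    sample = random.sample(data, batch)
    grad = 0.0
    for x, y in sample:
        y_noisy = y + random.gauss(0.0, sigma)  # fresh unbiased label noise
        grad += (w * x - y_noisy) * x           # d/dw of 0.5 * (w*x - y)^2
    w -= lr * grad / batch
# Because the noise is zero-mean, w still converges near w_true; the
# residual fluctuation around it is the second diffusion term analyzed above.
```

This is the mechanism behind the plotted parameter trajectories: the label-noise diffusion never vanishes, so the iterates keep jittering around the minimizer rather than settling exactly on it.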
Computer engineering. Computer hardware, Electronic computers. Computer science
Next-generation Probabilistic Computing Hardware with 3D MOSAICs, Illusion Scale-up, and Co-design
Tathagata Srimani, Robert Radway, Masoud Mohseni
et al.
The vast majority of 21st-century AI workloads are based on gradient-based deterministic algorithms such as backpropagation. One of the key reasons for the dominance of deterministic ML algorithms is the emergence of powerful hardware accelerators (GPUs and TPUs) that have enabled the wide-scale adoption and implementation of these algorithms. Meanwhile, discrete and probabilistic Monte Carlo algorithms have long been recognized as among the most successful algorithms in all of computing, with a wide range of applications. Specifically, Markov Chain Monte Carlo (MCMC) algorithm families have emerged as the most widely used and effective methods for discrete combinatorial optimization and probabilistic sampling problems. We adopt a hardware-centric perspective on probabilistic computing, outlining the challenges and potential future directions to advance this field. We identify two critical research areas: 3D integration using MOSAICs (Monolithic/Stacked/Assembled ICs) and the concept of Illusion, a hardware-agnostic distributed computing framework designed to scale probabilistic accelerators.
A Fully Hardware Implemented Accelerator Design in ReRAM Analog Computing without ADCs
Peng Dang, Huawei Li, Wei Wang
Emerging ReRAM-based accelerators process neural networks via analog Computing-in-Memory (CiM) for ultra-high energy efficiency. However, significant overhead in peripheral circuits and complex nonlinear activation modes constrain system energy efficiency improvements. This work explores the hardware implementation of the Sigmoid and SoftMax activation functions of neural networks with stochastically binarized neurons by utilizing sampled noise signals from ReRAM devices to achieve a stochastic effect. We propose a complete ReRAM-based Analog Computing Accelerator (RACA) that accelerates neural network computation by leveraging stochastically binarized neurons in combination with ReRAM crossbars. The novel circuit design removes significant sources of energy/area efficiency degradation, i.e., the Digital-to-Analog and Analog-to-Digital Converters (DACs and ADCs) as well as the components to explicitly calculate the activation functions. Experimental results show that our proposed design outperforms traditional architectures across all overall performance metrics without compromising inference accuracy.
Hardware-Efficient Fault Tolerant Quantum Computing with Bosonic Grid States in Superconducting Circuits
Marc-Antoine Lemonde, Dany Lachance-Quirion, Guillaume Duclos-Cianci
et al.
Quantum computing holds the promise of solving classically intractable problems. Enabling this requires scalable and hardware-efficient quantum processors with vanishing error rates. This perspective manuscript describes how bosonic codes, particularly grid state encodings, offer a pathway to scalable fault-tolerant quantum computing in superconducting circuits. By leveraging the large Hilbert space of bosonic modes, quantum error correction can operate at the level of a single physical unit, drastically reducing the hardware requirements to bring fault-tolerant quantum computing to scale. Going beyond the well-known Gottesman-Kitaev-Preskill (GKP) code, we discuss how using multiple bosonic modes to encode a single qubit offers increased protection against control errors and enhances the overall error-correcting capabilities. Given recent successful demonstrations of critical components of this architecture, we argue that it offers the shortest path to achieving fault tolerance in gate-based quantum computing processors with a MHz logical clock rate.
An Adaptive Scheduling Mechanism Optimized for V2N Communications over Future Cellular Networks
Athanasios Kanavos, Sokratis Barmpounakis, Alexandros Kaloxylos
Automated driving requires the support of critical communication services with strict performance requirements. Existing fifth-generation (5G) schedulers residing at the base stations are not optimized to differentiate between critical and non-critical automated driving applications. Thus, when the traffic load increases, there is a significant decrease in their performance. Our paper introduces SOVANET, a beyond 5G scheduler that considers the Radio Access Network (RAN) load, as well as the requirements of critical, automated driving applications and optimizes the allocation of resources to them compared to non-critical services. The proposed scheduler is evaluated through extensive simulations and compared to the typical Proportional Fair scheduler. Results show that SOVANET’s performance for critical services presents clear benefits.
Computer engineering. Computer hardware, Electronic computers. Computer science
Assessment of Failure Frequencies of Pipelines in Natech Events Triggered by Earthquakes
Fabiola Amaducci, Alessio Misuri, Ernesto Salzano
et al.
During a seismic event, underground pipelines can undergo significant damage, with severe implications in terms of life safety and economic impact. This type of scenario falls under the definition of Natech. In recent years, quantitative risk analysis has become a pivotal tool to assess and manage Natech risk. Among the tools needed to perform the quantitative assessment of Natech risk, vulnerability models are required to characterize equipment damage from natural events. This contribution reviews the pipeline vulnerability models available for the case of earthquakes. Two main categories of models have been identified in the literature. The first category proposes the repair rate as the performance indicator for pipeline damage due to seismic load, and gives as output the number of required repairs per unit length. The second category proposes fragility curves associated with risk states depending on the mechanism of ground failure. In the framework of Natech risk assessment, the latter have the important advantage of clearly and unambiguously defining the risk state (and thus the extent of the release) with which they are associated. A subset of vulnerability models deemed more appropriate for application in the framework of Natech risk assessment is then identified. Their application to the assessment of the expected frequencies of release events due to pipeline damage is provided, enabling their comparison and a discussion of their relative strengths and weaknesses.
Chemical engineering, Computer engineering. Computer hardware
Decision trees for regular factorial languages
Mikhail Moshkov
In this paper, we study arbitrary regular factorial languages over a finite alphabet Σ. For the set L(n) of words of length n belonging to a regular factorial language L, we investigate the depth of decision trees solving the recognition and the membership problems deterministically and nondeterministically. In the case of the recognition problem, for a given word from L(n), we should recognize it using queries each of which, for some i∈{1,…,n}, returns the ith letter of the word. In the case of the membership problem, for a given word of length n over the alphabet Σ, we should recognize whether it belongs to the set L(n) using the same queries. For a given problem and type of trees, instead of the minimum depth h(n) of a decision tree of the considered type solving the problem for L(n), we study the smoothed minimum depth H(n) = max{h(m) : m ≤ n}. With the growth of n, the smoothed minimum depth of decision trees solving the recognition problem deterministically is either bounded from above by a constant, grows as a logarithm, or grows linearly. For the other cases (decision trees solving the recognition problem nondeterministically, and decision trees solving the membership problem deterministically and nondeterministically), with the growth of n, the smoothed minimum depth is either bounded from above by a constant or grows linearly. As corollaries of the obtained results, we study the joint behavior of the smoothed minimum depths of decision trees for the four considered cases and describe five complexity classes of regular factorial languages. We also investigate the class of regular factorial languages over the alphabet {0,1} each of which is given by one forbidden word.
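The smoothing H(n) = max{h(m) : m ≤ n} defined above is simply a running maximum over the per-length minimum depths. A minimal sketch, assuming h is given as a finite sequence of values h(1), …, h(n):

```python
def smoothed_min_depth(h):
    """Turn per-length minimum depths h(1..n) into the smoothed sequence
    H(n) = max{h(m) : m <= n}, which is monotone non-decreasing by design."""
    H, best = [], 0
    for depth in h:
        best = max(best, depth)  # running maximum over all lengths so far
        H.append(best)
    return H

# A non-monotone depth profile becomes monotone after smoothing,
# which is what makes the asymptotic trichotomy well defined.
print(smoothed_min_depth([1, 3, 2, 5, 4]))
```

Smoothing removes incidental dips in h(n) so that the asymptotic behavior (constant, logarithmic, or linear growth) can be classified cleanly.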
Computer engineering. Computer hardware, Electronic computers. Computer science
Implementation of a Simplified Micromixing Model Inside a Lagrangian Particle Dispersion Code for the Estimation of Concentration Variances and Peaks
Gianni Luigi Tinarelli, Roberto Sozzi, Daniela Barbero
In the frame of modelling odour nuisances, the estimation of concentration peaks, representing values averaged over a relatively short time of the order of the interval between successive breaths, is of fundamental importance. Dispersion models currently used in this field cannot reconstruct these values at relatively high frequency, because their intrinsic theoretical design allows them to provide only time- or ensemble-averaged concentrations. The scope of this work is to describe the implementation of a simplified micromixing model inside a standard ensemble-average Lagrangian Particle Dispersion Model, with the aim of simulating the field of concentration variances together with concentration averages. A simplified micromixing model describes the interaction between the emitted plume and the rest of the atmospheric flow through bulk entrainment relationships. This simplified view allows the description of the first two moments of the concentration distribution, which is nevertheless sufficient to derive a peak-to-mean relationship under some hypotheses about the form of the distribution. Some preliminary results of the application of this method inside the SPRAY Lagrangian Particle Dispersion Model are shown, comparing both the instantaneous concentration and the peak-to-mean ratio, together with their spatial behaviour derived under some controlled conditions, with those obtained from other schemes currently included in the code.
Chemical engineering, Computer engineering. Computer hardware
A Comprehensive Approach to Establish the Impact of Worksites Air Emissions
Marco Barozzi, Carmelo Dimauro, Martina Silvia Scotton
et al.
Worksite activities are time-limited events associated with continuous releases of airborne pollutants, such as carbon monoxide, particulate matter, and NOx, and they can impact potentially vast areas. The side effects on the environment can be severe, and they are the subject of literature studies whose final aim is to propose solutions that may improve the management of air emissions. However, no general assessment method is yet available to estimate their effects on the environment and on workers' health. In this work, a general procedure is proposed that can potentially be applied to every type of worksite (e.g., construction sites, upgrades of chemical plants, road sites). The approach involves a detailed assessment of emissions and their expected pollutant concentrations. A dedicated mathematical model has been defined to assess pollutant emissions over time, consistent with the different phases of the foreseen activities. Emissions are defined on the basis of the Gantt descriptions of the activities, and air pollutant dispersion is simulated with a dedicated model. Finally, the obtained results are evaluated against statutory air quality thresholds that govern the health risks for workers and citizens potentially exposed to pollutants.
Keywords: Atmospheric pollution, CALPUFF, Environmental impact, Work
Chemical engineering, Computer engineering. Computer hardware
OpenPodcar: an Open Source Vehicle for Self-Driving Car Research
Fanta Camara, Chris Waltham, Grey Churchill
et al.
OpenPodcar is a low-cost, open-source hardware and software autonomous vehicle research platform based on an off-the-shelf, hard-canopy, mobility scooter donor vehicle. Hardware and software build instructions are provided to convert the donor vehicle into a low-cost and fully autonomous platform. The open platform consists of (a) hardware components: CAD designs, a bill of materials, and build instructions; (b) Arduino, ROS and Gazebo control and simulation software files which provide standard ROS interfaces and simulation of the vehicle; and (c) higher-level ROS software implementations and configurations of standard robot autonomous planning and control, including the move_base interface with the Timed-Elastic-Band planner, which enacts commands to drive the vehicle from a current to a desired pose around obstacles. The vehicle is large enough to transport a human passenger or similar load at speeds of up to 15 km/h, for example for use as a last-mile autonomous taxi service or to transport delivery containers around a city center. It is small and safe enough to be parked in a standard research lab and used for realistic human-vehicle interaction studies. The total system build cost from new components was around USD 7,000 in 2022. OpenPodcar thus provides a good balance between real-world utility, safety, cost and research convenience.
A Community Discovery Algorithm Fused with Adjacent Edge Attribute for Personal Social Network
LI Youhong, WANG Xuejun, CHEN Yuyong, ZHAO Yuelong, XU Wenxian
Traditional intelligent-evolution community discovery algorithms usually suffer from problems such as weakened node attributes and premature convergence. To address these problems, this paper proposes a community discovery algorithm, NLA/SCD, using swarm-intelligence-based clustering of adjacent edge attributes for personal social networks. By fusing the structures of adjacent edges and the similar features of their node attributes, the fitness function of the Social Spider Optimization (SSO) algorithm is defined, and the increment of the community modularity is selected as the iterative criterion of the operator. Then, as the male and female individuals evolve and mate, the fitness function and the modularity increment function are used to locally and globally optimize the process of community division. Experimental results show that the NLA/SCD algorithm can effectively detect personal social networks with diverse attribute information, and it maintains high division accuracy while running fast.
Computer engineering. Computer hardware, Computer software
Memoryless non‐linearity in B‐Substitution doped and undoped graphene FETs: A comparative investigation
Chandrasekar Lakshumanan, Kumar P Pradhan
An accurate electrical equivalent circuit model for a boron-substitution doped graphene field-effect transistor (GFET) is proposed to analyse the effects of memoryless non-linearity on transconductance. The proposed equivalent circuit model is verified against the simulated results of an industry-standard circuit simulation tool. The fundamental figures of merit (FOMs), such as the second- and third-order harmonic distortion terms (HD2 and HD3), the gain compression point (Ain,1dB), the second- and third-order intermodulation distortion terms (IM2 and IM3), and the second- and third-order input intercept points (AIIP2 and AIIP3), are mathematically modelled for the B-substitution doped GFET to examine the linear behaviour of the device. The expressions are validated by applying single-tone and two-tone simulation tests to the proposed equivalent circuit model using the industry-standard circuit simulator. The proposed model is compatible with, and predicts accurate results for, both B-substitution doped and undoped GFETs. The simulation results show excellent agreement with the mathematical model and are also compared with the undoped GFET and a conventional MOSFET. It is also observed that B-substitution doping of the graphene sheet induces a significant bandgap and hence enhances the linear behaviour of the doped GFET, promising the highly desirable linearity required in analog/RF applications.
Computer engineering. Computer hardware
Computer Architecture-Aware Optimisation of DNA Analysis Systems
Hasindu Gamaarachchi
DNA sequencing is revolutionising the field of medicine. DNA sequencers, the machines which perform DNA sequencing, have evolved from the size of a fridge to that of a mobile phone over the last two decades. The cost of sequencing a human genome has also fallen from billions of dollars to hundreds of dollars. Despite these improvements, DNA sequencers output hundreds or thousands of gigabytes of data that must be analysed on computers to discover meaningful information with biological implications. Unfortunately, analysis techniques have not kept pace with the rapidly improving sequencing technologies. Consequently, even today, DNA analysis is performed on high-performance computers, just as it was a couple of decades ago. Such high-performance computers are not portable, so the full utility of an ultra-portable sequencer for sequencing in the field or at the point of care is limited by the lack of portable, lightweight analytic techniques. This thesis proposes computer architecture-aware optimisation of DNA analysis software. DNA analysis software is inevitably convoluted due to the complexity associated with biological data. Modern computer architectures are also complex. Performing architecture-aware optimisations requires the synergistic use of knowledge from both domains (i.e., DNA sequence analysis and computer architecture). This thesis aims to draw the two domains together. Gold-standard DNA sequence analysis workflows are systematically examined for algorithmic components that cause performance bottlenecks. Identified bottlenecks are resolved through architecture-aware optimisations at different levels, i.e., memory, cache, register and processor. The optimised software tools are used in complete end-to-end analysis workflows, and their efficacy is demonstrated by running them on prototypical embedded systems.
Bio-Inspired Stereo Vision Calibration for Dynamic Vision Sensors
M. Domínguez-Morales, A. Jiménez-Fernandez, G. Jiménez-Moreno
et al.
Many advances have been made in the field of computer vision. Several recent research trends have focused on mimicking human vision by using stereo vision systems. In multi-camera systems, a calibration process is usually implemented to improve the accuracy of the results. However, these systems generate a large amount of data to be processed; a powerful computer is therefore required and, in many cases, processing cannot be done in real time. Neuromorphic engineering attempts to create bio-inspired systems that mimic the information processing that takes place in the human brain. This information is encoded using pulses (or spikes), and the resulting systems are much simpler (in computational operations and resources), which allows them to perform similar tasks with much lower power consumption; these processes can thus be implemented on specialized hardware with real-time processing. In this work, a bio-inspired stereo vision system is presented, and a calibration mechanism for it is implemented and evaluated through several tests. The result is a novel calibration technique for a neuromorphic stereo vision system, implemented on specialized hardware (an FPGA, Field-Programmable Gate Array), which achieves low latencies in a stand-alone hardware implementation and works in real time.
Computer Science