Biomethane Production Plants: A Case Study Aimed at ATEX Zone Classification
Roberto Lauri
Biomethane, the purified form of biogas, is one of the main renewable gases of the future and can help decarbonise the European Union (EU) energy system. For these reasons, there is a clear need to increase biomethane production by 2030, as stated in the REPowerEU plan (18 May 2022); in particular, European biomethane production needs to reach 35 billion m3 by 2030. The strategic importance of biomethane requires specific attention to the safety of its production plants. Indeed, one of the main hazards associated with the production process is the possible formation of potentially explosive atmospheres (ATEX zones) due to accidental releases from components such as valves, flanges, and compressors. In accordance with the ATEX Directive 99/92/EC, the employer is obliged to classify the workplace zones where explosive mixtures could occur. This paper focuses on a biomethane production plant, and its goal is the classification of the ATEX zone that could be generated by a potential biofuel release from the compressor. In particular, the biofuel compression unit has been examined because it is the most potentially hazardous location: its hazardousness is due to the limited dilution provided by natural ventilation (an indoor location) and to the maximum biomethane pressure, which strongly increases the released mass flow. In the paper, a specific software tool has been used to study the biofuel outflow from the potential emission source (the compressor) and to classify the zone (hazardous or non-hazardous area).
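For context on the kind of calculation such software performs, the released mass flow from a pressurized gas leak is typically estimated with the sonic (choked) discharge formula of IEC 60079-10-1. The Python sketch below is illustrative only and is not taken from the paper; hole area, pressure, temperature, and discharge coefficient are assumed values for a methane leak.

    # Choked-flow mass release rate of methane from a small orifice,
    # the starting point for sizing an ATEX zone around a compressor leak.
    from math import sqrt

    Cd    = 0.99        # discharge coefficient (assumed)
    S     = 2.5e-6      # leak cross-section, m^2 (assumed 2.5 mm^2 hole)
    p     = 16e5        # internal pressure, Pa (assumed compressor outlet)
    gamma = 1.31        # heat capacity ratio of methane
    M     = 16.04e-3    # molar mass of methane, kg/mol
    Z     = 1.0         # compressibility factor (ideal-gas assumption)
    R     = 8.314       # universal gas constant, J/(mol K)
    T     = 293.15      # gas temperature, K

    # Sonic (choked) release rate, as in IEC 60079-10-1 Annex B
    W = Cd * S * p * sqrt(gamma * M / (Z * R * T)
                          * (2 / (gamma + 1)) ** ((gamma + 1) / (gamma - 1)))
    print(f"released mass flow: {W * 1000:.2f} g/s")

The release rate grows linearly with pressure, which is why the compression unit is the critical emission source.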
Chemical engineering, Computer engineering. Computer hardware
Force Measurement Technology of Vision-Based Tactile Sensors
Bin Fang, Jie Zhao, Nailong Liu
et al.
Marker-type vision-based tactile sensors (VTSs) realize force sensing by calibrating marker vector information. This tactile visualization can provide high-precision, multimodal force information that advances dexterous robotic manipulation. Considering the contribution of VTSs to force measurement, this article reviews their advanced force measurement technologies. First, the working principle of marker-type VTSs is introduced, including single-layer markers, double-layer markers, color coding, and optical flow. Then, the relationship between the marker type and the category of force measurement is discussed in detail. On this basis, the process of marker feature extraction is summarized, including image processing and marker-matching technologies. According to the learning approach, force measurement methods are classified into physical and deep learning models, and the branches of each method are analyzed in terms of input types. Combined with measuring range and precision, the influence of sensor design, materials, and recognition methods on force measurement performance is further discussed. Finally, the difficulties and challenges are analyzed, and future developments are proposed. This review aims to deepen understanding of the research progress and applications and to provide a reference for the research community to promote technological progress in related fields.
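As a toy illustration of the calibration step described above (not code from the review), the sketch below fits a linear map from stacked marker displacement vectors to a three-axis contact force by least squares; all data are synthetic, and real sensors calibrate against a reference force/torque sensor.

    import numpy as np

    rng = np.random.default_rng(0)
    n_markers = 25
    # Marker displacement vectors for 200 calibration presses: (200, 2*n_markers)
    D = rng.normal(size=(200, 2 * n_markers))
    K_true = rng.normal(size=(2 * n_markers, 3))          # hidden ground-truth map
    F = D @ K_true + 0.01 * rng.normal(size=(200, 3))     # reference forces (Fx, Fy, Fz)

    K, *_ = np.linalg.lstsq(D, F, rcond=None)             # calibrated linear model
    F_hat = D @ K                                         # predicted forces
    print("RMS force error:", np.sqrt(((F - F_hat) ** 2).mean()))

Physical models in the review generalize this idea, while deep learning models replace the linear map with a learned nonlinear one.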
Computer engineering. Computer hardware, Control engineering systems. Automatic machinery (General)
The Effect of Intermittent Turbulence on Drop Size in Immiscible Liquid-liquid Dispersion Mechanically Agitated by High-shear Sawtooth Impeller
Roman Formanek, Radek Sulc
The effect of intermittent turbulence on drop size reported by Baldyga and Podgórska (1998) was analysed and the multifractal exponent αFT was evaluated in a model system of immiscible silicone oil-water dispersion mechanically agitated by a high-shear sawtooth impeller. Average αFT values of approximately 1.64 and 1.46 were found for the region close to the impeller and the region outside the impeller, respectively. Finally, the relation between the Sauter mean diameter d32 and the maximum drop size dmax was investigated: d32/dmax values of approximately 0.6 and 0.5 were found for the region close to the impeller and the region outside the impeller, respectively. The droplet sizes were obtained by an in situ measurement technique combined with image analysis.
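For reference, the Sauter mean diameter is defined as d32 = Σd_i^3 / Σd_i^2 over the measured drop diameters. A minimal Python sketch with synthetic drop sizes (not the paper's data):

    import numpy as np

    d = np.random.default_rng(1).lognormal(np.log(120e-6), 0.4, size=500)  # diameters, m
    d32 = (d ** 3).sum() / (d ** 2).sum()     # Sauter mean diameter
    dmax = d.max()                            # maximum drop size
    print(f"d32 = {d32 * 1e6:.1f} um, d32/dmax = {d32 / dmax:.2f}")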
Chemical engineering, Computer engineering. Computer hardware
Hardware-Software Co-Design for Accelerating Transformer Inference Leveraging Compute-in-Memory
Dong Eun Kim, Tanvi Sharma, Kaushik Roy
Transformers have become the backbone of neural network architectures for most machine learning applications. Their widespread use has resulted in multiple efforts to accelerate attention, the basic building block of transformers. This paper tackles the challenges associated with accelerating attention through a hardware-software co-design approach while leveraging a compute-in-memory (CIM) architecture. In particular, our energy- and area-efficient CIM-based accelerator, named HASTILY, aims to accelerate softmax computation, an integral operation in attention, and to minimize its high on-chip memory requirements, which grow quadratically with input sequence length. Our architecture consists of novel CIM units called unified compute and lookup modules (UCLMs) that integrate both lookup and multiply-accumulate functionality within the same SRAM array, incurring minimal area overhead over standard CIM arrays. Designed in TSMC 65 nm, UCLMs can concurrently perform exponential and matrix-vector multiplication operations. Complementing the proposed architecture, HASTILY features a fine-grained pipelining strategy for scheduling both attention and feed-forward layers, reducing the quadratic dependence on sequence length to a linear one. Further, for fast softmax computation, which involves computing the maxima and sums of exponential values, these operations are parallelized across multiple cores using a reduce-and-gather strategy. We evaluate our proposed architecture using a compiler tailored towards attention computation and a standard cycle-level CIM simulator. Our evaluation shows end-to-end throughput (TOPS) improvements of 4.4x-9.8x and 1.7x-5.9x over an Nvidia A40 GPU and baseline CIM hardware, respectively, for BERT models with INT-8 precision. Additionally, it shows gains of 16x-36x in energy efficiency (TOPS/W) over the A40 GPU and similar energy efficiency to the baseline CIM hardware.
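The reduce-and-gather parallelization of softmax can be illustrated with a short numerical sketch (a conceptual analogue, not the HASTILY implementation): each core computes a local maximum and local exponential sum over its chunk, the partial results are reduced globally, and each chunk is then normalized.

    import numpy as np

    def chunked_softmax(x, n_cores=4):
        chunks = np.array_split(x, n_cores)
        m = max(c.max() for c in chunks)                  # reduce: global max
        s = sum(np.exp(c - m).sum() for c in chunks)      # reduce: global exp-sum
        return np.concatenate([np.exp(c - m) / s for c in chunks])  # gather

    x = np.random.default_rng(0).normal(size=1024)
    ref = np.exp(x - x.max()) / np.exp(x - x.max()).sum()
    assert np.allclose(chunked_softmax(x), ref)

In the accelerator, the UCLMs evaluate the exponentials while the reductions are spread across CIM cores.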
Understanding Computational Science and Engineering (CSE) and Domain Science Skills Development in National Laboratory Postgraduate Internships
Morgan M. Fong, Hilary Egan, Marc Day
et al.
Background: Harnessing advanced computing for scientific discovery and technological innovation demands scientists and engineers well-versed in both domain science and computational science and engineering (CSE). However, few universities provide access to both integrated domain science/CSE cross-training and Top-500 High-Performance Computing (HPC) facilities. National laboratories offer internship opportunities capable of developing these skills. Purpose: This study presents an evaluation of federally funded postgraduate internship outcomes at a national laboratory. It seeks to answer three questions: 1) What computational skills, research skills, and professional skills do students improve through internships at the selected national laboratory? 2) Do students gain knowledge in domain science topics through their internships? 3) Do students' career interests change after these internships? Design/Method: We developed a survey, collected responses from past participants of five federally funded internship programs, and compared participants' ratings of their prior experience to their internship experience. Findings: Our results indicate that participants improve CSE skills and domain science knowledge and are more interested in working at national labs. Participants go on to degree programs and positions in relevant domain science topics after their internships. Conclusions: We show that national laboratory internships are an opportunity for students to build CSE skills that may not be available at all institutions. We also show growth in domain science skills during the internships through direct exposure to research topics. The survey instrument and approach used may be adapted to other studies to measure the impact of postgraduate internships across disciplines and internship settings.
Life Cycle Assessment of Electro-submersible Pumps System Enhanced with Permanent Magnet Motor
Manolo Córdova, Juan A. Córdova, Fabian Silva
et al.
Climate change has triggered environmental awareness around the world, with clean technologies being used to help control the inventory of Greenhouse Gas (GHG) emissions in operations; however, calculating the environmental mitigation requires a Life Cycle Assessment (LCA) of the production processes. Oil artificial lift systems use Permanent Magnet Motors (PMM) as a viable option in their configuration, which is why it is important to know the greenhouse gas inventory of these alternatives. This research compares the greenhouse gas inventory for the product Life Cycle Assessment (LCA) of an Electro-Submersible Pumps System (ESPs) with a Normal Induction Motor (NIM) and with a Permanent Magnet Motor (PMM). First, the Functional Unit (FU) and the LCA were defined according to the ISO 14067:2018 standard. Then the greenhouse gas inventory was carried out according to ISO 14064-1:2019. As a result, five stages of the LCA were defined and 14 related activities were identified. For the ESPs with NIM, 999.9 kg of raw material, 1,491.66 kWh for manufacturing, 1,491.66 kWh for storage, and 5.77E04 kWh for use were calculated. For the ESPs with PMM, 656 kg of raw material, 1,491.66 kWh for manufacturing, 1,491.66 kWh for storage, and 4.72E04 kWh for use were calculated. An 18.99 % reduction of the Carbon Footprint (CF) was achieved for the ESPs with the new technology, driven by a substantial 18.19 % decrease in the energy consumption entering the greenhouse gas inventory. The use of clean technology through PMM could be a feasible alternative in Ecuador, since the use stage of the LCA accounts for 96.39 % of the total CF.
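A quick back-of-envelope check of the reported figures (values copied from the abstract, units assumed to be kWh):

    e_nim = 5.77e4   # use-phase energy, ESPs with induction motor, kWh
    e_pmm = 4.72e4   # use-phase energy, ESPs with permanent magnet motor, kWh
    print(f"energy reduction: {100 * (e_nim - e_pmm) / e_nim:.2f} %")  # ~18.20 %

which is consistent with the 18.19 % decrease in energy consumption reported above.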
Chemical engineering, Computer engineering. Computer hardware
A Laser SLAM Algorithm for Indoor Dynamic Pedestrian Scenes
YE Zhiqi, ZHANG Guobao, ZHU Hongwei
Eliminating the interference of dynamic pedestrians in real-time mapping is a core challenge for laser Simultaneous Localization And Mapping (SLAM) algorithms, particularly in complex indoor environments. Most existing SLAM algorithms focus primarily on static scenes and overlook moving objects. However, in indoor environments, frequently appearing moving pedestrians significantly degrade the quality of the global point-cloud map and increase uncertainty in subsequent localization and navigation tasks. To address this issue, this study proposes a tightly coupled laser SLAM algorithm specifically designed for dynamic pedestrian scenarios in indoor environments. In addition to the traditional SLAM framework, this study introduces a pre-processing module based on point-cloud clustering and segmentation to accurately eliminate dynamic pedestrian point clouds. The algorithm first applies an enhanced two-stage Euclidean-distance clustering algorithm to cluster and segment point clouds. Subsequently, multidimensional slice and intensity features are extracted from the clustering results and combined with the classification results of a Support Vector Machine (SVM) to identify pedestrian instances in the scene. Meanwhile, the algorithm utilizes the static point cloud to estimate ego motion in real time and constructs a high-resolution point-cloud map. To evaluate the performance of the algorithm, assessments are performed on both the public Hilti dataset and real-world scenario data, focusing on the effectiveness of dynamic point-cloud removal and real-time capability. Experimental results demonstrate that the algorithm significantly improves point-cloud map construction quality and remarkably reduces the proportion of dynamic pedestrian points compared to state-of-the-art laser SLAM algorithms such as Removert and Dynablox. The processing time for a single frame does not exceed 100 ms, meeting real-time requirements.
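To make the pre-processing stage concrete, the sketch below mimics its two steps on synthetic 2D data: Euclidean clustering of points, followed by an SVM deciding pedestrian versus non-pedestrian per cluster. It is a toy stand-in; the paper clusters 3D LiDAR points in two stages and uses multidimensional slice and intensity features.

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal(c, 0.15, size=(60, 2)) for c in [(0, 0), (2, 1), (4, 0)]])
    labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(pts)   # Euclidean clustering

    feats, y = [], [1, 0, 1]                  # placeholder pedestrian labels per cluster
    for k in sorted(set(labels) - {-1}):
        cluster = pts[labels == k]
        extent = cluster.max(0) - cluster.min(0)
        feats.append([len(cluster), extent[0], extent[1]])     # size + bounding extents

    clf = SVC(kernel="rbf").fit(feats, y)     # per-cluster pedestrian classifier
    print(clf.predict(feats))

Clusters classified as pedestrians are removed before scan matching, leaving the static point cloud for ego-motion estimation.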
Computer engineering. Computer hardware, Computer software
Recommendation of Learning Resource Based on Knowledge Graph Convolutional Network
TANG Zhikang, WU Yuqi, LI Chunying, TANG Yong
To address the unstable recommendation results caused by random sampling and neighborhood selection in existing Knowledge Graph Convolutional Network (KGCN) models, this study constructs a sampling model based on Structural Holes and Common Neighbors (SHCN) importance ranking. SHCN leverages the advantages of KGCN in processing higher-dimensional heterogeneous data. On this basis, the study proposes a KGCN recommendation model named KGCN-SHCN. First, the SHCN sampling method is used to rank the receptive field of each entity in a Knowledge Graph (KG). Then, the entity information and the information collected from the entity's neighborhood are aggregated by a Graph Convolutional Network (GCN) to obtain the feature representation of the learning resources. Finally, the feature representations of learners and learning resources are fed into a prediction function to obtain the interaction probabilities. Experiments conducted on three datasets show that the proposed model, especially with the sum aggregator, yields better results in terms of the AUC and ACC evaluation indexes than KGCN, RippleNet, and other KG-based recommendation models, proving the superiority of the proposed model.
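The sum aggregator referred to above combines an entity's own embedding with its neighborhood representation before a nonlinear transform, following the original KGCN formulation agg_sum = σ(W(v + v_N) + b). A minimal NumPy sketch, with a plain average standing in for the relation-weighted neighborhood vector and tanh standing in for σ:

    import numpy as np

    def sum_aggregate(e_self, e_neighbors, W, b):
        # e_self: (d,), e_neighbors: (k, d) ranked receptive field of the entity
        v_n = e_neighbors.mean(axis=0)             # neighborhood representation
        return np.tanh(W @ (e_self + v_n) + b)     # sum aggregator

    d, k = 16, 4
    rng = np.random.default_rng(0)
    out = sum_aggregate(rng.normal(size=d), rng.normal(size=(k, d)),
                        rng.normal(size=(d, d)) / np.sqrt(d), np.zeros(d))
    print(out.shape)   # (16,)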
Computer engineering. Computer hardware, Computer software
Synthetic Aperture Interferometric Passive Radiometer Imaging to Locate Electromagnetic Leakage From Spacecraft Surface
Yuting Zhang, Jie Zhang, Yuhan Huang
et al.
The localization of electromagnetic radiation leakage through cabin gaps is a critical and challenging aspect of electromagnetic compatibility (EMC) design for spacecraft with complex electromagnetic environments. This paper proposes a localization method based on synthetic aperture interferometric passive radiometry imaging. Electromagnetic radiation signals are measured at a certain distance from the spacecraft surface to form visibility samples. A Fourier transform pair between the visibility samples and the corrected brightness temperature of the electromagnetic radiation leakage is established, and the leakage location image of the spacecraft surface is obtained through the inverse Fourier transform. A sparse sampling method based on ant colony optimization is proposed to improve testing efficiency. The impacts of various factors on the imaging results are analyzed, including positional parameters, the positioning accuracy of the test antenna, scanning parameters, and measurement receiver amplitude/phase errors. Experiments were conducted on a 1 m × 1 m × 1 m cabin with 51 holes on one surface, and the proposed algorithm was shown to effectively image and locate electromagnetic leakage points at different frequencies. The effectiveness of sparse sampling was also verified, with a localization accuracy of 90.2 % and a testing time saving of 81.9 %.
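The Fourier-pair relationship at the core of the method can be illustrated in a few lines: treat the leakage brightness temperature map as the image, generate visibility samples by a forward Fourier transform, and recover the leak location by the inverse transform. This toy NumPy example is illustrative only; real visibilities are cross-correlations of antenna-pair measurements on a sampled u-v grid.

    import numpy as np

    T = np.zeros((64, 64))
    T[20, 35] = 1.0                     # a leakage point on the cabin surface
    V = np.fft.fft2(T)                  # forward model: visibility samples
    T_rec = np.fft.ifft2(V).real        # image reconstruction
    print(np.unravel_index(T_rec.argmax(), T_rec.shape))   # -> (20, 35)

Sparse sampling, optimized here with ant colony optimization, amounts to measuring only a subset of the visibility samples while preserving localization accuracy.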
Computer engineering. Computer hardware
Prediction of Rice Production to Support Food Security in Bogor Regency using Linear Regression and Support Vector Machine (SVM)
Ani Apriani, Nono Carsono, Mas Dadang Enjat Munajat
A prediction is an estimate of something that has not yet occurred; its purpose is to minimize uncertainty and reduce errors in planning. Bogor Regency, with the largest population in West Java, requires a substantial amount of food, and rice production must meet the consumption needs of the population. To anticipate potential rice shortages, support effective planning, and reduce dependence on rice imports, research is needed to predict rice production. This study predicts rice production using Linear Regression and Support Vector Machine (SVM) algorithms. Secondary data from the Department of Food Crops and Horticulture and the Central Statistics Agency (BPS) of Bogor Regency were utilized. Results show that the Linear Regression method outperformed SVM, with MSE 236202.323, RMSE 486.007, MAE 388.712, and R2 1.000. In contrast, SVM yielded MSE 1461472466.751, RMSE 38229.210, MAE 303333.535, and R2 -0.065. In conclusion, the prediction using Linear Regression demonstrated better accuracy than SVM.
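A minimal sketch of the reported comparison, using scikit-learn on synthetic stand-in data (the study's actual features come from the BPS and agriculture-office records):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))    # e.g. harvested area, rainfall, ... (assumed)
    y = 50_000 + X @ np.array([4000.0, -1500.0, 800.0]) + rng.normal(0, 500, size=100)

    for model in (LinearRegression(), SVR()):
        pred = model.fit(X, y).predict(X)
        mse = mean_squared_error(y, pred)
        print(type(model).__name__,
              f"MSE={mse:.1f} RMSE={np.sqrt(mse):.1f}",
              f"MAE={mean_absolute_error(y, pred):.1f} R2={r2_score(y, pred):.3f}")

On data with a strong linear trend and an unscaled, large-valued target, a default SVR performs poorly, which mirrors the gap reported above.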
Keywords: Prediction, Algorithm, SVM, Linear Regression.
Engineering (General). Civil engineering (General), Computer engineering. Computer hardware
Preface to Special Issue on Scientific Computing and Learning Analytics for Smart Healthcare Systems (Part II)
Chinmay Chakraborty, Sayonara Barbosa, Lalit Garg
This special issue introduces emerging intelligent healthcare technologies that incorporate big medical data, artificial intelligence, scientific computing, federated learning, bio-inspired computation, the Internet of Medical Things, security and privacy, semantic databases, etc. Health monitoring and diagnosis for the target structure of interest are achieved through the interpretation of collected data. Advances in sensor technologies and data acquisition tools have led to a new era of big data, where massive amounts of medical data are collected by different sensors. This special issue offers valuable insights to researchers and engineers on designing intelligent bio-inspired Health 4.0 technologies and improving remote patient information delivery and care. By intelligently investigating and collecting large amounts of healthcare data (i.e., big data), sensors can enhance the decision-making process and help in early disease diagnosis. Hence, scalable machine learning, deep learning, and intelligent algorithms are needed to develop more interoperable solutions and make effective decisions in emerging sensor technologies. Optimization algorithms can be applied to acquire sensor data from multiple sources for fast and accurate health monitoring. In this special issue, seven manuscripts are published. The papers are directly or indirectly related to advanced clustering, imaging, and computing for bio-signal acquisition systems with intelligent computing.
Computer engineering. Computer hardware, Mechanics of engineering. Applied mechanics
Socially Responsible Computing in an Introductory Course
Aakash Gautam, Anagha Kulkarni, Sarah Hug
et al.
Given the potential for technology to inflict harm and injustice on society, it is imperative that we cultivate a sense of social responsibility among our students as they progress through the Computer Science (CS) curriculum. Our students need to be able to examine the social complexities in which technology development and use are situated. Moreover, aligning students' personal goals with their ability to achieve them in their field of study is important for promoting motivation and a sense of belonging, and promoting communal goals while learning computing can help broaden participation, particularly among groups that have been historically marginalized in computing. With these considerations in mind, we piloted an introductory Java programming course in which activities engaging students in ethical and socially responsible considerations were integrated across modules. Rather than adding social content on top of the technical content, our curricular approach seeks to weave the two together. The data from the class suggest that students found the inclusion of social context in the technical assignments more motivating and expressed greater agency in realizing social change. We share our approach to designing this new introductory socially responsible computing course and the students' reflections, and we highlight seven considerations for educators seeking to incorporate socially responsible computing.
Enhanced OpenMP Algorithm to Compute All-Pairs Shortest Path on x86 Architectures
Sergio Calderón, Enzo Rucci, Franco Chichizola
Graphs have become a key tool for modeling and solving problems in different areas. The Floyd-Warshall (FW) algorithm computes the shortest path between all pairs of vertices in a graph and is employed in areas such as communication networking, traffic routing, and bioinformatics, among others. However, FW is computationally and spatially expensive, since it requires O(n^3) operations and O(n^2) memory space. As the graph gets larger, parallel computing becomes necessary to provide a solution in an acceptable time range. In this paper, we studied an FW code developed for Xeon Phi KNL processors and adapted it to run on any Intel x86 processor, removing the specificity of the former. To do so, we verified the optimizations proposed by the original code one by one, adjusting the base code where necessary and analyzing its performance on two Intel servers under different test scenarios. In addition, a new optimization was proposed to increase the degree of concurrency of the parallel algorithm, implemented using two different synchronization mechanisms. The experimental results show that all optimizations were beneficial on the two selected x86 platforms. Lastly, the new optimization improved performance by up to 23%.
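For reference, a minimal sequential Floyd-Warshall in NumPy (the paper's code is a C/OpenMP implementation with blocking, vectorization, and the new concurrency-raising optimization):

    import numpy as np

    INF = np.inf
    D = np.array([[0,   3,   INF, 7],
                  [8,   0,   2,   INF],
                  [5,   INF, 0,   1],
                  [2,   INF, INF, 0]], dtype=float)

    n = len(D)
    for k in range(n):                                   # O(n^3) triple loop
        D = np.minimum(D, D[:, k:k+1] + D[k:k+1, :])     # relax all pairs through k
    print(D)

Each iteration of the k-loop depends on the previous one, which is the kind of dependence that limits concurrency in parallel FW implementations.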
ONNX-to-Hardware Design Flow for Adaptive Neural-Network Inference on FPGAs
Federico Manca, Francesco Ratto, Francesca Palumbo
The challenges involved in executing neural networks (NNs) at the edge include providing diversity, flexibility, and sustainability; that implies, for instance, supporting evolving applications and algorithms energy-efficiently. Using hardware or software accelerators can deliver fast and efficient computation of NNs, while flexibility can be exploited to support long-term adaptivity. Nonetheless, handcrafting an NN for a specific device, despite possibly leading to an optimal solution, takes time and experience, which is why frameworks for hardware accelerators are being developed. This work, starting from a preliminary semi-integrated ONNX-to-hardware toolchain [21], focuses on enabling approximate computing by leveraging the distinctive ability of the original toolchain to favor adaptivity. The goal is to allow lightweight, adaptable NN inference on FPGAs at the edge.
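As a hedged illustration of the entry point of such a toolchain (not the actual tool), the onnx Python package can load a model and walk its operator graph before hardware mapping; the model path below is a placeholder.

    import onnx

    model = onnx.load("model.onnx")          # placeholder path
    onnx.checker.check_model(model)          # validate the graph
    for node in model.graph.node:            # operators to be mapped to hardware
        print(node.op_type, list(node.input), "->", list(node.output))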
Study of the Electric Spark and Combustion Characteristic Times in a Mike 3 Apparatus
Elisabetta Sieni, Marco Barozzi, Martina Scotton
et al.
Understanding how dust can ignite and explode in an industrial context is an important and complex task, and much of the work in this area is performed via experimental measurements in accordance with specific standards. However, the measured properties are closely related to the nature of the experimental tests. Among these properties, the Minimum Ignition Energy (MIE) of a dust cloud, usually measured in a MIKE 3 apparatus, can be affected by several factors, such as the delay time of the electric spark with respect to the formation of the dust-air dispersion inside the apparatus, dust concentration, humidity content, and dust granulometry. The delay time is one of the most difficult parameters to adjust, because the fluid dynamics of the dust-air mixture inside the tube is not easily predictable. In this work, the characteristic times of all the relevant phenomena occurring within a MIKE 3 apparatus were studied by means of slow-motion videos of the tests. In particular, three characteristic times were compared for a given sample of niacin dust: the dust lifting and settling times, the effective spark delay time (that is, the time at which the spark is visible), and the combustion time (that is, the time at which the flame is visible). According to the results, the effective delay time is almost always quite different from the theoretical one, influencing the effective concentration of dust between the electrodes and, ultimately, whether or not flame ignition occurs within the apparatus. This means that the measured MIE value can be profoundly influenced by the effective delay.
Keywords: Process Safety; Dust Explosions; Minimum Ignition Energy; Spark Delay
Chemical engineering, Computer engineering. Computer hardware
A hybrid recommendation scheme for delay-tolerant networks: The case of digital marketplaces
Victor M. Romero, II, Bea D. Santiago, Jay Martin Z. Nuevo
Recommender systems are widely adopted by numerous popular e-commerce sites, such as Amazon and eBay, to help users find products they might like. Although much has been achieved in the area, most recommender systems are designed to work on top of centralized platforms that are traditionally supported by fixed infrastructure such as the Internet. Hence, additional work is warranted to examine the applicability and performance of recommender systems in challenging environments characterized by dynamic network topology and variable transmission delays. This study deals with the design of a recommender system compatible with a delay-tolerant network, where communication is supported by opportunistic encounters between participating nodes. The proposed approach combines collaborative filtering and content-based filtering techniques to generate rating predictions for users. To make the system more tolerant of interruptions, each node maintains a local recommender that generates predictions using user profiles obtained through opportunistic exchanges over a clustered topology. Simulation results indicate that the proposed approach improves coverage while alleviating the cold-start problem.
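A minimal sketch of the hybrid prediction idea (weights, data, and similarity choices are illustrative, not the paper's):

    import numpy as np

    R = np.array([[5, 3, 0, 1],              # user-item ratings, 0 = unknown
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [0, 1, 5, 4]], dtype=float)
    item_feats = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)

    def cosine(A):
        n = np.linalg.norm(A, axis=1, keepdims=True) + 1e-9
        return (A / n) @ (A / n).T

    user_sim, item_sim = cosine(R), cosine(item_feats)

    def predict(u, i, alpha=0.5):
        mask_u = R[:, i] > 0                 # collaborative part: similar users
        cf = R[mask_u, i] @ user_sim[u, mask_u] / (user_sim[u, mask_u].sum() + 1e-9)
        mask_i = R[u, :] > 0                 # content part: similar items
        cb = R[u, mask_i] @ item_sim[i, mask_i] / (item_sim[i, mask_i].sum() + 1e-9)
        return alpha * cf + (1 - alpha) * cb # hybrid blend

    print(round(predict(0, 2), 2))

In the delay-tolerant setting, each node would run such a predictor locally on the user profiles it has gathered through opportunistic exchanges.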
Computer engineering. Computer hardware, Electronic computers. Computer science
Unleashing quantum algorithms with Qinterpreter: bridging the gap between theory and practice across leading quantum computing platforms
Wilmer Contreras Sepúlveda, Ángel David Torres-Palencia, José Javier Sánchez Mondragón
et al.
Quantum computing is a rapidly emerging and promising field that has the potential to revolutionize numerous research domains, including drug design, network technologies, and sustainable energy. Due to its inherent complexity and divergence from classical computing, several major quantum computing libraries have been developed to implement quantum algorithms, namely IBM Qiskit, Amazon Braket, Cirq, PyQuil, and PennyLane. These libraries allow quantum simulations on classical computers and facilitate program execution on the corresponding quantum hardware, e.g., Qiskit programs on IBM quantum computers. While the platforms differ in some respects, the main concepts are the same. Qinterpreter, a tool embedded in the Quantum Science Gateway QubitHub and based on Jupyter Notebooks, seamlessly translates programs from one library to another and visualizes the results, combining the five well-known quantum libraries into a unified framework. Designed as an educational tool for beginners, Qinterpreter enables the development and execution of quantum circuits across the various platforms in a straightforward way. The work highlights the versatility and accessibility of Qinterpreter in quantum programming and underscores our ultimate goal of spreading quantum computing among younger, less specialized, and culturally and nationally diverse communities.
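As an example of the kind of program Qinterpreter translates between libraries, here is a Bell-state circuit written against the Qiskit API (requires qiskit and qiskit-aer; shown only as a representative input, not as Qinterpreter's own code):

    from qiskit import QuantumCircuit
    from qiskit_aer import AerSimulator

    qc = QuantumCircuit(2, 2)
    qc.h(0)                      # superposition on qubit 0
    qc.cx(0, 1)                  # entangle qubits 0 and 1
    qc.measure([0, 1], [0, 1])

    counts = AerSimulator().run(qc, shots=1000).result().get_counts()
    print(counts)                # roughly half '00', half '11'

The same circuit can be expressed in Braket, Cirq, PyQuil, or PennyLane, which is precisely the mapping the tool automates.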
Calcium Alginate Encapsulated Pillared Clay Beads for Adsorption of Ni(II) from Aqueous Solution
Hanieh Najafi, Neda Asasian-Kolur, Seyedmehdi Sharifian
et al.
In the search for new adsorbents for wastewater treatment, a modified clay-based adsorbent for the adsorption of Ni(II) is proposed in the present study. Silica pillared clays (SPCs) are adsorbents with high specific surface area and thermal stability whose metal adsorption capacity has not been thoroughly investigated. The SPC was prepared by intercalating tetraethoxysilane (TEOS) as a silica source and a cationic surfactant (ethyl hexadecyl dimethylammonium bromide) between the layers of a Na-saturated Iranian clay, followed by calcination. To make the SPC a more efficient adsorbent for large-scale commercial applications, the present study addresses the conversion of the powdered SPC adsorbent into a granular one (ALG-SPC) by entrapping it in a polymeric matrix of calcium alginate, which is then used for the adsorption of Ni(II) from aqueous solutions. The pillarization/granulation process increased the specific surface area of the clay from 40 m2/g to 506 m2/g. A strong dependence of Ni(II) adsorption on pH was observed, showing the role of electrostatic interaction as the dominant adsorption mechanism. The pseudo-second-order model was the most appropriate kinetic model to describe Ni(II) adsorption, while the Sips and Freundlich isotherm models fit the equilibrium data best; according to the Sips model, the maximum adsorption capacity was 52.58 mg/g. Preliminary binary adsorption experiments showed the negative effect of the presence of aniline in the solution on Ni(II) adsorption. Further investigations on multicomponent and continuous adsorption can be carried out.
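For reference, the Sips isotherm has the form q_e = q_m (K C_e)^n / (1 + (K C_e)^n). A minimal scipy fitting sketch on synthetic data centered near the reported capacity (the paper's fits use the measured C_e/q_e pairs):

    import numpy as np
    from scipy.optimize import curve_fit

    def sips(Ce, qm, K, n):
        t = (K * Ce) ** n
        return qm * t / (1 + t)

    Ce = np.linspace(1, 200, 25)              # equilibrium concentration, mg/L (synthetic)
    qe = sips(Ce, 52.58, 0.05, 0.9) + np.random.default_rng(0).normal(0, 0.5, Ce.size)
    (qm, K, n), _ = curve_fit(sips, Ce, qe, p0=[50, 0.1, 1])
    print(f"q_max = {qm:.2f} mg/g")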
Chemical engineering, Computer engineering. Computer hardware
Distributed Quantum Computing: A Survey
Marcello Caleffi, Michele Amoretti, Davide Ferrari
et al.
Nowadays, quantum computing has reached the engineering phase, with fully functional quantum processors integrating hundreds of noisy qubits available. Yet, to fully unveil the potential of quantum computing out of the labs and into business reality, the challenge ahead is to substantially scale the qubit number, reaching orders of magnitude beyond thousands (if not millions) of noise-free qubits. To this aim, there exists a broad consensus among both academic and industry communities that the distributed computing paradigm is the key solution for achieving such scaling, by envisioning multiple moderate-to-small-scale quantum processors communicating and cooperating to execute computational tasks that exceed the computational resources available within a single processing device. The aim of this survey is to provide the reader with an overview of the main challenges and open problems arising in distributed quantum computing, together with easy access to and guidance through the relevant literature and prominent results, from a computer/communications engineering perspective.
Automated Level Crossing System: A Computer Vision Based Approach with Raspberry Pi Microcontroller
Rafid Umayer Murshed, Sandip Kollol Dhruba, Md. Tawheedul Islam Bhuian
et al.
In a rapidly flourishing country like Bangladesh, accidents at unmanned level crossings are increasing daily. This study presents a deep learning-based approach for automating level crossing junctions while ensuring maximum safety. We develop a fully automated technique using computer vision on a microcontroller to reduce and ultimately eliminate level-crossing deaths and accidents. A Raspberry Pi microcontroller detects approaching trains using computer vision on live video, and the intersection is closed until the incoming train passes unimpeded. Live video activity recognition and object detection algorithms scan the junction 24/7, with self-regulating microcontrollers controlling the entire process. When persistent unauthorized activity is identified, authorities such as the police and fire brigade are notified via automated messages and notifications. The microcontroller also evaluates live rail-track data and arrival and departure times to anticipate ETAs, train position, velocity, and track problems, helping to avoid head-on collisions. The proposed scheme reduces level crossing accidents and fatalities at a lower cost than current market solutions.
Index Terms: Deep Learning, Microcontroller, Object Detection, Railway Crossing, Raspberry Pi
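A minimal monitoring-loop sketch with OpenCV, using simple background subtraction as a stand-in for the deep learning detector described above (camera index and activity threshold are assumptions):

    import cv2

    cap = cv2.VideoCapture(0)                 # camera index is an assumption
    bg = cv2.createBackgroundSubtractorMOG2()

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)                # foreground = moving objects
        activity = (mask > 0).mean()          # fraction of moving pixels
        if activity > 0.05:                   # illustrative threshold
            print("activity detected - trigger gate/alert logic here")
    cap.release()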