The emergence of memristors, with potential applications in data storage and artificial intelligence, has attracted wide attention. Memristors are assembled in crossbar arrays, with data bits encoded by the resistance of individual cells. Despite the proposed high density and excellent scalability, the sneak-path current that causes cross-interference impedes their practical application. Therefore, developing novel architectures to mitigate sneak-path current and improve efficiency, reliability, and stability may benefit next-generation storage-class memory (SCM). Moreover, conventional digital computers face the von Neumann bottleneck and the slowdown of transistor scaling, posing a major challenge to hardware artificial intelligence. The memristive crossbar features colocation of memory and processing units, as well as superior scalability, making it a promising candidate for hardware acceleration of machine learning and neuromorphic computing. Herein, the crossbar architecture is first introduced. Then, for storage, the origin of sneak-path current is reviewed and techniques to mitigate this issue from the angles of materials and circuits are discussed. On the computing side, the applications of memristive crossbars in both machine learning and neuromorphic computing are surveyed, focusing on the structure of unit cells, the network topology, and the learning types. Finally, a perspective on the future engineering and applications of memristive crossbars is given.
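To see concretely why crossbars suit machine-learning acceleration: by Ohm's law per cell and Kirchhoff's current law per column, applying a voltage vector to the rows yields column currents equal to a vector-matrix product with the conductance matrix, in a single analog read step. A minimal numpy sketch of this principle (array size and conductance values are hypothetical):

```python
import numpy as np

# Hypothetical 4x3 crossbar: G[i, j] is the programmed conductance (siemens)
# of the memristor joining row i to column j.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))

# Input vector encoded as row voltages (volts).
V = np.array([0.2, 0.1, 0.3, 0.05])

# Ohm's law per cell plus Kirchhoff's current law per column give the
# analog vector-matrix multiply in one step: I_j = sum_i V_i * G[i, j].
I = V @ G
print(I)  # column currents, i.e. the VMM result in amperes
```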
The proliferation of data-intensive IoT applications has created unprecedented demand for wireless spectrum, necessitating more efficient bandwidth management. Spectrum sensing allows unlicensed secondary users to dynamically access idle channels assigned to primary users. However, traditional sensing techniques are hindered by their sensitivity to noise and reliance on prior knowledge of primary user signals. This limitation has propelled research into machine learning (ML) and deep learning (DL) solutions, which operate without such constraints. This study presents a comprehensive performance assessment of prominent ML models, namely random forest (RF), K-nearest neighbor (KNN), and support vector machine (SVM), against DL architectures, namely a convolutional neural network (CNN) and an autoencoder. Evaluated using a robust suite of metrics (probability of detection, false alarm, missed detection, accuracy, and F1-score), the results reveal the clear and consistent superiority of RF. Notably, RF achieved a probability of detection of 95.7%, an accuracy of 97.17%, and an F1-score of 96.93%, while maintaining excellent performance in low signal-to-noise ratio (SNR) conditions, even surpassing existing hybrid DL models. These findings underscore RF's exceptional noise resilience and establish it as an ideal, high-performance candidate for practical spectrum sensing in wireless networks.
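A minimal scikit-learn sketch of the kind of evaluation described, with synthetic energy-like features standing in for the real sensing dataset (the feature construction and SNR range here are illustrative, not the study's pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: label 1 = primary user present, 0 = idle channel.
rng = np.random.default_rng(42)
n = 4000
y = rng.integers(0, 2, n)
snr = 10 ** (rng.uniform(-20, 5, n) / 10)            # linear SNR per sample
X = rng.normal(0, 1, (n, 16)) + y[:, None] * np.sqrt(snr)[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
print("P_d  =", tp / (tp + fn))        # probability of detection
print("P_fa =", fp / (fp + tn))        # probability of false alarm
print("P_md =", fn / (fn + tp))        # probability of missed detection
print("acc  =", accuracy_score(y_te, y_hat))
print("F1   =", f1_score(y_te, y_hat))
```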
Hybrid models are recognized as one of the most effective approaches to addressing the imbalanced data problem. In these models, data-level methods, such as over-sampling, are combined with algorithm-level methods, such as ensemble approaches. However, the resulting models can suffer from both inefficiency and ineffectiveness. This paper proposes a solution to these issues: a novel weighted F1-ordered pruning technique integrated with two state-of-the-art hybrid models, Balanced Bagging and Balanced One-versus-One. Unlike prior hybrid models designed primarily for the binary imbalance problem, the proposed approach is specifically designed to tackle the more challenging multi-class imbalance problem. An extensive experimental evaluation with statistical validation demonstrated that the Pruned Balanced Bagging ensemble remarkably outperforms the considered hybrid models.
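A sketch of the pruning idea under stated assumptions: using imbalanced-learn's BalancedBaggingClassifier, rank the fitted base learners by weighted F1 on a validation split and keep only the top k. The ranking criterion and k here are illustrative, not necessarily the paper's exact weighting scheme.

```python
import numpy as np
from imblearn.ensemble import BalancedBaggingClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Imbalanced multi-class toy data (class proportions are illustrative).
X, y = make_classification(n_samples=3000, n_classes=3, n_informative=8,
                           weights=[0.8, 0.15, 0.05], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

ens = BalancedBaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# Score each base learner by weighted F1 on the validation split ...
scores = [f1_score(y_val, est.predict(X_val[:, feats]), average="weighted")
          for est, feats in zip(ens.estimators_, ens.estimators_features_)]

# ... and keep only the k best members (F1-ordered pruning).
k = 15
keep = np.argsort(scores)[::-1][:k]

def pruned_predict(X_new):
    # Majority vote over the retained sub-ensemble.
    votes = np.stack([ens.estimators_[i].predict(X_new[:, ens.estimators_features_[i]])
                      for i in keep])
    return np.apply_along_axis(lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)

print("pruned F1:", f1_score(y_val, pruned_predict(X_val), average="weighted"))
```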
Aiza Shabir, Khawaja Tehseen Ahmed, Khadija Kanwal
et al.
Abstract Although deep learning has advanced image analysis, many data representation challenges remain, along with the difficulty of maintaining consistent performance across different image types. Using convolutional neural network (CNN) architectures, this study presents a novel method that merges multilevel CNN features from top-performing networks such as AlexNet, DenseNet, GoogLeNet, InceptionNet, and ResNet-101. The method starts with preprocessed inputs that move through the CNN architecture layers for feature extraction, fusing the outputs of the various models to classify results across different testing datasets. The framework is tested on seven datasets, namely Tropical-Fruits, 101-ObjectCategories, CIFAR-10, ALOT, Corel-10k, 17-Flowers, and Zubud, to confirm its applicability across various scenarios. Top-10 to top-50 retrieval evaluations demonstrate fast and precise image retrieval. The presented method consistently achieves superior results across multiple tests, with high image retrieval accuracy alongside effective classification and flexibility across various real-world situations.
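A minimal torchvision sketch of the fusion step: extract penultimate-layer features from two of the named backbones and concatenate them into one multilevel descriptor. The choice of layers and backbones is illustrative, not the paper's exact configuration.

```python
import torch
import torchvision.models as models
from torch import nn

# Two of the backbones named above (weights=None keeps the sketch offline;
# in practice pretrained weights would be loaded).
alexnet = models.alexnet(weights=None).eval()
resnet = models.resnet101(weights=None).eval()

# Strip the classifiers so each network yields a feature vector.
alexnet.classifier = alexnet.classifier[:-1]                 # 4096-d features
resnet_feat = nn.Sequential(*list(resnet.children())[:-1])   # 2048-d features

x = torch.randn(1, 3, 224, 224)  # a preprocessed input image (dummy here)
with torch.no_grad():
    f1 = alexnet(x)                      # (1, 4096)
    f2 = resnet_feat(x).flatten(1)       # (1, 2048)
fused = torch.cat([f1, f2], dim=1)       # (1, 6144) fused multilevel descriptor
print(fused.shape)
```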
Computer engineering. Computer hardware, Information technology
Abstract Accurately and quickly predicting hydrogen embrittlement performance is critical for metal materials in service. However, due to multi-source heterogeneity, existing hydrogen embrittlement data contain many missing values, making it impractical to train reliable machine learning models directly. In this study, we proposed an ensemble learning training strategy for missing data based on the Adaboost algorithm. The method introduces a mask matrix encoding the missing entries, enabling each round of training to generate sub-datasets that take missing-value information into account. The strategy first trains on a subset of features from the existing dataset using a selected method, then iteratively focuses on the feature combination with the highest error, where the mask matrix of the missing data is used as the input to fit the weights of each base learner via a neural network. Compared with modeling directly on highly sparse data, the predictive ability of this strategy improved significantly, by approximately 20%. In addition, in testing on new samples, the predicted mean absolute error of the new model was successfully reduced from 0.2 to 0.09. The strategy adapts well to hydrogen embrittlement sensitivity datasets of different sizes and avoids the distortion of feature importance caused by imputing data.
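A simplified sketch of the masking idea under stated assumptions: represent missingness as a binary mask matrix and train one base learner per feature combination on the rows fully observed for that combination, then combine predictions mask-aware at inference. The Adaboost-style error-focused reweighting and the neural-network weight fitting described above are omitted; data and names are illustrative.

```python
import numpy as np
from itertools import combinations
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 300)
X[rng.random(X.shape) < 0.3] = np.nan        # sparse, multi-source-style data

mask = ~np.isnan(X)                          # mask matrix: True = observed

# One base learner per feature combination, each trained only on the rows
# that are fully observed for that combination (a "sub-dataset").
ensemble = []
for feats in combinations(range(5), 3):
    rows = mask[:, feats].all(axis=1)
    if rows.sum() < 30:
        continue                             # too little data for this subset
    model = DecisionTreeRegressor(max_depth=3).fit(X[np.ix_(rows, feats)], y[rows])
    ensemble.append((feats, model))

def predict(x_new, m_new):
    """Mask-aware prediction: only learners whose features are observed vote."""
    preds = [model.predict(x_new[list(feats)].reshape(1, -1))[0]
             for feats, model in ensemble if m_new[list(feats)].all()]
    return np.mean(preds)

i = mask.sum(axis=1).argmax()                # pick a fully observed example row
print(predict(np.nan_to_num(X[i]), mask[i]))
```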
Materials of engineering and construction. Mechanics of materials, Computer engineering. Computer hardware
Jorge E. Coyac-Torres, Grigori Sidorov, Eleazar Aguirre-Anaya
et al.
Social networks have captured the attention of many people worldwide. However, these services have also attracted a considerable number of malicious users who aim to compromise the digital assets of other users by using messages as an attack vector. This work presents an approach based on natural language processing tools and a convolutional neural network architecture to detect and classify four types of cyberattacks in social network messages: malware, phishing, spam, and an attack that deceives a user into spreading malicious messages to other users, identified in this work as a bot attack. One notable feature of this work is that it analyzes textual content without depending on characteristics of a specific social network, making the analysis independent of particular data sources. Finally, the approach was tested on real data in two stages. The first stage detected whether a message contained any of the four types of cyberattacks, achieving an accuracy of 0.91. Once a message was detected as a cyberattack, the second stage classified it as one of the four types, achieving an accuracy of 0.82.
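A compact PyTorch sketch of the kind of text CNN used for such message classification (vocabulary size, filter widths, and the four-way label set are illustrative, not the paper's exact architecture):

```python
import torch
from torch import nn

class MessageCNN(nn.Module):
    """1-D convolutional classifier over token embeddings."""
    def __init__(self, vocab=20000, dim=128, n_classes=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim, padding_idx=0)
        # Parallel convolutions with several filter widths (n-gram detectors).
        self.convs = nn.ModuleList([nn.Conv1d(dim, 100, k) for k in (3, 4, 5)])
        self.fc = nn.Linear(300, n_classes)   # malware / phishing / spam / bot

    def forward(self, tokens):                # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)  # (batch, dim, seq_len)
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(feats, dim=1))

model = MessageCNN()
logits = model(torch.randint(1, 20000, (8, 50)))   # 8 messages, 50 tokens each
print(logits.shape)                                # (8, 4) class scores
```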
Quantum computing is an interdisciplinary field that lies at the intersection of mathematics, quantum physics, and computer science, and finds applications in areas including optimization, machine learning, and the simulation of chemical, physical, and biological systems. It has the potential to help solve problems that so far have no satisfactory solution method, and to provide significant speedups over the best classical approaches. In turn, quantum computing may allow us to solve problems for inputs that are currently deemed practically intractable. With the computational power of quantum computers and the proliferation of quantum development kits, quantum computing is anticipated to become mainstream, and the demand for a skilled quantum computing workforce is expected to increase significantly. Therefore, quantum computing education is ramping up. This article describes our experiences in designing and delivering quantum computing workshops for youth (Grades 9–12). We introduce students to the world of quantum computing in innovative ways, such as newly designed unplugged activities for teaching basic quantum computing concepts. We also take a programmatic approach and introduce students to the IBM Quantum Experience using Qiskit and Jupyter notebooks. Our contributions are as follows. First, we present creative ways to teach quantum computing to youth with little or no experience in science, technology, engineering, and mathematics; second, we discuss diversity and highlight various pathways into quantum computing, from quantum software to quantum hardware; and third, we discuss the design and delivery of online and in-person motivational, introductory, and advanced workshops for youth.
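For instance, a typical first Qiskit exercise in such workshops prepares and measures a Bell state (shown here with the qiskit-aer simulator; the package layout varies across Qiskit versions):

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# Bell state: Hadamard on qubit 0, then CNOT entangles qubits 0 and 1.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

# Simulate: only '00' and '11' appear, each with ~50% probability.
counts = AerSimulator().run(qc, shots=1024).result().get_counts()
print(counts)
```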
A quantum computer exhibits quantum advantage when it can perform a calculation that a classical computer is unable to complete. It follows that a company with a quantum computer would be a monopolist in the market for such a calculation if its only competitor were a company with a classical computer. Conversely, economic outcomes are unclear if quantum computers do not exhibit a quantum advantage but classical and quantum computers have different cost structures. We model a Cournot duopoly where a quantum computing company competes against a classical computing company. The model features an asymmetric variable cost structure between the two companies and the potential for an asymmetric fixed cost structure, where each firm can invest in scaling its hardware to expand its respective market. We find that even if (1) the companies can complete identical calculations, and thus there is no quantum advantage, and (2) it is more expensive to scale the quantum computer, the quantum computing company may be more profitable and also invest more in market creation due to efficiency gains from using quantum algorithms. Finally, we provide examples of settings where the classical computer can also perform a calculation, but not in a cost-effective enough manner to be commercially viable. In such a setting, the quantum computing company becomes a monopolist despite exhibiting no quantum advantage. Taken together, quantum computers may not need to display a quantum advantage to generate a quantum economic advantage for the companies that deploy them. This paper was accepted by D. J. Wu, information systems. Funding: R. G. Melko is supported by the Natural Sciences and Engineering Research Council of Canada, the Canada Research Chair program, and the Perimeter Institute for Theoretical Physics. Research at the Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. A. Goldfarb is supported by the Sloan Foundation and the Social Sciences and Humanities Council of Canada. Supplemental Material: The online appendix is available at https://doi.org/10.1287/mnsc.2022.4578.
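For context on the asymmetric-cost mechanism, the textbook Cournot duopoly with heterogeneous marginal costs makes the intuition precise (a generic illustration, not the paper's exact specification): with inverse demand $P = a - b(q_C + q_Q)$, marginal costs $c_C, c_Q$, and fixed costs $F_C, F_Q$,

```latex
\[
q_i^{*} = \frac{a - 2c_i + c_j}{3b}, \qquad
\pi_i^{*} = b\,(q_i^{*})^{2} - F_i, \qquad i, j \in \{C, Q\},\; i \neq j,
\]
```

so a quantum firm whose efficient algorithms give it a lower effective marginal cost ($c_Q < c_C$) earns the larger equilibrium profit even with a higher fixed scaling cost $F_Q$, provided the variable-cost gap is large enough.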
Abstract High-order numerical methods for unstructured grids combine the superior accuracy of high-order spectral or finite difference methods with the geometric flexibility of low-order finite volume or finite element schemes. The Flux Reconstruction (FR) approach unifies various high-order schemes for unstructured grids within a single framework. Additionally, the FR approach exhibits a significant degree of element locality, and is thus able to run efficiently on modern streaming architectures, such as Graphical Processing Units (GPUs). The aforementioned properties of FR mean it offers a promising route to performing affordable, and hence industrially relevant, scale-resolving simulations of hitherto intractable unsteady flows within the vicinity of real-world engineering geometries. In this paper we present PyFR, an open-source Python-based framework for solving advection–diffusion type problems on streaming architectures using the FR approach. The framework is designed to solve a range of governing systems on mixed unstructured grids containing various element types. It is also designed to target a range of hardware platforms via use of an in-built domain specific language based on the Mako templating engine. The current release of PyFR is able to solve the compressible Euler and Navier–Stokes equations on grids of quadrilateral and triangular elements in two dimensions, and hexahedral elements in three dimensions, targeting clusters of CPUs, and NVIDIA GPUs. Results are presented for various benchmark flow problems, single-node performance is discussed, and scalability of the code is demonstrated on up to 104 NVIDIA M2090 GPUs. The software is freely available under a 3-Clause New Style BSD license (see www.pyfr.org).
Program summary
Program title: PyFR v0.1.0
Catalogue identifier: AETY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETY_v1_0.html
Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
Licensing provisions: New style BSD license
No. of lines in distributed program, including test data, etc.: 12 733
No. of bytes in distributed program, including test data, etc.: 214 183
Distribution format: tar.gz
Programming language: Python, CUDA and C.
Computer: Variable, up to and including GPU clusters.
Operating system: Recent version of Linux/UNIX.
RAM: Variable, from hundreds of megabytes to gigabytes.
Classification: 6.5, 12.
External routines: Python 2.7, numpy, PyCUDA, mpi4py, SymPy, Mako
Nature of problem: Compressible Euler and Navier–Stokes equations of fluid dynamics; potential for any advection–diffusion type problem.
Solution method: High-order flux reconstruction approach suitable for curved, mixed, unstructured grids.
Unusual features: Code makes extensive use of symbolic manipulation and runtime code generation through a domain specific language.
Running time: Many small problems can be solved on a recent workstation in minutes to hours.
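To give a flavor of the Mako-based runtime code generation PyFR relies on, here is a generic illustration of the technique (a toy template, not PyFR's actual kernel templates):

```python
from mako.template import Template

# A tiny "kernel" template: constants and loop bounds are baked into the
# source at runtime, the way PyFR specializes kernels per element type/order.
kernel_src = Template("""
def axpy(x, y):
    # generated for n = ${n}, alpha = ${alpha}
    for i in range(${n}):
        y[i] += ${alpha} * x[i]
""").render(n=4, alpha=2.0)

namespace = {}
exec(kernel_src, namespace)        # compile the generated source at runtime
x, y = [1.0, 2.0, 3.0, 4.0], [0.0] * 4
namespace["axpy"](x, y)
print(y)                           # [2.0, 4.0, 6.0, 8.0]
```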
Solid-state qubits have recently advanced to the level that enables them, in principle, to be scaled up into fault-tolerant quantum computers. As these physical qubits continue to advance, meeting the challenge of realising a quantum machine will also require the engineering of new classical hardware and control architectures with complexity far beyond the systems used in today's few-qubit experiments. Here, we report a micro-architecture for controlling and reading out qubits during the execution of a quantum algorithm such as an error-correcting code. We demonstrate the basic principles of this architecture in a configuration that distributes components of the control system across different temperature stages of a dilution refrigerator, as determined by the available cooling power. The combined setup includes a cryogenic field-programmable gate array (FPGA) controlling a switching matrix at 20 millikelvin which, in turn, manipulates a semiconductor qubit.
Micael J. T. Oliveira, Nick R. Papior, Y. Pouillon
et al.
First-principles electronic structure calculations are now accessible to a very large community of users across many disciplines, thanks to many successful software packages, some of which are described in this special issue. The traditional coding paradigm for such packages is monolithic, i.e., regardless of how modular its internal structure may be, the code is built independently from others, essentially from the compiler up, possibly with the exception of linear-algebra and message-passing libraries. This model has endured and been quite successful for decades. The successful evolution of the electronic structure methodology itself, however, has resulted in an increasing complexity and an ever longer list of features expected within all software packages, which implies a growing amount of replication between different packages, not only in the initial coding but, more importantly, every time a code needs to be re-engineered to adapt to the evolution of computer hardware architecture. The Electronic Structure Library (ESL) was initiated by CECAM (the European Centre for Atomic and Molecular Calculations) to catalyze a paradigm shift away from the monolithic model and promote modularization, with the ambition to extract common tasks from electronic structure codes and redesign them as open-source libraries available to everybody. Such libraries include "heavy-duty" ones that have the potential for a high degree of parallelization and adaptation to novel hardware within them, thereby separating the sophisticated computer science aspects of performance optimization and re-engineering from the computational science done by, e.g., physicists and chemists when implementing new ideas. We envisage that this modular paradigm will improve overall coding efficiency and enable specialists (whether they be computer scientists or computational scientists) to use their skills more effectively and will lead to a more dynamic evolution of software in the community as well as lower barriers to entry for new developers. The model comes with new challenges, though. The building and compilation of a code based on many interdependent libraries (and their versions) is a much more complex task than that of a code delivered in a single self-contained package. Here, we describe the state of the ESL, the different libraries it now contains, the short- and mid-term plans for further libraries, and the way the new challenges are faced. The ESL is a community initiative into which several pre-existing codes and their developers have contributed with their software and efforts, from which several codes are already benefiting, and which remains open to the community.
César Guerra Tejada, José Luis Tapia, Juan Pablo Valencia
The Instituto Superior Tecnológico Guayaquil has developed an app intended to help in the battle against the pandemic that the world began to experience in early 2020 with the emergence of a new virus, "Sars Cov-2" or COVID-19, which has caused hundreds of deaths worldwide and, in many cases, left serious after-effects in those who survived the disease. Careless behavior has allowed the virus to spread exponentially, overwhelming the health systems of almost every country where COVID-19 has appeared. Information technologies have played a central role during this pandemic. Using a centralized database fed by surveys, the app developed by the Instituto Superior Tecnológico Guayaquil seeks to identify places within the urban area where infected individuals are concentrated, locating outbreaks through the identification of heat foci (hot spots). The prototype of the app was distributed among the students of the Instituto Superior Tecnológico Guayaquil, and wider adoption is expected next semester in order to obtain a complete picture of the situation of the student population and the teaching staff of the institution.
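A sketch of how such survey responses could be clustered into geographic hot spots, using DBSCAN on reported coordinates (the data, coordinates, and parameters are hypothetical, not the app's actual implementation):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical survey records: (latitude, longitude) of self-reported cases.
rng = np.random.default_rng(1)
focus = np.array([-2.1894, -79.8891])               # around Guayaquil
points = focus + rng.normal(0, 0.005, (60, 2))      # one dense cluster
points = np.vstack([points,
                    focus + rng.uniform(-0.2, 0.2, (40, 2))])  # scattered noise

# eps of ~0.005 degrees is roughly 500 m; min_samples sets the hot-spot threshold.
labels = DBSCAN(eps=0.005, min_samples=8).fit_predict(points)
for k in set(labels) - {-1}:                        # -1 marks noise points
    center = points[labels == k].mean(axis=0)
    print(f"hot spot {k}: center {center}, {np.sum(labels == k)} reports")
```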
Keeping track of blood glucose levels non-invasively is now possible thanks to diverse breakthroughs in wearable sensor technology coupled with advanced biomedical signal processing. However, each user may have different requirements and priorities when selecting a self-monitoring solution. After extensive research and careful selection, we present a comprehensive survey of noninvasive, pain-free blood glucose monitoring methods from the past five years (2012–2016). Several techniques, drawn from bioinformatics, computer science, chemical engineering, microwave technology, and other fields, are discussed in order to cover a wide variety of solutions for different scales and preferences. We categorize the noninvasive techniques into nonsample- and sample-based techniques, which are further grouped into optical, nonoptical, intermittent, and continuous. The devices manufactured, or being manufactured, for noninvasive monitoring are also compared in this paper. These techniques are then analyzed against the constraints a user might face, including time efficiency, comfort, cost, portability, and power consumption. Recalibration, time, and power efficiency are the biggest challenges that require further research in order to satisfy a large number of users. To address these challenges, artificial intelligence (AI) has been employed by many researchers. AI-based estimation and decision models hold the future of noninvasive glucose monitoring in terms of accuracy, cost effectiveness, portability, and efficiency. The significance of this paper is twofold: first, to bridge the gap between the IT and medical fields; and second, to bridge the gap between end users and the available hardware and software solutions.
Based on the correlation between the temporal and causal relations of events, this paper proposes a joint identification method using neural networks. The method takes the identification of temporal relations as the main task and the identification of causal relations as an auxiliary task. On this basis, three types of joint identification models are designed, sharing the encoding layer, the decoding layer, or both the encoding and decoding layers with the auxiliary task, so that information is shared between the network layers of the main-task and auxiliary-task models and the feature information of the joint models is learned. Experimental results show that the joint identification method can exploit the causal information between events to significantly improve the identification of temporal relations, and that the model sharing both the encoding and decoding layers is best suited to the joint identification of temporal and causal relations of events.
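A minimal PyTorch sketch of the shared-encoder variant of such joint models: one encoder feeds two task heads, with temporal relations as the main task and causal relations as the auxiliary task (dimensions, label counts, and the joint loss weighting are illustrative):

```python
import torch
from torch import nn

class JointRelationModel(nn.Module):
    """Shared encoder with separate heads for temporal and causal relations."""
    def __init__(self, in_dim=300, hidden=128, n_temporal=4, n_causal=2):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.temporal_head = nn.Linear(2 * hidden, n_temporal)  # main task
        self.causal_head = nn.Linear(2 * hidden, n_causal)      # auxiliary task

    def forward(self, x):                   # x: (batch, seq, in_dim) event pairs
        h, _ = self.encoder(x)
        pooled = h.mean(dim=1)              # simple pooling over the sequence
        return self.temporal_head(pooled), self.causal_head(pooled)

model = JointRelationModel()
x = torch.randn(16, 20, 300)
t_logits, c_logits = model(x)
# Joint objective: main-task loss plus a down-weighted auxiliary loss.
loss = nn.functional.cross_entropy(t_logits, torch.randint(0, 4, (16,))) \
     + 0.5 * nn.functional.cross_entropy(c_logits, torch.randint(0, 2, (16,)))
loss.backward()
```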
Chlorinated compounds are widely used in the chemical manufacturing industry. Their high toxicity and good stability lead to persistent pollution and damage to the ecological environment. In this study, p-chlorophenol (p-CP) was selected as a model compound, and a new dechlorination system was developed in which tannic acid (TA) and sodium hydroxide (NaOH) were used together to activate persulfate (PS). The effects of TA, NaOH, PS, and temperature on the degradation of chlorophenol were investigated. With 200 mg/L of TA, 0.4 M of NaOH, and 100 mM of persulfate, the degradation efficiency of p-chlorophenol (50 mg/L) reached 99 wt.%. There was no significant change in degradation efficiency when the reaction temperature was varied from 20 °C to 40 °C. A mechanism for the co-activation of persulfate by tannic acid and NaOH was proposed, in which hydroxyl and superoxide radicals play an important role in the degradation of p-CP. The dechlorination system showed excellent degradation performance for chlorinated compounds, making it a promising oxidation process for the treatment of contaminated groundwater.
Chemical engineering, Computer engineering. Computer hardware