Hartree-Fock on a superconducting qubit quantum computer
F. Arute, K. Arya, R. Babbush
et al.
Twelve-qubit quantum computing for chemistry

Accurate electronic structure calculations are considered one of the most anticipated applications of quantum computing, one that could revolutionize theoretical chemistry and related fields. Using the Google Sycamore quantum processor, Google AI Quantum and collaborators performed a variational quantum eigensolver (VQE) simulation of two intermediate-scale chemistry problems: the binding energy of hydrogen chains (as large as H12) and the isomerization mechanism of diazene (see the Perspective by Yuan). The simulations were performed on up to 12 qubits, involving up to 72 two-qubit gates, and show that it is possible to achieve chemical accuracy when VQE is combined with error-mitigation strategies. The key building blocks of the proposed VQE algorithm are potentially scalable to larger systems that cannot be simulated classically. Science, this issue p. 1084; see also p. 1054

Accurate quantum simulations of chemistry are performed using up to 12 superconducting qubits and 72 two-qubit gates. The simulation of fermionic systems is among the most anticipated applications of quantum computing. We performed several quantum simulations of chemistry with up to one dozen qubits, including modeling the isomerization mechanism of diazene. We also demonstrated error-mitigation strategies based on N-representability that dramatically improve the effective fidelity of our experiments. Our parameterized ansatz circuits realized the Givens rotation approach to noninteracting fermion evolution, which we variationally optimized to prepare the Hartree-Fock wave function. This ubiquitous algorithmic primitive is classically tractable to simulate yet still generates highly entangled states over the computational basis, which allowed us to assess the performance of our hardware and establish a foundation for scaling up correlated quantum chemistry simulations.
824 citations
en
Physics, Medicine
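The Givens-rotation primitive described in the Hartree-Fock entry can be sketched classically in a few lines: a product of two-orbital rotations applied to the occupied-orbital block of a Slater determinant. The orbital count, occupation, and angles below are illustrative, not taken from the experiment.

```python
import numpy as np

def givens(n, p, q, theta):
    """n x n Givens rotation mixing orbitals p and q by angle theta."""
    g = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    g[p, p] = c; g[q, q] = c
    g[p, q] = -s; g[q, p] = s
    return g

# Rotate a 4-orbital, 2-electron reference: columns are occupied orbitals.
n_orb, n_occ = 4, 2
C = np.eye(n_orb)[:, :n_occ]          # initial (diagonal) occupied block

# Apply a ladder of rotations between adjacent orbitals (hypothetical angles).
for (p, q, t) in [(0, 1, 0.3), (1, 2, -0.7), (2, 3, 0.5)]:
    C = givens(n_orb, p, q, t) @ C

# The rotated orbitals stay orthonormal, so C still defines a valid
# Slater determinant (the state the Hartree-Fock VQE ansatz prepares).
print(np.allclose(C.T @ C, np.eye(n_occ)))  # True
```

Because each rotation is orthogonal, any sequence of them preserves orthonormality, which is what makes this primitive classically tractable to verify.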
Sisvar: a computer statistical analysis system
D. F. Ferreira
Sisvar is a statistical analysis system, first released in 1996, although its development began in 1994. The first version was written in Pascal and compiled with Borland Turbo Pascal 3. Sisvar was developed to achieve some specific goals. The first was to obtain software that could be used directly in the statistical experimentation courses of the Department of Exact Science at the Federal University of Lavras. The second was to initiate the development of a genuinely Brazilian free software program that met the demands and peculiarities of research conducted in the country. The third was to offer the Brazilian scientific community statistical analysis software that would allow research results to be analyzed efficiently and reliably. All of the initial goals were achieved. Sisvar gained acceptance in the scientific community because it provides reliable, accurate, precise, simple, and robust results, and allows users a greater degree of interactivity.
5435 citations
en
Computer Science
Fruits and vegetables quality evaluation using computer vision: A review
Anuja Bhargava, A. Bansal
Abstract In agricultural science, automation increases a country's quality, productivity, and economic growth. The sorting of fruits and vegetables affects the export market and quality evaluation. The crucial sensory characteristic of fruits and vegetables is appearance, which impacts their market value and the consumer's preference and choice. Although sorting and grading can be done by humans, manual inspection is inconsistent, time-consuming, variable, subjective, onerous, expensive, and easily influenced by the surroundings. Hence, an astute fruit grading system is needed. In recent years, various algorithms for sorting and grading using computer vision have been proposed. This paper presents a detailed overview of the main stages, i.e., preprocessing, segmentation, feature extraction, and classification, that address fruit and vegetable quality based on color, texture, size, shape, and defects. A critical comparison of the different algorithms proposed for quality inspection of fruits and vegetables is then carried out.
524 citations
en
Computer Science
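The pipeline stages surveyed in the review above (preprocessing, feature extraction, classification) can be illustrated with a deliberately minimal sketch; the toy images, the mean/std color descriptor, and the red/green grading rule are all hypothetical stand-ins for the methods the paper compares.

```python
import numpy as np

# Toy "images": H x W x 3 arrays in [0, 1]. Real systems would load camera
# frames and segment the fruit from the background first.
ripe   = np.zeros((8, 8, 3)); ripe[..., 0] = 0.9; ripe[..., 1] = 0.2      # reddish
unripe = np.zeros((8, 8, 3)); unripe[..., 0] = 0.2; unripe[..., 1] = 0.8  # greenish

def color_features(img):
    """Mean and std per channel -- a minimal color descriptor."""
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

def grade(img, red_green_ratio=1.0):
    """Hypothetical rule: grade by red/green dominance."""
    f = color_features(img)
    return "ripe" if f[0] / (f[1] + 1e-9) > red_green_ratio else "unripe"

print(grade(ripe), grade(unripe))   # ripe unripe
```

Real graders replace the threshold rule with a trained classifier over richer texture, size, and shape features, but the stage structure is the same.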
The Design and Analysis of Computer Algorithms
A. Aho, J. Hopcroft, J. Ullman
9548 citations
en
Computer Science
Graph Theory with Applications to Engineering and Computer Science
N. Deo
1201 citations
en
Computer Science
Computer Security: Art and Science
M. Bishop
1369 citations
en
Computer Science
Computer Science Curricula 2013
M. Sahami, S. Roach, E. Cuadros-Vargas
et al.
446 citations
en
Computer Science
Changing Movements in a Changing World: Modelling Early Pleistocene and Early Middle Pleistocene Climatic and Ecological Environments and Influences on Hominin Dispersal in Eurasia
Kamilla L. Lomborg, Carolina Cucart-Mora, Jan-Olaf Reschke
et al.
In a world of drastic climatic and ecological changes, our knowledge of how the environment influenced hominin behaviour is of the utmost importance. Archaeology plays a key role in this domain, as it is the only discipline that studies empirical evidence of past societies’ responses to environmental change. Computational models generating predictions about past climatic and ecological conditions are vital for understanding the archaeological record and how these factors shaped the dispersal of hominins out of Africa and into Eurasia during the Early and early Middle Pleistocene. In this paper, various models for past reconstructions of climatic and ecological conditions and simulation techniques are presented to provide an overview of the diverse approaches, possibilities, advantages and constraints of using computational reconstructions in archaeological research. Focusing on studies of hominin dispersals out of Africa and into Eurasia during the Early and early Middle Pleistocene, this paper discusses the links between environmental factors and hominin dispersal behaviour. The use of simulation techniques to represent hominin populations, such as cellular automata or agent-based modelling, can contribute to connecting small-scale environment-induced influences on hominins to large-scale patterns, supported by ecological theories of species survival and spatial behaviour. Collectively, these approaches provide an elaborate foundation for understanding environmental influences on past hominin dispersals.
Archaeology, Electronic computers. Computer science
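As a toy illustration of the cellular-automaton approach the paper discusses, the sketch below spreads a population across a grid wherever a neighbouring cell is occupied and the local environmental suitability clears a threshold. The grid size, random suitability field, threshold, and update rule are all illustrative assumptions, not any model from the reviewed literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Environmental suitability on a small grid (0 = hostile, 1 = optimal);
# the values here are synthetic stand-ins for a climate/ecology reconstruction.
suit = rng.random((20, 20))

occupied = np.zeros((20, 20), dtype=bool)
occupied[10, 0] = True           # founding population at the western edge

def step(occ, suit, threshold=0.4):
    """One CA step: a cell becomes occupied if a 4-neighbour is occupied
    and its own suitability exceeds the survival threshold."""
    nb = np.zeros_like(occ)
    nb[1:, :] |= occ[:-1, :]; nb[:-1, :] |= occ[1:, :]
    nb[:, 1:] |= occ[:, :-1]; nb[:, :-1] |= occ[:, 1:]
    return occ | (nb & (suit > threshold))

for _ in range(30):
    occupied = step(occupied, suit)

print(occupied.sum())            # cells colonised after 30 steps
```

Agent-based models refine this by giving individual groups state and behaviour, but the same local-rule-to-large-scale-pattern logic applies.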
A new method of accurate pedicle screw navigation
Daniel Suter, Aidana Massalimova, Christoph Johannes Laux
et al.
Abstract One of the most established approaches to navigating pedicle screws is the planning and alignment (PA) method, whereby a trajectory and an associated entry point (EP) are planned and navigated after referencing patient anatomy. However, deviations from the planned EP potentially lead to an altered screw position. The aim of this study was to investigate the influence of these EP deviations and to examine possible alternative methods. The merits of two new points of reference, the screw tip point (STP) and the midpoint (MP), were therefore analyzed. The STP is the point at the optimal screw tip; the MP is the point at the center of the pedicle at its narrowest portion. The adapted screw trajectory was defined as the directional vector from any chosen EP to the STP or MP. First, computer simulations were used to evaluate the performance of these new approaches. Subsequently, the navigation technique yielding more acceptable screws in the case of an EP deviation was analyzed on phantom sawbone models. Both new methods showed a significantly larger number of possible screw trajectories in the simulations (p < 0.01). Even with a deliberate deviation of 4.5 mm (IQR 3.3 mm) from the optimal EP, a perforation-free screw diameter of 4.9 mm (IQR 5.7 mm) could be achieved using the new navigation techniques. The simulated perforations were mainly located laterally, with a median distance of 8.45 mm (IQR 3.95 mm) to the medial pedicle wall. The PA method seems to be susceptible to EP deviations; the STP and MP methods offer a possible way to overcome this disadvantage.
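The adapted-trajectory definition in the abstract (a directional vector from any chosen EP to the STP or MP) is simple vector geometry, sketched below with hypothetical coordinates; none of the numbers correspond to the study's measurements.

```python
import numpy as np

# Hypothetical coordinates in mm: planned entry point, actual (deviated)
# entry point, and the two fixed reference points from the study.
planned_ep = np.array([0.0, 0.0, 0.0])
actual_ep  = np.array([3.5, 2.0, 0.0])      # surgeon missed the planned EP
stp        = np.array([5.0, 40.0, 5.0])     # screw tip point
mp         = np.array([2.5, 20.0, 2.5])     # pedicle midpoint

def trajectory(entry, target):
    """Unit directional vector from an entry point to a fixed target."""
    v = target - entry
    return v / np.linalg.norm(v)

# With the PA method the direction is frozen at planning time, so an EP
# deviation translates the whole screw. The STP/MP methods re-aim at the
# fixed anatomical target instead:
t_stp = trajectory(actual_ep, stp)
t_mp  = trajectory(actual_ep, mp)

# Angle (degrees) between the adapted and originally planned trajectories.
t_plan = trajectory(planned_ep, stp)
angle = np.degrees(np.arccos(np.clip(t_stp @ t_plan, -1.0, 1.0)))
print(round(angle, 1))
```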
The Effects of the Weak Allee Effect and Disease on the Dynamics of a Predator–Prey System: Stability and Bifurcation Properties
Yurong Dong, Hua Liu, Jianhua Ye
et al.
In this paper, an eco-epidemiological model with a weak Allee effect and prey disease dynamics is discussed. Mathematical features such as non-negativity, boundedness of solutions, and local stability of the feasible equilibria are analyzed. Additionally, the transcritical, saddle-node, and Hopf bifurcations are proven using Sotomayor's theorem and the Poincaré–Andronov–Hopf theorem. The correctness of the theoretical analysis is verified by numerical simulation. The numerical results show that the eco-epidemiological model with a weak Allee effect has complex dynamics. If the prey population is not affected by disease, the predator becomes extinct due to a lack of food. Under low infection rates, all populations are maintained in a coexistent state, and the Allee effect does not influence this coexistence. At high infection rates, if the prey population is not affected by the Allee effect, the infected prey persists in an oscillatory state while the predator and susceptible prey populations go extinct; if the prey population is affected by the Allee effect, all species go extinct.
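A generic model of this family can be simulated in a few lines. The sketch below uses forward-Euler integration of an illustrative susceptible-prey / infected-prey / predator system with a multiplicative weak-Allee factor s/(s+a); the actual equations, parameters, and functional responses of the paper will differ.

```python
import numpy as np

# Illustrative parameters (not the paper's): growth r, capacity K,
# weak-Allee constant a, infection rate beta, predation p,
# infected-prey mortality m, predator mortality d.
r, K, a, beta, p, m, d = 1.0, 10.0, 1.0, 0.2, 0.3, 0.5, 0.4

def rhs(s, i, y):
    """s = susceptible prey, i = infected prey, y = predator."""
    allee = s / (s + a)                      # weak Allee factor
    ds = r * s * (1 - (s + i) / K) * allee - beta * s * i - p * s * y
    di = beta * s * i - p * i * y - m * i
    dy = 0.5 * p * (s + i) * y - d * y       # conversion efficiency 0.5
    return ds, di, dy

s, i, y, dt = 5.0, 0.5, 0.5, 0.01
for _ in range(20000):                       # forward-Euler integration
    ds, di, dy = rhs(s, i, y)
    s = max(s + dt * ds, 0.0)
    i = max(i + dt * di, 0.0)
    y = max(y + dt * dy, 0.0)

print(round(s, 3), round(i, 3), round(y, 3))
```

Varying beta and a in such a sketch is how one would explore the coexistence/extinction regimes the abstract describes.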
Optimized CNN-BiLSTM framework for reactive power management and voltage profile improvement in renewable energy based power grids
Lijo Jacob Varghese, Suma Sira Jacob, Jaisiva Selvaraj
et al.
Abstract This article describes a method for improving power grid voltage profiles by more effectively regulating reactive power through the integration of hybrid renewable energy systems (HRES) in smart grids. The unpredictable nature of renewable energy sources (RES), such as wind turbines and solar systems, causes an unstable voltage profile throughout the grid, underscoring the problem of voltage fluctuation in power grids. This article proposes DSTATCOM, a reactive power compensation device, to address these voltage fluctuations and provide the grid with the required reactive power (var). DSTATCOM helps preserve voltage stability by consistently lowering the voltage drop, which guarantees an increase in active power flow; the overall voltage profile throughout the electrical grid is thereby improved. Convolutional neural networks (CNNs) are combined with bidirectional long short-term memory (BiLSTM) to form the proposed solution, which controls and maximizes DSTATCOM performance. These advanced artificial intelligence (AI) methods help in dynamic reactive power management, improving the grid's voltage profile and DSTATCOM's performance. This method works well for real-time voltage regulation in smart grid situations, since CNNs are employed for feature extraction and BiLSTM captures temporal dependencies in the grid's power behavior. The CNN-BiLSTM network's weights are also tuned using an adaptive parrot optimizer (APO). The proposed approach was implemented in the MATLAB/Simulink environment, and three different scenarios were used to assess its performance. Simulation results confirm that the method achieved up to 33.4% loss reduction, improved the voltage stability index (VSI) to 1.02 p.u., minimized total harmonic distortion (THD) below 1.7%, and cut settling time to 0.075 s. The hybrid PV/wind setup ensured superior voltage stability, while the model attained high prediction accuracy with an R2 of 0.9672 and an RMSE of 3.0094. By controlling the reactive power balance, the proposed system ensures grid stability, improves the voltage profile, and reduces power loss.
Effect of Heated Wall Corrugation on Thermal Performance in an L-Shaped Vented Cavity Crossed by Metal Foam Saturated with Copper–Water Nanofluid
Luma F. Ali, Hussein Togun, Abdellatif M. Sadeq
Practical applications such as solar power energy systems, electronic cooling, and the convective drying of vented enclosures require continuous developments to enhance fluid and heat flow. Numerous studies have investigated the enhancement of heat transfer in L-shaped vented cavities by inserting heat-generating components, filling the cavity with nanofluids, providing an inner rotating cylinder and a phase-change packed system, etc. Contemporary work has examined the thermal performance of L-shaped porous vented enclosures, which can be augmented by using metal foam, using nanofluids as the saturating fluid, and increasing the wall surface area by corrugating the cavity's heated wall. These features are not discussed in published articles, and their exploration can be considered a novel contribution of this work. In this study, a vented cavity was occupied by a copper metal foam with PPI = 10 and saturated with a copper–water nanofluid. The cavity walls were well insulated except for the left wall, which was kept at a hot isothermal temperature and was either non-corrugated or corrugated with rectangular waves. The Darcy–Brinkman–Forchheimer and local thermal non-equilibrium models were adopted in the momentum and energy governing equations and solved numerically using commercial software.
The influences of various effective parameters were determined, including the Reynolds number (20 ≤ Re ≤ 1000), the nanoparticle volume fraction (0% ≤ φ ≤ 20%), the inflow and outflow vent aspect ratios (0.1 ≤ D/H ≤ 0.4), the rectangular wave corrugation number (N = 5 and N = 10), and the corrugation dimension ratio (CR = 1 and CR = 0.5).
The results indicate that, for a non-corrugated left wall, the flow field and heat transfer were affected mainly by variations in Re, D/H, and φ; when the wall was corrugated, they were additionally influenced by N and CR. The fluid- and solid-phase temperatures of the metal foam increased with an increase in Re and D/H.
The fluid-phase Nusselt number near the hot left sidewall increased by 25–60% with an increase in φ, while the solid-phase Nusselt number decreased by 10–30%, and both numbers rose by around 3.5 times when the Reynolds number increased from 20 to 1000.
For the corrugated hot wall, the Nusselt numbers of the two metal foam phases increased with an increase in Re and decreased by 10%, 19%, and 37% with an increase in D/H, CR, or N, respectively. The original aspect of this study is its use of a thermal non-equilibrium, nanofluid-saturated metal foam in a corrugated L-shaped vented cavity. We aimed to investigate the thermal performance of this system in order to reinforce the viability of applying this material in thermal engineering systems.
Electronic computers. Computer science
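For readers outside thermal engineering, the dimensionless groups in the cavity study above have their usual definitions; the characteristic lengths chosen here (vent width D for Re, cavity height H for Nu) are assumptions inferred from the D/H notation, not stated in the abstract.

```latex
% u = inlet velocity, D = vent width, H = cavity height,
% h = convective heat transfer coefficient, k_f = fluid thermal conductivity
Re = \frac{\rho\, u\, D}{\mu}, \qquad
\varphi = \frac{V_{\mathrm{nanoparticles}}}{V_{\mathrm{nanofluid}}}, \qquad
Nu_{f} = \frac{h\, H}{k_{f}}
```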
Multiple Approaches for Teaching Responsible Computing
Stacy A. Doore, Michelle Trim, Joycelyn Streator
et al.
Teaching applied ethics in computer science has shifted from a perspective of teaching about professional codes of conduct and an emphasis on risk management towards a broader understanding of the impacts of computing on humanity and the environment, and of the principles and practices of responsible computing. One of the primary shifts in the approach to teaching computing ethics comes from research in the social sciences and humanities. This position is grounded in the idea that all computing artifacts, projects, tools, and products are situated within a set of ideas, attitudes, goals, and cultural norms. This means that all computing endeavors have embedded within them a set of values. To teach responsible computing always requires us to first recognize that computing happens in a context that is shaped by cultural values, including our own professional culture and values. The purpose of this paper is to highlight current scholarship, principles, and practices in the teaching of responsible computing in undergraduate computer science settings. The paper is organized around four primary sections: 1) a high-level rationale for the adoption of different pedagogical approaches based on program context and course learning goals; 2) a brief survey of responsible computing pedagogical approaches; 3) illustrative examples of how topics within the CS2023 Social, Ethical, and Professional (SEP) knowledge area can be implemented and assessed across the broad spectrum of undergraduate computing courses; and 4) links to examples of current best practices, tools, and resources for faculty to build responsible computing teaching into their specific instructional settings and CS2023 knowledge areas.
The Empowerment of Science of Science by Large Language Models: New Tools and Methods
Guoqiang Liang, Jingqian Gong, Mengxuan Li
et al.
Large language models (LLMs) have exhibited exceptional capabilities in natural language understanding and generation, image recognition, and multimodal tasks, charting a course towards artificial general intelligence (AGI) and emerging as a central issue in the global technological race. This manuscript conducts a comprehensive review of the core technologies that support LLMs from a user standpoint, including prompt engineering, knowledge-enhanced retrieval-augmented generation, fine-tuning, pretraining, and tool learning. Additionally, it traces the historical development of the Science of Science (SciSci) and presents a forward-looking perspective on the potential applications of LLMs within the scientometric domain. Furthermore, it discusses the prospect of an AI-agent-based model for scientific evaluation, and presents new methods for research-front detection and knowledge-graph construction with LLMs.
Weed Detection from Unmanned Aerial Vehicle Imagery Using Deep Learning—A Comparison between High-End and Low-Cost Multispectral Sensors
Anna Teresa Seiche, Lucas Wittstruck, Thomas Jarmer
In order to meet the increasing demand for crops under challenging climate conditions, efficient and sustainable cultivation strategies are becoming essential in agriculture. Targeted herbicide use reduces environmental pollution and effectively controls weeds as a major cause of yield reduction. The key requirement is a reliable weed detection system that is accessible to a wide range of end users. This research paper introduces a self-built, low-cost, multispectral camera system and evaluates it against the high-end MicaSense Altum system. Pixel-based weed and crop classification was performed on UAV datasets collected with both sensors in maize using a U-Net. The training and testing data were generated via an index-based thresholding approach followed by annotation. As a result, the F1-score for the weed class reached 82% on the Altum system and 76% on the low-cost system, with recall values of 75% and 68%, respectively. Misclassifications occurred on the low-cost system images for small weeds and overlaps, with minor oversegmentation. However, with a precision of 90%, the results show great potential for application in automated weed control. The proposed system thereby enables sustainable precision farming for the general public. In future research, its spectral properties, as well as its use on different crops with real-time on-board processing, should be further investigated.
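The index-based thresholding the weed-detection paper uses to generate training labels can be sketched directly. NDVI with a 0.5 cut-off is an illustrative assumption; the abstract does not name the exact index or threshold.

```python
import numpy as np

# Synthetic multispectral patch: red and near-infrared reflectance bands.
red = np.array([[0.10, 0.40],
                [0.08, 0.35]])
nir = np.array([[0.60, 0.45],
                [0.55, 0.38]])

# NDVI highlights vegetation; 0.5 here is an illustrative cut-off.
ndvi = (nir - red) / (nir + red + 1e-9)
vegetation_mask = ndvi > 0.5

print(vegetation_mask)
```

In the paper's workflow such a mask is a first pass that is then refined by manual annotation before training the U-Net.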
Enhancing Online Security: A Novel Machine Learning Framework for Robust Detection of Known and Unknown Malicious URLs
Shiyun Li, Omar Dib
The rapid expansion of the internet has led to a corresponding surge in malicious online activities, posing significant threats to users and organizations. Cybercriminals exploit malicious uniform resource locators (URLs) to disseminate harmful content, execute phishing schemes, and orchestrate various cyber attacks. As these threats evolve, detecting malicious URLs (MURLs) has become crucial for safeguarding internet users and ensuring a secure online environment. In response to this urgent need, we propose a novel machine learning-driven framework designed to identify known and unknown MURLs effectively. Our approach leverages a comprehensive dataset encompassing various labels—including benign, phishing, defacement, and malware—to engineer a robust set of features validated through extensive statistical analyses. The resulting malicious URL detection system (MUDS) combines supervised machine learning techniques, tree-based algorithms, and advanced data preprocessing, achieving a high detection accuracy of 96.83% for known MURLs. For unknown MURLs, the proposed framework utilizes CL_K-means, a modified k-means clustering algorithm, alongside two additional biased classifiers, achieving 92.54% accuracy on simulated zero-day datasets. With an average processing time of under 14 milliseconds per instance, MUDS is optimized for real-time integration into network endpoint systems. These outcomes highlight the efficacy and efficiency of the proposed MUDS in fortifying online security by identifying and mitigating MURLs, thereby reinforcing the digital landscape against cyber threats.
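The unknown-URL branch of MUDS rests on distance-to-cluster reasoning, which the sketch below reproduces with plain Lloyd's k-means and a radius test; CL_K-means itself, the biased classifiers, and the real URL features are not reproduced, and every number here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy URL feature vectors (e.g. length, digit ratio, entropy), scaled.
known = np.vstack([rng.normal(0.0, 0.3, (50, 3)),      # benign-like cluster
                   rng.normal(3.0, 0.3, (50, 3))])     # malicious-like cluster

def kmeans(X, k=2, iters=20):
    """Plain Lloyd's algorithm; the paper's CL_K-means modifies this."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers, axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

centers = kmeans(known)

def is_unknown(x, centers, radius=1.5):
    """Flag a sample whose distance to every cluster centre exceeds radius."""
    return np.min(np.linalg.norm(centers - x, axis=1)) > radius

print(is_unknown(np.array([0.1, 0.0, 0.2]), centers))   # near a known cluster
print(is_unknown(np.array([8.0, 8.0, 8.0]), centers))   # zero-day-like outlier
```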
Multi-objective Particle Swarm Optimization Algorithm Guided by Extreme Learning Decision Network
ZHANG Yifan, SONG Wei
When solving multi-objective optimization problems, particle swarm optimization algorithms usually employ preset example-selection methods and search strategies that cannot be adjusted according to the current optimization state. Faced with different optimization problems, inappropriate search strategies cannot effectively guide the population, resulting in low search performance. To solve these problems, a multi-objective particle swarm optimization algorithm guided by an extreme learning decision network (ELDN-PSO) is proposed. First, the multi-objective optimization problem is decomposed into several scalar subproblems, and an extreme learning decision network is constructed. The network takes a particle's position as input and selects an appropriate search action for each particle according to the optimization state. The fitness change of a particle on its subproblem serves as the training sample for reinforcement learning, and training is accelerated by the extreme learning machine. During optimization, the network automatically adjusts to the optimization state and selects the appropriate search strategy for particles at different search stages. Second, because non-dominated solutions of a multi-objective optimization problem are difficult to compare, the leadership of each solution is quantified into a comparable value so that examples can be selected more clearly for the particles. In addition, an external archive stores better particles to maintain solution quality and guide the population. Comparative experiments on the ZDT and DTLZ test functions show that ELDN-PSO can effectively cope with different Pareto front shapes, improving optimization speed as well as the convergence and diversity of the solutions.
Electronic computers. Computer science
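For readers unfamiliar with the underlying machinery, the sketch below runs a canonical global-best PSO on one weighted-sum scalar subproblem of a toy bi-objective; the paper's decision network, leadership metric, and external archive are not reproduced, and all coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# A bi-objective toy problem scalarised into one subproblem by weights
# (the paper decomposes into many such subproblems; this shows just one).
w = np.array([0.5, 0.5])

def f(x):                        # two convex objectives on [0, 1]^2
    return np.array([np.sum(x**2), np.sum((x - 1.0)**2)])

def scalar(x):
    return w @ f(x)

n, dim = 20, 2
x = rng.random((n, dim)); v = np.zeros((n, dim))
pbest = x.copy(); pbest_val = np.array([scalar(p) for p in x])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Canonical velocity update; the paper instead lets a decision
    # network pick the search action for each particle.
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)
    vals = np.array([scalar(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(gbest.round(2))            # near the weighted optimum [0.5, 0.5]
```

Repeating this over many weight vectors approximates the Pareto front; the decision network's job is to pick better update actions per particle than this fixed rule.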
Tachyon: Enhancing stacked models using Bayesian optimization for intrusion detection using different sampling approaches
T. Anitha Kumari, Sanket Mishra
The integration of sensors in the monitoring of essential bodily measurements, air quality, and energy consumption in buildings demonstrates the importance of the Internet of Things (IoT) in everyday life. However, such deployments are prone to security breaches caused by the rudimentary and immature security protocols implemented on IoT devices. An intrusion detection system is used to detect security threats and system-level malicious activities. This paper introduces Tachyon, a combination of various statistical and tree-based Artificial Intelligence (AI) techniques, such as Extreme Gradient Boosting (XGBoost), Random Forest (RF), Bidirectional Auto-Regressive Transformers (BART), Logistic Regression (LR), Multivariate Adaptive Regression Splines (MARS), and Decision Tree (DT), together with a top-k stacked ensemble, to distinguish between normal and malicious traffic in a binary classification setting. The IoTID2020 dataset used in this study consists of 625,783 samples with 83 features. An initial examination of the data reveals its unbalanced nature. To create a balanced dataset, a range of sampling techniques was used, including oversampling, undersampling, the Synthetic Minority Oversampling Technique (SMOTE), Random Oversampling Examples (ROSE), Borderline-SMOTE (b-SMOTE), and Adaptive Synthetic sampling (ADASYN). In addition, principal component analysis (PCA) and partial least squares (PLS) were used to determine the most significant features. The experimental results demonstrate that the stacked ensemble achieved a performance of 99.8%, better than the baseline approaches. An ablation study of the ensemble models was also conducted to assess the performance of the proposed model in various scenarios.
Electronic computers. Computer science
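The top-k stacking idea can be shown with stand-in base learners: score each base model on training data, keep the best k, and aggregate their predictions. Everything below (the 1-D task, the threshold "models", k = 2, the mean-vote meta-learner) is an illustrative assumption, not Tachyon's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy binary task: 1-D feature, label = feature > 0 with some noise.
X = rng.normal(0, 1, 200)
y = (X + rng.normal(0, 0.3, 200) > 0).astype(int)

# Stand-ins for the base learners (XGBoost, RF, LR, ...): each is just a
# thresholded score here, to show the stacking mechanics only.
base_preds = np.stack([
    (X > 0.0).astype(float),        # "model 1"
    (X > 0.2).astype(float),        # "model 2"
    (X > -0.2).astype(float),       # "model 3"
])

# Level-1 step: keep the top-k base models by training accuracy and
# average their votes (a minimal stand-in for a fitted meta-model).
acc = (base_preds == y).mean(axis=1)
top_k = np.argsort(acc)[-2:]
stacked = (base_preds[top_k].mean(axis=0) > 0.5).astype(int)

print((stacked == y).mean())        # stacked training accuracy
```

A real stack would score the base models on held-out folds and train a genuine meta-classifier on their out-of-fold predictions.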
Prospects for Time-Domain and Multi-Messenger Science with AXIS
The AXIS Time-Domain and Multi-Messenger Science Working Group
et al.
The Advanced X-ray Imaging Satellite (AXIS) promises revolutionary science in the X-ray and multi-messenger time domain. AXIS will leverage excellent spatial resolution (<1.5 arcsec), sensitivity (80x that of Swift), and a large collecting area (5-10x that of Chandra) across a 24-arcmin diameter field of view to discover and characterize a wide range of X-ray transients from supernova-shock breakouts to tidal disruption events to highly variable supermassive black holes. The observatory's ability to localize and monitor faint X-ray sources opens up new opportunities to hunt for counterparts to distant binary neutron star mergers, fast radio bursts, and exotic phenomena like fast X-ray transients. AXIS will offer a response time of <2 hours to community alerts, enabling studies of gravitational wave sources, high-energy neutrino emitters, X-ray binaries, magnetars, and other targets of opportunity. This white paper highlights some of the discovery science that will be driven by AXIS in this burgeoning field of time domain and multi-messenger astrophysics.
en
astro-ph.HE, astro-ph.IM
Collectively encoding protein properties enriches protein language models
Jingmin An, Xiaogang Weng
Abstract Natural language processing models pre-trained on a large natural language corpus can naturally transfer learned knowledge to protein domains by fine-tuning on specific in-domain tasks. However, few studies have focused on enriching such protein language models by jointly learning protein properties from strongly correlated protein tasks. Here we designed a multi-task learning (MTL) architecture aiming to decipher implicit structural and evolutionary information from three sequence-level classification tasks for protein family, superfamily, and fold. Considering the contextual relevance shared by human language and protein sequences, we employed BERT, pre-trained on a large natural language corpus, as our backbone to handle protein sequences. More importantly, the encoded knowledge obtained in the MTL stage can be well transferred to more fine-grained downstream tasks of TAPE. Experiments on structure- or evolution-related applications demonstrate that our approach outperforms many state-of-the-art Transformer-based protein models, especially in remote homology detection.
Computer applications to medicine. Medical informatics, Biology (General)
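The joint-training mechanics of such an MTL setup can be sketched without any deep-learning framework: one shared encoder, one head per task, and a summed cross-entropy loss. The tiny linear encoder, random data, and three-class heads below are illustrative stand-ins for the BERT backbone and the family/superfamily/fold tasks.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal multi-task setup: one shared encoder feeding three task-specific
# heads, trained jointly on a summed loss (the real model uses BERT).
d_in, d_hid, n_cls = 8, 4, 3
W_shared = rng.normal(0, 0.1, (d_in, d_hid))
heads = [rng.normal(0, 0.1, (d_hid, n_cls)) for _ in range(3)]

x = rng.normal(0, 1, (5, d_in))                 # 5 toy "sequences"
labels = [rng.integers(0, n_cls, 5) for _ in range(3)]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

h = np.tanh(x @ W_shared)                       # shared representation
losses = []
for W_head, yk in zip(heads, labels):
    p = softmax(h @ W_head)
    losses.append(-np.log(p[np.arange(5), yk]).mean())

total_loss = sum(losses)                        # joint MTL objective
print(round(total_loss, 3))
```

Backpropagating this summed loss updates the shared encoder with gradients from all three tasks at once, which is what lets correlated tasks enrich one another.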