FVM: A Formal Verification Methodology for VHDL Designs
Hipolito Guzman-Miranda, Marcos Lopez Garcia, Alberto Urbon Aguado
With the increasing complexity of digital designs, functional verification is becoming unmanageable. Bugs that survive verification cause issues with functional, performance, security, safety and economic impact, and are unfortunately prevalent in current FPGA and ASIC designs, manifesting in later stages of development or even after the design has been deployed or manufactured. In this context, Formal Verification presents itself as a powerful complement to verification by simulation, which is currently the most widespread verification method. By mathematically proving properties of designs, Formal Verification allows them to be verified with high confidence, but it also requires designers to have deep expertise in its methods, techniques and tools. Thus, adoption of formal methods for verification is not as widespread as their usefulness may suggest, even less so among VHDL teams. To lower the adoption barriers for formal verification of digital designs, the present article proposes a Formal Verification Methodology, complemented by a build-and-test framework and a repository of examples. Applying the Formal Verification Methodology to the repository of examples shows compelling results in both manageable design complexity and verification productivity.
Electronic computers. Computer science, Information technology
Empirical research on the evolution trend of heat and sentiment for emergencies
Shihong Wu, Wei Yu, Yanxia Zhao
et al.
Emergencies inflict heavy casualties, economic losses, ecological damage, and significant social harm. By segmenting information topics and analysing emotional shifts, we can identify corresponding real-world events and their impacts, thereby providing guidance for timely responses to emergencies. In the past, public opinion monitoring of emergencies was based mainly on single-topic detection or emotion analysis alone, which cannot comprehensively evaluate the evolution of public opinion. In this work, word segmentation is applied to video comments related to various emergency situations. Topics are divided using a co-word network and the Louvain algorithm, and sentiment is analysed with a naive Bayes classifier whose sentiment values are tracked over time for each type of emergency; together these give a comprehensive assessment of the evolution of public opinion. As a result, the pivotal nodes in the evolution of public opinion are identified and the evolution process is divided into stages. Using this method, relevant management departments can effectively address the majority of public opinion for various types of emergencies, from the perspectives of prevention, adjustment, and recovery. This approach not only enhances rescue efficiency and strengthens safety management but also actively guides the evolution of public opinion, ultimately providing society with solid and reliable security safeguards.
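As a hedged illustration of the sentiment step this abstract describes, here is a minimal multinomial naive Bayes classifier over already-segmented toy comments. The function names and toy data are hypothetical; the paper's actual preprocessing, features, and word segmentation pipeline are not reproduced here.

```python
from collections import Counter, defaultdict
import math

def train_nb(docs):
    """Count classes and per-class word frequencies from (tokens, label) pairs."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def classify(tokens, class_counts, word_counts, vocab):
    """Return the most probable label using Laplace-smoothed log probabilities."""
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            lp += math.log((word_counts[label][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Toy comments, already segmented into words.
train = [
    (["rescue", "team", "arrived", "quickly"], "pos"),
    (["grateful", "for", "the", "volunteers"], "pos"),
    (["terrible", "response", "so", "slow"], "neg"),
    (["angry", "no", "help", "came"], "neg"),
]
model = train_nb(train)
print(classify(["slow", "response"], *model))  # → neg
```

Sentiment values over time can then be built by averaging per-comment labels (or class log-odds) within time windows, which is one plausible reading of the time-series step.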
Electronic computers. Computer science, Science
TOPSIS approach for MADM based on Quadripartitioned single valued neutrosophic refined Hamacher aggregation operations
Arokia Pratheesha S V, Annapoorna M S, Radha R
et al.
Hamacher operators are extensively utilized in multiple-attribute group decision-making (MAGDM) problems due to their remarkable adaptability provided by an adjustable parameter. Here, the Hamacher T-norm and T-conorm operations for two quadripartitioned single-valued neutrosophic refined numbers (QSVNRNs) are formulated. Using these Hamacher operations, we present the quadripartitioned single-valued neutrosophic refined Hamacher weighted averaging (QSVNRHWA) operators within the QSVNR framework and analyze their properties. Finally, we explore a TOPSIS-based approach for multi-attribute decision-making problems that employs the QSVNRHWA operators, demonstrating its application in evaluating practical scenarios related to converting solid waste into energy.
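For readers unfamiliar with the underlying operations, this sketch shows the basic Hamacher T-norm and T-conorm with the adjustable parameter gamma. The paper applies such operations component-wise to quadripartitioned neutrosophic membership degrees; this toy keeps only the scalar case, and the function names are mine.

```python
def hamacher_tnorm(a, b, gamma=1.0):
    """Hamacher T-norm: T(a, b) = ab / (gamma + (1 - gamma)(a + b - ab))."""
    if a == 0 and b == 0:
        return 0.0  # avoid 0/0 when gamma == 0
    return (a * b) / (gamma + (1 - gamma) * (a + b - a * b))

def hamacher_tconorm(a, b, gamma=1.0):
    """Hamacher T-conorm: S(a, b) = (a + b - ab - (1 - gamma)ab) / (1 - (1 - gamma)ab)."""
    return (a + b - a * b - (1 - gamma) * a * b) / (1 - (1 - gamma) * a * b)

# With gamma = 1 these reduce to the algebraic product and probabilistic sum.
print(hamacher_tnorm(0.5, 0.4), hamacher_tconorm(0.5, 0.4, gamma=2.0))
```

The adjustable gamma is what gives Hamacher aggregation its flexibility: varying it interpolates between different product/sum behaviours while preserving the T-norm axioms.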
Mathematics, Electronic computers. Computer science
Semantic similarity on multimodal data: A comprehensive survey with applications
Baha Ihnaini, Belal Abuhaija, Ebenezer Atta Mills
et al.
Recently, the revival of the semantic similarity concept has been driven by rapidly growing artificial intelligence research, fueled by advanced deep learning architectures that enable machine intelligence over multimodal data. Thus, semantic similarity in multimodal data has gained substantial attention among researchers. However, existing surveys on semantic similarity measures are restricted to a single modality, mainly text, which significantly limits the capability to understand the intelligence of real-world application scenarios. This study critically reviews semantic similarity approaches by shortlisting 223 vital articles from the leading databases and digital libraries to offer a comprehensive and systematic literature survey. The notable contribution is to illuminate the evolving landscape of semantic similarity and its crucial role in understanding, interpreting, and extracting meaningful information from multimodal data. Primarily, it highlights the challenges and opportunities inherent in different modalities, emphasizing the significance of advancements in cross-modal and multimodal semantic similarity approaches with potential application scenarios. Finally, the survey concludes by summarizing valuable future research directions. The insights provided in this survey improve understanding and pave the way for further innovation by guiding researchers in leveraging the strength of semantic similarity for an extensive range of real-world applications.
Electronic computers. Computer science
How large language model-powered conversational agents influence decision making in domestic medical triage contexts
Catalina Gomez, Junjie Yin, Chien-Ming Huang
et al.
Introduction: Effective delivery of healthcare depends on timely and accurate triage decisions, directing patients to appropriate care pathways and reducing unnecessary visits. Artificial Intelligence (AI) solutions, particularly those based on Large Language Models (LLMs), may enable non-experts to make better triage decisions at home, thus easing the healthcare system's load. We investigate how LLM-powered conversational agents influence non-experts in making triage decisions, further studying different persona profiles embedded via prompting. Methods: We designed a randomized experiment where participants first assessed patient symptom vignettes independently, then consulted one of two agent profiles, rational or empathic, for advice, and finally revised their triage ratings. We used linear models to quantify the effect of the agent profile and confidence on the weight of advice. We examined changes in confidence and accuracy of triage decisions, along with participants' perceptions of the agents. Results: In a study with 49 layperson participants, we found that persona profiles can be differentiated in LLM-powered conversational agents. However, these profiles did not significantly affect the weight of advice. Notably, less confident participants were more influenced by LLM advice, leading to larger adjustments to initial decisions. AI guidance improved alignment with correct triage levels and boosted confidence in participants' decisions. Discussion: While LLM advice improves the accuracy of triage recommendations, confidence plays an important role in its adoption. Our findings raise design considerations for human-AI interfaces, highlighting two key aspects: encouraging appropriate alignment with LLMs' advice and ensuring that people are not easily swayed in situations of uncertainty.
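In the judge-advisor literature, the "weight of advice" that the abstract's linear models analyze is commonly computed as the ratio of the judgment shift to the advice distance. A minimal sketch, with the caveat that the study's exact operationalization may differ:

```python
def weight_of_advice(initial, advice, final):
    """WOA = (final - initial) / (advice - initial).

    0 means the advice was ignored, 1 means it was fully adopted,
    values in between indicate partial adjustment toward the advice.
    """
    if advice == initial:
        return None  # undefined when the advice matches the initial judgment
    return (final - initial) / (advice - initial)

# A participant rates a vignette 2, the agent advises 4, the final rating is 3.
print(weight_of_advice(initial=2, advice=4, final=3))  # → 0.5
```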
Electronic computers. Computer science
Data intellectual property is a negotiable and tradable data property
Yazhen Ye, Yangyong Zhu
Electronic computers. Computer science
Incremental computation of the set of period sets
Eric Rivals
Overlaps between words are crucial in many areas of computer science, such as code design, stringology, and bioinformatics. A self-overlapping word is characterized by its periods and borders. A period of a word $u$ is the starting position of a suffix of $u$ that is also a prefix of $u$, and such a suffix is called a border. Each word of length, say $n>0$, has a set of periods, but not all combinations of integers are sets of periods. Computing the period set of a word $u$ takes linear time in the length of $u$. We address the question of computing the set, denoted $\Gamma_n$, of all period sets of words of length $n$. Although period sets have been characterized, there is no formula to compute the cardinality of $\Gamma_n$ (which is exponential in $n$), and the known dynamic programming algorithm to enumerate $\Gamma_n$ suffers from its space complexity. We present an incremental approach to compute $\Gamma_n$ from $\Gamma_{n-1}$, which reduces the space complexity, and then a constructive certification algorithm useful for verification purposes. The incremental approach defines a parental relation between sets in $\Gamma_{n-1}$ and $\Gamma_n$, enabling one to investigate the dynamics of period sets and their intriguing statistical properties. Moreover, the period set of a word $u$ is the key for computing the absence probability of $u$ in random texts. Thus, knowing $\Gamma_n$ is useful to assess the significance of word statistics, such as the number of missing words in a random text.
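The period/border correspondence in this abstract can be made concrete with the classic linear-time border-array (KMP failure-function) computation. This is a standard sketch of computing the period set of a single word $u$, not the paper's incremental algorithm for $\Gamma_n$:

```python
def period_set(u):
    """Return the set of proper periods of u: starting positions of suffixes
    of u that are also prefixes of u. Runs in O(|u|) via the border array."""
    n = len(u)
    border = [0] * n  # border[i] = length of longest proper border of u[:i+1]
    k = 0
    for i in range(1, n):
        while k > 0 and u[i] != u[k]:
            k = border[k - 1]
        if u[i] == u[k]:
            k += 1
        border[i] = k
    # The borders of u are of lengths border[n-1], border[border[n-1]-1], ...
    periods = set()
    b = border[n - 1] if n else 0
    while b > 0:
        periods.add(n - b)  # a border of length b yields the period n - b
        b = border[b - 1]
    return periods

print(sorted(period_set("abaababaaba")))  # → [5, 8, 10]
```

For "abaababaaba" the borders are "a", "aba", and "abaaba", giving periods 10, 8, and 5, illustrating that a word's periods form a structured set rather than arbitrary integers.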
A guideline for the methodology chapter in computer science dissertations
Marco Araujo
Rather than simply offering suggestions, this guideline for the methodology chapter in computer science dissertations provides thorough insights on how to develop a strong research methodology within the area of computer science. The guideline is structured into several parts, starting with an overview of research strategies, which include experiments, surveys, interviews and case studies. It highlights the significance of defining a research philosophy and reasoning, discussing paradigms such as positivism, constructivism and pragmatism. It also covers types of research, including deductive and inductive methodologies and basic versus applied research approaches. Moreover, the guideline discusses the intricacies of data collection and analysis, dividing data into quantitative and qualitative typologies. It explains different ways in which data can be collected, from observation to experimentation, interviews or surveys. It also addresses ethical considerations in research, emphasizing ethical behavior such as following academic principles. In general, this guideline is an essential tool for undertaking computer science dissertations, helping researchers structure their work while maintaining ethical standards in their study design.
An effective stacked autoencoder based depth separable convolutional neural network model for face mask detection
Sundaravadivazhagan Balasubaramanian, Robin Cyriac, Sahana Roshan
et al.
The COVID-19 pandemic has affected the entire world over the past years. To prevent the spread of COVID-19, people have acclimatised to the new normal, which includes working from home, communicating online, and maintaining personal cleanliness. Numerous tools are required to prepare for and combat transmission in the future. One of these elements for protecting individuals from fatal virus transmission is the mask. Studies have indicated that wearing a mask may help to reduce the risk of viral transmission of all kinds. This has led many public places to take measures to ensure that their visitors wear adequate face masks and keep a safe distance from one another. Screening systems need to be installed at the doors of businesses, schools, government buildings, private offices, and other important areas. A variety of face detection models have been designed using various algorithms and techniques. Most previously published articles have not combined dimensionality reduction with depth-wise separable neural networks. The necessity of identifying people who do not cover their faces in public is the driving factor for the development of this methodology. This research work proposes a deep learning technique to determine whether a person is wearing a mask and whether it is properly worn. A Stacked Auto Encoder (SAE) is implemented by stacking the following components: Principal Component Analysis (PCA) and a Depth-wise Separable Convolutional Neural Network (DWSC-NN). PCA is used to reduce irrelevant features in the images and results in a high true positive rate in mask detection. We achieved an accuracy score of 94.16% and an F1 score of 96.009% by applying the method described in this research.
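To see why depth-wise separable convolutions pair naturally with dimensionality reduction, here is a quick back-of-the-envelope comparison of parameter counts. The layer sizes are illustrative only, not the paper's architecture, and bias terms are omitted:

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution: one k x k x c_in kernel per output channel."""
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    """Depth-wise k x k filter per input channel, then a 1 x 1 pointwise conv to c_out channels."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = conv_params(k, c_in, c_out)           # standard 3x3 conv
sep = dw_separable_params(k, c_in, c_out)   # depth-wise separable equivalent
print(std, sep, round(std / sep, 1))  # → 73728 8768 8.4
```

Roughly an 8x reduction at this layer size, which is why stacking PCA (fewer input features) with depth-wise separable layers (fewer parameters per layer) yields a compact detector.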
Computer engineering. Computer hardware, Electronic computers. Computer science
Design of improved deer hunting optimization enabled multihop routing protocol for wireless sensor networks
D. Lubin Balasubramanian, V. Govindasamy
A wireless sensor network (WSN) encompasses a huge set of sensor nodes employed to collect data and transmit it to a base station (BS). Due to the compact, inexpensive, and scalable nature of its sensors, WSN finds applicability in diverse real-time applications. The battery-operated sensor nodes necessitate the design of a multi-hop routing protocol for the effective utilization of available energy in the network. Routing can be considered an optimization problem and can be solved by the design of bio-inspired algorithms. This study introduces an improved deer hunting optimization-enabled multihop routing (IDHO-MHR) protocol for WSN. The major intention of the IDHO-MHR approach is to optimally find routes to the destination in a WSN. The IDHO algorithm is derived by incorporating the Nelder-Mead (NM) concept into the traditional DHO algorithm. In addition, the IDHO-MHR technique derives a fitness function that includes two major variables, namely residual energy (RE) and distance. Nodes with higher RE and minimum distance are more likely to form optimal routes in the network. The performance validation of the IDHO-MHR approach is performed, and the outcomes are inspected in various aspects. The experimental outcomes report the supremacy of the IDHO-MHR protocol over other recent approaches.
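A minimal sketch of a route fitness function combining residual energy and distance, as the abstract describes. The weighting, normalization, and all names here are hypothetical illustrations, not the IDHO-MHR formulation:

```python
import math

def route_fitness(route, energy, positions, base_station, alpha=0.5):
    """Score a candidate route: favor high mean residual energy (in [0, 1])
    and a short total hop distance, blended by the weight alpha."""
    # Mean residual energy of the nodes on the route (higher is better).
    e = sum(energy[n] for n in route) / len(route)
    # Total Euclidean distance along the hops, ending at the base station.
    pts = [positions[n] for n in route] + [base_station]
    d = sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
    d_norm = 1 / (1 + d)  # map distance into (0, 1]; shorter routes score higher
    return alpha * e + (1 - alpha) * d_norm

positions = {1: (0, 0), 2: (1, 0), 3: (5, 5)}
energy = {1: 0.9, 2: 0.8, 3: 0.2}
# A short, energy-rich route should outscore a long, depleted one.
print(route_fitness([1, 2], energy, positions, (2, 0)))
print(route_fitness([1, 3], energy, positions, (2, 0)))
```

In a metaheuristic such as IDHO, candidate routes would be the search positions and this score (or its reciprocal) the objective being optimized.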
Electronic computers. Computer science, Science
Case Study of Smart City Development in Romania
Laurentiu-Nicolae PRICOPE, Valentin-Marian ANTOHI, Romeo-Victor IONESCU
et al.
Amid the increasingly acute need for systematization and urban social management, Romanian cities are undergoing transformation attempts, their goal being to reach a new level of comfort and safety for citizens. All these aspects are in line with the sustainable development goals through the need to create minimally polluted cities that offer citizens a healthy standard of living. Starting from the goal of sustainable development achieved by orienting urban areas to the needs of the citizen and the community, we analyze, through the dispersion method, the level of smart city development in Romania. The main results consist of a ranking of Romanian smart cities.
Electronic computers. Computer science, Economic theory. Demography
Rich‐scale feature fusion network for salient object detection
Fengming Sun, Junjie Cui, Xia Yuan
et al.
Fully convolutional neural network-based salient object detection has recently achieved great success, its performance benefiting from the effective use of multi-layer features. Based on this, most existing saliency detectors design complex network structures to fuse the multi-level features generated by the backbone network. However, the variable scale and complex shape of the target are always a great challenge for saliency detection tasks. In this paper, the authors propose a Rich-scale Feature Fusion Network (RFFNet) for salient object detection. The authors design a rich-scale feature interactive fusion module to obtain more efficient features from the multi-scale features. Moreover, a global feature enhancement module is used to extract features with better characterization for the final saliency prediction. Extensive experiments performed on five benchmark datasets demonstrate that the proposed method achieves satisfactory results on different evaluation metrics compared to other state-of-the-art salient object detection approaches.
Photography, Computer software
An Integrative Survey on Mental Health Conversational Agents to Bridge Computer Science and Medical Perspectives
Young Min Cho, Sunny Rai, Lyle Ungar
et al.
Mental health conversational agents (a.k.a. chatbots) are widely studied for their potential to offer accessible support to those experiencing mental health challenges. Previous surveys on the topic primarily consider papers published in either computer science or medicine, leading to a divide in understanding and hindering the sharing of beneficial knowledge between both domains. To bridge this gap, we conduct a comprehensive literature review using the PRISMA framework, reviewing 534 papers published in both computer science and medicine. Our systematic review reveals 136 key papers on building mental health-related conversational agents with diverse characteristics of modeling and experimental design techniques. We find that computer science papers focus on LLM techniques and evaluating response quality using automated metrics, with little attention to applications, while medical papers use rule-based conversational agents and outcome metrics to measure participants' health outcomes. Based on our findings on transparency, ethics, and cultural heterogeneity in this review, we provide recommendations to help bridge the disciplinary divide and enable cross-disciplinary development of mental health conversational agents.
Analysis of Research Trends in Computer Science: A Network Approach
Ghazal Kalhor, Behnam Bahrak
Nowadays, computer science (CS) has emerged as a dominant force in numerous research areas both within and beyond its own discipline. However, despite its significant impact on scholarly space, only a limited number of studies have been conducted to analyze the research trends and relationships within computer science. In this study, we collected information on fields and subfields from over 2,000 research articles published in the 2022 proceedings of the top Association for Computing Machinery (ACM) conferences spanning various research fields. Through a network approach, we investigated the interconnections between CS fields and subfields to evaluate their interdisciplinarity and multidisciplinarity. Our findings indicate that computing methodologies and privacy and security stand out as the most interdisciplinary fields, while human-centered computing exhibits the highest frequency among the papers. Furthermore, we discovered that machine learning emerges as the most interdisciplinary and multidisciplinary subfield within computer science. These results offer valuable insights for universities seeking to foster interdisciplinary research opportunities for their students.
Simpson's Paradox and Lagging Progress in Completion Trends of Underrepresented Students in Computer Science
John Mason Taylor, Rebecca Drucker, Chris Alvin
et al.
It is imperative for the Computer Science (CS) community to ensure active participation and success of students from diverse backgrounds. This work compares CS to other areas of study with respect to success of students from three underrepresented groups: Women, Black and Hispanic or Latino. Using a data-driven approach, we show that trends of success over the years for underrepresented groups in CS are lagging behind other disciplines. Completion of CS programs by Black students in particular shows an alarming regression in the years 2011 through 2019. This national level decline is most concentrated in the Southeast of the United States and seems to be driven mostly by a small number of institutes that produce a large number of graduates. We strongly believe that more data-driven studies in this area are necessary to make progress towards a more equitable and inclusive CS community. Without an understanding of underlying dynamics, policy makers and practitioners will be unable to make informed decisions about how and where to allocate resources to address the problem.
Canonicity and Computability in Homotopy Type Theory
Dmitry Filippov
This dissertation gives an overview of Martin-Löf's dependent type theory, focusing on its computational content and addressing the question of whether a fully canonical and computable semantic presentation is possible.
Switchable half-metallicity in A-type antiferromagnetic NiI2 bilayer coupled with ferroelectric In2Se3
Yaping Wang, Xinguang Xu, Xian Zhao
et al.
Electrically controlled half-metallicity in antiferromagnets is of great significance for both fundamental research and practical application. Here, by constructing van der Waals heterostructures composed of a two-dimensional (2D) A-type antiferromagnetic NiI2 bilayer (bi-NiI2) and ferroelectric In2Se3 of different thicknesses, we propose that half-metallicity is realizable and switchable in the bi-NiI2 proximate to an In2Se3 bilayer (bi-In2Se3). The polarization flipping of the bi-In2Se3 successfully drives a transition between half-metal and semiconductor for the bi-NiI2. This intriguing phenomenon is attributed to the joint effect of polarization field-induced energy band shift and interfacial charge transfer. Besides, the easy magnetization axis of the bi-NiI2 also depends on the polarization direction of the bi-In2Se3. The half-metallicity and magnetic anisotropy energy of the bi-NiI2 in the heterostructure can be effectively manipulated by strain. These findings provide not only a feasible strategy to achieve and control half-metallicity in 2D antiferromagnets, but also a promising candidate for designing advanced nanodevices.
Materials of engineering and construction. Mechanics of materials, Computer software
Forecasting design values of tidal/ocean power generator in the strait with unidirectional flow by deep learning
Ryo Fujiwara, Ryoma Fukuhara, Tsubasa Ebiko
et al.
Renewable energy is an essential factor in guaranteeing the sustainability of society. In Japan, there have been developments to harness energy from the ocean. The Tsugaru strait, in the northern region of Japan, is an area that has attracted attention for this purpose. We propose a tidal/ocean power generator utilizing a Flaring Flanged Diffuser (FFD) to harness this power. However, for power generators utilizing an FFD to generate power under optimal conditions, design values based on the stream regimes need to be determined. In this paper, the objective is to forecast the design values of tidal/ocean power generators utilizing an FFD. We are especially interested in the dimensions of the diffuser shape that are effective factors in increasing flow velocity. Fluid field data around the FFD is obtained by experimentation and measured by particle image velocimetry (PIV). The trained deep neural network can forecast design values from a given fluid field. Moreover, we can recognize correlations between changes in design values and increases in fluid velocity.
Cybernetics, Electronic computers. Computer science
Neuro-evolutionary models for imbalanced classification problems
Israa Al-Badarneh, Maria Habib, Ibrahim Aljarah
et al.
Training an Artificial Neural Network (ANN) is not trivial: it requires optimizing a set of weights and biases whose number increases dramatically with the capacity of the network, resulting in hard optimization problems. Over recent decades, stochastic search algorithms have shown remarkable abilities for addressing hard optimization problems. On the other hand, abundant real-world problems suffer from the imbalance problem, where the distribution of data varies considerably among classes, introducing training biases and variances that degrade the performance of the learning algorithm. This paper introduces three stochastic metaheuristic algorithms for training a Multilayer Perceptron (MLP) neural network to solve imbalanced classification problems. The utilized algorithms are Grey Wolf Optimization (GWO), Particle Swarm Optimization (PSO), and the Salp Swarm Algorithm (SSA). The proposed GWO-MLP, PSO-MLP, and SSA-MLP are trained with different objective functions (accuracy, f1-score, and g-mean) and evaluated on 10 benchmark imbalanced datasets. The results show an advantage for the f1-score and g-mean fitness functions over accuracy when the datasets are imbalanced.
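As a hedged sketch of this kind of metaheuristic training loop, here is a minimal particle swarm optimizer in which the position vector stands in for a flattened MLP weight/bias vector and the objective stands in for a fitness such as (1 - f1-score). The parameters and the toy sphere objective are illustrative, not the paper's GWO/PSO/SSA configurations:

```python
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO minimizing `objective` over R^dim; returns (best_position, best_value)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia plus attraction toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy run: minimize the sphere function as a stand-in for an MLP loss.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

In the paper's setting, `objective` would decode the position into MLP weights, run a forward pass over the training data, and return 1 minus the chosen metric (accuracy, f1-score, or g-mean).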
Electronic computers. Computer science
Container orchestration on HPC systems through Kubernetes
Naweiluo Zhou, Yiannis Georgiou, Marcin Pospieszny
et al.
Containerisation has demonstrated its efficiency in application deployment in Cloud Computing. Containers can encapsulate complex programs with their dependencies in isolated environments, making applications more portable; hence, they are being adopted in High Performance Computing (HPC) clusters. Singularity, initially designed for HPC systems, has become their de facto standard container runtime. Nevertheless, conventional HPC workload managers lack micro-service support and deeply-integrated container management, as opposed to container orchestrators. We introduce a Torque-Operator, which serves as a bridge between an HPC workload manager (TORQUE) and a container orchestrator (Kubernetes). We propose a hybrid architecture that integrates HPC and Cloud clusters seamlessly, with little interference to HPC systems, in which container orchestration is performed on two levels.
Computer engineering. Computer hardware, Electronic computers. Computer science