Reviewing the Framework of Blockchain in Fake News Detection
Tanweer Alam, Ruchi Gupta
In the social media environment, fake news is a significant issue. It may appear online or offline, depending on the field of journalism. Media and publishing houses have expressed concern and are looking for solutions to the problem. Blockchain is one of the solutions the industry has to offer in this area. It can support digital security trading, source or identity verification, or the tracing of quotes back to a particular news piece, photo, or video. It provides shared record generation for delivering timely files associated with a specific article, video, or image. This not only helps a fact checker verify the details, but also offers documentation of the metadata generated at all phases. It also makes it possible to reduce the spread of false information through forwarding, by enabling explicit disclosure to persons with first-hand knowledge of the subject. The proposed framework for detecting fake news is supported by blockchain technology, which allows news organizations to deliver their content to their subscribers transparently. The framework was created for journalists and can be integrated into any existing platform to publish a news piece together with its asset statistics.
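A minimal sketch of the general idea described above (recording a hashed metadata trail for a news asset so each publishing phase can later be verified); this is not the authors' implementation, and all class and field names here are hypothetical.

```python
import hashlib
import json
import time


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


class ProvenanceChain:
    """Append-only chain of metadata records for one news asset (hypothetical sketch)."""

    def __init__(self):
        self.blocks = []

    def add_record(self, phase: str, content: bytes, metadata: dict) -> dict:
        prev_hash = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        record = {
            "phase": phase,                    # e.g. "draft", "edit", "publish"
            "content_hash": sha256(content),   # fingerprint of the article/photo/video
            "metadata": metadata,              # author, source, timestamp, etc.
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["block_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        self.blocks.append(record)
        return record

    def verify(self) -> bool:
        """Check that no recorded phase has been altered after the fact."""
        prev = "0" * 64
        for b in self.blocks:
            body = {k: v for k, v in b.items() if k != "block_hash"}
            if b["prev_hash"] != prev or sha256(json.dumps(body, sort_keys=True).encode()) != b["block_hash"]:
                return False
            prev = b["block_hash"]
        return True


chain = ProvenanceChain()
chain.add_record("draft", b"article body v1", {"author": "reporter-1"})
chain.add_record("publish", b"article body v2", {"editor": "desk-2"})
print(chain.verify())  # True while the metadata trail is intact
```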
Electronic computers. Computer science
Students’ Innovation Ability Evaluation in Vocational Colleges Based on the STEAM Education Concept: A Neutrosophic SuperHyper Pentapartitioned Process Model
Xiaodan Kong
Innovation ability is increasingly regarded as a core competence in vocational education, particularly under the STEAM (Science, Technology, Engineering, Arts, Mathematics) paradigm that emphasizes integrative and creative problem-solving. However, evaluating innovation ability is challenging due to its multidimensionality, vagueness, and subjectivity. This paper introduces a novel Neutrosophic SuperHyper Pentapartitioned Process Model (NSPPM) designed to rigorously capture uncertainty, contradiction, and incompleteness in students’ innovation assessment. Building on the formalism of Single-Valued Pentapartitioned Neutrosophic Sets (SVPNS), the model decomposes student performance into five dimensions: truth (T), contradiction (C), ignorance (G), uncertainty (U), and falsehood (F). These are then aggregated across multi-level tasks using a superhyperstructure composition that ensures closure, monotonicity, and idempotence. Decision-making is achieved via a dominance relation that compares students across dimensions, preserving the neutrosophic nature of the evaluation without collapsing it into scalar indices. To illustrate, synthetic case studies simulate student performance under STEAM tasks in engineering and arts, demonstrating how NSPPM identifies non-dominated students and reveals nuanced patterns of innovation capacity. The results highlight both the theoretical novelty of pentapartitioned neutrosophic superhyper modeling and its practical utility in educational evaluation.
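As a hedged illustration of the decision step mentioned above, a sketch of a componentwise dominance comparison over (T, C, G, U, F) values; the exact NSPPM operators are defined in the paper, so the dominance rule and the example scores below are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SVPNS:
    """Single-valued pentapartitioned neutrosophic value: truth, contradiction,
    ignorance, uncertainty, falsehood, each in [0, 1]."""
    t: float
    c: float
    g: float
    u: float
    f: float


def dominates(a: SVPNS, b: SVPNS) -> bool:
    """a dominates b if it is at least as good on every component (higher truth;
    lower contradiction/ignorance/uncertainty/falsehood) and strictly better on
    at least one. This particular rule is an illustrative assumption."""
    ge = (a.t >= b.t, a.c <= b.c, a.g <= b.g, a.u <= b.u, a.f <= b.f)
    gt = (a.t > b.t, a.c < b.c, a.g < b.g, a.u < b.u, a.f < b.f)
    return all(ge) and any(gt)


def non_dominated(students: Dict[str, SVPNS]) -> List[str]:
    """Return the names of students not dominated by any other student."""
    return [n for n, v in students.items()
            if not any(dominates(w, v) for m, w in students.items() if m != n)]


scores = {
    "student_A": SVPNS(0.8, 0.10, 0.10, 0.20, 0.10),
    "student_B": SVPNS(0.6, 0.20, 0.30, 0.30, 0.20),
    "student_C": SVPNS(0.7, 0.05, 0.20, 0.10, 0.15),
}
print(non_dominated(scores))  # ['student_A', 'student_C']
```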
Mathematics, Electronic computers. Computer science
Evaluation system of college English teaching quality based on fuzzy information of artificial intelligence
Ma Lina
The current evaluation of English teaching quality in colleges and universities faces the problems of information uncertainty and fuzziness. Traditional evaluation methods cannot accurately reflect the complex teaching effects, mainly due to the diversity of data and the fuzziness of evaluation dimensions. To address this issue, this paper proposes a college English teaching quality evaluation system that combines Fuzzy C-Means (FCM) and the Takagi-Sugeno Fuzzy Inference System (TS-FIS). First, the FCM algorithm is used to fuzzify the various teaching data and convert the evaluation dimensions into fuzzy membership degrees. Then, TS-FIS is used to reason over this fuzzy information and generate comprehensive scores. Finally, a deep neural network (DNN) is employed to train on historical data, dynamically adjusting the evaluation results. The findings demonstrate that the system achieves an evaluation accuracy of more than 91% when dealing with uncertainties in complex teaching environments, and the score fluctuation range is controlled within 5% during the dynamic adjustment process, which proves the effectiveness of the system in improving evaluation accuracy and adaptability. The method proposed in this paper provides an effective solution to the problem of evaluating English teaching quality in colleges and universities using fuzzy information.
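A minimal numerical sketch of the two fuzzy stages described above, assuming the standard FCM membership formula and a zero-order Takagi-Sugeno rule base; the cluster centres, rule consequents, indicator names, and fuzzifier m are placeholder assumptions, not the paper's configuration.

```python
import numpy as np


def fcm_memberships(x: np.ndarray, centers: np.ndarray, m: float = 2.0) -> np.ndarray:
    """Standard FCM membership of sample x to each cluster centre:
    u_k = 1 / sum_j (||x - c_k|| / ||x - c_j||)^(2 / (m - 1))."""
    d = np.linalg.norm(centers - x, axis=1) + 1e-12        # distances to centres
    ratio = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)


def ts_score(memberships: np.ndarray, consequents: np.ndarray) -> float:
    """Zero-order Takagi-Sugeno inference: weighted average of rule outputs,
    with the FCM memberships used as firing strengths."""
    return float(np.dot(memberships, consequents) / memberships.sum())


# Hypothetical teaching-quality indicators for one course (all scaled to [0, 1]).
sample = np.array([0.72, 0.65, 0.80])           # e.g. attendance, test score, engagement
centers = np.array([[0.3, 0.3, 0.3],            # "weak" profile
                    [0.6, 0.6, 0.6],            # "average" profile
                    [0.9, 0.9, 0.9]])           # "excellent" profile
consequents = np.array([40.0, 70.0, 95.0])      # score each rule assigns

u = fcm_memberships(sample, centers)
print(u, ts_score(u, consequents))              # fuzzy memberships and composite score
```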
Computational linguistics. Natural language processing, Electronic computers. Computer science
Overcomplete graph convolutional denoising autoencoder for noisy skeleton action recognition
Jiajun Guo, Qingge Ji, Guangwei Shan
Current skeleton-based action recognition methods usually assume the input skeleton is complete and noise-free. However, it is inevitable that captured skeletons are incomplete due to occlusions or noisy due to changes in the environment. When dealing with such data, even state-of-the-art (SOTA) recognition backbones experience significant degradation in recognition accuracy. Although a few methods have been proposed to address this issue, they still lack flexibility, efficiency, and interpretability. In this work, an overcomplete Graph Convolutional Denoising Autoencoder (GCDAE) is proposed that can act as a flexible preprocessing module for pretrained recognition backbones and improve their robustness. Taking advantage of the overcomplete and fully graph-convolutional structure, GCDAE is able to rectify noisy joints while efficiently preserving the information in unspoiled details. On two large-scale skeleton datasets, NTU RGB+D 60 and 120, introducing GCDAE brings significant robustness improvements to SOTA backbones against different types of noise.
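A compact PyTorch sketch of the general idea (a denoising autoencoder over skeleton joints using simple normalized-adjacency graph convolutions); the layer widths, noise model, and toy skeleton graph are illustrative assumptions and not the paper's overcomplete architecture.

```python
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """One graph convolution: X' = relu(A_hat @ X @ W), with A_hat a fixed
    normalized adjacency over the skeleton joints."""

    def __init__(self, in_dim, out_dim, a_hat):
        super().__init__()
        self.register_buffer("a_hat", a_hat)      # (J, J) normalized adjacency
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):                          # x: (batch, J, in_dim)
        return torch.relu(self.linear(self.a_hat @ x))


class SkeletonDenoiser(nn.Module):
    """Toy graph-convolutional denoising autoencoder for 3D joint coordinates."""

    def __init__(self, a_hat, hidden=64):
        super().__init__()
        self.encoder = GraphConv(3, hidden, a_hat)
        self.decoder = nn.Linear(hidden, 3)        # back to (x, y, z) per joint

    def forward(self, noisy_joints):               # (batch, J, 3)
        return self.decoder(self.encoder(noisy_joints))


# Hypothetical 5-joint chain skeleton; adjacency with self-loops, symmetrically normalized.
J = 5
A = torch.eye(J)
for i in range(J - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
deg = A.sum(1)
A_hat = A / torch.sqrt(deg[:, None] * deg[None, :])

model = SkeletonDenoiser(A_hat)
clean = torch.randn(8, J, 3)
noisy = clean + 0.05 * torch.randn_like(clean)      # additive joint noise
loss = nn.functional.mse_loss(model(noisy), clean)  # train to reconstruct clean joints
loss.backward()
```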
Photography, Computer software
Emulation-based adaptive differential evolution: fast and auto-tunable approach for moderately expensive optimization problems
Kei Nishihara, Masaya Nakata
In the field of expensive optimization, numerous papers have proposed surrogate-assisted evolutionary algorithms (SAEAs) for budgets of a few thousand or even a few hundred function evaluations. In reality, however, low-cost simulations suffice for many real-world problems, in which the number of function evaluations is only moderately restricted, e.g., to several thousand. In such a moderately restricted scenario, SAEAs become unnecessarily time-consuming and tend to struggle with premature convergence. In addition, tuning the SAEA parameters becomes impractical under restricted budgets of function evaluations; in some cases, an inadequate configuration may even degrade performance. In this context, this paper presents a fast and auto-tunable evolutionary algorithm for solving moderately restricted expensive optimization problems. The presented algorithm is a variant of adaptive differential evolution (DE) algorithms and is called emulation-based adaptive DE, or EBADE. The primary aim of EBADE is to emulate the principle of sample-efficient optimization, such as that in SAEAs, by adaptively tuning the DE parameter configurations. Specifically, similar to Expected Improvement-based sampling, EBADE identifies parameter configurations that may produce expected-to-improve solutions, without using function evaluations. Further, EBADE adopts a multi-population mechanism and assigns a parameter configuration to each subpopulation so as to estimate the effectiveness of parameter configurations from multiple samples. This subpopulation-based adaptation helps improve the selection accuracy of promising parameter configurations, even when using an expected-to-improve indicator with high uncertainty, by validating each configuration against multiple samples. The experimental results demonstrate that EBADE outperforms modern adaptive DEs and is highly competitive with SAEAs at a much shorter runtime.
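An illustrative sketch of the subpopulation idea described above, not EBADE itself: each subpopulation is assigned a candidate (F, CR) configuration, runs standard DE/rand/1/bin steps, and configurations are credited by the improvement their subpopulation achieved; the objective, credit rule, and adaptation step are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def sphere(x):
    """Toy objective; any expensive black-box function would take its place."""
    return float(np.sum(x ** 2))


def de_step(pop, fit, F, CR, func, bounds):
    """One DE/rand/1/bin generation for a single subpopulation."""
    n, d = pop.shape
    for i in range(n):
        a, b, c = pop[rng.choice(n, 3, replace=False)]
        mutant = np.clip(a + F * (b - c), *bounds)
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True                 # guarantee at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = func(trial)
        if f_trial <= fit[i]:
            pop[i], fit[i] = trial, f_trial
    return pop, fit


# Hypothetical candidate (F, CR) configurations, one assigned to each subpopulation.
configs = [(0.5, 0.9), (0.9, 0.1), (0.7, 0.5), (0.3, 0.3)]
dim, sub_size, bounds = 10, 8, (-5.0, 5.0)
subpops = [rng.uniform(bounds[0], bounds[1], (sub_size, dim)) for _ in configs]
fits = [np.array([sphere(x) for x in p]) for p in subpops]

for gen in range(50):
    credit = []
    for k, (F, CR) in enumerate(configs):
        before = fits[k].mean()
        subpops[k], fits[k] = de_step(subpops[k], fits[k], F, CR, sphere, bounds)
        credit.append(before - fits[k].mean())        # improvement achieved by this configuration
    # Adapt: replace the worst-credited configuration with a perturbed copy of the best one.
    best, worst = int(np.argmax(credit)), int(np.argmin(credit))
    F_b, CR_b = configs[best]
    configs[worst] = (float(np.clip(F_b + rng.normal(0, 0.1), 0.1, 1.0)),
                      float(np.clip(CR_b + rng.normal(0, 0.1), 0.0, 1.0)))

print(min(f.min() for f in fits))                     # best objective value found
```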
Electronic computers. Computer science, Information technology
Using Artificial Intelligence to Enhance Ongoing Psychological Interventions for Emotional Problems in Real- or Close to Real-Time: A Systematic Review
Patricia Gual-Montolio, I. Jaén, V. Martínez-Borba
et al.
Emotional disorders are the most common mental disorders globally. Psychological treatments have been found to be useful for a significant number of cases, but up to 40% of patients do not respond to psychotherapy as expected. Artificial intelligence (AI) methods might enhance psychotherapy by providing therapists and patients with real- or close to real-time recommendations according to the patient’s response to treatment. The goal of this investigation is to systematically review the evidence on the use of AI-based methods to enhance outcomes in psychological interventions in real-time or close to real-time. The search included studies indexed in the electronic databases Scopus, Pubmed, Web of Science, and Cochrane Library. The terms used for the electronic search included variations of the words “psychotherapy”, “artificial intelligence”, and “emotional disorders”. From the 85 full texts assessed, only 10 studies met our eligibility criteria. In these, the most frequently used AI technique was conversational AI agents, which are chatbots based on software that can be accessed online with a computer or a smartphone. Overall, the reviewed investigations indicated significant positive consequences of using AI to enhance psychotherapy and reduce clinical symptomatology. Additionally, most studies reported high satisfaction, engagement, and retention rates when implementing AI to enhance psychotherapy in real- or close to real-time. Despite the potential of AI to make interventions more flexible and tailored to patients’ needs, more methodologically robust studies are needed.
Shape Interrogation for Computer Aided Design and Manufacturing
N. Patrikalakis, T. Maekawa
589 citations
en
Engineering, Computer Science
Developing an interval method for training denoising autoencoders by bounding the noise
Bartłomiej Jacek Kubica
Information technology, Electronic computers. Computer science
BERT-CLSTM model for the classification of Moroccan commercial courts verdicts
Taoufiq El Moussaoui, Loqman Chakir
Information technology, Electronic computers. Computer science
Applying a unified process kinetic equation to advanced materials process analysis: Characterization of the kinetics of isothermal microwave‐assisted chemical syntheses
Boon Wong
Rate enhancement of any isothermal, isobaric chemical synthesis conducted under resonant microwave (RM) irradiation, relative to the same process activated by conventional field-free heating, has been attributed to a reduction in the activation enthalpy of the process. This report applies a unified process kinetic equation (UPKE) to demonstrate and characterize non-thermal microwave effects (NTME) on the kinetics enhancements observed in isothermal microwave-assisted chemical syntheses (IMACS). The UPKE, derived from a mesoscopic irreversible thermodynamic model, shows that the rate of any high-affinity chemical reaction is effectively independent of the affinity of the process as described by the mass-action rate law. Energetically, the activation enthalpy reduction observed in IMACS is considered the major NTME, responsible for the dominant process-rate enhancements. This NTME results from RM-induced enthalpy variation during the reaction: the RM energy input first raises the molar enthalpy of the irradiated reactant(s) at temperature, which in turn drives the activation enthalpy reduction responsible for the rate enhancement. Conversely, lowering of the frequency coefficient is another common NTME occurring in IMACS, causing an adverse yet compensable setback to the process kinetics, as predicted by the UPKE. The applicability of the UPKE-based rationale and methodology for IMACS kinetic characterization is fully confirmed by relevant data in the literature.
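For orientation, the trade-off described above can be written in a generic Eyring/Arrhenius-type form; this is only an illustrative sketch under standard transition-state assumptions, not the UPKE itself, and the subscripts MW and conv are generic labels.

```latex
% Generic rate expression with frequency coefficient A(T) and activation enthalpy \Delta H^{\ddagger}
% (illustrative only; the UPKE derived in the paper is more general):
\[
  k \;=\; A(T)\,\exp\!\left(-\frac{\Delta H^{\ddagger}}{R T}\right),
  \qquad
  \frac{k_{\mathrm{MW}}}{k_{\mathrm{conv}}}
  \;=\;
  \frac{A_{\mathrm{MW}}}{A_{\mathrm{conv}}}
  \exp\!\left(\frac{\Delta H^{\ddagger}_{\mathrm{conv}} - \Delta H^{\ddagger}_{\mathrm{MW}}}{R T}\right).
\]
```

On this reading, a lowered activation enthalpy under microwave irradiation increases the exponential factor, while a lowered frequency coefficient (A_MW < A_conv) partially offsets it, in line with the "adverse yet compensable" effect noted above.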
Engineering (General). Civil engineering (General), Electronic computers. Computer science
The importance of humanizing AI: using a behavioral lens to bridge the gaps between humans and machines
A. Fenwick, G. Molnar
One of the biggest challenges in Artificial Intelligence (AI) development and application is the lack of consideration for human enhancement as a cornerstone for its operationalization. Nor is there a universally accepted approach that guides best practices in this field. However, the behavioral science field offers suggestions on how to develop a sustainable and enriching relationship between humans and intelligent machines. This paper provides a three-level (micro, meso and macro) framework on how to humanize AI with the intention of enhancing human properties and experiences. It argues that humanizing AI will help make intelligent machines not just more efficient but will also make their application more ethical and human-centric. Suggestions to policymakers, organizations, and developers are made on how to implement this framework to fix existing issues in AI and create a more symbiotic relationship between humans and machines moving into the future.
Computational linguistics. Natural language processing, Electronic computers. Computer science
Stabilizing deep tomographic reconstruction: Part B. Convergence analysis and adversarial attacks
Weiwen Wu, Dianlin Hu, Wenxiang Cong
et al.
Summary: Due to a lack of kernel awareness, some popular deep image reconstruction networks are unstable. To address this problem, here we introduce the bounded relative error norm (BREN) property, which is a special case of Lipschitz continuity. Then, we perform a convergence study consisting of two parts: (1) a heuristic analysis of the convergence of the analytic compressed iterative deep (ACID) scheme (under the simplification that the CS module achieves a perfect sparsification), and (2) a mathematically denser analysis (under two approximations: [1] A^T is viewed as an inverse A^(-1) from the perspective of an iterative reconstruction procedure, and [2] a pseudo-inverse is used for the total variation operator H). Also, we present adversarial attack algorithms to perturb the selected reconstruction networks individually and, more importantly, to attack the ACID workflow as a whole. Finally, we show the numerical convergence of the ACID iteration in terms of the Lipschitz constant and the local stability against noise. The bigger picture: For deep tomographic reconstruction to realize its full potential in practice, it is critically important to address the instabilities of deep reconstruction networks, which were identified in a recent PNAS paper. Our analytic compressed iterative deep (ACID) framework has provided an effective solution to this challenge by synergizing deep learning and compressed sensing through iterative refinement. Here, we provide an initial convergence analysis, describe an algorithm to attack the entire ACID workflow, and establish not only its capability of stabilizing an unstable deep reconstruction network but also its stability against adversarial attacks dedicated to ACID as a whole. Although our theoretical results are under approximations, they shed light on the converging mechanism of ACID, serving as a basis for further investigation.
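For reference, the two stability notions mentioned above can be written generically as follows, with Phi the reconstruction map from measured data to images; this is an illustrative form only, and the paper's precise BREN definition may differ in constants and normalization.

```latex
% Lipschitz continuity of the reconstruction map \Phi:
\[
  \|\Phi(y_1) - \Phi(y_2)\| \;\le\; L\,\|y_1 - y_2\|,
\]
% and a bounded-relative-error-norm (BREN-type) condition, which bounds the
% relative output error by the relative input perturbation (illustrative form):
\[
  \frac{\|\Phi(y_1) - \Phi(y_2)\|}{\|\Phi(y_2)\|}
  \;\le\;
  C\,\frac{\|y_1 - y_2\|}{\|y_2\|}.
\]
```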
LPMP: A Bio-Inspired Model for Visual Localization in Challenging Environments
Sylvain Colomer, Nicolas Cuperlier
et al.
Autonomous vehicles require precise and reliable self-localization to cope with dynamic environments. The field of visual place recognition (VPR) aims to solve this challenge by relying on the visual modality to recognize a place despite changes in the appearance of the perceived visual scene. In this paper, we propose to tackle the VPR problem following a neuro-cybernetic approach. To this end, the Log-Polar Max-Pi (LPMP) model is introduced. This bio-inspired neural network builds a neural representation of the environment via unsupervised one-shot learning. Inspired by the spatial cognition of mammals, visual information in the LPMP model is processed through two distinct pathways: a “what” pathway that extracts and learns the local visual signatures (landmarks) of a visual scene, and a “where” pathway that computes their azimuth. These two pieces of information are then merged to build a visuospatial code that is characteristic of the place where the visual scene was perceived. Three main contributions are presented in this article: 1) the LPMP model is studied and compared with NetVLAD and CoHog, two state-of-the-art VPR models; 2) a test benchmark for the evaluation of VPR models according to the type of environment traveled is proposed, based on the Oxford car dataset; and 3) the impact of using a novel detector that leads to an uneven paving of the environment is evaluated in terms of localization performance and compared to a regular paving. Our experiments show that the LPMP model can achieve comparable or better localization performance than NetVLAD and CoHog.
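A small sketch of the log-polar sampling that gives the model its name, applied to a local image patch around a detected landmark; the sampling resolution, nearest-neighbour interpolation, and plain-NumPy implementation are illustrative choices, not the LPMP implementation.

```python
import numpy as np


def log_polar_patch(image: np.ndarray, center, n_rho: int = 32, n_theta: int = 64) -> np.ndarray:
    """Resample a grayscale image around `center` onto a log-polar grid
    (rows = log-radius, columns = angle), as used for local landmark signatures."""
    cy, cx = center
    h, w = image.shape
    max_r = min(cy, cx, h - 1 - cy, w - 1 - cx)            # largest radius fully inside the image
    rho = np.exp(np.linspace(0.0, np.log(max_r), n_rho))   # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    ys = cy + rho[:, None] * np.sin(theta[None, :])
    xs = cx + rho[:, None] * np.cos(theta[None, :])
    return image[ys.round().astype(int), xs.round().astype(int)]   # nearest-neighbour sampling


img = np.random.rand(240, 320)                  # stand-in for a camera frame
signature = log_polar_patch(img, center=(120, 160))
print(signature.shape)                          # (32, 64) log-polar landmark signature
```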
Mechanical engineering and machinery, Electronic computers. Computer science
Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing
Marie Hicks
166 citations
en
Political Science
Translating Artificial Intelligence Into Clinical Care.
Andrew Beam, I. Kohane
Query Specific Focused Summarization of Biomedical Journal Articles
Akshara Rai, Suyash Sangwan, Tushar Goel
et al.
Information technology, Electronic computers. Computer science
A Model and Method for Managerial Decision-Making Based on the Analysis of Geospatial Information
Ihor Butko
The article proposes a model and a method for managerial decision-making based on the analysis of geospatial information. The aim of the article is to improve the model and method of managerial decision-making based on geospatial information analysis. Results: an algorithm for the managerial decision-making process is proposed, consisting of a situational part and a conceptual part; an algorithm of actions for the head of an organization, based on the developed decision-making model, is proposed; the situation in which the quality of a decision depends on external factors beyond the decision-making body's control is considered; a general scheme of the method for managerial decision-making based on geospatial information analysis is presented. The methods used are: systems analysis, decision theory, information processing, optimal decision methods, and probability theory. Conclusions: The managerial decision-making model has been improved; unlike known models, it is dynamic and is based on selecting decisions that are optimal according to a combined criterion, using predicted values of the probabilities of the environment states, which ensures well-grounded managerial decisions. The method of managerial decision-making based on geospatial information analysis has been further developed; it builds on data forecasting and decision-making models and uses semantic segmentation of imagery to estimate the a priori probabilities of the environment states, which makes it possible to take decisions under risk and uncertainty. A direction for further research is the development of an information technology for managerial decision-making based on the analysis of geospatial information.
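An illustrative numerical sketch of the decision step described in the conclusions (choosing the decision that is optimal under a combined criterion using predicted probabilities of the environment states); the payoff matrix, the probabilities, and the weighting between the expected-value and worst-case components are placeholder assumptions, not values from the article.

```python
import numpy as np

# Rows = candidate managerial decisions, columns = environment states.
payoff = np.array([[8.0, 3.0, 1.0],     # decision d1
                   [5.0, 5.0, 4.0],     # decision d2
                   [9.0, 1.0, 0.0]])    # decision d3

# Predicted probabilities of the environment states (e.g. estimated from
# semantic segmentation of imagery); must sum to 1.
p = np.array([0.5, 0.3, 0.2])

alpha = 0.7                          # weight of the expected-value component
expected = payoff @ p                # Bayes (expected-value) criterion per decision
worst = payoff.min(axis=1)           # Wald (worst-case) criterion per decision
combined = alpha * expected + (1 - alpha) * worst

best = int(np.argmax(combined))
print(f"choose decision d{best + 1}: combined score {combined[best]:.2f}")   # d2 here
```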
Computer software, Information theory
Esports: The Chess of the 21st Century
Matthew A. Pluss, K. Bennett, A. Novak
et al.
For many decades, researchers have explored the true potential of human achievement. The expertise field has come a long way since the early works of de Groot (1965) and Chase and Simon (1973). Since then, this inquiry has expanded into the areas of music, science, technology, sport, academia, and art. Despite the vast amount of research to date, the capability of study methodologies to truly capture the nature of expertise remains questionable. Some considerations include (i) the individual bias in the retrospective recall of developmental activities, (ii) the ability to develop ecologically valid tasks, and (iii) difficulties capturing the influence of confounding factors on expertise. This article proposes that expertise research in electronic sports (esports) presents an opportunity to overcome some of these considerations. Esports involves individuals or teams of players that compete in video game competitions via human-computer interaction. Advantages of applying the expert performance approach in esports include (i) developmental activities are objectively tracked and automatically logged online, (ii) the constraints of representative tasks correspond with the real-world environment of esports performance, and (iii) expertise has emerged without the influence of guided systematic training environments. Therefore, this article argues that esports research provides an ideal opportunity to further advance research on the development and assessment of human expertise.
66 citations
en
Psychology, Medicine
Research on 3D Reconstruction in Binocular Stereo Vision Based on Feature Point Matching Method
Xiaobin Lin, Jianxing Wang, Chen Lin
Computer vision, an important branch of computer science and artificial intelligence, has made rapid progress in the past thirty years. Binocular stereo vision is one of its most important components. It closely simulates the stereoscopic perception of three-dimensional objects by human eyes and has been widely used in industrial automation and everyday applications. Binocular stereo vision is a comprehensive technology that draws on optics, physics, image processing, computing, artificial intelligence, and electronics. Camera calibration and feature point matching are the difficult and key problems in binocular stereo vision. This paper discusses 3D reconstruction in binocular stereo vision based on a feature point matching method. The main research covers the principle of the binocular stereo vision system, the study of feature matching algorithms, and the implementation of a 3D reconstruction system. Based on the analysis and study of a series of algorithms, the project puts forward a set of feasible algorithms and ultimately achieves a good three-dimensional reconstruction result.
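A short sketch of the standard depth-from-disparity relation applied once feature points have been matched between the rectified left and right images; the focal length, baseline, and matched coordinates below are made-up values for illustration, not the paper's calibration.

```python
import numpy as np


def triangulate(xl, xr, y, f, baseline, cx, cy):
    """Recover a 3D point from a matched feature pair in a rectified stereo rig:
    disparity d = xl - xr, depth Z = f * B / d,
    then X = (xl - cx) * Z / f and Y = (y - cy) * Z / f."""
    d = xl - xr
    Z = f * baseline / d
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return np.array([X, Y, Z])


# Hypothetical rectified rig: focal length 700 px, baseline 0.12 m, principal point (320, 240).
point = triangulate(xl=350.0, xr=330.0, y=260.0, f=700.0, baseline=0.12, cx=320.0, cy=240.0)
print(point)   # [0.18, 0.12, 4.2] metres in the left-camera frame
```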
Fracture Resistance of Partial Indirect Restorations Made with CAD/CAM Technology. A Systematic Review and Meta-Analysis
Amaia Amesti-Garaizabal, Rubén Agustín-Panadero, Blanca Verdejo-Solá
et al.
Background: The aim of this systematic review and meta-analysis was to determine the fracture resistance and survival rate of partial indirect restorations (inlays, onlays, and overlays) fabricated using computer-aided design and computer-aided manufacturing (CAD-CAM) technology from ceramics, composite resin, resin nanoceramic, or hybrid ceramic, and to analyze the influence of proximal box elevation on fracture resistance. Materials and methods: This systematic review was based on the guidelines of the preferred reporting items for systematic reviews and meta-analyses (PRISMA). An electronic search was conducted in the databases US National Library of Medicine National Institutes of Health (PubMed), Scopus, Web of Science (WOS), and Embase. In vitro trials published during the last 10 years were included in the review. Results: Applying inclusion criteria based on the review's population, intervention, comparison, outcome (PICO) question, 13 articles were selected. Meta-analysis by restoration type estimated the fracture resistance of inlays to be 1923.45 Newtons (N), of onlays to be 1644 N, and of overlays to be 1383.6 N. Meta-analysis by restoration material obtained an estimated fracture resistance for ceramic of 1529.5 N, for composite resin of 1600 N, for resin nanoceramic of 2478.7 N, and for hybrid ceramic of 2108 N. Conclusions: Resin nanoceramic inlays present significantly higher fracture resistance values. Proximal box elevation does not exert any influence on the fracture resistance of indirect restorations.