Results for "cs.AI"

Showing 20 of ~559,747 results · from CrossRef, DOAJ, arXiv

CrossRef Open Access 2026
AI-powered conversational framework for mental health diagnosis

Diwakar Diwakar, Deepa Raj, Arvind Prasad et al.

In the domain of mental health care, conditions such as anxiety, depression, bipolar disorder, and borderline personality disorder (BPD) affect millions globally, often going undetected due to stigma, limited access to specialists, and the complexity of diagnosis. In this study, a hybrid AI framework is proposed that combines conversational intelligence with deep learning-based classification to assist in mental health screening. The system utilizes GPT-3.5 to conduct adaptive, human-like conversations for gathering user responses, which are then analyzed by a fine-tuned DistilRoBERTa model for accurate multi-class classification. Further, a strategic data sampling technique is employed, using t-distributed stochastic neighbor embedding (t-SNE) and Sentence-BERT (Bidirectional Encoder Representations from Transformers) embeddings to select the most representative samples per class from a public Reddit mental health dataset. The classification model achieved high performance, with an accuracy of 96.27% and Area Under the Receiver Operating Characteristic Curve (ROC-AUC) scores consistently above 0.91 across all classes, indicating strong discriminative capability. The system is computationally efficient, with an average inference time of 1.67 milliseconds per sample, making it suitable for real-time applications. This work offers a lightweight, scalable, and explainable solution that can assist professionals or be integrated into virtual mental health assistants.
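The per-class representative-sampling step can be illustrated with a small sketch. This is an assumption-laden stand-in: the function name and the nearest-to-centroid criterion are illustrative, and the paper's actual t-SNE and Sentence-BERT embedding pipeline is replaced here by random vectors.

```python
import numpy as np

def select_representative(embeddings, labels, k):
    """Keep the k samples nearest each class centroid in embedding space.
    A simplified stand-in for the paper's t-SNE + Sentence-BERT selection."""
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = embeddings[idx].mean(axis=0)
        # Distance of each class member to its class centroid.
        dists = np.linalg.norm(embeddings[idx] - centroid, axis=1)
        keep.extend(idx[np.argsort(dists)[:k]].tolist())
    return sorted(keep)

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 8))    # stand-in for sentence embeddings
lab = np.repeat(np.arange(4), 25)  # four balanced classes
subset = select_representative(emb, lab, 5)
print(len(subset))  # 20 samples: five representatives per class
```

Any other notion of "representative" (e.g. k-medoids, coverage of the t-SNE map) would slot into the same loop; only the distance criterion changes.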

CrossRef Open Access 2026
Enhancing obesity risk prediction using ensemble learning and explainable AI: a study on Saudi Health data

Norah S. Alsulami, Muhammad Sher Ramzan, Bander Alzahrani et al.

Background Early prediction of obesity risk is critical for timely intervention to prevent complications. While numerous studies have explored obesity classification, few have combined high predictive accuracy with transparent interpretability using explainable artificial intelligence (XAI). Objective This study develops an interpretable ensemble learning framework for obesity risk prediction. Methods Two ensemble-based machine learning frameworks were implemented: (i) a stacking ensemble integrating four heterogeneous base learners—Extreme Gradient Boosting (XGBoost), Linear Support Vector Machine (SVM), Bagging, and Gradient Boosting Classifier—with a LogisticRegressionCV meta-learner, and (ii) a soft voting ensemble combining XGBoost, SVM, and Bagging classifiers. Both frameworks incorporated a robust preprocessing pipeline comprising missing-value imputation, categorical encoding, feature scaling, and class rebalancing via the Borderline Synthetic Minority Over-sampling Technique (Borderline-SMOTE). Model performance was evaluated on a Saudi Health dataset (n = 3,000) derived from the Arab Teens Lifestyle Study (ATLS), consisting of 19 behavioral, dietary, and anthropometric features. Baseline classifiers (Random Forest, AdaBoost, Bagging, and Light Gradient-Boosting Machine (LightGBM)) were optimized via Optuna for fair comparison. Model interpretability was achieved using Local Interpretable Model-Agnostic Explanations (LIME), providing both global and local insights into feature contributions. Results The soft voting ensemble attained a test accuracy of 97.81%, with weighted precision, recall, and F1-score of 0.9795, 0.9781, and 0.9780, respectively. The stacking ensemble achieved an independent test accuracy of 97.64%, with weighted precision, recall, and F1-score of 0.9779, 0.9764, and 0.9763, respectively. Both ensembles demonstrated excellent generalization with minimal validation-test performance gaps, confirming their robustness and reliability.
Conclusion The proposed explainable ensemble frameworks achieved both high predictive accuracy and interpretability, providing a clinically relevant foundation for applying ensemble learning with XAI in behavioral health modeling.
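The soft-voting idea can be sketched with scikit-learn on synthetic data. This is illustrative rather than a reproduction of the study: hyperparameters, the preprocessing pipeline, and the Borderline-SMOTE step (provided by the separate imbalanced-learn package) are omitted, and GradientBoostingClassifier stands in for XGBoost, which also lives outside scikit-learn.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the 19-feature health dataset.
X, y = make_classification(n_samples=600, n_features=19, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft voting averages each member's predicted class probabilities.
ensemble = VotingClassifier(
    estimators=[
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),  # probability=True is required for soft voting
        ("bag", BaggingClassifier(random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print(round(ensemble.score(X_te, y_te), 3))
```

Switching to the stacking variant would mean replacing `VotingClassifier` with `StackingClassifier` and supplying a `final_estimator` such as `LogisticRegressionCV`, as named in the abstract.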

CrossRef Open Access 2025
ALL-Net: integrating CNN and explainable-AI for enhanced diagnosis and interpretation of acute lymphoblastic leukemia

Abhiram Thiriveedhi, Swetha Ghanta, Sujit Biswas et al.

This article presents a new model, ALL-Net, for the detection of acute lymphoblastic leukemia (ALL) using a custom convolutional neural network (CNN) architecture and explainable artificial intelligence (XAI). A dataset of 3,256 peripheral blood smear (PBS) images spanning four classes—benign (hematogones) and the three ALL subtypes Early B, Pre-B, and Pro-B—is utilized for training and evaluation. The ALL-Net CNN is initially designed and trained on the PBS image dataset, achieving a test accuracy of 97.85%. Data augmentation techniques are then applied to expand the benign class and address the class imbalance challenge. The augmented dataset is used to retrain ALL-Net, resulting in a notable improvement in test accuracy, reaching 99.32%. Beyond accuracy, other evaluation metrics illustrate the potential of ALL-Net, with an average precision of 99.35%, recall of 99.33%, and F1 score of 99.58%. Additionally, an XAI technique, specifically the Local Interpretable Model-Agnostic Explanations (LIME) algorithm, is employed to interpret the model's predictions, providing insights into the decision-making process of the ALL-Net CNN. These findings highlight the effectiveness of CNNs in accurately detecting ALL from PBS images, emphasize the importance of addressing data imbalance through appropriate preprocessing, and demonstrate the use of XAI to open up the black-box nature of deep learning models. The proposed ALL-Net outperformed EfficientNet, MobileNetV3, VGG-19, Xception, InceptionV3, ResNet50V2, VGG-16, and NASNetLarge, trailing only DenseNet201, by a slight margin of 0.5%. Nevertheless, ALL-Net is much less complex than DenseNet201, allowing it to provide faster results. This highlights the value of a customized, streamlined model such as ALL-Net, specifically designed for ALL classification.
The entire source code of our proposed CNN is publicly available at https://github.com/Abhiram014/ALL-Net-Detection-of-ALL-using-CNN-and-XAI.
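The class-balancing step can be sketched generically. The function below is a hypothetical illustration, not the paper's code (see the linked repository for that): it oversamples an under-represented class with simple flips and rotations until it matches a target count.

```python
import numpy as np

def balance_by_augmentation(images, labels, target_class, target_count, rng):
    """Oversample `target_class` with flips/rotations until it reaches
    `target_count` samples. A generic illustration of class balancing;
    the paper's exact augmentations are not specified here."""
    idx = np.where(labels == target_class)[0]
    ops = [np.fliplr, np.flipud,
           lambda im: np.rot90(im, 1), lambda im: np.rot90(im, 3)]
    new_imgs, new_labs = [], []
    while len(idx) + len(new_imgs) < target_count:
        src = images[rng.choice(idx)]          # pick a minority-class image
        new_imgs.append(ops[rng.integers(len(ops))](src))
        new_labs.append(target_class)
    if not new_imgs:                           # already balanced
        return images, labels
    return (np.concatenate([images, np.stack(new_imgs)]),
            np.concatenate([labels, np.array(new_labs)]))

rng = np.random.default_rng(0)
imgs = rng.random((50, 8, 8))                  # toy stand-ins for PBS images
labs = np.array([0] * 10 + [1] * 40)           # class 0 is under-represented
bal_imgs, bal_labs = balance_by_augmentation(imgs, labs, 0, 40, rng)
print(int((bal_labs == 0).sum()))  # 40: class 0 now matches the majority
```

For real micrographs the same idea is usually expressed through a framework's augmentation layers rather than raw array operations, but the balancing logic is the same.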

7 citations en
CrossRef Open Access 2024
Enhancing monitoring of suspicious activities with AI-based and big data fusion

Surapol Vorapatratorn

This study provides an AI-based detection tool for the surveillance of suspicious activities using data fusion. The system leverages time, location, and specific data pertaining to individuals, objects, and vehicles associated with the agency. The training data was obtained from Thailand's military institution. The study compares the efficiency of MySQL and Apache Hive for big data processing. According to the findings, MySQL is better suited for quick data retrieval and low storage capacity, while Hive demonstrates higher scalability for larger datasets. Furthermore, the study explores the practical use of a web application interface, enabling real-time display, analysis, and identification of suspicious activity. The web application, built with NuxtJS and MySQL, includes statistics charts and maps that show the status of suspicious items, cars, and people, as well as data filtering options. The system utilizes machine-learning algorithms to train the suspicious-activity identification model, with the best-performing algorithm being the decision tree, reaching 98.867% classification accuracy.

3 citations en
arXiv Open Access 2024
Agentive Permissions in Multiagent Systems

Qi Shi

This paper proposes to distinguish four forms of agentive permissions in multiagent settings. The main technical results are the complexity analysis of model checking, the semantic undefinability of modalities that capture these forms of permissions through each other, and a complete logical system capturing the interplay between these modalities.

en cs.AI, cs.MA
CrossRef Open Access 2023
A step toward building a unified framework for managing AI bias

Saadia Afzal Rana, Zati Hakim Azizul, Ali Afzal Awan

Integrating artificial intelligence (AI) has transformed living standards. However, AI's progress is being thwarted by concerns about the rise of biases and unfairness. This problem argues strongly for a strategy to tackle potential biases. This article thoroughly evaluates existing knowledge to enhance fairness management, which will serve as a foundation for creating a unified framework to address any bias and its subsequent mitigation throughout the AI development pipeline. We map the software development life cycle (SDLC), machine learning life cycle (MLLC) and cross industry standard process for data mining (CRISP-DM) together to give a general understanding of how the phases of these development processes relate to each other. The map should benefit researchers from multiple technical backgrounds. Biases are categorised into three distinct classes: pre-existing, technical, and emergent bias; mitigation strategies into three: conceptual, empirical, and technical; and fairness management approaches into fairness sampling, learning, and certification. The recommended practices for debiasing and overcoming the challenges encountered further set directions for successfully establishing a unified framework.

15 citations en
arXiv Open Access 2023
Proceedings of the 2023 XCSP3 Competition

Gilles Audemard, Christophe Lecoutre, Emmanuel Lonca

This document represents the proceedings of the 2023 XCSP3 Competition. The results of this competition of constraint solvers were presented at CP'23 (the 29th International Conference on Principles and Practice of Constraint Programming, held in Toronto, Canada, from 27th to 31st August 2023).

en cs.AI
arXiv Open Access 2020
Merging of Ontologies Through Merging of Their Rules

Olegs Verhodubs

Ontology merging is important, but not always effective. The main reason ontology merging is not effective is that it is performed without considering goals. Goals define the way in which ontologies can be merged more effectively. The paper illustrates ontology merging by means of rules generated from these ontologies, which is necessary for further use in expert systems.

en cs.AI, cs.IR
arXiv Open Access 2019
Search Algorithms for Mastermind

Anthony D. Rhodes

This paper presents two novel approaches to solving the classic board game Mastermind: a variant of simulated annealing (SA) and a technique we term maximum expected reduction in consistency (MERC). In addition, we compare search results for these algorithms against two baseline search methods: a random, uninformed search and the method of minimizing maximum query partition sets, as originally developed by Donald Knuth and Peter Norvig.
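The minimax-partition baseline attributed to Knuth and Norvig can be sketched on a reduced board. The board size (4 colors, 3 slots) is chosen only to keep the demo fast; the secret and tie-breaking rule are illustrative choices, not taken from the paper.

```python
from collections import Counter
from itertools import product

COLORS, SLOTS = 4, 3                        # reduced board for a fast demo
CODES = list(product(range(COLORS), repeat=SLOTS))

def feedback(guess, secret):
    """(black, white): exact-position matches, then color-only matches."""
    black = sum(g == s for g, s in zip(guess, secret))
    common = sum(min(guess.count(c), secret.count(c)) for c in range(COLORS))
    return black, common - black

def minimax_guess(candidates):
    """Pick the guess whose worst-case feedback class leaves the fewest
    consistent candidates; ties favor guesses that could be the secret."""
    def score(guess):
        worst = max(Counter(feedback(guess, s) for s in candidates).values())
        return (worst, guess not in candidates)
    return min(CODES, key=score)

secret = (2, 0, 1)
candidates, turns = CODES[:], 0
while True:
    guess = minimax_guess(candidates)
    turns += 1
    fb = feedback(guess, secret)
    if fb == (SLOTS, 0):                    # all pegs black: solved
        break
    # Keep only codes consistent with the observed feedback.
    candidates = [s for s in candidates if feedback(guess, s) == fb]
print(turns)
```

The random and SA approaches compared in the paper differ only in how `minimax_guess` is replaced; the `feedback`-consistency filtering is common to all of them.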

en cs.AI
arXiv Open Access 2019
Proceedings of the 2nd Symposium on Problem-solving, Creativity and Spatial Reasoning in Cognitive Systems, ProSocrates 2017

Ana-Maria Olteteanu, Zoe Falomir

This book contains the accepted papers of the ProSocrates 2017 Symposium: Problem-solving, Creativity and Spatial Reasoning in Cognitive Systems. The ProSocrates 2017 symposium was held at the Hansewissenschaftkolleg (HWK) of Advanced Studies in Delmenhorst, 20-21 July 2017. This was the second edition of the symposium, which aims to bring together researchers interested in spatial reasoning, problem solving, and creativity.

en cs.AI
arXiv Open Access 2018
Theory of Machine Networks: A Case Study

Rooz Mahdavian, Richard Diehl Martinez

We propose a simplification of the Theory-of-Mind Network architecture, which focuses on modeling complex, deterministic machines as a proxy for modeling nondeterministic, conscious entities. We then validate this architecture in the context of understanding engines, which, we argue, meet the required internal and external complexity to yield meaningful abstractions.

en cs.AI
CrossRef Open Access 2016
The solution to AI, what real researchers do, and expectations for CS classrooms

John Langford, Bertrand Meyer, Mark Guzdial

The Communications Web site, http://cacm.acm.org, features more than a dozen bloggers in the BLOG@CACM community. In each issue of Communications, we'll publish selected posts or excerpts. John Langford on AlphaGo, Bertrand Meyer on Research as Research, and Mark Guzdial on correlating CS classes with laboratory results.

arXiv Open Access 2016
Negative Learning Rates and P-Learning

Devon Merrill

We present a method of training a differentiable function approximator for a regression task using negative examples. We effect this training using negative learning rates. We also show how this method can be used to perform direct policy learning in a reinforcement learning setting.
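The core mechanic can be shown on a toy two-weight linear model. This sketch is an illustration of the negative-learning-rate idea under assumed settings (squared error, hand-picked positive and negative examples), not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # two-weight linear model: prediction = w @ x

def grad(w, x, y):
    """Gradient of the squared error (w @ x - y)**2 with respect to w."""
    return 2.0 * (w @ x - y) * x

pos = [(np.array([1.0, 0.0]), 1.0)]  # example the model should fit
neg = [(np.array([0.0, 1.0]), 1.0)]  # negative example: an output to avoid

lr = 0.1
for _ in range(100):
    for x, y in pos:
        w -= lr * grad(w, x, y)    # ordinary gradient descent on positives
    for x, y in neg:
        w -= -lr * grad(w, x, y)   # negative learning rate: ascend the loss

print(round(float(w[0]), 3))  # first weight fits the positive target (~1.0)
```

Note the asymmetry: the positive example contracts toward its target while the negative example's error is driven up without bound, which is why practical uses of negative learning rates pair them with regularization or a stopping criterion.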

en cs.AI, cs.LG

Page 7 of 27,988