A Multi-Graph Neural Network attention fusion framework for emotion-aware subgraph anomaly detection in social media fake news propagation
G. Victor Daniel, Chandrasekaran K., Venkatesan M.
et al.
Fake news on social media threatens civic trust and public safety, with emotionally charged content accelerating its spread through retweets, replies, and shares. This study addresses the challenge of detecting anomalous propagation subgraphs that reflect coordinated misinformation campaigns. We propose the Multi-Graph Neural Network Attention-based Propagation Learning for Emotion-Aware Anomaly Detection framework (hereafter referred to as MAPLE). This framework integrates sentiment features from users’ historical posts with network structures, combining multiple Graph Neural Networks (Graph Convolutional Network, Graph Attention Network, and GraphSAGE) through an attention-based fusion mechanism. Subgraph embeddings are evaluated using a One-Class Support Vector Machine for anomaly detection. Theoretical analyses establish guarantees on mutual information preservation, variance reduction, anomaly margin amplification, and entropy maximization. Experiments on the PolitiFact and GossipCop datasets show that MAPLE consistently outperforms state-of-the-art baselines, improving F1-scores by 43% and 2.17% respectively, while maintaining robustness across datasets. Unlike prior works that treat structural or sentiment cues in isolation, MAPLE provides the first unified multi-Graph Neural Network fusion framework with emotional context and theoretical underpinnings for subgraph anomaly detection.
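As a rough, hedged illustration of the fusion idea (the layer sizes, pooling choice, and PyTorch Geometric building blocks below are our assumptions, not the authors' exact architecture), a minimal sketch might look like:

```python
# Minimal sketch: attention-based fusion over three GNN branches, with
# pooled subgraph embeddings scored by a One-Class SVM. Illustrative only.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, GATConv, SAGEConv, global_mean_pool
from sklearn.svm import OneClassSVM

class AttentionFusionGNN(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gcn = GCNConv(in_dim, hid_dim)
        self.gat = GATConv(in_dim, hid_dim, heads=1)
        self.sage = SAGEConv(in_dim, hid_dim)
        self.att = nn.Linear(hid_dim, 1)   # scores each branch embedding

    def forward(self, x, edge_index, batch):
        # One node embedding per branch, stacked: (N, 3, hid_dim).
        h = torch.stack([
            torch.relu(self.gcn(x, edge_index)),
            torch.relu(self.gat(x, edge_index)),
            torch.relu(self.sage(x, edge_index)),
        ], dim=1)
        w = torch.softmax(self.att(h), dim=1)   # (N, 3, 1) branch weights
        fused = (w * h).sum(dim=1)              # (N, hid_dim)
        return global_mean_pool(fused, batch)   # one vector per subgraph

# Downstream anomaly scoring on pooled subgraph embeddings:
# emb_train = model(x, edge_index, batch).detach().numpy()
# ocsvm = OneClassSVM(nu=0.1, kernel="rbf").fit(emb_train)
# scores = ocsvm.decision_function(emb_test)    # low scores = anomalous
```

The attention weights let each node favor whichever GNN view (spectral, attentional, or sampling-based) is most informative before subgraph pooling.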
Electronic computers. Computer science, Science
Bio-inspired cognitive robotics vs. embodied AI for socially acceptable, civilized robots
Pietro Morasso
Although cognitive robotics is still a work in progress, the trend is to “free” robots from the assembly lines of the third industrial revolution and allow them to “enter human society” in large numbers and many forms, as forecasted by Industry 4.0 and beyond. Cognitive robots are expected to be intelligent, designed to learn from experience and adapt to real-world situations rather than being preprogrammed with specific actions for all possible stimuli and environmental conditions. Moreover, such robots are supposed to interact closely with human partners, cooperating with them, which implies that robot cognition must incorporate, in a deep sense, ethical principles and develop, in conflict situations, decision-making capabilities that can be perceived as wise. Intelligence (true vs. false), ethics (right vs. wrong), and wisdom (good vs. bad) are interrelated but independent features of human behavior, and a similar framework should also characterize the behavior of cognitive agents integrated in human society. The working hypothesis formulated in this paper is that the propensity to consolidate ethically guided behavior, possibly evolving into some kind of wisdom, rests on a cognitive architecture based on bio-inspired embodied cognition, educated through development and social interaction. In contrast, the problem with current AI foundation models applied to robotics (embodied AI, EAI) is that, although they can be super-intelligent, they are intrinsically disembodied and ethically agnostic, independent of how much information was absorbed during training. We suggest that the proposed alternative may facilitate social acceptance and thus make such robots civilized.
Mechanical engineering and machinery, Electronic computers. Computer science
Designing Computational Tools for Exploring Causal Relationships in Qualitative Data
Han Meng, Qiuyuan Lyu, Peinuan Qin
et al.
Exploring causal relationships for qualitative data analysis in HCI and social science research enables the understanding of user needs and theory building. However, current computational tools primarily characterize and categorize qualitative data; the few systems that analyze causal relationships either inadequately consider context, lack credibility, or produce overly complex outputs. We first conducted a formative study with 15 participants interested in using computational tools for exploring causal relationships in qualitative data to understand their needs and derive design guidelines. Based on these findings, we designed and implemented QualCausal, a system that extracts and illustrates causal relationships through interactive causal network construction and multi-view visualization. A feedback study (n = 15) revealed that participants valued our system for reducing the analytical burden and providing cognitive scaffolding, yet they also had to negotiate how such systems fit within their established research paradigms, practices, and habits. We discuss broader implications for designing computational tools that support qualitative data analysis.
An Evaluation of the Quality of Knowledge Obtained through Artificial Intelligence: The Case of ChatGPT and DeepSeek in the Context of Epistemological Hegemony
Mevlüt Altıntop
This study offers an evaluation of the quality of knowledge obtained through artificial intelligence (AI). In this context, it addresses the production processes of AI-derived knowledge, its role within science, its advantages and disadvantages, its social reception, and its ideological, hegemonic, and ethical dimensions. The evaluations question the objectivity, transparency, scientificity, truthfulness, equality, and ethics of knowledge obtained through AI. Put the other way around, the study attempts to reach meaningful conclusions about whether AI-derived knowledge is biased, ideological, domineering, antidemocratic, and imperial in form. The study uses a thematic (semantic/latent) technique within the framework of content analysis. This process was carried out by posing questions to ChatGPT, a popular AI application, and DeepSeek, an application that has recently made a name for itself, about the source and quality of the knowledge they offer their users. The answers given by both AI applications indicate that all AI applications with the same mode of operation transmit knowledge that is biased, ideological, subjective, inequitable, unethical, and at times unlawful. This situation is a necessary consequence of the epistemologically hegemonic mode of operation of AI technology, produced by the Western-centered scientific understanding shaped by the paradigm of modernity.
Electronic computers. Computer science, Technology (General)
CUBIC-Learn: A Reinforcement Learning Approach to CUBIC Congestion Control
Ehsan Abedini, Mohsen Nickray
Managing congestion effectively enables reliable and fast data transfer over networks. CUBIC delivers reliable results under normal circumstances but cannot adapt effectively to changing network scenarios. We introduce CUBIC-Learn, an RL approach for improving congestion control in CUBIC. The central idea is to use a Q-learning algorithm to adjust congestion window thresholds based on current data on packet loss, throughput, and latency. Simulations demonstrate more efficient and reliable congestion control when using CUBIC-Learn compared to standard CUBIC. CUBIC-Learn achieves a 47% reduction in packet loss, over a 59% increase in bandwidth utilization, approximately a 28% decrease in retransmissions, and 47% lower latency. In addition, CUBIC-Learn shows significant improvements in congestion window (cwnd) growth behavior, fairness among competing flows, and stability under heterogeneous traffic and network scenarios, including gigabit-scale bandwidth conditions. Statistical analysis further confirms the robustness of these gains, while the method introduces no additional computational overhead. Overall, CUBIC-Learn performs better than PCC, Reno, Tahoe, NewReno, and BBRv3 in most metrics. These findings suggest that RL can markedly improve congestion control in high-speed networks. [JJCIT 2025; 11(4): 466-483]
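A minimal tabular Q-learning sketch of the idea, assuming a discretized (loss, throughput, latency) state and three threshold-adjustment actions; the bin edges and reward shaping below are illustrative, not taken from the paper:

```python
# Toy Q-learning loop for nudging a CUBIC-style cwnd threshold.
import random
from collections import defaultdict

ACTIONS = (-1, 0, +1)            # lower / keep / raise the threshold step
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = defaultdict(lambda: [0.0, 0.0, 0.0])

def discretize(loss, thr, rtt):
    # Coarse bins keep the Q-table small; edges are made up for the sketch.
    return (min(int(loss * 100), 9), min(int(thr / 100), 9), min(int(rtt / 50), 9))

def reward(loss, thr, rtt):
    # Favor throughput, punish loss and delay.
    return thr / 1000.0 - 10.0 * loss - rtt / 1000.0

def choose(state):
    if random.random() < EPS:     # epsilon-greedy exploration
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda i: Q[state][i])

def update(state, a_idx, r, next_state):
    # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state][a_idx] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][a_idx])
```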
Information technology, Electronic computers. Computer science
Natural Language Processing-Based Financial Time Series Forecasting: Utilizing Sentiment Analysis for Improved Stock Price Prediction
Albert Ntumba Nkongolo, Yae Olatoundji Gaba, Kafunda Katalay Pierre
et al.
This study explores the application of natural language processing (NLP) techniques to financial time series forecasting, specifically stock price prediction. Historical stock price data and textual data from financial news articles and social media were collected, and TextBlob was used to derive sentiment indices from the textual data. A hybrid model combining NLP techniques with LSTM (Long Short-Term Memory) neural networks was developed: textual data were preprocessed and analyzed via sentiment analysis with TextBlob, and the resulting sentiment indices were integrated with historical stock prices for forecasting with the LSTM. The LSTM model achieved 89.6% precision and outperformed traditional time series forecasting models in accuracy and reliability. The results demonstrate that incorporating NLP-derived sentiment indices significantly enhances the predictive performance of stock price forecasting models, highlighting the potential of sentiment analysis with TextBlob, in conjunction with LSTM neural networks, to improve the accuracy of financial time series forecasting.
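A minimal sketch of this kind of pipeline, assuming TextBlob polarity as the daily sentiment index and a small PyTorch LSTM (the window length and layer sizes are our assumptions, not the paper's):

```python
# Sentiment-plus-price forecasting sketch: TextBlob polarity concatenated
# with closing prices, fed to an LSTM that predicts the next-day price.
import numpy as np
import torch
import torch.nn as nn
from textblob import TextBlob

def daily_sentiment(texts):
    # Mean TextBlob polarity in [-1, 1] over the day's headlines/posts.
    return float(np.mean([TextBlob(t).sentiment.polarity for t in texts]))

class PriceLSTM(nn.Module):
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, window, 2) = [price, sentiment]
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict from the last time step

# window = np.stack([prices, sentiments], axis=-1)            # shape (T, 2)
# x = torch.tensor(window[None, -30:], dtype=torch.float32)   # last 30 days
# next_price = PriceLSTM()(x)
```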
Information technology, Electronic computers. Computer science
FleXo: a flexible passive exoskeleton optimized for reducing lower back strain in manual handling tasks
Federico Allione, Maria Lazzaroni, Antonios E. Gkikakis
et al.
Musculoskeletal disorders, particularly low back pain, are among the most common occupational health issues globally, causing significant personal suffering and economic burdens. Workers performing repetitive manual material handling tasks are especially at risk. FleXo, a lightweight (1.35 kg), flexible, ergonomic, and passive back-support exoskeleton, is intended to reduce lower back strain during lifting tasks while allowing full freedom of movement for activities such as walking, sitting, or side bending. FleXo’s design results from an advanced multi-objective design optimization approach that balances functionality and user comfort. In this work, FleXo is validated through user feedback in a series of relevant repetitive tasks, demonstrating that it can reduce the perceived physical effort during lifting tasks, enhance user satisfaction, improve employee wellbeing, promote workplace safety, decrease injuries, and lower the costs (both to society and to companies) associated with lower back pain and injury.
Mechanical engineering and machinery, Electronic computers. Computer science
Integrating LLMs for Grading and Appeal Resolution in Computer Science Education
I. Aytutuldu, O. Yol, Y. S. Akgul
This study explores the integration of Large Language Models (LLMs) into the grading and appeal resolution process in computer science education. We introduce AI-PAT, an AI-powered assessment tool that leverages LLMs to evaluate computer science exams, generate feedback, and address student appeals. AI-PAT was used to assess over 850 exam submissions and handle 185 appeal cases. Our multi-model comparison (ChatGPT, Gemini) reveals strong correlations between model outputs, though significant variability persists depending on configuration and prompt design. Human graders, while internally consistent, showed notable inter-rater disagreement, further highlighting subjectivity in manual evaluation. The appeal process led to grade changes in 74% of cases, indicating the need for continued refinement of AI evaluation strategies. While students appreciated the speed and detail of AI feedback, survey responses revealed trust and fairness concerns. We conclude that AI-PAT offers scalable benefits for formative assessment and feedback, but must be accompanied by transparent grading rubrics, human oversight, and appeal mechanisms to ensure equitable outcomes.
An RBF-based method for computational electromagnetics with reduced numerical dispersion
Andrej Kolar-Požun, Gregor Kosec
The finite difference time domain method is one of the simplest and most popular methods in computational electromagnetics. This work considers two possible ways of generalising it to a meshless setting by employing local radial basis function interpolation. The resulting methods remain fully explicit and are convergent if properly chosen hyperviscosity terms are added to the update equations. We demonstrate that increasing the stencil size of the approximation has a desirable effect on numerical dispersion. Furthermore, our proposed methods can exhibit a decreased dispersion anisotropy compared to the finite difference time domain method.
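A hedged sketch of the kind of update the abstract describes, in common RBF-FD notation (the symbols and the exact placement of the hyperviscosity term are assumptions, not necessarily the paper's formulation):

```latex
% In RBF-FD, a spatial operator \mathcal{L} at node x_i is approximated
% by a weighted sum over a local stencil S(i):
\[
  (\mathcal{L}u)(x_i) \approx \sum_{j \in S(i)} w_{ij}\, u(x_j),
\]
% and the explicit update is stabilized by a hyperviscosity term:
\[
  u_i^{\,n+1} = u_i^{\,n} + \Delta t \left( \sum_{j \in S(i)} w_{ij}\, u_j^{\,n}
  + \gamma\, (\Delta_h^{k} u)_i^{\,n} \right),
\]
% where \Delta_h^{k} is a high-order discrete Laplacian assembled from the
% same local RBF interpolation, and the coefficient \gamma is chosen
% (in sign and magnitude) to damp spurious modes without affecting accuracy.
```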
physics.comp-ph, math.NA
ACM COMPUTE 2025 Best Practices Track Proceedings
Ritwik Murali, Mrityunjay Kumar
COMPUTE is an annual Indian conference supported by ACM India and iSIGCSE. The focus of COMPUTE is to improve the quality of computing education in the country by providing a platform for academicians and researchers to interact and share best practices in teaching, learning, and education in general. The Best Practices Track of COMPUTE 2025 invited Computer Science Educators across the country to submit an experience report for the best practices under multiple categories: 1) Novel classroom activities, 2) Imaginative assignments that promote creativity and problem-solving, 3) Diverse pedagogical approaches (e.g., flipped classrooms, peer teaching, project-based learning), 4) Designing AI-resistant or AI-integrated assessment questions, and 5) Teaching CS to students from other disciplines (e.g., business, humanities, engineering). These proceedings contain papers selected from these submissions for presentation at the conference, as well as a report (written by the editors) from the two best practices sessions where these were presented.
A Computer Vision Pipeline for Individual-Level Behavior Analysis: Benchmarking on the Edinburgh Pig Dataset
Haiyu Yang, Enhong Liu, Jennifer Sun
et al.
Animal behavior analysis plays a crucial role in understanding animal welfare, health status, and productivity in agricultural settings. However, traditional manual observation methods are time-consuming, subjective, and limited in scalability. We present a modular pipeline that leverages open-source, state-of-the-art computer vision techniques to automate animal behavior analysis in a group housing environment. Our approach combines models for zero-shot object detection, motion-aware tracking and segmentation, and advanced feature extraction using vision transformers for robust behavior recognition. The pipeline addresses challenges including animal occlusions and group housing scenarios, as demonstrated in indoor pig monitoring. We validated our system on the Edinburgh Pig Behavior Video Dataset for multiple behavioral tasks. Our temporal model achieved 94.2% overall accuracy, a 21.2 percentage point improvement over existing methods. The pipeline demonstrated robust tracking with a 93.3% identity preservation score and 89.3% object detection precision. The modular design suggests potential for adaptation to other contexts, though further validation across species would be required. The open-source implementation provides a scalable solution for behavior monitoring, contributing to precision pig farming and welfare assessment through automated, objective, and continuous analysis.
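A high-level skeleton of how such a modular pipeline composes; the stage callables below are hypothetical placeholders (the abstract does not name the exact models), meant only to show how detection, tracking, feature extraction, and temporal classification fit together per frame:

```python
# Hypothetical pipeline skeleton: detector, tracker, encoder, and
# temporal_model are injected stand-ins for the unnamed components.
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: int
    features: list = field(default_factory=list)   # one embedding per frame

def analyze(frames, detector, tracker, encoder, temporal_model, window=16):
    tracks = {}
    for frame in frames:
        boxes = detector(frame, prompt="pig")          # zero-shot detection
        for tid, box in tracker.update(frame, boxes):  # motion-aware ID assignment
            crop = frame.crop(box)
            tracks.setdefault(tid, Track(tid)).features.append(encoder(crop))
    # Classify behavior per individual over sliding temporal windows.
    return {
        tid: [temporal_model(t.features[i:i + window])
              for i in range(0, len(t.features) - window + 1, window)]
        for tid, t in tracks.items()
    }
```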
The Impact of International Collaborations with Highly Publishing Countries in Computer Science
Alberto Gomez Espes, Michael Faerber, Adam Jatowt
This paper analyzes international collaborations in Computer Science, focusing on three major players: China, the European Union, and the United States. Drawing from a comprehensive literature review, we examine collaboration patterns, research impact, retraction rates, and the role of the Development Index in shaping research outcomes. Our findings show that while China, the EU, and the US lead global research efforts, other regions are narrowing the gap in publication volume. Collaborations involving these key regions tend to have lower retraction rates, reflecting stronger adherence to scientific standards. We also find that countries with a Very High Development Index contribute to research with higher citation rates and fewer retractions. Overall, this study highlights the value of international collaboration and the importance of inclusive, ethical practices in advancing global research in Computer Science.
Special issue “Biomass‐based industry: Towards a sustainable development”
Fernando Israel Gómez‐Castro, Arturo González‐Quiroga, Alpaslan Atmanli
Engineering (General). Civil engineering (General), Electronic computers. Computer science
Generation of Contributions of Scientific Paper Based on Multi-step Sentence Selecting-and-Rewriting Model
XU Xianzhe, CHEN Jingqiang
There has been a significant surge in the number of scientific papers published in recent years, which makes it challenging for researchers to keep up with the latest advancements in their fields. To stay updated, researchers often rely on reading the contributions section of papers, which serves as a concise summary of the key research findings. However, it is not uncommon for authors to inadequately present the innovative content of their articles, making it difficult for readers to quickly grasp the essence of the research. To address this issue, we propose a novel task of contribution summarization to automatically generate contribution summaries of scientific papers. One of the challenges of this task is the lack of relevant datasets. Therefore, we construct a scientific contribution summarization corpus (SCSC). Another issue lies in the fact that currently available abstractive or extractive models tend to suffer from either excessive redundancy or a lack of coherence between sentences. To meet the demand of generating concise and high-quality contribution sentences, we present MSSRsum, a multi-step sentence selecting-and-rewriting model. Experiments show that the proposed model outperforms baselines on the SCSC and arXiv datasets.
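As a loose, hypothetical sketch of what a multi-step selecting-and-rewriting loop can look like (the abstract does not specify MSSRsum's components, so `selector` and `rewriter` below are placeholders):

```python
# Select-then-rewrite skeleton: repeatedly pick a contribution-like
# sentence, then rewrite it into a concise claim, conditioning each step
# on what has already been produced to limit redundancy.
def summarize_contributions(sentences, selector, rewriter, max_steps=3):
    chosen, summary = [], []
    for _ in range(max_steps):
        idx = selector(sentences, context=chosen)   # most salient remaining sentence
        if idx is None:
            break
        chosen.append(sentences.pop(idx))
        summary.append(rewriter(chosen[-1], context=summary))
    return summary
```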
Computer software, Technology (General)
The CTSkills App -- Measuring Problem Decomposition Skills of Students in Computational Thinking
Dorit Assaf, Giorgia Adorni, Elia Lutz
et al.
This paper addresses the incorporation of problem decomposition skills as an important component of computational thinking (CT) in K-12 computer science (CS) education. Despite the growing integration of CS in schools, there is a lack of consensus on the precise definition of CT in general and decomposition in particular. Although decomposition is commonly referred to as the starting point of (computational) problem-solving, algorithmic solution formulation often receives more attention in the classroom, while decomposition remains rather unexplored. This study presents "CTSkills", a web-based skill assessment tool developed to measure students' problem decomposition skills. With the data collected from 75 students in grades 4-9, this research aims to contribute to a baseline of students' decomposition proficiency in compulsory education. Furthermore, a thorough understanding of a given problem is becoming increasingly important with the advancement of generative artificial intelligence (AI) tools that can effectively support the process of formulating algorithms. This study highlights the importance of problem decomposition as a key skill in K-12 CS education for fostering more adept problem solvers.
ReCon: Reconfiguring Analog Rydberg Atom Quantum Computers for Quantum Generative Adversarial Networks
Nicholas S. DiBrita, Daniel Leeds, Yuqian Huo
et al.
Quantum computing has shown theoretical promise of speedup in several machine learning tasks, including generative tasks using generative adversarial networks (GANs). While quantum computers have been implemented with different types of technologies, recently, analog Rydberg atom quantum computers have been demonstrated to have desirable properties such as reconfigurable qubit (quantum bit) positions and multi-qubit operations. To leverage the properties of this technology, we propose ReCon, the first work to implement quantum GANs on analog Rydberg atom quantum computers. Our evaluation using simulations and real-computer executions shows 33% better quality (measured using the Fréchet Inception Distance, FID) in generated images than the state-of-the-art technique implemented on superconducting-qubit technology.
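For reference, FID compares the Gaussian statistics of Inception activations for real and generated images; a standard-formula sketch, independent of the quantum hardware (activation extraction omitted):

```python
# FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}), computed from
# two sets of Inception activations (rows = samples, columns = features).
import numpy as np
from scipy.linalg import sqrtm

def fid(acts_real, acts_fake):
    mu1, mu2 = acts_real.mean(0), acts_fake.mean(0)
    s1 = np.cov(acts_real, rowvar=False)
    s2 = np.cov(acts_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):    # sqrtm can return tiny imaginary parts
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2.0 * covmean))
```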
Mental health of computing professionals and students: A systematic literature review
Alicia Julia Wilson Takaoka, Kshitij Sharma
The intersection of mental health and computing education is under-examined. In this systematic literature review, we evaluate the state of the art of research on mental health and well-being interventions, assessments, and concerns such as anxiety and depression in computer science and computing education. The studies evaluated span the computing education pipeline from introductory to PhD courses and reveal some commonalities contributing to the high rates of anxiety and depression reported among those studied. In addition, interventions designed to address mental health topics often revolved around self-guidance. Based on our review of the literature, we recommend increasing sample sizes and focusing on the design and development of tools and interventions specifically designed for computing professionals and students.
Foveated rendering: A state-of-the-art survey
Lili Wang, Xuehuai Shi, Yi Liu
Recently, virtual reality (VR) technology has been widely used in medical, military, manufacturing, entertainment, and other fields. These applications must simulate different complex material surfaces, various dynamic objects, and complex physical phenomena, increasing the complexity of VR scenes. Current computing devices cannot efficiently render these complex scenes in real time, and delayed rendering makes the content observed by the user inconsistent with the user’s interaction, causing discomfort. Foveated rendering is a promising technique that can accelerate rendering. It takes advantage of human eyes’ inherent features and renders different regions with different qualities without sacrificing perceived visual quality. Foveated rendering research has a history of 31 years and is mainly focused on solving the following three problems. The first is to apply perceptual models of the human visual system into foveated rendering. The second is to render the image with different qualities according to foveation principles. The third is to integrate foveated rendering into existing rendering paradigms to improve rendering performance. In this survey, we review foveated rendering research from 1990 to 2021. We first revisit the visual perceptual models related to foveated rendering. Subsequently, we propose a new foveated rendering taxonomy and then classify and review the research on this basis. Finally, we discuss potential opportunities and open questions in the foveated rendering field. We anticipate that this survey will provide new researchers with a high-level overview of the state-of-the-art in this field, furnish experts with up-to-date information, and offer ideas alongside a framework to VR display software and hardware designers and engineers.
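As a concrete illustration of the first problem (perceptual models), one widely used model lets the minimum angle of resolution (MAR) grow roughly linearly with eccentricity and scales rendering quality accordingly; a hedged sketch with illustrative constants, not values from any specific surveyed paper:

```python
# Linear MAR falloff with eccentricity, mapped to a relative shading rate:
# the fovea (eccentricity 0) gets full quality, the periphery less.
def mar_arcmin(ecc_deg, mar0=1.0, slope=0.3):
    # Minimum angle of resolution grows roughly linearly with eccentricity.
    return mar0 + slope * ecc_deg

def relative_shading_rate(ecc_deg):
    # Quality can drop in proportion to lost acuity.
    return mar_arcmin(0.0) / mar_arcmin(ecc_deg)

# e.g. at 30 degrees eccentricity, about 1/10 of the foveal shading rate:
# print(relative_shading_rate(30.0))
```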
Electronic computers. Computer science
Solving the coupled Schrödinger-Korteweg-de Vries system by modified variational iteration method with genetic algorithm
Ali A. Mustafa, Waleed Al-Hayani
A system of nonlinear partial differential equations was solved using a modified variational iteration method (MVIM) combined with a genetic algorithm. The modified method introduces an auxiliary parameter (p) into the correction functional to ensure convergence and improve the outcomes. Before applying the modification, the traditional variational iteration method (VIM) was used first. The method was applied to numerically solve the system of Schrödinger-KdV equations. Comparison of the two methods, along with some previous approaches, shows that the new algorithm converges quickly and yields more accurate solutions. Additionally, the method can be easily applied to various linear and nonlinear differential equations.
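For orientation, the standard VIM correction functional and the auxiliary-parameter modification, in common notation that may differ in detail from the paper's:

```latex
% Classical VIM: L is the linear part, N the nonlinear part, g the source,
% \lambda the Lagrange multiplier, and \tilde{u}_n the restricted variation.
\[
  u_{n+1}(t) = u_n(t) + \int_0^t \lambda(\tau)\,
  \big[ L u_n(\tau) + N \tilde{u}_n(\tau) - g(\tau) \big]\, d\tau.
\]
% MVIM inserts an auxiliary convergence-control parameter p (here tuned by
% the genetic algorithm) into the correction functional:
\[
  u_{n+1}(t; p) = u_n(t; p) + p \int_0^t \lambda(\tau)\,
  \big[ L u_n(\tau; p) + N \tilde{u}_n(\tau; p) - g(\tau) \big]\, d\tau.
\]
```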
Electronic computers. Computer science
Automated Optimization-Based Deep Learning Models for Image Classification Tasks
Daudi Mashauri Migayo, Shubi Kaijage, Stephen Swetala
et al.
Applying deep learning models requires design and optimization when solving multifaceted artificial intelligence tasks. Optimization relies on human expertise and is achieved only with great effort. The current literature concentrates on automating design; optimization needs more attention. Similarly, most existing optimization libraries focus on other machine learning tasks rather than image classification. For this reason, an automated optimization scheme for deep learning models for image classification tasks is proposed in this paper. A sequential model-based optimization algorithm was used to implement the proposed method. Four deep learning models, a transformer-based model, and standard datasets for image classification challenges were employed in the experiments. Through empirical evaluations, this paper demonstrates that the proposed scheme improves the performance of deep learning models. Specifically, for a Visual Geometry Group network (VGG-16), accuracy was raised from 0.937 to 0.983, a 73% relative drop in error rate, within an hour of automated optimization. Similarly, training-related parameter values are proposed to improve the performance of deep learning models. The scheme can be extended to automate the optimization of transformer-based models. The insights from this study may help make the building and optimization of DL models fully accessible, even to amateurs.
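A minimal sketch of a sequential model-based optimization loop over training hyperparameters, assuming scikit-optimize and a placeholder `train_and_validate` routine (the search space is illustrative, not the paper's):

```python
# Gaussian-process-based SMBO over learning rate and batch size.
from skopt import gp_minimize
from skopt.space import Real, Integer

space = [
    Real(1e-5, 1e-1, prior="log-uniform", name="learning_rate"),
    Integer(16, 256, name="batch_size"),
]

def objective(params):
    lr, batch_size = params
    # train_and_validate is a placeholder for the user's training code:
    # fit the model (e.g. VGG-16) briefly and return validation accuracy.
    val_acc = train_and_validate(lr=lr, batch_size=int(batch_size))
    return -val_acc                  # gp_minimize minimizes the objective

# result = gp_minimize(objective, space, n_calls=30, random_state=0)
# best_lr, best_batch = result.x
```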
Electronic computers. Computer science