Results for "Electronic computers. Computer science"

Showing 20 of ~18,049,160 results · from DOAJ, Semantic Scholar, arXiv, CrossRef

DOAJ Open Access 2026
Large language model bias auditing for periodontal diagnosis using an ambiguity-probe methodology: a pilot study

Teerachate Nantakeeratipat

Background: Large Language Models (LLMs) in healthcare hold immense promise yet carry the risk of perpetuating social biases. While artificial intelligence (AI) fairness is a growing concern, a gap exists in understanding how these models perform under conditions of clinical ambiguity, a common feature of real-world practice.
Methods: We conducted a study using an ambiguity-probe methodology with a set of 42 sociodemographic personas and 15 clinical vignettes based on the 2018 classification of periodontal diseases. Ten were clear-cut scenarios with established ground truths, while five were intentionally ambiguous. OpenAI's GPT-4o and Google's Gemini 2.5 Pro were prompted to provide periodontal stage and grade assessments using 630 vignette-persona combinations per model.
Results: In clear-cut scenarios, GPT-4o demonstrated significantly higher combined (stage and grade) accuracy (70.5%) than Gemini Pro (33.3%). However, a robust fairness analysis using cumulative link models with false discovery rate correction revealed no statistically significant sociodemographic bias in either model. This finding held across both clear-cut and ambiguous clinical scenarios.
Conclusion: To our knowledge, this is among the first studies to use simulated clinical ambiguity to reveal the distinct ethical fingerprints of LLMs in a dental context. While LLM performance gaps exist, our analysis decouples accuracy from fairness, demonstrating that both models maintain sociodemographic neutrality. We identify the observed errors not as bias but as diagnostic boundary instability. This highlights a critical need for future research to differentiate between these two distinct types of model failure to build genuinely reliable AI.
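The false discovery rate correction used in the fairness analysis above is, in its most common Benjamini-Hochberg form, straightforward to sketch. The p-values below are invented for illustration and are not the study's data.

```python
# Benjamini-Hochberg false discovery rate (FDR) correction — a minimal
# illustration of this kind of multiple-testing control, not the study's code.

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean list: True where the hypothesis is rejected."""
    m = len(p_values)
    # Sort p-values while remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    max_rank = -1
    # Find the largest rank k whose p-value satisfies p_(k) <= (k/m) * alpha.
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            max_rank = rank
    # Reject every hypothesis up to (and including) that rank.
    for rank, idx in enumerate(order, start=1):
        if rank <= max_rank:
            reject[idx] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals))
```

Note that the step-up rule rejects all hypotheses below the largest passing rank, which is less conservative than a per-test Bonferroni cutoff.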

Medicine, Public aspects of medicine
DOAJ Open Access 2026
FIR-SDE: fast image restoration via mean-reverting stochastic differential equation

Xin Shi, Zhengchao Xu, Sunan Ge et al.

Abstract In computer vision, zero-shot image restoration—restoring degraded images without large-scale paired training data—has emerged as a pivotal technique for scenarios where data is limited or paired training data is difficult to obtain. However, existing methods face two key limitations: data consistency is hard to preserve for out-of-domain data, and degradation process alignment is difficult when the degradation mechanism is not mathematically predetermined. To address these issues, this paper presents FIR-SDE, a novel zero-shot image restoration method. Traditional generation-oriented diffusion models (designed for image creation) are replaced with restoration-oriented models (specialized for degradation repair), expanding the range of effectively restorable images. To mitigate the noise offset (discrepancies between real and model-simulated degradation) and to enhance alignment, a multi-step optimization strategy is employed that evaluates the distance between real and simulated degraded images via their frequency-domain distributions. Experiments were conducted on two image restoration tasks (deraining and inpainting) using three public datasets (AFHQ-dog, CelebA, and FFHQ), with Gaussian blur and motion blur superimposed as noise offsets. Results demonstrate that FIR-SDE outperforms competitive methods in restoration quality and noise resistance. By eliminating data-space constraints and exhibiting robustness against noise offsets, FIR-SDE offers a more flexible and efficient solution that broadens the practical applicability of zero-shot image restoration.

Electronic computers. Computer science, Information technology
DOAJ Open Access 2026
Joint Inference of Image Enhancement and Object Detection via Cross-Domain Fusion Transformer

Bingxun Zhao, Yuan Chen

Underwater vision is fundamental to ocean exploration, yet it is frequently impaired by underwater degradation, including low contrast, color distortion, and blur, presenting significant challenges for underwater object detection (UOD). Most existing methods employ underwater image enhancement as a preprocessing step to improve visual quality prior to detection. However, image enhancement and object detection are optimized for fundamentally different objectives, and directly cascading them leads to feature distribution mismatch. Moreover, prevailing dual-branch architectures process enhancement and detection independently, overlooking multi-scale interactions across domains and thus constraining the learning of cross-domain feature representations. To overcome these limitations, we propose an underwater cross-domain fusion Transformer detector (UCF-DETR). UCF-DETR jointly leverages image enhancement and object detection by exploiting the complementary information of the enhanced and original image domains. Specifically, an underwater image enhancement module is employed to improve visibility. We then design a cross-domain feature pyramid to integrate fine-grained structural details from the enhanced domain with semantic representations from the original domain. A cross-domain query interaction mechanism is introduced to model inter-domain query relationships, leading to accurate object localization and boundary delineation. Extensive experiments on the challenging DUO and UDD benchmarks demonstrate that UCF-DETR consistently outperforms state-of-the-art methods for UOD.

Electronic computers. Computer science
DOAJ Open Access 2025
PharmaNet Deep: Real-Time Pharmaceutical Defect Detection Using Defect-Guided Feature Fusion and Uncertainty-Driven Inspection

Ajantha Vijayakumar, Joseph Abraham Sundar Koilraj, Muthaiah Rajappa

Abstract Oral dosage forms are the most widely used method of drug delivery in therapeutic treatments. However, visual defects in blister packages can adversely affect a drug's bioavailability and therapeutic efficacy, potentially compromising treatment outcomes. Consequently, detecting tablet defects after blister packaging in real time is a critical challenge in the pharmaceutical industry. Additionally, factors such as blister reflections and limited dataset size hinder a deep learning model's ability to identify defects accurately. To address these challenges, the PharmaNet Deep model is developed on a convolutional neural network (CNN) architecture. It incorporates defect-guided dynamic feature fusion (DGDFF), in which the fusion process is dynamically guided by potential defect regions, allowing the model to focus on relevant features (defect areas) more efficiently, and an adaptive deep chain (ADC), which comprises an occlusion pattern generator (OPG) and a residual recursive feature reconstructor (R2FR). The OPG creates multiple views of potential defect regions by systematically dividing features into blocks and creating layered occlusions, while the R2FR uses gates with ELU activation and residual connections to reconstruct detailed features from these occluded sequences, ultimately enhancing the model's ability to detect subtle defects. The model culminates in an uncertainty-aware detection head that improves defect prediction reliability by incorporating uncertainty estimates alongside traditional class probabilities and bounding box predictions. This provides a more informed and interpretable decision-making process for real-time pharmaceutical quality control.
Empirical evaluation of the proposed model demonstrates state-of-the-art performance with 99.4% mAP on the PharmaBlister dataset and 97.2% mAP on MVTec AD, with minimal predictive uncertainty, validating its efficacy in pharmaceutical quality control applications.

Electronic computers. Computer science
DOAJ Open Access 2025
Gold Price Forecasting using Time Series Modeling on a Web Platform

Dwi Ratna Puspita Sari, Sirli Fahriah, Kurnianingsih et al.

Gold is one of the most favored investment instruments due to its stability and its ability to preserve value against inflation. However, its price movements are volatile and influenced by various global economic factors, currency exchange rates, and geopolitical conditions, making gold price forecasting a significant challenge. This study aims to develop a gold price forecasting system using the Long Short-Term Memory (LSTM) algorithm, a variant of the Recurrent Neural Network (RNN) that excels in processing time-series data. The dataset consists of historical daily gold buying and selling prices from 2015 to 2025, collected from Yahoo Finance, Logam Mulia, and the official website of Bank Indonesia. The modeling process follows the CRISP-DM methodology, which includes business understanding, data preparation and exploration, modeling, and evaluation stages. Time Series Cross Validation (TSCV) is used to validate the model. LSTM performance is compared with other models such as GRU, CNN-1D, and Simple RNN to identify the best-performing architecture. Evaluation results indicate that LSTM achieved the highest performance with an R² score of 0.99 for selling prices and 0.98 for buying prices on the final test dataset. The system is deployed online, making it accessible in real-time. This research is expected to assist investors, financial analysts, and the general public in making smarter investment decisions based on valid historical data and advanced forecasting technology.
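The Time Series Cross Validation (TSCV) the paper uses can be sketched as an expanding-window splitter, where each fold trains on all earlier data and tests on the next contiguous segment. The fold count and series length below are illustrative, not the paper's configuration.

```python
# A minimal expanding-window time-series cross-validation splitter of the
# kind TSCV implies: training data always precedes test data in time.

def time_series_splits(n_samples, n_folds):
    """Yield (train_indices, test_indices) tuples with a growing train window."""
    fold_size = n_samples // (n_folds + 1)
    for k in range(1, n_folds + 1):
        train_end = fold_size * k
        test_end = min(train_end + fold_size, n_samples)
        # Train on everything before train_end; test on the next block.
        yield list(range(train_end)), list(range(train_end, test_end))

# With 10 daily prices and 4 folds, each fold's test set stays in the future
# relative to its training set — no look-ahead leakage.
for train, test in time_series_splits(10, 4):
    print(len(train), len(test))
```

This ordering constraint is what distinguishes TSCV from ordinary shuffled k-fold validation, which would leak future prices into training.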

Information technology, Electronic computers. Computer science
DOAJ Open Access 2025
Cost-Effective Design, Content Management System Implementation and Artificial Intelligence Support of Greek Government AADE, myDATA Web Service for Generic Government Infrastructure, a Complete Analysis

George Tsamis, Georgios Evangelos, Aris Papakostas et al.

The myDATA platform is a significant digital initiative that is reshaping Greece’s tax environment. As a component of the wider digital governance agenda, the platform provides significant added value to enterprises and the tax administration, despite the challenges of adaptation. Despite this positive response, we find that the platform could have been developed much more quickly and at a significantly lower cost, and could have adapted far faster to the rapid and necessary changes it must comply with. For these reasons, development in WordPress would be considered essential, as this CMS platform guarantees a fast and developer-friendly environment. In this publication, as a contribution, we provide all the information necessary to develop a myDATA-like platform in a fast, economical, and functional way using the WordPress CMS. Our contribution also includes an analysis of the minimum set of myDATA services needed to perform its basic functionalities, a description of the corresponding relational database model that must be implemented to provide the same functionality as the myDATA platform, and an analysis of available methods to quickly create the necessary forms and services. In addition, we study how to develop artificial intelligence mechanisms, with a success rate reaching up to 90%, for automatic tax violation detection.

Industrial engineering. Management engineering, Electronic computers. Computer science
DOAJ Open Access 2025
Investment portfolio optimization with supervised learning and attention mechanism

Zetao Yan

Portfolio optimization is the process of distributing capital to maximize returns while minimizing risk. This paper examines the use of Transformer networks in supervised learning for portfolio optimization, which can set new standards for machine learning-based investment strategies. The experiments show that a portfolio management method utilizing attention mechanisms outperforms traditional optimization methods by a substantial margin. The recommended model achieved an average annualized return of 24.8% and a Sharpe ratio of 1.69 over the 14 test cases. These are considerable improvements over benchmark strategies such as equal-weighted portfolios (Sharpe ratio: 0.54), market capitalization-weighted portfolios (Sharpe ratio: 0.43), and traditional index portfolios (Sharpe ratio: 0.37). The attention mechanism enables the model to dynamically adjust portfolio weights according to changing market forces, allowing it to blend active and passive investments efficiently. Moreover, the model maintained strong risk control, with a Sortino ratio of 2.45, and remained robust during periods of market volatility. This research thus offers both quantitative finance and machine learning evidence that novel deep learning architectures can outperform conventional portfolio optimization methods, even for small asset pools.
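The Sharpe and Sortino ratios reported above can be computed from a return series as follows. The returns below are invented, a zero risk-free rate is assumed, and no annualization factor is applied.

```python
import math

# Minimal (per-period, risk-free rate = 0) Sharpe and Sortino ratios —
# an illustration of the two metrics, not the paper's evaluation code.

def sharpe_ratio(returns):
    """Mean return divided by the standard deviation of all returns."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    return mean / math.sqrt(var)

def sortino_ratio(returns):
    """Mean return divided by downside deviation (only negative returns count)."""
    mean = sum(returns) / len(returns)
    downside = [min(r, 0.0) ** 2 for r in returns]
    return mean / math.sqrt(sum(downside) / len(returns))

rets = [0.02, -0.01, 0.03, 0.01, -0.005]
print(round(sharpe_ratio(rets), 3), round(sortino_ratio(rets), 3))
```

Because the Sortino denominator ignores upside volatility, it exceeds the Sharpe ratio for the same series whenever gains are more volatile than losses — which is why papers often report both.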

Electronic computers. Computer science
DOAJ Open Access 2025
A multi-image steganography: ISS

Shihao Zhang, Yanhui Xiao, Huawei Tian et al.

Abstract Unlike in single-image steganography, the scheme for distributing payload across different images plays a pivotal role in the security of multi-image steganography. In this paper, a novel multi-image steganography scheme, the image stitching sender (ISS), is proposed, which achieves optimal payload distribution by optimizing the stitching scheme of multiple cover images. In the ISS scheme, we employ peak signal-to-noise ratio as the similarity metric between the stitched cover image and the stego image. In addition, a genetic algorithm is used to find a locally optimal solution for this similarity, corresponding to a locally optimal multi-image steganographic stitching scheme. Experiments demonstrate that ISS exhibits enhanced anti-detection capabilities in comparison to other multi-image steganography schemes. Furthermore, when combined with non-additive embedding methods, ISS achieves a more substantial improvement in security than with additive embedding methods.
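The peak signal-to-noise ratio that ISS uses as its similarity metric can be sketched as follows; the tiny 8-bit pixel grids are invented for illustration.

```python
import math

# Minimal PSNR between two equally sized grayscale images (nested lists of
# 8-bit pixel values) — an illustration of the metric, not the paper's code.

def psnr(img_a, img_b, max_value=255.0):
    """PSNR in dB: 10 * log10(MAX^2 / MSE)."""
    pairs = [(a, b) for row_a, row_b in zip(img_a, img_b)
             for a, b in zip(row_a, row_b)]
    mse = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)

cover = [[52, 55], [61, 59]]
stego = [[52, 54], [61, 60]]
print(round(psnr(cover, stego), 2))
```

Higher PSNR means the stego image is closer to the stitched cover, which is why the scheme can maximize it as a proxy for imperceptibility.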

Computer engineering. Computer hardware, Electronic computers. Computer science
arXiv Open Access 2025
When Anti-Fraud Laws Become a Barrier to Computer Science Research

Madelyne Xiao, Andrew Sellars, Sarah Scheffler

Computer science research sometimes brushes with the law, from red-team exercises that probe the boundaries of authentication mechanisms, to AI research processing copyrighted material, to platform research measuring the behavior of algorithms and users. U.S.-based computer security research is no stranger to the Computer Fraud and Abuse Act (CFAA) and the Digital Millennium Copyright Act (DMCA) in a relationship that is still evolving through case law, research practices, changing policies, and legislation. Amid the landscape computer scientists, lawyers, and policymakers have learned to navigate, anti-fraud laws are a surprisingly under-examined challenge for computer science research. Fraud brings separate issues that are not addressed by the methods for navigating CFAA, DMCA, and Terms of Service that are more familiar in the computer security literature. Although anti-fraud laws have been discussed to a limited extent in older research on phishing attacks, modern computer science researchers are left with little guidance when it comes to navigating issues of deception outside the context of pure laboratory research. In this paper, we analyze and taxonomize the anti-fraud and deception issues that arise in several areas of computer science research. We find that, despite the lack of attention to these issues in the legal and computer science literature, issues of misrepresented identity or false information that could implicate anti-fraud laws are actually relevant to many methodologies used in computer science research, including penetration testing, web scraping, user studies, sock puppets, social engineering, auditing AI or socio-technical systems, and attacks on artificial intelligence. We especially highlight the importance of anti-fraud laws in two research fields of great policy importance: attacking or auditing AI systems, and research involving legal identification.

en cs.CY
arXiv Open Access 2025
TRACE: AI-Assisted Assessment of Collaborative Projects in Computer Science Education

Songmei Yu, Andrew Zagula

Collaborative group projects are integral to computer science education, fostering teamwork, problem-solving, and industry-relevant skills. However, assessing individual contributions within group settings remains challenging. Traditional approaches, including equal grade distribution and subjective peer evaluations, often lack fairness, objectivity, and scalability, particularly in large classrooms. We propose TRACE, a semi-automated AI-assisted framework for assessing collaborative software projects that evaluates both project quality and individual contributions using repository mining, communication analytics, and AI-assisted analytics. A pilot deployment in a software engineering course demonstrated high alignment with instructor assessments, increased student satisfaction, and reduced instructor grading effort. The results suggest that AI-assisted analytics can improve the transparency and scalability of collaborative project assessment in computer science education.

en cs.HC, cs.AI
DOAJ Open Access 2024
TransImg: A Translation Algorithm of Visible-to-Infrared Image Based on Generative Adversarial Network

Shuo Han, Bo Mo, Junwei Xu et al.

Abstract Infrared images of sensitive targets are difficult to obtain and cannot meet the design and training needs of target detection and tracking algorithms for mobile platforms such as aircraft. This paper proposes TransImg, an image translation algorithm that translates visible-light images into the infrared domain to enrich such datasets. First, the algorithm uses a generator consisting of a deep residual-connected encoder and a region-perception feature fusion module to enhance feature learning, thereby avoiding issues such as insufficient detail in the generated infrared images. Next, a multi-scale discriminator and a composite loss function are designed to further improve the translation quality. Finally, an automatic mixed-precision training strategy is applied to the overall architecture to accelerate training and infrared image generation. Experiments show that TransImg achieves good accuracy: the infrared images generated from visible-light images have richer texture details, faster generation speed, and lower video memory consumption; its performance exceeds mainstream traditional algorithms; and the generated images meet the design and training requirements of target detection and tracking algorithms for mobile platforms such as aircraft.

Electronic computers. Computer science
DOAJ Open Access 2024
Crosswind and Vortex Usages for Electricity Production Enhancement of Solar Updraft Tower

Amnart Boonloi, Anan Sudsanguan, Withada Jedsadaratanachai

This research presents an improvement to the traditional solar updraft tower, which relies solely on solar energy and cannot operate continuously throughout the day. The enhancement involves a hybrid energy approach by installing a vortex generator at the top of the tower to convert crosswinds into a vortex flow at the chimney’s top. This modification induces an updraft within the tower, enabling it to generate electricity continuously, even at night when there is no sunlight. The aim is to enable the solar updraft tower to harness crosswind energy without altering the tower’s main structure. This involves developing a vortex generator from a unidirectional wind intake design to a three-directional intake, enhancing the feasibility of commercial installation. Additionally, various designs and heights of vortex generators were developed, considering different crosswind speeds (2, 4, 6, and 8 m/s). The research utilizes the finite element method, along with real model construction, to validate the reliability of the study’s findings. The results indicate that the updraft speed is directly proportional to the crosswind speed. From a physical standpoint, the vortex generator with a height equal to D produced the best results in all experiments. The square, cylindrical, and diffuser shapes increased the wind speed inside the chimney by 60%, 41%, and 48%, respectively. These results from various shapes provide effective design and development guidelines for the future commercial use of vortex generators.

Electronic computers. Computer science
arXiv Open Access 2024
"Which LLM should I use?": Evaluating LLMs for tasks performed by Undergraduate Computer Science Students

Vibhor Agarwal, Madhav Krishan Garg, Sahiti Dharmavaram et al.

This study evaluates the effectiveness of various large language models (LLMs) in performing tasks common among undergraduate computer science students. Although a number of research studies in the computing education community have explored the possibility of using LLMs for a variety of tasks, there is a lack of comprehensive research comparing different LLMs and evaluating which LLMs are most effective for different tasks. Our research systematically assesses some of the publicly available LLMs such as Google Bard, ChatGPT(3.5), GitHub Copilot Chat, and Microsoft Copilot across diverse tasks commonly encountered by undergraduate computer science students in India. These tasks include code explanation and documentation, solving class assignments, technical interview preparation, learning new concepts and frameworks, and email writing. Evaluation for these tasks was carried out by pre-final year and final year undergraduate computer science students and provides insights into the models' strengths and limitations. This study aims to guide students as well as instructors in selecting suitable LLMs for any specific task and offers valuable insights on how LLMs can be used constructively by students and instructors.

en cs.CY, cs.HC
arXiv Open Access 2024
Embedding Privacy in Computational Social Science and Artificial Intelligence Research

Keenan Jones, Fatima Zahrah, Jason R. C. Nurse

Privacy is a human right. It ensures that individuals are free to engage in discussions, participate in groups, and form relationships online or offline without fear of their data being inappropriately harvested, analyzed, or otherwise used to harm them. Preserving privacy has emerged as a critical factor in research, particularly in the computational social science (CSS), artificial intelligence (AI) and data science domains, given their reliance on individuals' data for novel insights. The increasing use of advanced computational models stands to exacerbate privacy concerns because, if inappropriately used, they can quickly infringe privacy rights and lead to adverse effects for individuals -- especially vulnerable groups -- and society. We have already witnessed a host of privacy issues emerge with the advent of large language models (LLMs), such as ChatGPT, which further demonstrate the importance of embedding privacy from the start. This article contributes to the field by discussing the role of privacy and the issues that researchers working in CSS, AI, data science and related domains are likely to face. It then presents several key considerations for researchers to ensure participant privacy is best preserved in their research design, data collection and use, analysis, and dissemination of research results.

en cs.AI, cs.CY
DOAJ Open Access 2023
Video images compression method based on floating positional coding with an unequal codograms length

Vladimir Barannik, Serhii Sidchenko, Dmitriy Barannik et al.

The subject of this research is the video image compression and encryption processes used when managing critically important objects. The goal is to develop a method for compressing video images based on floating positional coding with unequal codegram lengths, to simultaneously ensure information reliability and confidentiality during transmission with a given time delay. Objectives: analyze existing approaches to ensuring the confidentiality of video images; develop a method for compressing video images based on floating positional coding with unequal codegram lengths; and evaluate the developed method's effectiveness. The methods used are: digital image processing, digital image compression, image encryption and scrambling, structural-combinatorial coding, and statistical analysis. The following results were obtained. A technology for floating encoding of an uneven sequence of blocks is proposed. Code values are formed from elements of different video image blocks. For this, a scheme was developed for linearizing an image point's coordinates from a four-dimensional representation on the plane into a one-dimensional element coordinate in a vector. The four-dimensional coordinate on the plane comprises the coordinates of the image block and the coordinates of the element within that block. Code values are formed while controlling the length of their binary representation. At the same time, coding is implemented for an indeterminate number of video image elements; the number of elements depends on the length of the code word. Accordingly, codegrams of indeterminate length are formed, their length depending on the service data values generated during encoding. The service data acts as a key element. Conclusions. The one-stage polyadic image encoding method in a differentiated basis has been further improved.
The developed encoding method provides image compression without loss of information quality. Compression of the original image volume is 3–20% better than the TIFF data presentation format and 4–15% better than the PNG format. The overhead is less than 2.5% of the entire codestream size.
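The coordinate linearization the abstract describes — mapping a block's position in the image plus an element's position within the block to a single index in a one-dimensional vector — can be sketched as follows. The block grid and block size here are illustrative assumptions, not the paper's parameters.

```python
# A sketch of 4D-to-1D coordinate linearization: (block row, block column,
# row in block, column in block) -> one index into a flat element vector,
# assuming blocks are stored in row-major order and elements row-major
# within each block.

def linearize(block_row, block_col, row, col, blocks_per_row, block_h, block_w):
    block_index = block_row * blocks_per_row + block_col
    within_block = row * block_w + col
    return block_index * (block_h * block_w) + within_block

# For a 2x2 grid of 4x4 blocks: the last element of the first block and the
# first element of the second block are adjacent in the flat vector.
print(linearize(0, 0, 3, 3, 2, 4, 4), linearize(0, 1, 0, 0, 2, 4, 4))
```

Under this ordering, all 16 elements of one block occupy a contiguous run of indices, which is what lets code values be formed over whole-block spans.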

Computer engineering. Computer hardware, Electronic computers. Computer science
arXiv Open Access 2023
Out-of-Distribution Detection for Adaptive Computer Vision

Simon Kristoffersson Lind, Rudolph Triebel, Luigi Nardi et al.

It is well known that computer vision can be unreliable when faced with previously unseen imaging conditions. This paper proposes a method to adapt camera parameters according to a normalizing flow-based out-of-distribution detector. A small-scale study shows that adapting camera parameters according to this out-of-distribution detector leads to an average increase of 3 to 4 percentage points in the mAP, mAR, and F1 performance metrics of a YOLOv4 object detector. As a secondary result, this paper also shows that it is possible to train a normalizing flow model for out-of-distribution detection on the COCO dataset, which is larger and more diverse than most benchmarks for out-of-distribution detectors.
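A likelihood-threshold detector of the kind a normalizing flow enables can be sketched as follows: images whose model log-likelihood falls below a threshold fitted on in-distribution data are flagged as out-of-distribution. The log-likelihood scores and quantile below are invented; this is not the paper's detector.

```python
# Thresholding model log-likelihoods for out-of-distribution (OOD) detection —
# a generic sketch; the scores would come from a trained normalizing flow.

def fit_threshold(in_dist_scores, quantile=0.05):
    """Threshold at the given lower quantile of in-distribution log-likelihoods."""
    ordered = sorted(in_dist_scores)
    k = max(0, int(quantile * len(ordered)) - 1)
    return ordered[k]

def is_ood(score, threshold):
    """A sample is OOD if the model assigns it unusually low log-likelihood."""
    return score < threshold

# Invented in-distribution log-likelihoods from a hypothetical flow model.
train_ll = [-3.1, -2.8, -3.0, -2.9, -3.3, -2.7, -3.2, -2.6, -3.4, -2.5]
thr = fit_threshold(train_ll, quantile=0.1)
print(thr, is_ood(-5.0, thr), is_ood(-2.9, thr))
```

A system like the one described could then react to an OOD flag — for example by adjusting camera exposure — until incoming frames score above the threshold again.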

en cs.CV, cs.LG
arXiv Open Access 2023
What Students Can Learn About Artificial Intelligence -- Recommendations for K-12 Computing Education

Tilman Michaeli, Stefan Seegerer, Ralf Romeike

Technological advances in the context of digital transformation are the basis for rapid developments in the field of artificial intelligence (AI). Although AI is not a new topic in computer science (CS), recent developments are having an immense impact on everyday life and society. In consequence, everyone needs competencies to be able to adequately and competently analyze, discuss and help shape the impact, opportunities, and limits of artificial intelligence on their personal lives and our society. As a result, an increasing number of CS curricula are being extended to include the topic of AI. However, in order to integrate AI into existing CS curricula, what students can and should learn in the context of AI needs to be clarified. This has proven to be particularly difficult, considering that so far CS education research on central concepts and principles of AI lacks sufficient elaboration. Therefore, in this paper, we present a curriculum of learning objectives that addresses digital literacy and the societal perspective in particular. The learning objectives can be used to comprehensively design curricula, but also allow for analyzing current curricula and teaching materials and provide insights into the central concepts and corresponding competencies of AI.

en cs.CY, cs.AI

Page 6 of 902458