Large language model bias auditing for periodontal diagnosis using an ambiguity-probe methodology: a pilot study
Teerachate Nantakeeratipat
Background: Large language models (LLMs) hold immense promise in healthcare yet carry the risk of perpetuating social biases. While artificial intelligence (AI) fairness is a growing concern, a gap exists in understanding how these models perform under conditions of clinical ambiguity, a common feature of real-world practice. Methods: We conducted a study using an ambiguity-probe methodology with a set of 42 sociodemographic personas and 15 clinical vignettes based on the 2018 classification of periodontal diseases. Ten were clear-cut scenarios with established ground truths, while five were intentionally ambiguous. OpenAI's GPT-4o and Google's Gemini 2.5 Pro were prompted to provide periodontal stage and grade assessments using 630 vignette-persona combinations per model. Results: In clear-cut scenarios, GPT-4o demonstrated significantly higher combined (stage and grade) accuracy (70.5%) than Gemini 2.5 Pro (33.3%). However, a robust fairness analysis using cumulative link models with false discovery rate correction revealed no statistically significant sociodemographic bias in either model. This finding held across both clear-cut and ambiguous clinical scenarios. Conclusion: To our knowledge, this is among the first studies to use simulated clinical ambiguity to reveal the distinct ethical fingerprints of LLMs in a dental context. While LLM performance gaps exist, our analysis decouples accuracy from fairness, demonstrating that both models maintain sociodemographic neutrality. We identify the observed errors not as bias but as diagnostic boundary instability. This highlights a critical need for future research to differentiate between these two distinct types of model failure in order to build genuinely reliable AI.
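The fairness analysis above relies on false discovery rate correction across many simultaneous persona comparisons. As a minimal illustration of that correction step (the Benjamini–Hochberg procedure on hypothetical p-values, not the authors' code or data):

```python
# Benjamini-Hochberg false discovery rate (FDR) correction. A minimal sketch;
# the p-values below are illustrative, not from the study.
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean list: True where the hypothesis is rejected."""
    m = len(p_values)
    # Sort p-values while remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    threshold_k = -1
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            threshold_k = rank
    # ... then reject every hypothesis up to that rank.
    for rank, idx in enumerate(order, start=1):
        if rank <= threshold_k:
            rejected[idx] = True
    return rejected

pvals = [0.001, 0.20, 0.03, 0.04, 0.80]
print(benjamini_hochberg(pvals))  # only the smallest p-value survives here
```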
Medicine, Public aspects of medicine
FIR-SDE: fast image restoration via mean-reverting stochastic differential equation
Xin Shi, Zhengchao Xu, Sunan Ge
et al.
Abstract In computer vision, zero-shot image restoration—an approach that restores degraded images without large-scale paired training data—has emerged as a pivotal technique for scenarios where data is limited or paired training data is difficult to obtain. However, existing methods face two key limitations: data consistency preservation remains challenging for out-of-domain data, and degradation process alignment is difficult when the degradation mechanism is not mathematically predetermined. To address these issues, this paper presents a novel zero-shot image restoration method (FIR-SDE). Traditional generation-oriented diffusion models (designed for image creation) are replaced with restoration-oriented models (specialized for degradation repair), expanding the range of effectively restorable images. To mitigate the noise offset (discrepancies between real and model-simulated degradation) and to enhance alignment, a multi-step optimization strategy is employed, which evaluates the distance between real and simulated degraded images via their frequency domain distributions. Experiments were conducted on two image restoration tasks (image deraining and inpainting) using three public datasets (AFHQ-dog, CelebA, and FFHQ), with Gaussian blur and motion blur superimposed as noise offsets. Results demonstrate that the FIR-SDE method outperforms competitive methods in restoration quality and noise resistance. By eliminating data space constraints and exhibiting robustness against noise offsets, FIR-SDE offers a more flexible and efficient solution to broaden the practical applicability of zero-shot image restoration.
Electronic computers. Computer science, Information technology
Joint Inference of Image Enhancement and Object Detection via Cross-Domain Fusion Transformer
Bingxun Zhao, Yuan Chen
Underwater vision is fundamental to ocean exploration, yet it is frequently impaired by underwater degradation including low contrast, color distortion, and blur, presenting significant challenges for underwater object detection (UOD). Most existing methods employ underwater image enhancement as a preprocessing step to improve visual quality prior to detection. However, image enhancement and object detection are optimized for fundamentally different objectives, and directly cascading them leads to feature distribution mismatch. Moreover, prevailing dual-branch architectures process enhancement and detection independently, overlooking multi-scale interactions across domains and thus constraining the learning of cross-domain feature representations. To overcome these limitations, we propose an underwater cross-domain fusion Transformer detector (UCF-DETR). UCF-DETR jointly leverages image enhancement and object detection by exploiting the complementary information of the enhanced and original image domains. Specifically, an underwater image enhancement module is employed to improve visibility. We then design a cross-domain feature pyramid to integrate fine-grained structural details from the enhanced domain with semantic representations from the original domain. A cross-domain query interaction mechanism is introduced to model inter-domain query relationships, leading to accurate object localization and boundary delineation. Extensive experiments on the challenging DUO and UDD benchmarks demonstrate that UCF-DETR consistently outperforms state-of-the-art methods for UOD.
Electronic computers. Computer science
Editorial: Advancements in AI-driven multimodal interfaces for robot-aided rehabilitation
Christian Tamantini, Christian Tamantini, Kevin Patrice Langlois
et al.
Mechanical engineering and machinery, Electronic computers. Computer science
PharmaNet Deep: Real-Time Pharmaceutical Defect Detection Using Defect-Guided Feature Fusion and Uncertainty-Driven Inspection
Ajantha Vijayakumar, Joseph Abraham Sundar Koilraj, Muthaiah Rajappa
Abstract Oral dosage forms are the most widely employed method of drug delivery in therapeutic treatments. However, visual defects in blister packages can adversely affect a drug's bioavailability and therapeutic efficacy, potentially compromising treatment outcomes. Consequently, detecting tablet defects after blister packaging in real time is a critical challenge in the pharmaceutical industry. Additionally, factors such as blister reflections and limited dataset size hinder deep learning models' ability to identify defects accurately. To address these challenges, the PharmaNet Deep model is developed on a convolutional neural network (CNN) architecture. It incorporates defect-guided dynamic feature fusion (DGDFF), in which the fusion process is dynamically guided by potential defect regions so the model focuses more efficiently on relevant features (defect areas), and an adaptive deep chain (ADC) comprising an occlusion pattern generator (OPG) and a residual recursive feature reconstructor (R2FR). The OPG creates multiple views of potential defect regions by systematically dividing features into blocks and creating layered occlusions, while the R2FR uses gates with ELU activation and residual connections to reconstruct detailed features from these occluded sequences, ultimately enhancing the model's ability to detect subtle defects. The model culminates in an uncertainty-aware detection head that enhances defect prediction reliability by incorporating uncertainty estimates alongside traditional class probabilities and bounding box predictions. This provides a more informed and interpretable decision-making process for real-time pharmaceutical quality control.
Empirical evaluation of the proposed model demonstrates state-of-the-art performance, with 99.4% mAP on the PharmaBlister dataset and 97.2% mAP on MVTec AD, along with minimal predictive uncertainty, validating its efficacy in pharmaceutical quality control applications.
Electronic computers. Computer science
Gold Price Forecasting using Time Series Modeling on a Web Platform
Dwi Ratna Puspita Sari, Sirli Fahriah, Kurnianingsih
et al.
Gold is one of the most favored investment instruments due to its stability and its ability to preserve value against inflation. However, its price movements are volatile and influenced by various global economic factors, currency exchange rates, and geopolitical conditions, making gold price forecasting a significant challenge. This study aims to develop a gold price forecasting system using the Long Short-Term Memory (LSTM) algorithm, a variant of the Recurrent Neural Network (RNN) that excels in processing time-series data. The dataset consists of historical daily gold buying and selling prices from 2015 to 2025, collected from Yahoo Finance, Logam Mulia, and the official website of Bank Indonesia. The modeling process follows the CRISP-DM methodology, which includes business understanding, data preparation and exploration, modeling, and evaluation stages. Time Series Cross Validation (TSCV) is used to validate the model. LSTM performance is compared with other models such as GRU, CNN-1D, and Simple RNN to identify the best-performing architecture. Evaluation results indicate that LSTM achieved the highest performance with an R² score of 0.99 for selling prices and 0.98 for buying prices on the final test dataset. The system is deployed online, making it accessible in real-time. This research is expected to assist investors, financial analysts, and the general public in making smarter investment decisions based on valid historical data and advanced forecasting technology.
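Time Series Cross Validation, used above to validate the models, differs from ordinary cross-validation in that training windows only ever extend forward in time, so the model is never evaluated on data older than what it trained on. A minimal sketch of expanding-window splits, with illustrative sizes rather than the study's actual folds:

```python
# Expanding-window time-series cross-validation (TSCV). A minimal sketch;
# fold sizes here are illustrative, not the authors' configuration.
def time_series_splits(n_samples, n_splits):
    """Yield (train_indices, test_indices) pairs whose training windows grow
    over time, so each test fold lies strictly after its training data."""
    fold = n_samples // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train = list(range(0, k * fold))
        test = list(range(k * fold, (k + 1) * fold))
        yield train, test

# 10 daily prices, 4 folds: training grows 2 -> 8, each test fold is the next 2 days.
for train, test in time_series_splits(10, 4):
    print(len(train), len(test))
```

scikit-learn's `TimeSeriesSplit` provides the same behaviour off the shelf; the study's own splitting may differ in window sizing.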
Information technology, Electronic computers. Computer science
Cost-Effective Design, Content Management System Implementation and Artificial Intelligence Support of Greek Government AADE, myDATA Web Service for Generic Government Infrastructure, a Complete Analysis
George Tsamis, Georgios Evangelos, Aris Papakostas
et al.
One significant digital initiative that is changing Greece’s tax environment is the myDATA platform. The platform, a component of the wider digital governance agenda, provides significant added value to enterprises and the tax administration despite the challenges of adaptation. Despite the positive response, we find that the platform could have been developed more quickly and at a significantly lower cost, and could have coped much faster with the rapid and necessary changes with which it will have to comply. For these reasons, development in WordPress would be considered essential, as this CMS platform guarantees a fast and developer-friendly environment. In this publication, as a contribution, we provide all the necessary information to develop a myDATA-like platform in a fast, economical, and functional way using the WordPress CMS. Our contribution also contains an analysis of the minimum set of myDATA services needed to perform its basic functionalities, a description of the corresponding relational database model that must be implemented to provide the same functionality as the myDATA platform, and an analysis of available methods to quickly create the necessary forms and services. In addition, we study how to develop artificial intelligence mechanisms for automatic tax violation detection with a success rate reaching up to 90%.
Industrial engineering. Management engineering, Electronic computers. Computer science
Investment portfolio optimization with supervised learning and attention mechanism
Zetao Yan
Portfolio optimization is the process of distributing capital so as to maximize returns while minimizing risk. This paper discusses the use of Transformer networks in supervised learning for portfolio optimization, which can set new standards for machine learning-based investment strategies. The experiments show that the portfolio management method utilizing attention mechanisms outperforms traditional optimization methods by a substantial margin. The recommended model achieved an average annualized return of 24.8% and a Sharpe ratio of 1.69 over the 14 test cases. These are considerable improvements over benchmark strategies such as equal-weighted portfolios (Sharpe ratio: 0.54), market capitalization-weighted portfolios (Sharpe ratio: 0.43), and traditional index portfolios (Sharpe ratio: 0.37). The attention mechanism enables the model to dynamically adjust portfolio weights according to changing market forces, allowing it to blend active and passive investments efficiently. Moreover, the model maintained strong risk control, with a Sortino ratio of 2.45, and performed well during periods of market volatility. This research thus provides both quantitative finance and machine learning with evidence that novel deep learning architectures can beat conventional portfolio optimization methods, even for small asset pools.
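The Sharpe and Sortino ratios quoted above are standard risk-adjusted return metrics: both divide mean return by a dispersion measure, but the Sortino ratio penalizes only downside deviation. A minimal sketch of how they are computed from a daily return series (illustrative numbers, zero risk-free rate assumed; not the paper's evaluation code):

```python
# Annualized Sharpe and Sortino ratios. A minimal sketch with a zero
# risk-free rate; the daily returns below are illustrative.
import math

def sharpe_ratio(returns, periods_per_year=252):
    """Mean return over total volatility, annualized."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    return mean / math.sqrt(var) * math.sqrt(periods_per_year)

def sortino_ratio(returns, periods_per_year=252):
    """Mean return over downside deviation only, annualized."""
    mean = sum(returns) / len(returns)
    downside = [min(r, 0.0) ** 2 for r in returns]
    dd = math.sqrt(sum(downside) / len(returns))
    return mean / dd * math.sqrt(periods_per_year)

daily = [0.01, -0.005, 0.002, 0.007, -0.003]
print(sharpe_ratio(daily), sortino_ratio(daily))
```

Because only negative returns enter the Sortino denominator, a strategy with rare, small losses scores markedly higher on Sortino than on Sharpe, as in the 2.45 versus 1.69 figures above.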
Electronic computers. Computer science
A multi-image steganography: ISS
Shihao Zhang, Yanhui Xiao, Huawei Tian
et al.
Abstract Unlike in single-image steganography, the scheme for distributing payload across different images plays a pivotal role in the security of multi-image steganography. In this paper, a novel multi-image steganography scheme, the image stitching sender (ISS), is proposed, which achieves optimal payload distribution by optimizing the stitching scheme of multiple cover images. In the ISS scheme, we employ peak signal-to-noise ratio as the similarity metric between the stitched cover image and the stego image. A genetic algorithm is then used to find a local optimum of this similarity, corresponding to a locally optimal multi-image steganographic stitching scheme. Experiments demonstrate that ISS exhibits enhanced anti-detection capabilities in comparison to other multi-image steganography schemes. Furthermore, when combined with non-additive embedding methods, ISS achieves a more substantial improvement in security than with additive embedding methods.
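PSNR, the similarity metric the ISS scheme optimizes, compares two images through their mean squared error on a logarithmic scale. A minimal sketch on flat 8-bit pixel lists with illustrative values (not the paper's implementation):

```python
# Peak signal-to-noise ratio (PSNR) between a cover and a stego image,
# sketched on flat lists of 8-bit pixel values. Illustrative only.
import math

def psnr(cover, stego, max_val=255.0):
    mse = sum((c - s) ** 2 for c, s in zip(cover, stego)) / len(cover)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

cover = [120, 130, 140, 150]
stego = [121, 129, 140, 151]  # small embedding changes
print(round(psnr(cover, stego), 1))
```

In the ISS setting, a genetic algorithm would search over stitching orders to maximize this value between the stitched cover and the stego image, since higher PSNR means a less detectable embedding.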
Computer engineering. Computer hardware, Electronic computers. Computer science
TransImg: A Translation Algorithm of Visible-to-Infrared Image Based on Generative Adversarial Network
Shuo Han, Bo Mo, Junwei Xu
et al.
Abstract Infrared images of sensitive targets are difficult to obtain and cannot meet the design and training needs of target detection and tracking algorithms for mobile platforms such as aircraft. This paper proposes an image translation algorithm, TransImg, which translates visible-light images into the infrared domain to enrich datasets. First, a generator structure consisting of a deep residual-connected encoder and a region-perception feature fusion module is designed to enhance feature learning, avoiding issues such as insufficient detail in the generated infrared images. A multi-scale discriminator and a composite loss function are then designed to further improve the translation quality. Finally, an automatic mixed-precision training strategy is designed for the overall architecture to accelerate training and infrared image generation. Experiments show that TransImg achieves good accuracy: the infrared images it generates from visible-light images have richer texture details, faster generation speed, and lower video memory consumption than those of mainstream traditional algorithms, and they can meet the design and training requirements of target detection and tracking algorithms for mobile platforms such as aircraft.
Electronic computers. Computer science
Crosswind and Vortex Usages for Electricity Production Enhancement of Solar Updraft Tower
Amnart Boonloi, Anan Sudsanguan, Withada Jedsadaratanachai
This research presents an improvement to the traditional solar updraft tower, which relies solely on solar energy and cannot operate continuously throughout the day. The enhancement involves a hybrid energy approach by installing a vortex generator at the top of the tower to convert crosswinds into a vortex flow at the chimney’s top. This modification induces an updraft within the tower, enabling it to generate electricity continuously, even at night when there is no sunlight. The aim is to enable the solar updraft tower to harness crosswind energy without altering the tower’s main structure. This involves developing a vortex generator from a unidirectional wind intake design to a three-directional intake, enhancing the feasibility of commercial installation. Additionally, various designs and heights of vortex generators were developed, considering different crosswind speeds (2, 4, 6, and 8 m/s). The research utilizes the finite element method, along with real model construction, to validate the reliability of the study’s findings. The results indicate that the updraft speed is directly proportional to the crosswind speed. From a physical standpoint, the vortex generator with a height equal to D produced the best results in all experiments. The square, cylindrical, and diffuser shapes increased the wind speed inside the chimney by 60%, 41%, and 48%, respectively. These results from various shapes provide effective design and development guidelines for the future commercial use of vortex generators.
Electronic computers. Computer science
Retraction Note: Network security threat detection technology based on EPSO-BP algorithm
Zhu Lan
This article has been retracted. Please see the Retraction Notice for more detail: https://doi.org/10.1186/s13635-024-00152-9
Computer engineering. Computer hardware, Electronic computers. Computer science
Some Reminiscences of David Cox
A. C. Davison
Electronic computers. Computer science
Video images compression method based on floating positional coding with an unequal codograms length
Vladimir Barannik, Serhii Sidchenko, Dmitriy Barannik
et al.
The subject of research in this article is the compression and encryption of video images in the management of critically important objects. The goal is to develop a method for compressing video images based on floating positional coding with unequal codegram lengths, simultaneously ensuring information reliability and confidentiality during transmission with a given time delay. Objectives: to analyze existing approaches to ensuring the confidentiality of video images; to develop a method for compressing video images based on floating positional coding with unequal codegram lengths; and to evaluate the effectiveness of the developed method. The methods used are: digital image processing, digital image compression, image encryption and scrambling, structural-combinatorial coding, and statistical analysis. The following results are obtained. A technology for floating encoding of an uneven sequence of blocks is proposed, in which code values are formed from elements of different video image blocks. For this, a scheme is developed that linearizes an image point's coordinates from a four-dimensional representation on the plane into a one-dimensional element coordinate in a vector; the four-dimensional coordinate describes the coordinates of the image block and of the element within that block. Code values are formed while controlling the length of their binary representation. Coding is thus applied to an indeterminate number of video image elements, which depends on the length of the code word; accordingly, codegrams of indeterminate length are formed. Their length depends on the service data generated during the encoding process, and this service data acts as a key element. Conclusions. The one-stage polyadic image encoding method in a differentiated basis has been further improved. 
The developed encoding method provides image compression without loss of information quality. Compression of the original image volume is 3–20% better than the TIFF data presentation format and 4–15% better than the PNG format. The overhead is less than 2.5% of the entire codestream size.
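The coordinate linearization step described above maps a block coordinate plus a within-block coordinate to a single vector index. A minimal row-major sketch with hypothetical parameter names, assuming the conventional block-then-offset ordering (the paper's exact mapping may differ):

```python
# Linearizing a pixel's four-dimensional coordinate (block position plus
# position within the block) into a one-dimensional vector index.
# A row-major sketch with illustrative names; not the authors' exact mapping.
def linearize(bx, by, px, py, blocks_per_row, block_w, block_h):
    """(block column bx, block row by, pixel column px, pixel row py) -> index."""
    block_index = by * blocks_per_row + bx   # which block, row-major
    offset = py * block_w + px               # position inside the block
    return block_index * (block_w * block_h) + offset

# A 2x2 grid of 4x4 blocks: second block in the top row, pixel (1, 2) inside it.
print(linearize(bx=1, by=0, px=1, py=2, blocks_per_row=2, block_w=4, block_h=4))
```

The point of such a mapping is that consecutive vector positions can mix elements from different blocks, which is what lets code values span multiple video image blocks.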
Computer engineering. Computer hardware, Electronic computers. Computer science
The accumulation cost of relaxed fixed time accumulation mode
Lianbo Deng, Enwei Jing, Jing Xu
et al.
Abstract Studying the wagon accumulation process and the laws of accumulation cost is of great significance for determining suitable conditions for wagon accumulation and shortening accumulation time. Here, the process of relaxed fixed-time accumulation is first modeled as a stochastic service system, and a theoretical formula for the accumulation cost is derived. Then, based on actual wagon-flow data, a simulation model is built to analyse the influence of the parameters in the theoretical formula, such as the coordination of the traffic diagram with the accumulation process, the sizes and intervals of arriving wagon groups, and the minimum number of wagons. Finally, by comparing against the accumulation cost of the fixed train-length accumulation mode and considering the benefit of changing the minimum number of wagons in train sets, the optimal minimum number of wagons in the relaxed fixed-time accumulation mode is determined for different wagon-flow intensities.
Transportation engineering, Electronic computers. Computer science
A Hybrid Framework for The Implementation of Business Intelligence Systems in Small Scale Enterprises
Teressa Tjwakinna Chikohora, Bukohwo Michael Esiefarienrhe
Small-scale enterprises can improve their operations by implementing business intelligence systems. Business intelligence systems are complex and require expertise to implement successfully, hence the need for small-scale enterprises to determine their readiness before undertaking such a project. To improve the chances of successful implementation, this study proposes a framework to guide small-scale enterprises on the requirements for business intelligence systems. The design steps defined by Edwards and by Goodrich & Tamassia were followed to design the framework. The framework components were informed by the Diffusion of Innovation and Technology-Organization-Environment theories, the Information Evaluation Model, and the critical success factors for BIS implementation. A small business may evaluate its resources against the framework components to decide whether to implement a business intelligence system. In the future, the framework may be extended to include weights and other criteria to calculate a business's readiness status.
Mathematics, Electronic computers. Computer science
CRAPPY: Command and Real-Time Acquisition in Parallelized Python, a Python module for experimental setups
Victor Couty, Jean-François Witz, Corentin Martel
et al.
Performing relevant mechanical tests requires complex experimental setups. CRAPPY is a Python module meant to help researchers develop code for systems involving several sensors and/or actuators. It features a number of advanced tools to perform measurements, drive hardware, and process data. CRAPPY aims to make the design of advanced tests easier and accessible to non-experts in coding. It is highly modular, and new devices can easily be added to the module as long as they can be interfaced with Python. Users can take full advantage of Python's versatility in their experimental scripts, as they are not constrained by any GUI. CRAPPY features a wide variety of tools specific to experimental mechanics, the field it was first developed for, but tools and instruments from other domains can be included within its framework as well. In this paper, the project and its functionalities are described and illustrated by short code examples and existing experimental setups using CRAPPY.
Quantum Field Theory in Categorical Quantum Mechanics
Stefano Gogioso, Fabrizio Genovese
We use tools from non-standard analysis to formulate the building blocks of quantum field theory within the framework of categorical quantum mechanics. Building upon previous work, we construct an object of *Hilb having quantum fields as states and we show that the usual ladder and field operators can be defined as suitable endomorphisms. We deal with relativistic normalisation and we obtain the Lorentz invariant Heisenberg picture operators. By moving to a coherent perspective—where the classical time and momentum parameters are replaced by wavefunctions over the parameter spaces—we show that ladder operators and field operators can be obtained by applying the same morphism to plane waves and delta functions respectively. Finally, we formulate the commutation relations diagrammatically and we use them to derive the propagator.
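For reference, the relations the paper formulates diagrammatically are, in conventional notation and with the relativistic (Lorentz-invariant) normalisation mentioned above, the canonical commutation relations for ladder operators:

```latex
% Canonical commutation relations in standard QFT notation, with
% relativistic normalisation; a reminder of the target the diagrammatic
% formulation reproduces, not the paper's own notation.
[a(\mathbf{p}),\, a^{\dagger}(\mathbf{q})]
  = (2\pi)^{3}\, 2E_{\mathbf{p}}\, \delta^{(3)}(\mathbf{p} - \mathbf{q}),
\qquad
[a(\mathbf{p}),\, a(\mathbf{q})]
  = [a^{\dagger}(\mathbf{p}),\, a^{\dagger}(\mathbf{q})] = 0 .
```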
Mathematics, Electronic computers. Computer science
Analysis and Design of a Teaching Schedule Preparation Application Based on Lecturers' Teaching Availability at the Faculty of Information Technology (Case Study: Department of Informatics Engineering)
Meliana Christianti J., Robby Tan, Oscar Karnalim
et al.
The Faculty of Information Technology is one of the faculties at Maranatha Christian University. Currently, scheduling is still done manually. The scheduling process is long and exhausting, as the department secretary needs to ask each lecturer's availability and compare lecturers' teaching schedules so they do not clash. As part of Maranatha Christian University, the Faculty of Information Technology would like to shorten the time needed to arrange teaching schedules and to accommodate the availability provided by each lecturer. This application will be developed with the Java (desktop) and PHP (web) programming languages and a MySQL database.
Electronic computers. Computer science, Technology
Compression of Digital Image based on Hybrid Heuristic Algorithm
Fawziya Ramo, Yaser Al Deen
In this research paper, a system is proposed for data compression of digital images based on two hybrid intelligent algorithms. The first, the Meta-Heuristic Genetic Compression Algorithm (MGCA), uses the characteristics and features of a genetic algorithm (GA) and local search to compress digital images. The second is the Hybrid Meta Genetic and Tabu Compression Algorithm (HMGTCA), which hybridizes the meta-heuristic genetic and Tabu search algorithms. The proposed algorithms were applied to four samples. Efficiency measures were computed, including PSNR, MSE, correlation coefficient, compression ratio, and performance time. The experiments showed that the proposed algorithm achieved high performance, producing PSNR = 34.
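Both MGCA and HMGTCA build on a genetic-algorithm core of selection, crossover, and mutation. A minimal, self-contained sketch of that core on a toy "OneMax" objective (illustrative parameters; not the paper's compression algorithm):

```python
# A minimal genetic-algorithm skeleton: tournament selection, one-point
# crossover, and bit mutation, on a toy objective. Illustrative only.
import random

def genetic_search(fitness, n_bits=8, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection: keep the fitter of two random individuals.
        parents = [max(rng.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, n_bits)              # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                i = rng.randrange(n_bits)               # flip one bit with
                child[i] ^= rng.random() < 0.1          # probability 0.1
                children.append(child)
        pop = children
    return max(pop, key=fitness)

# Toy objective: maximize the number of ones ("OneMax").
best = genetic_search(fitness=sum)
print(best, sum(best))
```

In the compression setting the fitness would instead trade off reconstruction error (MSE/PSNR) against compression ratio, and the HMGTCA variant would interleave Tabu search moves with these genetic operators.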
Mathematics, Electronic computers. Computer science