J. Epstein, R. Axtell
Results for "Computer Science"
Showing 20 of ~22578520 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
G. P. Jelliss, R. Graham, D. Knuth et al.
Teerachate Nantakeeratipat
Background: Large Language Models (LLMs) in healthcare hold immense promise yet carry the risk of perpetuating social biases. While artificial intelligence (AI) fairness is a growing concern, a gap exists in understanding how these models perform under conditions of clinical ambiguity, a common feature of real-world practice.
Methods: We conducted a study using an ambiguity-probe methodology with a set of 42 sociodemographic personas and 15 clinical vignettes based on the 2018 classification of periodontal diseases. Ten were clear-cut scenarios with established ground truths, while five were intentionally ambiguous. OpenAI's GPT-4o and Google's Gemini 2.5 Pro were prompted to provide periodontal stage and grade assessments across 630 vignette-persona combinations per model.
Results: In clear-cut scenarios, GPT-4o demonstrated significantly higher combined (stage and grade) accuracy (70.5%) than Gemini Pro (33.3%). However, a robust fairness analysis using cumulative link models with false discovery rate correction revealed no statistically significant sociodemographic bias in either model. This finding held across both clear-cut and ambiguous clinical scenarios.
Conclusion: To our knowledge, this is among the first studies to use simulated clinical ambiguity to reveal the distinct ethical fingerprints of LLMs in a dental context. While LLM performance gaps exist, our analysis decouples accuracy from fairness, demonstrating that both models maintain sociodemographic neutrality. We find that the observed errors reflect not bias but diagnostic boundary instability. This highlights a critical need for future research to differentiate between these two distinct types of model failure to build genuinely reliable AI.
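For readers unfamiliar with the statistical machinery named above, the following is a minimal Python sketch of a fairness scan of this general shape: a cumulative link (proportional-odds) model fit on persona attributes, with Benjamini–Hochberg false-discovery-rate correction across coefficients. The DataFrame `df`, its column names, and the integer stage coding are hypothetical illustrations, not the study's data or code.

```python
# Sketch: cumulative link model + FDR correction for sociodemographic bias,
# assuming a hypothetical DataFrame `df` with one row per vignette-persona run.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel
from statsmodels.stats.multitest import multipletests

def fairness_scan(df: pd.DataFrame, persona_cols: list[str]) -> pd.DataFrame:
    """Fit an ordinal (proportional-odds) model of the predicted stage on
    persona attributes, then apply Benjamini-Hochberg FDR across coefficients."""
    endog = df["predicted_stage"]  # hypothetical: stage coded as ordered ints 1-4
    exog = pd.get_dummies(df[persona_cols], drop_first=True, dtype=float)
    res = OrderedModel(endog, exog, distr="logit").fit(method="bfgs", disp=False)
    # Keep only the persona coefficients (OrderedModel also estimates cutpoints).
    pvals = res.pvalues[exog.columns]
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return pd.DataFrame({"coef": res.params[exog.columns],
                         "p_raw": pvals, "p_fdr": p_adj, "biased": reject})

# Hypothetical usage: fairness_scan(df, ["age_group", "ethnicity", "insurance"])
```

No FDR-surviving coefficient on any persona attribute is what "no statistically significant sociodemographic bias" means operationally here.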
Julián Garrido, Susana Sánchez, Edgar Ribeiro João et al.
The Square Kilometre Array Observatory (SKAO) faces unprecedented technological challenges due to the vast scale and complexity of its data. This paper provides an overview of research by the AMIGA group addressing these computing and reproducibility challenges. We present advancements in semantic data models, analysis services integrated into federated infrastructures, and the application of transparency-enhancing techniques to astronomy studies. By showcasing this astronomy work, we demonstrate that achieving reproducible science in the Big Data era is feasible. However, we conclude that for the SKAO to succeed, the development of the SKA Regional Centre Network (SRCNet) must explicitly incorporate these reproducibility requirements into its fundamental architectural design. Embedding these standards is crucial to enable the global community to conduct verifiable and sustainable research within a federated environment.
H. Ehrig, K. Ehrig, Ulrike Prange et al.
O. A. Gromova, I. Yu. Torshin, A. G. Moiseenok
Background. The neurotransmitter adenosine and B-group vitamins have neuroprotective, remyelinating, and anti-neuroinflammatory properties. Although these molecules have been studied for decades, the molecular mechanisms of their synergistic effect on neuroinflammation remain unexplored and unsystematized.
Objective: to establish the molecular mechanisms of the synergism of adenosine, thiamine, niacin, and cyanocobalamin in counteracting the pathology of diabetic polyneuropathy (DPN).
Material and methods. The molecular mechanisms of action of adenosine, thiamine (vitamin B1), niacin (vitamin PP), and cyanocobalamin (vitamin B12) in the pathophysiology of DPN were determined using functional analysis of genomic and proteomic databases.
Results. Analysis of 20,180 annotated proteins of the human proteome identified 504 vitamin-PP-dependent, 22 vitamin-B1-dependent, 24 vitamin-B12-dependent, and 50 adenosine-dependent proteins. Among them were proteins whose activity or levels are important for reducing neuroinflammation, remyelination, neurogenesis, biosynthesis of neuronal adenosine triphosphate, myelin homeostasis, neuroplasticity, neutralization of homocysteine, regeneration of nerve fibers, and maintenance of the microvascular endothelium.
Conclusion. The discovered molecular mechanisms of synergism of the studied molecules are of fundamental importance for understanding the regulation of neuroinflammation and remyelination in order to prevent diabetic polyneuropathy and other neurodegenerative diseases.
Jing Ma, Yuanbo Chen, Yanfang Fu et al.
Cooperative multi-UAV clusters have been widely applied in complex mission scenarios due to their flexible task allocation and efficient real-time coordination capabilities. The Air Command Aircraft (ACA), as the core node within the UAV cluster, is responsible for coordinating and managing various tasks within the cluster. When the ACA undergoes fault recovery, a handover operation is required, during which the ACA must re-authenticate its identity with the UAV cluster and re-establish secure communication. However, traditional, centralized identity authentication and ACA handover mechanisms face security risks such as single points of failure and man-in-the-middle attacks. In highly dynamic network environments, single-chain blockchain architectures also suffer from throughput bottlenecks, leading to reduced handover efficiency and increased authentication latency. To address these challenges, this paper proposes a mathematically structured dual-chain framework that utilizes a distributed ledger to decouple the management of identity and authentication information. We formalize the ACA handover process using cryptographic primitives and accumulator functions and validate its security through BAN logic. Furthermore, we conduct quantitative analyses of key performance metrics, including time complexity and communication overhead. The experimental results demonstrate that the proposed approach ensures secure handover while significantly reducing computational burden. The framework also exhibits strong scalability, making it well-suited for large-scale UAV cluster networks.
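The abstract does not give the scheme's exact construction; below is a minimal, toy-parameter Python sketch of the kind of cryptographic accumulator such handover protocols build set-membership proofs on (an RSA-style accumulator). The modulus, base, and identity-to-prime mapping are illustrative assumptions only, not the paper's primitives.

```python
# Sketch: toy RSA-style accumulator for set-membership proofs, of the kind
# dual-chain handover schemes can build on. Toy parameters -- NOT secure.
from math import prod

N = 10007 * 10009  # toy modulus; a real scheme uses a ~2048-bit RSA modulus
g = 3              # public base in Z_N*

def accumulate(primes: list[int]) -> int:
    """Accumulator value over a set of identities mapped to primes."""
    return pow(g, prod(primes), N)

def witness(primes: list[int], x: int) -> int:
    """Membership witness for x: the accumulator of everything except x."""
    return pow(g, prod(p for p in primes if p != x), N)

def verify(acc: int, wit: int, x: int) -> bool:
    """Check wit^x == acc (mod N), i.e. x is in the accumulated set."""
    return pow(wit, x, N) == acc

ids = [101, 103, 107]                          # UAV identities as toy primes
acc = accumulate(ids)
assert verify(acc, witness(ids, 103), 103)     # member verifies
assert not verify(acc, witness(ids, 103), 109) # non-member fails
```

The appeal for handover is that a re-authenticating ACA can prove membership against a single on-chain accumulator value instead of a full identity list.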
Christian Tamantini, Kevin Patrice Langlois et al.
Ajantha Vijayakumar, Joseph Abraham Sundar Koilraj, Muthaiah Rajappa
Oral dosage forms are the most widely employed method of drug delivery in therapeutic treatments. However, the presence of visual defects in blister packages can adversely affect the drug's bioavailability and therapeutic efficacy, potentially compromising treatment outcomes. Consequently, detecting tablet defects after blister packaging in real time is a critical challenge in the pharmaceutical industry. Additionally, factors such as blister reflections and limited dataset size hinder a deep learning model's ability to identify defects accurately. To address these challenges, the PharmaNet deep model is developed on a convolutional neural network (CNN) architecture. It incorporates defect-guided dynamic feature fusion (DGDFF), in which the fusion process is dynamically guided by potential defect regions so the model focuses more efficiently on relevant (defect-area) features, and an adaptive deep chain (ADC) comprising an occlusion pattern generator (OPG) and a residual recursive feature reconstructor (R2FR). The OPG creates multiple views of potential defect regions by systematically dividing features into blocks and creating layered occlusions, while the R2FR uses gates with ELU activation and residual connections to reconstruct detailed features from these occluded sequences, ultimately enhancing the model's ability to detect subtle defects. The model culminates in an uncertainty-aware detection head that improves defect prediction reliability by incorporating uncertainty estimates alongside traditional class probabilities and bounding-box predictions. This provides a more informed and interpretable decision-making process for real-time pharmaceutical quality control. Empirical evaluation of the proposed model demonstrates state-of-the-art performance, with 99.4% mAP on the PharmaBlister dataset and 97.2% mAP on MVTec AD, with minimal predictive uncertainty, validating its efficacy in pharmaceutical quality control applications.
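The paper's exact OPG/R2FR layer definitions are not given in the abstract; as orientation, here is a minimal PyTorch sketch of a gated residual reconstruction block with ELU activations in the spirit described, applied to a toy block occlusion. All layer shapes and the occlusion pattern are assumptions.

```python
# Sketch: a gated residual reconstruction block with ELU activations, in the
# spirit of the abstract's R2FR (exact layers are not specified in the paper).
import torch
import torch.nn as nn

class GatedResidualBlock(nn.Module):
    """Reconstruct features from (possibly occluded) inputs: an ELU-activated
    transform modulated by a sigmoid gate, added back through a residual path."""
    def __init__(self, channels: int):
        super().__init__()
        self.transform = nn.Conv2d(channels, channels, 3, padding=1)
        self.gate = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.ELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.act(self.transform(x))   # candidate reconstruction
        g = torch.sigmoid(self.gate(x))   # how much of it to admit
        return x + g * h                  # residual connection

# Toy occlusion in the OPG style: zero out one block of the feature map.
feats = torch.randn(1, 64, 32, 32)
occluded = feats.clone()
occluded[..., 8:16, 8:16] = 0.0           # one occluded block
restored = GatedResidualBlock(64)(occluded)
```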
Dwi Ratna Puspita Sari, Sirli Fahriah, Kurnianingsih et al.
Gold is one of the most favored investment instruments due to its stability and its ability to preserve value against inflation. However, its price movements are volatile and influenced by various global economic factors, currency exchange rates, and geopolitical conditions, making gold price forecasting a significant challenge. This study aims to develop a gold price forecasting system using the Long Short-Term Memory (LSTM) algorithm, a variant of the Recurrent Neural Network (RNN) that excels in processing time-series data. The dataset consists of historical daily gold buying and selling prices from 2015 to 2025, collected from Yahoo Finance, Logam Mulia, and the official website of Bank Indonesia. The modeling process follows the CRISP-DM methodology, which includes business understanding, data preparation and exploration, modeling, and evaluation stages. Time Series Cross Validation (TSCV) is used to validate the model. LSTM performance is compared with other models such as GRU, CNN-1D, and Simple RNN to identify the best-performing architecture. Evaluation results indicate that LSTM achieved the highest performance with an R² score of 0.99 for selling prices and 0.98 for buying prices on the final test dataset. The system is deployed online, making it accessible in real-time. This research is expected to assist investors, financial analysts, and the general public in making smarter investment decisions based on valid historical data and advanced forecasting technology.
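As a concrete illustration of the described pipeline, here is a minimal Python sketch of an LSTM forecaster validated with time-series cross-validation. The synthetic `prices` array, window length, and network size are stand-ins, not the study's data or tuned architecture.

```python
# Sketch: LSTM forecaster with time-series cross-validation, assuming a 1-D
# numpy array `prices` of daily gold prices (synthetic stand-in below).
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from tensorflow import keras

def make_windows(series: np.ndarray, lookback: int = 30):
    """Slice a series into (lookback-day window, next-day target) pairs."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y  # shape (samples, timesteps, 1 feature)

def build_model(lookback: int = 30) -> keras.Model:
    model = keras.Sequential([
        keras.layers.Input(shape=(lookback, 1)),
        keras.layers.LSTM(64),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

prices = np.cumsum(np.random.randn(1000)) + 100.0  # stand-in for real prices
X, y = make_windows(prices)
for fold, (train_idx, test_idx) in enumerate(TimeSeriesSplit(n_splits=5).split(X)):
    model = build_model()
    model.fit(X[train_idx], y[train_idx], epochs=5, verbose=0)
    mse = model.evaluate(X[test_idx], y[test_idx], verbose=0)
    print(f"fold {fold}: test MSE = {mse:.3f}")
```

TimeSeriesSplit grows the training window forward in time on each fold, so the model is never evaluated on data that precedes its training data.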
E. Iraola, M. García-Lorenzo, F. Lordan-Gomis et al.
Digital twins are transforming the way we monitor, analyze, and control physical systems, but designing architectures that balance real-time responsiveness with heavy computational demands remains a challenge. Cloud-based solutions often struggle with latency and resource constraints, while edge-based approaches lack the processing power for complex simulations and data-driven optimizations. To address this problem, we propose the High-Precision High-Performance Computer-enabled Digital Twin (HP2C-DT) reference architecture, which integrates High-Performance Computing (HPC) into the computing continuum. Unlike traditional setups that use HPC only for offline simulations, HP2C-DT makes it an active part of digital twin workflows, dynamically assigning tasks to edge, cloud, or HPC resources based on urgency and computational needs. Furthermore, to bridge the gap between theory and practice, we introduce the HP2C-DT framework, a working implementation that uses COMPSs for seamless workload distribution across diverse infrastructures. We test it in a power grid use case, showing how it reduces communication bandwidth by an order of magnitude through edge-side data aggregation, improves response times by up to 2x via dynamic offloading, and maintains near-ideal strong scaling for compute-intensive workflows across a practical range of resources. These results demonstrate how an HPC-driven approach can push digital twins beyond their current limitations, making them smarter, faster, and more capable of handling real-world complexity.
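The abstract's key idea is dynamic assignment of tasks to edge, cloud, or HPC resources based on urgency and computational need; a minimal plain-Python sketch of such a dispatch policy follows. The tier names and thresholds are illustrative assumptions, not the HP2C-DT scheduler (which builds on COMPSs for workload distribution).

```python
# Sketch: urgency/compute-aware tier selection in the spirit of HP2C-DT's
# dynamic offloading. Thresholds and tiers are illustrative assumptions,
# not the actual COMPSs-based implementation.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_s: float   # how soon a result is needed
    core_hours: float   # estimated computational demand

def assign_tier(task: Task) -> str:
    if task.deadline_s < 0.1:      # hard real-time: stay at the edge
        return "edge"
    if task.core_hours > 100.0:    # heavy simulation/optimization: HPC
        return "hpc"
    return "cloud"                 # everything in between

for t in [Task("sensor_aggregation", 0.01, 0.001),
          Task("state_estimation", 5.0, 2.0),
          Task("grid_contingency_sim", 3600.0, 5000.0)]:
    print(f"{t.name} -> {assign_tier(t)}")
```

This mirrors the power-grid results above: aggregation stays edge-side (cutting bandwidth), latency-sensitive work is offloaded dynamically, and compute-heavy simulation scales on HPC.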
Muhammad Hussain
This paper applies a systematic methodological approach to review the evolution of YOLO variants. Each variant is dissected by examining its internal architectural composition, providing a thorough understanding of its structural components. The review then highlights the key architectural innovations introduced in each variant, shedding light on their incremental refinements, and includes benchmarked performance metrics, offering a quantitative measure of each variant's capabilities. The paper further presents the performance of YOLO variants across a diverse range of domains, illustrating their real-world impact. This structured approach ensures a comprehensive examination of YOLO's journey, methodically communicating its internal advancements and benchmarked performance before delving into domain applications. It is envisioned that the incorporation of concepts such as federated learning can introduce a collaborative training paradigm, where YOLO models benefit from training across multiple edge devices, enhancing privacy, adaptability, and generalisation.
Nurezayana Zainal, Mohanavali Sithambranathan, Umar Farooq Khattak et al.
Because of its versatility and ability to work with difficult materials, Electrical Discharge Machining (EDM) has become an essential tool in many different industries. It can produce precise shapes and intricate details, and has transformed fabrication processes in fields including aerospace and electronics, medical implants and surgical instruments, and the shaping of small components. Its capacity to machine undercuts and deep cavities with little material removal makes it ideal for producing complex geometries that would be challenging or impossible to achieve with conventional machining techniques. Several attempts have been made to solve the optimization problem involved in the EDM process. This paper focuses on optimizing the EDM process using three metaheuristic algorithms: Glowworm Swarm Optimization (GSO), Grey Wolf Optimizer (GWO), and Whale Optimization Algorithm (WOA). The results show that the GWO algorithm outperformed the GSO and WOA algorithms in solving the EDM optimization problem, achieving a minimum surface roughness of 1.7593 µm.
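For orientation, here is a minimal Python sketch of the core Grey Wolf Optimizer loop the paper applies. The surrogate roughness objective and parameter bounds are invented stand-ins; the paper's actual EDM objective function is not given in the abstract.

```python
# Sketch: core Grey Wolf Optimizer loop minimizing a surrogate objective.
# The quadratic "roughness" model below is a stand-in, not the paper's model.
import numpy as np

def gwo(objective, bounds, n_wolves=20, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]  # three best wolves
        a = 2.0 * (1.0 - t / n_iter)                     # decreases 2 -> 0
        new_X = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            A = a * (2.0 * rng.random(X.shape) - 1.0)    # A = 2a*r1 - a
            C = 2.0 * rng.random(X.shape)                # C = 2*r2
            D = np.abs(C * leader - X)
            new_X += leader - A * D      # candidate pulled toward this leader
        X = np.clip(new_X / 3.0, lo, hi) # average of the three pulls
    return alpha, objective(alpha)

# Toy objective over (current, pulse-on time, pulse-off time):
rough = lambda x: (x[0] - 3) ** 2 + (x[1] - 50) ** 2 / 100 + (x[2] - 20) ** 2 / 50
best, val = gwo(rough, (np.array([1.0, 10.0, 5.0]), np.array([10.0, 100.0, 60.0])))
```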
Shuo Han, Bo Mo, Junwei Xu et al.
Infrared images of sensitive targets are difficult to obtain and cannot meet the design and training needs of target detection and tracking algorithms for mobile platforms such as aircraft. This paper proposes an image translation algorithm, TransImg, which translates visible-light images into the infrared domain to enrich such datasets. First, the algorithm uses a generator built from a deep residual-connected encoder and a region-perception feature fusion module to strengthen feature learning, avoiding issues such as insufficient detail in the generated infrared images. A multi-scale discriminator and a composite loss function were then designed to further improve the translation quality. Finally, an automatic mixed-precision training strategy was applied to the overall architecture to accelerate training and infrared image generation. Experiments show that TransImg achieves good accuracy: the infrared images generated from visible-light inputs have richer texture detail, faster generation speed, and lower video-memory consumption than mainstream traditional algorithms, and the generated images meet the design and training requirements of target detection and tracking algorithms for mobile platforms such as aircraft.
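The automatic mixed-precision strategy the abstract mentions is standard in PyTorch; here is a minimal sketch of one AMP training step. The tiny stand-in generator and single L1 loss term are placeholders for TransImg's actual generator and composite loss.

```python
# Sketch: one PyTorch automatic mixed-precision (AMP) training step of the
# kind the abstract's strategy refers to. Generator and loss are placeholders.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
generator = torch.nn.Sequential(                  # stand-in for TransImg's G
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1)).to(device)
opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

visible = torch.randn(4, 3, 64, 64, device=device)   # fake visible batch
infrared = torch.randn(4, 1, 64, 64, device=device)  # fake IR targets

opt.zero_grad()
with torch.autocast(device_type=device, enabled=(device == "cuda")):
    fake_ir = generator(visible)                      # forward in low precision
    loss = torch.nn.functional.l1_loss(fake_ir, infrared)  # one loss term
scaler.scale(loss).backward()   # scale loss to avoid fp16 gradient underflow
scaler.step(opt)                # unscale gradients, then optimizer step
scaler.update()                 # adapt the loss scale for the next step
```

Running forward/backward in reduced precision is what buys the faster generation and lower video-memory consumption the abstract reports.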
Brian Wright, Peter Alonzi, Ali Rivera
The definition of Data Science is a hotly debated topic. For many, the definition is a simple shortcut to Artificial Intelligence or Machine Learning. However, there is far more depth and nuance to the field of Data Science than a simple shortcut can provide. The School of Data Science at the University of Virginia has developed a novel model for the definition of Data Science. This model is based on identifying a unified understanding of the data work done across all areas of Data Science. It represents a generational leap forward in how we understand and teach Data Science. In this paper we will present the core features of the model and explain how it unifies various concepts going far beyond the analytics component of AI. From this foundation we will present our Undergraduate Major curriculum in Data Science and demonstrate how it prepares students to be well-rounded Data Science team members and leaders. The paper will conclude with an in-depth overview of the Foundations of Data Science course designed to introduce students to the field while also implementing proven STEM oriented pedagogical methods. These include, for example, specifications grading, active learning lectures, guest lectures from industry experts and weekly gamification labs.
Vito Antonio Cimmelli
We propose a thermodynamic model describing the thermoelastic behavior of composition-graded materials. The compatibility of the model with the second law of thermodynamics is explored by applying a generalized Coleman–Noll procedure. For the material at hand, the specific entropy and the stress tensor may depend on the gradients of the unknown fields, resulting in a very general theory. We calculate the speeds of coupled first- and second-sound pulses propagating through either nonequilibrium or equilibrium states. We characterize several different types of perturbations depending on the values of the material coefficients. Under the assumption that deformation of the body can produce changes in its stoichiometry, locally altering the material composition, we point out the possibility of propagation of pure stoichiometric waves. Thermoelastic perturbations generated by the coupling of stoichiometric and thermal effects are analyzed as well.
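The abstract does not reproduce the model's equations; for orientation, here is a standard sketch (in LaTeX) of how a finite second-sound speed arises in the simplest hyperbolic setting, Maxwell–Cattaneo heat conduction, which graded-material theories of this kind generalize. The symbols τ, κ, ρ, c_v are the usual relaxation time, conductivity, density, and specific heat, not the paper's notation.

```latex
% Maxwell-Cattaneo transport law with relaxation time \tau:
\tau\,\partial_t \mathbf{q} + \mathbf{q} = -\kappa\,\nabla T .
% Combined with the energy balance \rho c_v\,\partial_t T = -\nabla\!\cdot\!\mathbf{q},
% it yields the hyperbolic (telegraph) heat equation
\tau\,\partial_t^2 T + \partial_t T = \frac{\kappa}{\rho c_v}\,\nabla^2 T ,
% whose thermal pulses propagate at the finite second-sound speed
U_{II} = \sqrt{\frac{\kappa}{\rho\, c_v\, \tau}} .
```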
Page 11 of 1128926