Results for "Computer Science"

Showing 20 of ~22,604,484 results · from CrossRef, DOAJ, Semantic Scholar, arXiv

DOAJ Open Access 2026
A cascaded classification approach using transfer learning and feature engineering for improved breast cancer classification

Chokri Ferkous, Ouissal Fadel, Abderrahmane Kefali et al.

The primary objective of this study is to design a cascaded classification framework that integrates deep-learning representations with handcrafted and clinical features to enhance the reliability and accuracy of breast cancer detection in mammographic screening. A multi-source mammography dataset comprising four databases was used to ensure diversity and reduce bias. The proposed system operates in two stages. In the first stage, transfer learning models (VGG16, ResNet50, and EfficientNet_B0) were evaluated using ROC-AUC, PR-AUC, calibration curves, and bootstrap confidence intervals. EfficientNet_B0, which achieved the best balance between discrimination and calibration, was selected as the feature extractor. In the second stage, the malignancy probability was combined with Haralick texture features, patient age, and breast density, and classified using SVM, Random Forest, MLP, Decision Tree, and Logistic Regression. Model robustness was verified through multi-run experiments (five random seeds) and subgroup analyses by age and density. Among the CNN models, EfficientNet_B0 yielded the best performance (accuracy = 0.9438, ROC-AUC = 0.944, PR-AUC = 0.960). In the second stage, although Random Forest achieved the highest accuracy (0.9556 ± 0.002), SVM obtained the highest mean ROC-AUC (0.980 ± 0.001) with stable accuracy (0.9539 ± 0.001) and the most significant p-values, indicating superior robustness and generalization. The proposed cascaded framework effectively combines deep, handcrafted, and clinical features to improve mammogram classification performance. The SVM-based model demonstrates strong calibration, stability, and subgroup consistency, highlighting its potential for deployment in computer-aided mammography screening systems that assist radiologists in early breast cancer detection.
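In outline, the second stage is feature concatenation followed by a conventional classifier. The following is a minimal sketch with synthetic stand-ins: random data in place of mammograms, and plain logistic regression in place of the paper's CNN and SVM/Random Forest stages.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1 (stand-in): a per-image malignancy probability.
# Simulated here; in the paper it comes from EfficientNet_B0.
n = 200
labels = rng.integers(0, 2, n)                      # 0 = benign, 1 = malignant
cnn_prob = np.clip(labels * 0.6 + rng.normal(0.2, 0.15, n), 0, 1)

# --- Hypothetical stand-ins for Haralick texture, patient age, breast density.
texture = labels * 0.5 + rng.normal(0, 0.3, n)
age = rng.normal(55, 10, n)
density = rng.integers(1, 5, n)

# --- Stage 2: concatenate all features and fit a simple linear classifier
# by gradient descent on the logistic loss.
X = np.column_stack([np.ones(n), cnn_prob, texture, (age - 55) / 10, density / 4])
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - labels) / n

pred = (1 / (1 + np.exp(-X @ w)) > 0.5).astype(int)
acc = (pred == labels).mean()
```

The point of the cascade is that the stage-2 classifier sees both the deep-model probability and the handcrafted/clinical features in one vector.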

Electronic computers. Computer science
DOAJ Open Access 2026
Measuring perceived physical fidelity in virtual reality and virtual environments

Bree McEwan, Clarice Wu, Harris Yang et al.

As communication scholars become increasingly interested in studying virtual reality (VR) as a communication channel, it will be important to establish useful measures related to perceptual variables in virtual environments. One such variable is physical fidelity: the degree to which virtual environments replicate or resemble places in the physical world. Often in computer science and other fields interested in VR, this variable is measured as reaction time within the system. However, for social scientific VR scholars, it can be important to understand how much the user perceives the environment to have physical fidelity. In the existing literature, when physical fidelity is measured as a perceptual variable, it is often conflated with measures of immersion or spatial presence. This paper presents a confirmatory factor analysis approach to establishing a well-fitting scale of perceptual physical fidelity across three separate samples, as well as delineating the conceptual and operational differences between physical fidelity, immersion, and spatial presence.

Electronic computers. Computer science
DOAJ Open Access 2026
Gaze-adaptive neural pre-correction for mitigating spatially varying optical aberrations in near-eye displays

Yi Jiang, Ye Bi, Yinng Li et al.

Near-eye display (NED) technology constitutes a fundamental component of head-mounted display (HMD) systems. The compact form factor required by HMDs imposes stringent constraints on optical design, often resulting in pronounced wavefront aberrations that significantly degrade visual fidelity. In addition, natural eye movements dynamically induce varying blur that further compromises image quality. To mitigate these challenges, a gaze-contingent neural network framework has been developed to compensate for aberrations within the foveal region. The network is trained in an end-to-end manner to minimize the discrepancy between the optically degraded system output and the corresponding ground truth image. A forward imaging model is employed, in which the network output is convolved with a spatially varying point spread function (PSF) to accurately simulate the degradation introduced by the optical system. To accommodate dynamic changes in gaze direction, a foveated attention-guided module is incorporated to adaptively modulate the pre-correction process, enabling localized compensation centered on the fovea. Additionally, an end-to-end trainable architecture has been designed to integrate gaze-informed blur priors. Both simulation and experimental validations confirm that the proposed method substantially reduces gaze-dependent aberrations and enhances retinal image clarity within the foveal region, while maintaining high computational efficiency. The presented framework offers a practical and scalable solution for improving visual performance in aberration-sensitive NED systems.
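The forward imaging model, and the idea of pre-correcting the input so that the optics' blur cancels against it, can be illustrated with a single shift-invariant Gaussian PSF and a Wiener-style inverse. The paper's PSF is spatially varying and its pre-correction is learned, so this is only a frequency-domain sketch with illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((32, 32))            # ground-truth image

# Gaussian PSF as a stand-in for the display optics.
ax = np.arange(9) - 4
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2))
psf /= psf.sum()

# Embed and center the kernel for circular convolution via the FFT.
kp = np.zeros_like(target)
kp[:9, :9] = psf
kp = np.roll(kp, (-4, -4), axis=(0, 1))
K = np.fft.fft2(kp)

def blur(img):
    # Forward model: the optics convolve the displayed image with the PSF.
    return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

# Wiener-style pre-correction: invert the optics where well conditioned.
lam = 1e-3
pre = np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(target)
                           / (np.abs(K) ** 2 + lam)))

err_plain = np.abs(blur(target) - target).mean()   # no pre-correction
err_pre = np.abs(blur(pre) - target).mean()        # with pre-correction
```

Displaying the pre-corrected image and letting the optics blur it reproduces the target more faithfully than displaying the target directly, which is the objective the paper's network is trained to minimize.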

Computer engineering. Computer hardware, Electronic computers. Computer science
DOAJ Open Access 2025
Automatic Evaluation Algorithms for Radio Tomography Imaging Methods

Krzysztof Strzecha, Grzegorz Rybak

The radio tomography imaging (RTI) method is very similar to X-ray tomography, but it operates in the radio frequency band without exposing the human body to harmful tissue-penetrating radiation. It can be used to monitor the number of people and their locations in buildings such as offices or hospitals. RTI can be useful in emergencies, rescue operations, and security breaches. The novelty of this paper includes the flexible architecture of an evaluation platform for RTI image reconstruction algorithms, as well as an automated evaluation process. The concept of the developed platform assumes the use of a distributed architecture based on microservices. Numerous advantages of the proposed architecture are pointed out. The presented approach ensures flexibility for further development work thanks to the system’s high degree of granularity and modularity.

Chemical technology
DOAJ Open Access 2025
Empirical Evaluation of Invariances in Deep Vision Models

Konstantinos Keremis, Eleni Vrochidou, George A. Papakostas

The ability of deep learning models to maintain consistent performance under image transformations, termed invariances, is critical for reliable deployment across diverse computer vision applications. This study presents a comprehensive empirical evaluation of modern convolutional neural networks (CNNs) and vision transformers (ViTs) with respect to four fundamental types of image invariance: blur, noise, rotation, and scale. We analyze a curated selection of thirty models across three common vision tasks (object localization, recognition, and semantic segmentation), using benchmark datasets including COCO, ImageNet, and a custom segmentation dataset. Our experimental protocol introduces controlled perturbations to test model robustness and employs task-specific metrics such as mean Intersection over Union (mIoU) and classification accuracy (Acc) to quantify performance degradation. Results indicate that while ViTs generally outperform CNNs under blur and noise corruption in recognition tasks, both model families exhibit significant vulnerabilities to rotation and extreme scale transformations. Notably, segmentation models demonstrate higher resilience to geometric variations, with SegFormer and Mask2Former emerging as the most robust architectures. These findings challenge prevailing assumptions regarding model robustness and provide actionable insights for designing vision systems capable of withstanding real-world input variability.
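A robustness protocol of this kind, perturbing inputs at increasing severity and re-measuring a task metric, can be sketched with synthetic data and a toy nearest-class-mean classifier (all values illustrative, not the paper's models or datasets):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "images": two classes of 8x8 patches with different means.
n = 100
X = np.concatenate([rng.normal(0.0, 0.5, (n, 8, 8)),
                    rng.normal(1.0, 0.5, (n, 8, 8))])
y = np.array([0] * n + [1] * n)

# Toy "model": nearest class-mean, fit on the clean data.
mu = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(imgs):
    d = ((imgs[:, None] - mu[None]) ** 2).sum(axis=(2, 3))
    return (d.argmin(axis=1) == y).mean()

# Invariance protocol: re-evaluate under increasing corruption severity.
degradation = {s: accuracy(X + rng.normal(0, s, X.shape))
               for s in (0.0, 2.0, 4.0, 8.0)}
```

Plotting accuracy against severity for each transformation family is what yields the degradation curves the study compares across architectures.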

Photography, Computer applications to medicine. Medical informatics
DOAJ Open Access 2025
Ground calibration tests of the laser altimeter (LIDAR) for MMX mission

Hiroki Senshu, Takahide Mizuno, Toru Nakura et al.

The spacecraft for the Japanese Martian Moon eXploration (MMX) mission is equipped with a LIDAR laser altimeter. The slant range continuously measured by the LIDAR is used for the correction of the local topography and the orbit and attitude of the spacecraft. The channel and gain setting of the LIDAR in the proximity phase will be automatically controlled based on the received energy. This paper reports the results of ground-based calibration tests. The calibration function is obtained for each channel and gain setting. Then, the performance of the auto gain control function is tested by changing the received energy gradually. This test demonstrates that the automatic gain control system of the LIDAR works well and that the obtained slant range and received energy change smoothly.

Geography. Anthropology. Recreation, Geology
arXiv Open Access 2025
Performance analysis of mdx II: A next-generation cloud platform for cross-disciplinary data science research

Keichi Takahashi, Tomonori Hayami, Yu Mukaizono et al.

mdx II is an Infrastructure-as-a-Service (IaaS) cloud platform designed to accelerate data science research and foster cross-disciplinary collaborations among universities and research institutions in Japan. Unlike traditional high-performance computing systems, mdx II leverages OpenStack to provide customizable and isolated computing environments consisting of virtual machines, virtual networks, and advanced storage. This paper presents a comprehensive performance evaluation of mdx II, including a comparison to Amazon Web Services (AWS). We evaluated the performance of a 16-vCPU VM from multiple aspects including floating-point computing performance, memory throughput, network throughput, file system and object storage performance, and real-world application performance. Compared to an AWS 16-vCPU instance, the results indicated that mdx II outperforms AWS in many aspects and demonstrated that mdx II holds significant promise for high-performance data analytics (HPDA) workloads. We also evaluated the virtualization overhead using a 224-vCPU VM occupying an entire host. The results suggested that the virtualization overhead is minimal for compute-intensive benchmarks, while memory-intensive benchmarks experienced larger overheads. These findings are expected to help users of mdx II to obtain high performance for their data science workloads and offer insights to the designers of future data-centric cloud platforms.

cs.DC
arXiv Open Access 2025
Distributed and heterogeneous tensor-vector contraction algorithms for high performance computing

Pedro J. Martinez-Ferrer, Albert-Jan Yzelman, Vicenç Beltran

The tensor-vector contraction (TVC) is the most memory-bound operation of its class and a core component of the higher-order power method (HOPM). This paper brings distributed-memory parallelization to a native TVC algorithm for dense tensors that overall remains oblivious to contraction mode, tensor splitting and tensor order. Similarly, we propose a novel distributed HOPM, namely dHOPM3, that can save up to one order of magnitude of streamed memory and is about twice as costly in terms of data movement as a distributed TVC operation (dTVC) when using task-based parallelization. The numerical experiments carried out in this work on three different architectures featuring multi-core processors and accelerators confirm that the performance of dTVC and dHOPM3 remains relatively close to the peak system memory bandwidth (50%-80%, depending on the architecture) and on par with STREAM benchmark figures. In strong-scaling scenarios, our native multi-core implementations of these two algorithms can achieve similar and sometimes even greater performance figures than those based upon state-of-the-art CUDA batched kernels. Finally, we demonstrate that both computation and communication can benefit from mixed-precision arithmetic even in cases where the hardware does not support low-precision data types natively.
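In shared-memory NumPy, a mode-oblivious TVC reduces to a single tensordot along the chosen mode; the following is a minimal sketch of the operation itself, nothing like the paper's distributed, task-parallel implementation:

```python
import numpy as np

def tvc(T, v, mode):
    """Contract tensor T with vector v along `mode`; works for any order."""
    return np.tensordot(T, v, axes=([mode], [0]))

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 5, 6))        # dense order-3 tensor
v = rng.normal(size=5)

Y = tvc(T, v, mode=1)                 # contract mode 1 -> shape (4, 6)

# HOPM builds on repeated TVCs: contracting the remaining modes with
# unit vectors reduces the tensor step by step, as in the power iteration.
u = rng.normal(size=6)
u /= np.linalg.norm(u)
w = tvc(Y, u, mode=1)                 # shape (4,)
```

Because the mode is a parameter rather than baked into the loop nest, the same routine serves every contraction mode, which is the "oblivious" property the paper preserves under distribution.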

DOAJ Open Access 2024
scAnnoX: an R package integrating multiple public tools for single-cell annotation

Xiaoqian Huang, Ruiqi Liu, Shiwei Yang et al.

Background Single-cell annotation plays a crucial role in the analysis of single-cell genomics data. Despite the existence of numerous single-cell annotation algorithms, a comprehensive tool for integrating and comparing them has been lacking. Methods This study investigated a broad set of widely adopted single-cell annotation algorithms. Ten single-cell annotation algorithms were selected based on the classification of either reference dataset-dependent or marker gene-dependent approaches. These algorithms included SingleR, Seurat, sciBet, scmap, CHETAH, scSorter, sc.type, cellID, scCATCH, and SCINA. Building upon these algorithms, we developed an R package named scAnnoX for the integration and comparative analysis of single-cell annotation algorithms. Results The development of the scAnnoX software package provides a cohesive framework for annotating cells in scRNA-seq data, enabling researchers to more efficiently perform comparative analyses among the cell type annotations contained in scRNA-seq datasets. The integrated environment of scAnnoX streamlines the testing, evaluation, and comparison processes among various algorithms. Among the ten annotation tools evaluated, SingleR, Seurat, sciBet, and scSorter emerged as top-performing algorithms in terms of prediction accuracy, with SingleR and sciBet demonstrating particularly superior performance, offering guidance for users. Interested parties can access the scAnnoX package at https://github.com/XQ-hub/scAnnoX.

Medicine, Biology (General)
DOAJ Open Access 2024
Learning and Evolution: Factors Influencing an Effective Combination

Paolo Pagliuca

(1) Background: The mutual relationship between evolution and learning is a controversial topic among the artificial intelligence and neuro-evolution communities. After more than three decades, there is still no common agreement on the matter. (2) Methods: In this paper, the author investigates whether combining learning and evolution permits finding better solutions than those discovered by evolution alone. In more detail, the author presents a series of empirical studies that highlight the specific conditions determining the success of such a combination. Results are obtained in five qualitatively different domains: (i) the 5-bit parity task, (ii) the double-pole balancing problem, (iii) the Rastrigin, Rosenbrock and Sphere optimization functions, (iv) a robot foraging task and (v) a social foraging problem. The first three tasks represent benchmark problems in the field of evolutionary computation. (3) Results and discussion: The outcomes indicate that the effect of learning on evolution depends on the nature of the problem. Specifically, when the problem involves limited or no agent–environment interaction, learning is beneficial for evolution, especially with the introduction of noise during the learning and selection processes. Conversely, when agents are embodied and actively interact with the environment, learning does not provide advantages, and the addition of noise is detrimental. Finally, the absence of stochasticity in the experienced conditions is paramount for the effectiveness of the combination. Furthermore, the length of the learning process must be fine-tuned to the considered task.
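One way to combine the two processes, lifetime learning applied to each offspring before selection acts on it, can be sketched on the Sphere benchmark mentioned above. This is a simplified Lamarckian variant with illustrative parameters, not the author's exact setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    # Sphere benchmark: minimize the sum of squares.
    return (x ** 2).sum()

def lifetime_learning(x, steps=5, lr=0.1):
    # Simple stochastic hill climbing as a stand-in for learning during life.
    for _ in range(steps):
        trial = x + rng.normal(0, lr, x.shape)
        if sphere(trial) < sphere(x):
            x = trial
    return x

# (mu, lambda)-style evolution with learning applied before selection.
pop = rng.normal(0, 2, (20, 5))
for gen in range(30):
    offspring = pop[rng.integers(0, 20, 60)] + rng.normal(0, 0.3, (60, 5))
    learned = np.array([lifetime_learning(ind) for ind in offspring])
    fitness = np.array([sphere(ind) for ind in learned])
    pop = learned[np.argsort(fitness)[:20]]   # keep the best after learning

best = min(sphere(ind) for ind in pop)
```

A Baldwinian variant would instead evaluate the learned phenotype but inherit the unlearned genotype; comparing the two against evolution alone is exactly the kind of contrast the paper's empirical studies make.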

Electronic computers. Computer science
DOAJ Open Access 2024
Evaluating the direct effect of an increase in the Value Added Tax on business sales using the Delphi and NAHP+NSC methods

Guido Macas-Acosta, Jesús Estupiñán Ricardo, Arnaldo Vergara-Romero et al.

This article applies the Delphi method together with the neutrosophic analytic hierarchy process (NAHP) and neutrosophic social choice theory (NSC). The combined NAHP+NSC methodology is used to investigate the potential direct effects of a rise in the Value Added Tax (VAT) on company sales. The primary question is how a change in VAT may affect corporate activity, a simple question despite its weighty ramifications. Despite a large body of economic research, the literature has not yet examined in depth how these particular techniques might provide an understanding of possible company responses to tax increases. The study thus not only closes a significant research gap but also applies advanced approaches to examine the impact. Combining the Delphi technique for expert viewpoints with NAHP+NSC for deeper analysis yields findings that more conventional methods would not have reached. The results imply that, depending on a number of variables, including industry type and company size, a rise in VAT might have varying impacts on business sales. In addition to offering a fresh viewpoint on tax policy, this research provides helpful tools for firms and policymakers looking to adjust to possible changes in the tax environment. In the end, the study broadens our theoretical knowledge and offers practical guidance for navigating the intricate realm of tax laws and their effects on the economy.

Mathematics, Electronic computers. Computer science
DOAJ Open Access 2024
Improving SSVEP-BCI Performance Through Repetitive Anodal tDCS-Based Neuromodulation: Insights From Fractal EEG and Brain Functional Connectivity

Shangen Zhang, Hongyan Cui, Yong Li et al.

This study presents a comprehensive investigation of the effectiveness of repetitive transcranial direct current stimulation (tDCS)-based neuromodulation in augmenting steady-state visual evoked potential (SSVEP) brain-computer interfaces (BCIs), alongside exploring pertinent electroencephalography (EEG) biomarkers for assessing brain states and evaluating tDCS efficacy. EEG data were collected across three distinct task modes (eyes open, eyes closed, and SSVEP stimulation) and two neuromodulation patterns (sham-tDCS and anodal-tDCS). Brain arousal and brain functional connectivity were measured by extracting features of fractal EEG and information flow gain, respectively. Anodal-tDCS led to diminished offsets and enhanced information flow gains, indicating improvements in both brain arousal and brain information transmission capacity. Additionally, anodal-tDCS markedly enhanced SSVEP-BCI performance, as evidenced by increased amplitudes and accuracies, whereas sham-tDCS exhibited lesser efficacy. This study provides valuable insights into the application of neuromodulation methods for bolstering BCI performance and validates two effective electrophysiological markers for multifaceted characterization of brain states.

Medical technology, Therapeutics. Pharmacology
arXiv Open Access 2024
AMIDER: A Multidisciplinary Research Database and Its Application to Promote Open Science

Masayoshi Kozai, Yoshimasa Tanaka, Shuji Abe et al.

AMIDER (Advanced Multidisciplinary Integrated-Database for Exploring new Research) is a newly developed research data catalog that demonstrates an advanced database application. AMIDER is characterized as a multidisciplinary database equipped with a user-friendly web application. Its catalog view displays diverse research data at once, beyond the limitations of each individual discipline. Useful functions such as selectable data downloads, data format conversion, and display of visual data information are also implemented. Further advanced functions, such as visualization of mutual relationships among datasets, are implemented as a preliminary trial. These characteristics and functions are expected to enhance the accessibility of individual research data, even for non-expert users, and to support collaborations among diverse scientific fields beyond individual disciplines. Multidisciplinary data management is another of AMIDER's unique aspects: various metadata schemas can be mapped to a uniform metadata table, and standardized, self-describing data formats are adopted. The AMIDER website (https://amider.rois.ac.jp/) was launched in April 2024. As of July 2024, over 15,000 metadata records in various research fields of polar science have been registered in the database, and approximately 500 visitors view the website per day on average. Expansion of the database to further scientific fields beyond polar science is planned, and advanced extensions, such as applying Natural Language Processing (NLP) to the metadata, have also been considered.

cs.DB
DOAJ Open Access 2023
On the correspondence between the transcriptomic response of a compound and its effects on its targets

Chloe Engler Hart, Daniel Ence, David Healey et al.

Better understanding the transcriptomic response produced by a compound perturbing its targets can shed light on the underlying biological processes regulated by the compound. However, establishing the relationship between the induced transcriptomic response and the target of a compound is non-trivial, partly because targets are rarely differentially expressed. Therefore, connecting both modalities requires orthogonal information (e.g., pathway or functional information). Here, we present a comprehensive study aimed at exploring this relationship by leveraging thousands of transcriptomic experiments and target data for over 2000 compounds. Firstly, we confirm that compound-target information does not correlate as expected with the transcriptomic signatures induced by a compound. However, we reveal how the concordance between both modalities increases by connecting pathway and target information. Additionally, we investigate whether compounds that target the same proteins induce a similar transcriptomic response and conversely, whether compounds with similar transcriptomic responses share the same target proteins. While our findings suggest that this is generally not the case, we did observe that compounds with similar transcriptomic profiles are more likely to share at least one protein target and common therapeutic applications. Finally, we demonstrate how to exploit the relationship between both modalities for mechanism of action deconvolution by presenting a case scenario involving a few compound pairs with high similarity.
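The pair-level analysis, comparing signature similarity between compound pairs that do and do not share a protein target, can be sketched with hypothetical compounds and random signatures (all names and numbers below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical target annotations for six compounds.
targets = {
    "cmpd_a": {"EGFR"}, "cmpd_b": {"EGFR"},          # share a target
    "cmpd_c": {"BRAF"}, "cmpd_d": {"BRAF", "RAF1"},  # share a target
    "cmpd_e": {"TP53"}, "cmpd_f": {"MTOR"},
}

# Simulated 50-gene transcriptomic signatures: shared-target pairs are
# built from a common base vector plus noise, the rest are independent.
base = rng.normal(size=(3, 50))
sig = {
    "cmpd_a": base[0] + rng.normal(0, 0.3, 50),
    "cmpd_b": base[0] + rng.normal(0, 0.3, 50),
    "cmpd_c": base[1] + rng.normal(0, 0.3, 50),
    "cmpd_d": base[1] + rng.normal(0, 0.3, 50),
    "cmpd_e": base[2] + rng.normal(0, 0.3, 50),
    "cmpd_f": rng.normal(size=50),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# For every compound pair, record signature similarity and target overlap.
names = sorted(sig)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
share = [bool(targets[a] & targets[b]) for a, b in pairs]
sims = [cosine(sig[a], sig[b]) for a, b in pairs]

mean_shared = np.mean([s for s, sh in zip(sims, share) if sh])
mean_other = np.mean([s for s, sh in zip(sims, share) if not sh])
```

Comparing the two means (or the full similarity distributions) is the basic concordance test; the study's finding is that on real data this gap is much smaller than naively expected.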

Computer applications to medicine. Medical informatics, Biology (General)
DOAJ Open Access 2023
Incorporating Feature Interactions and Contrastive Learning for Credit Prediction

Lisi Zhang, Qiancheng Yu, Beijing Zhou et al.

The efficacy of credit risk assessment models is pivotal to the risk management capacity of financial institutions. Traditional credit risk models often suffer from inadequate predictive accuracy due to overlooked feature combinations and weak supervisory signals. Addressing these limitations, we present a novel approach for credit default prediction that integrates feature interactions and contrastive learning. Specifically, we introduce second-order interactions atop standard linear models to achieve low-order feature interplay. Concurrently, the integration of deep neural networks and attention mechanisms facilitates the learning of concealed high-order features, thus enhancing the model’s non-linear modeling capabilities and illuminating latent feature associations. Further, to ameliorate the issues of noise and diminished supervisory signals, we embed slight noise in feature embeddings for data augmentation and construct contrastive views, ultimately refining feature quality. To attest to the effectiveness of our approach, we conducted experiments on two real-world datasets, benchmarking against eight predictive methods including LR, XGBoost, and FiBiNET. The results unequivocally demonstrate the superior performance of our method across various metrics, underscoring its promise and excellence in the realm of credit risk assessment.
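The noise-based augmentation step can be sketched as follows: slight Gaussian noise on feature embeddings yields two contrastive views, scored with an InfoNCE-style objective. This is a generic formulation with illustrative sizes, not the paper's exact loss or architecture.

```python
import numpy as np

rng = np.random.default_rng(4)

# Feature embeddings for a mini-batch of 8 credit records (hypothetical).
emb = rng.normal(size=(8, 16))

def augment(e, eps=0.05):
    # Slight noise in the embedding space creates a contrastive view.
    return e + rng.normal(0, eps, e.shape)

def l2norm(e):
    return e / np.linalg.norm(e, axis=1, keepdims=True)

z1, z2 = l2norm(augment(emb)), l2norm(augment(emb))

# InfoNCE-style loss: row i of z1 should match row i of z2 (its positive)
# and mismatch every other row (its negatives).
tau = 0.1
logits = z1 @ z2.T / tau
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls the two noisy views of each record together while pushing apart views of different records, which is the extra supervisory signal the paper adds on top of the classification objective.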

Electrical engineering. Electronics. Nuclear engineering
DOAJ Open Access 2023
On the use of aspect-based sentiment analysis of Twitter data to explore the experiences of African Americans during COVID-19

Meghna Chaudhary, Kristin Kosyluk, Sylvia Thomas et al.

According to data from the U.S. Centers for Disease Control and Prevention, as of June 2020, a significant number of African Americans had been infected with the coronavirus disease, experiencing disproportionately higher death rates compared to other demographic groups. These disparities highlight the urgent need to examine the experiences, behaviors, and opinions of the African American population in relation to the COVID-19 pandemic. By understanding their unique challenges in navigating matters of health and well-being, we can work towards promoting health equity, eliminating disparities, and addressing persistent barriers to care. Since Twitter data has shown significant promise as a representation of human behavior and for opinion mining, this study leverages Twitter data published in 2020 to characterize the pandemic-related experiences of the United States' African American population using aspect-based sentiment analysis. Sentiment analysis is a common task in natural language processing that identifies the emotional tone (i.e., positive, negative, or neutral) of a text sample. Aspect-based sentiment analysis increases the granularity of sentiment analysis by also extracting the aspect for which sentiment is expressed. We developed a machine learning pipeline consisting of image and language-based classification models to filter out tweets not related to COVID-19 and those unlikely published by African American Twitter subscribers, leading to an analysis of nearly 4 million tweets. Overall, our results show that the majority of tweets had a negative tone, and that the days with larger numbers of published tweets often coincided with major U.S. events related to the pandemic as suggested by major news headlines (e.g., vaccine rollout). We also show how word usage evolved throughout the year (e.g., "outbreak" to "pandemic" and "coronavirus" to "covid"). This work also points to important issues like food insecurity and vaccine hesitancy, along with exposing semantic relationships between words, such as "covid" and "exhausted". As such, this work furthers understanding of how the nationwide progression of the pandemic may have impacted the narratives of African American Twitter users.
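Aspect-based sentiment extraction can be illustrated in miniature with a hand-made sentiment lexicon and aspect list; this toy is purely illustrative, as the paper uses learned classification models.

```python
# Toy aspect-based sentiment tagging: find an aspect term in the text
# and attach the polarity of any sentiment word co-occurring with it.
lexicon = {"exhausted": -1, "hesitant": -1, "grateful": 1, "effective": 1}
aspects = {"vaccine", "lockdown", "testing"}

def absa(tweet):
    tokens = tweet.lower().replace(".", "").split()
    return [(a, lexicon[t]) for a in aspects if a in tokens
            for t in tokens if t in lexicon]

result = absa("Hesitant about the vaccine rollout.")
```

Here the aspect is "vaccine" and the sentiment expressed toward it is negative; plain sentiment analysis would only report the tweet-level tone.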

Medicine, Science

Page 22 of 1,130,225