Results for "Computer software"

Showing 20 of ~8,152,177 results · from CrossRef, DOAJ, Semantic Scholar

DOAJ Open Access 2026
AG-CLIP: Attribute-Guided CLIP for Zero-Shot Fine-Grained Recognition

Jamil Ahmad, Mustaqeem Khan, Wail Guiaeab et al.

Zero-shot fine-grained recognition is challenging due to high visual similarity between classes and the weak encoding of fine-grained features in embedding models. In this work, we present an attribute-guided Contrastive Language-Image Pre-training (AG-CLIP) model with an additional attribute encoder. Our approach first identifies relevant visual attributes from the textual class descriptions using an attribute mining module that leverages a large language model (LLM), GPT-4o. The attributes are then used to construct prompts for an open-vocabulary object/region detector to extract the corresponding image regions. The attribute text, along with the focused regions of the input, then guides the CLIP model to attend to these discriminative attributes during fine-tuning through a context-attribute fusion module. Our attribute-guided attention mechanism allows CLIP to effectively disambiguate fine-grained classes by highlighting their distinctive attributes, without requiring fine-tuning or additional training data on unseen classes. We evaluate our approach on the CUB-200-2011 and plant disease datasets, achieving 73.3% and 84.6% accuracy, respectively. Our method achieves state-of-the-art zero-shot performance, outperforming prior methods that rely on external knowledge bases or complex meta-learning strategies. These strong results demonstrate the effectiveness of injecting generic attribute awareness into powerful vision-language models like CLIP for tackling fine-grained recognition in a zero-shot manner.
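As an illustration of the prompt-construction step the abstract describes, mined attributes can be turned into per-class text prompts. The attribute lists and template below are invented for the sketch; the paper obtains attributes from GPT-4o and its actual prompt format is not given.

```python
# Hypothetical attributes per class (the paper mines these with an LLM).
mined_attributes = {
    "Indigo Bunting": ["blue plumage", "conical beak"],
    "Blue Grosbeak": ["blue plumage", "chestnut wing bars"],
}

def attribute_prompts(class_name, attributes):
    """Build one text prompt per mined attribute for a given class."""
    return [f"{class_name}, a bird with {attr}" for attr in attributes]

for cls, attrs in mined_attributes.items():
    for prompt in attribute_prompts(cls, attrs):
        print(prompt)
```

Such prompts would then drive both the open-vocabulary detector (to crop attribute regions) and the text side of CLIP.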

Electronic computers. Computer science, Information technology
DOAJ Open Access 2026
MarsCONE: A software toolbox for automatic morphological analysis of Martian pitted-cones

Jakub Śledziowski, Bartosz Pieterek, Thomas J. Jones

Advances in automated image classification, together with near-global imaging coverage of the Martian surface, have enabled detailed characterization of the spatial distribution of pitted cones, providing a foundation for systematic investigations of their morphological diversity. Concurrently, the continued acquisition of high-resolution Martian imagery over the past decades has allowed photogrammetrically derived digital elevation models (DEMs) to enhance the accessibility, precision, and overall robustness of morphological analyses. However, the number of identified Martian pitted cones makes systematic, manual morphological measurement highly labour-intensive and, consequently, impractical for large datasets. To address this challenge, we present MarsCONE, a command-line, open-source software toolbox designed to perform automatic cone-morphology detection and to compute the morphological parameters of Martian pitted cones using High Resolution Imaging Science Experiment (HiRISE)-derived DEMs. The toolbox is built as a suite of Python tools and Jupyter notebooks, and performs key processing steps including data preparation and transect generation according to user-defined configurations (Generator), signal extraction and landform-point morphology detection (Finder), and cross-transect aggregation with uncertainty handling and data export (Analyzer). This enables MarsCONE to analyze hundreds of pitted cones in a single batch, providing fully reproducible and systematic results within seconds and at minimal computational cost. Consequently, the MarsCONE toolbox improves reproducibility, reduces manual workload, and facilitates large-scale comparative studies of pitted cones across Mars, thereby supporting a more robust understanding of the geological processes governing their formation.
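A toy version of the transect-based "Finder" step described in the abstract might look like this. The function name, the measured parameters, and the peak/pit logic are all illustrative assumptions, not MarsCONE's actual API.

```python
# Illustrative only: scan one elevation transect across a pitted cone,
# locate the two rim peaks and the central pit floor, and derive basic
# morphological parameters.

def find_cone_morphology(transect):
    """transect: list of (distance_m, elevation_m) samples along one line."""
    elev = [z for _, z in transect]
    mid = len(elev) // 2
    # Rim peaks: highest point on each side of the transect midpoint.
    left_rim = max(range(mid), key=lambda i: elev[i])
    right_rim = max(range(mid, len(elev)), key=lambda i: elev[i])
    # Pit floor: lowest point between the two rims.
    floor = min(range(left_rim, right_rim + 1), key=lambda i: elev[i])
    base = min(elev)  # crude basal reference: lowest sample on the transect
    return {
        "basal_width_m": transect[right_rim][0] - transect[left_rim][0],
        "rim_height_m": max(elev[left_rim], elev[right_rim]) - base,
        "pit_depth_m": max(elev[left_rim], elev[right_rim]) - elev[floor],
    }

# Synthetic transect: flat plain, cone with a central pit, flat plain,
# sampled every 10 m.
profile = [0, 0, 2, 5, 9, 12, 8, 4, 8, 12, 9, 5, 2, 0, 0]
transect = [(i * 10.0, z) for i, z in enumerate(profile)]
print(find_cone_morphology(transect))
```

The real toolbox aggregates many such transects per cone (the Analyzer step) to attach uncertainties to each parameter.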

Computer software
S2 Open Access 2019
TrueNorth: Accelerating From Zero to 64 Million Neurons in 10 Years

M. DeBole, B. Taba, A. Amir et al.

IBM's brain-inspired processor is a massively parallel neural network inference engine containing 1 million spiking neurons and 256 million low-precision synapses. Now, after a decade of fundamental research spanning neuroscience, architecture, chips, systems, software, and algorithms, IBM has delivered the largest neurosynaptic computer ever built.

214 citations · Computer Science
DOAJ Open Access 2025
DconnLoop: a deep learning model for predicting chromatin loops based on multi-source data integration

Junfeng Wang, Kuikui Cheng, Chaokun Yan et al.

Background: Chromatin loops are critical for the three-dimensional organization of the genome and gene regulation. Accurate identification of chromatin loops is essential for understanding the regulatory mechanisms in disease. However, current mainstream detection methods rely primarily on single-source data, such as Hi-C, which limits their ability to capture the diverse features of chromatin loop structures. In contrast, multi-source data integration and deep learning approaches, though not yet widely applied, hold significant potential. Results: In this study, we developed a method called DconnLoop to integrate Hi-C, ChIP-seq, and ATAC-seq data to predict chromatin loops. The method achieves feature extraction and fusion of multi-source data by integrating residual mechanisms, directional connectivity excitation modules, and interactive feature space decoders. Finally, we apply density estimation and density clustering to the genome-wide prediction results to identify more representative loops. The code is available at https://github.com/kuikui-C/DconnLoop. Conclusions: The results demonstrate that DconnLoop outperforms existing methods in both precision and recall. In various experiments, including Aggregate Peak Analysis and peak enrichment comparisons, DconnLoop consistently shows advantages. Extensive ablation studies and validation across different sequencing depths further confirm DconnLoop's robustness and generalizability.
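The final post-processing step (density clustering of genome-wide predictions to keep representative loops) can be sketched as follows. DconnLoop's actual procedure is not specified in the abstract, so this stand-in simply groups nearby predicted loop anchors and keeps the highest-scoring member of each group.

```python
# Hedged sketch: cluster predicted loops (x_bin, y_bin, score) by
# single-linkage within a bin radius, keeping one representative each.

def representative_loops(preds, radius=2):
    """preds: list of (x_bin, y_bin, score). Returns one loop per cluster."""
    clusters = []  # each cluster is a list of predictions
    for p in sorted(preds, key=lambda p: -p[2]):  # strongest first
        for c in clusters:
            if any(abs(p[0] - q[0]) <= radius and abs(p[1] - q[1]) <= radius
                   for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    # The first member of each cluster is its highest-scoring prediction.
    return [c[0] for c in clusters]

preds = [(10, 50, 0.9), (11, 51, 0.7), (10, 49, 0.6), (80, 120, 0.8)]
print(representative_loops(preds))  # two clusters -> two representative loops
```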

Computer applications to medicine. Medical informatics, Biology (General)
DOAJ Open Access 2025
Learning Topological States from Randomized Measurements Using Variational Tensor-Network Tomography

Yanting Teng, Rhine Samajdar, Katherine Van Kirk et al.

Learning faithful representations of quantum states is crucial to fully characterizing the variety of many-body states created on quantum processors. While various tomographic methods, such as classical shadow and matrix product state (MPS) tomography, have shown promise in characterizing a wide class of quantum states, they face unique limitations in detecting topologically ordered two-dimensional states. To address this problem, we implement and study a heuristic tomographic method that combines variational optimization on tensor networks with randomized measurement techniques. Using this approach, we demonstrate its ability to learn the ground state of the surface-code Hamiltonian as well as an experimentally realizable quantum spin liquid state. In particular, we perform numerical experiments using MPS ansätze and systematically investigate the sample complexity required to achieve high fidelities for systems with sizes of up to 48 qubits. In addition, we provide theoretical insights into the scaling of our learning algorithm by analyzing the statistical properties of maximum-likelihood estimation. Notably, our method is sample-efficient and experimentally friendly, requiring only snapshots of the quantum state measured randomly in the X or Z bases. Using this subset of measurements, our approach can effectively learn any real pure state represented by tensor networks, and we rigorously prove that random-XZ measurements are tomographically complete for such states.

Physics, Computer software
DOAJ Open Access 2025
Reliability And Efficiency: Digital Versus Plaster Orthodontic Models

Julia Sundquist, Bo Wold Nilsen, Gro Eirin Holde

Aim or purpose: This study aims to evaluate the reliability and efficiency of measurements on digital orthodontic models in comparison to measurements on traditional plaster models. Materials and methods: Fourteen sets of archived plaster models of permanent dentitions were randomly selected to assess overjet, overbite, mandibular canine width, maxillary first molar width, and anterior crowding. First, the models were measured manually with an orthodontic caliper. Then, computer-based models were generated with a 3D scanner, and the corresponding measurements were taken digitally using orthodontic analysis software. The models were assessed by a single rater, and the assessment was repeated after one week. Furthermore, the time required to complete the measurements was recorded. Intra-rater reliability was analyzed with the intraclass correlation coefficient (ICC) for absolute agreement. Results: Agreement between digital measurements was high (ICC 0.88-0.99), with mean differences of 0.02-0.20 mm. Manual measurements had similar agreement (ICC 0.84-0.99), with mean differences of 0.02-0.23 mm. Comparing digital and manual measurements, agreement was somewhat lower (ICC 0.73-0.98), with mean differences of 0.03-0.57 mm. Agreement was lowest for maxillary anterior crowding (ICC 0.73-0.88) for all comparisons. On average, digital measurements were 84 seconds faster than manual measurements. Conclusions: Both digital and manual measurements demonstrated high reliability, although digital measurements had slightly higher agreement and were less time-consuming. This supports the use of digital models in epidemiological studies, offering reliable data and efficient data collection and analysis.
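The reliability statistic reported above can be computed from a two-way ANOVA decomposition. The study reports ICC for absolute agreement; the form shown here is the two-way random-effects single-measure ICC(2,1), which is a common choice for test-retest data — whether the paper used exactly this form is an assumption.

```python
# Sketch of ICC(2,1) (absolute agreement, single measure) for repeated
# measurements on the same models.

def icc_2_1(ratings):
    """ratings: list of per-subject lists, one value per session."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    msr = ss_rows / (n - 1)                       # between-subjects MS
    msc = ss_cols / (k - 1)                       # between-sessions MS
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # error MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Overjet (mm) measured in two sessions on five models: near-identical
# sessions give an ICC close to 1.
measurements = [[3.1, 3.2], [4.0, 4.0], [2.5, 2.6], [5.2, 5.1], [3.8, 3.8]]
print(round(icc_2_1(measurements), 3))
```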

DOAJ Open Access 2025
Designing the model of intelligent command and control by using the Military Internet of Things

Mohammad Sepehri, Adel Farzaneh

Objective: The main concern is the lack of a codified indigenous intelligent command and control model, and the main goal is to develop such a model using the Military Internet of Things. Further goals are to enumerate the dimensions and components of the model, to determine the relationships between them, and to enumerate the achievements, consequences, functions, and requirements of the model design. Method: This is applied-developmental research using a descriptive case-study method and a mixed-methods approach. Data were collected through field and library methods, with books, articles, documents, interviews, and questionnaires as instruments; the temporal domain covers the years 1402-1403 over a five-year horizon, and the spatial domain is the country's armed forces. The statistical population comprises 70 experts, and structural equation modeling is used to analyze the relationships and correlations between factors. Results: In the SRMR test of the PLS model, the value is smaller than 0.08, so the overall PLS model has a good fit and is consistent with the desired model. Conclusion: The dimensions of the model design are intelligence, information management, sustainability, interoperability, integration, and network orientation. Its achievements and consequences are improvements in the intelligence of C4ISR and in decision-making, comprehensive defense readiness, and deterrence; increased military authority and capability; and increased indigenous cyber power. Its functions are intelligence-making (action-oriented and strategic command systems; control, monitoring, and evaluation systems; communication systems; computer and cyberspace systems; information-collection systems; surveillance and identification systems) and online situational awareness on the battlefield. Its requirements are battlefield intelligence, localization of IoT standards, IoT software security, funding and credit, and training and skill development.

Military Science
DOAJ Open Access 2025
Prediction of Air-Conditioning Outlet Temperature in Data Centers Based on Graph Neural Networks

Qilong Sha, Jing Yang, Ruping Shao et al.

This study addresses the issue of excessive cooling in data center server rooms caused by the sparse deployment of server cabinets. A precise air-conditioning control strategy based on the working temperature response of target cabinets is proposed. CFD software is used to establish the server room model and set control objectives. The simulations reveal that, under the condition of ensuring normal operation and equipment safety in the data center, the supply air temperature of the CRAC (computer room air conditioner) system can be adjusted to provide more flexibility, thereby reducing energy consumption. Based on this strategy, the dynamic load of the server room is simulated to obtain the supply air temperature of the CRAC system, forming a simulation dataset. A graph structure is created based on the distribution characteristics of the servers, and a regression prediction model for the supply air temperature of the CRAC system is trained using graph neural networks. The results show that, in the test set, 95.8% of the predicted supply air temperature errors are less than 0.5 °C, meeting ASHRAE standards. The model can be used to optimize the parameter settings of CRAC systems under real load conditions, reducing local hotspots in the server room while achieving energy-saving effects.
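The graph-construction step mentioned above (building a graph structure from the distribution of the servers) can be sketched simply. The paper's exact rule is not given, so here each cabinet is a node and an edge connects cabinets closer than a cutoff distance, a common choice for spatial graphs fed to graph neural networks.

```python
# Hypothetical spatial-graph construction for a server room layout.
import math

def build_cabinet_graph(positions, cutoff_m=2.5):
    """positions: {cabinet_id: (x_m, y_m)}. Returns an undirected edge list."""
    ids = sorted(positions)
    edges = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            dx = positions[a][0] - positions[b][0]
            dy = positions[a][1] - positions[b][1]
            if math.hypot(dx, dy) <= cutoff_m:
                edges.append((a, b))
    return edges

# A sparse row of four cabinets: the 6 m gap between "c3" and "c4"
# leaves "c4" isolated, mimicking the sparse deployment the study targets.
layout = {"c1": (0, 0), "c2": (2, 0), "c3": (4, 0), "c4": (10, 0)}
print(build_cabinet_graph(layout))  # [('c1', 'c2'), ('c2', 'c3')]
```

A GNN trained on such a graph can then regress the CRAC supply air temperature from per-node cabinet loads.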

S2 Open Access 2015
Chaospy: An open source tool for designing methods of uncertainty quantification

Jonathan Feinberg, H. Langtangen

The paper describes the philosophy, design, functionality, and usage of the Python software toolbox Chaospy for performing uncertainty quantification via polynomial chaos expansions and Monte Carlo simulation. The paper compares Chaospy to similar packages and demonstrates a stronger focus on defining reusable software building blocks that can easily be assembled to construct new, tailored algorithms for uncertainty quantification. For example, a Chaospy user can in a few lines of high-level computer code define custom distributions, polynomials, integration rules, sampling schemes, and statistical metrics for uncertainty analysis. In addition, the software introduces some novel methodological advances, like a framework for computing Rosenblatt transformations and a new approach for creating polynomial chaos expansions with dependent stochastic variables.
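As a library-free illustration of the Monte Carlo side of uncertainty quantification (Chaospy's own distribution and polynomial-chaos API is not reproduced here), the sketch below pushes samples of an uncertain input through a toy forward model and summarizes the output spread:

```python
import math
import random
import statistics

def model(rate, t=1.0):
    """Toy forward model: exponential decay evaluated at time t."""
    return math.exp(-rate * t)

random.seed(0)
# Uncertain input: decay rate ~ Normal(mean=1.0, std=0.1).
samples = [random.gauss(1.0, 0.1) for _ in range(10_000)]
outputs = [model(r) for r in samples]

mean = statistics.fmean(outputs)
std = statistics.stdev(outputs)
print(f"E[y] = {mean:.3f}, Std[y] = {std:.3f}")  # near exp(-1) and 0.1*exp(-1)
```

Polynomial chaos expansions, Chaospy's main focus, reach the same statistics with far fewer model evaluations by fitting an orthogonal-polynomial surrogate to the input distribution.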

327 citations · Computer Science
DOAJ Open Access 2024
Quantum-accurate machine learning potentials for metal-organic frameworks using temperature driven active learning

Abhishek Sharma, Stefano Sanvito

Understanding the structural flexibility of metal-organic frameworks (MOFs) via molecular dynamics simulations is crucial to designing better MOFs. Density functional theory (DFT) and quantum-chemistry methods provide highly accurate molecular dynamics, but their computational overheads limit their use in long time-dependent simulations. In contrast, classical force fields struggle with the description of coordination bonds. Here we develop DFT-accurate machine-learning spectral neighbor analysis potentials for two representative MOFs. Their structural and vibrational properties are then studied and closely compared with available experimental data. Most importantly, we demonstrate an active-learning algorithm, based on mapping the relevant internal coordinates, which drastically reduces the number of training data to be computed at the DFT level. The workflow presented here thus appears to be an efficient strategy for the study of flexible MOFs with DFT accuracy, but at a fraction of the DFT computational cost.
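The selection step of such an active-learning loop can be sketched as follows. The paper's exact criterion is not given; this hypothetical version maps each candidate structure to a vector of internal coordinates and only sends candidates far from everything already in the training set for a DFT calculation, keeping the number of expensive labels small.

```python
import math

def needs_dft(candidate, training_set, threshold=0.5):
    """candidate/training_set entries: tuples of internal coordinates."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Label only if the candidate is novel relative to all known data.
    return all(dist(candidate, t) > threshold for t in training_set)

training = [(1.0, 109.5), (1.1, 110.0)]      # (bond length, angle) pairs
print(needs_dft((1.05, 109.7), training))    # False: close to known data
print(needs_dft((1.4, 120.0), training))     # True: novel region -> run DFT
```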

Materials of engineering and construction. Mechanics of materials, Computer software
DOAJ Open Access 2023
Multi-Label Learning Based on Double Laplace Regularization and Causal Inference

Jun LUO, Qingwei GAO, Yi TAN, Dawei ZHAO, Yixiang LU, Dong SUN

Label-specific features are a research hotspot in multi-label learning, which uses label feature extraction to solve the problem of multiple class labels in a single instance. Existing research on multi-label classification usually considers only the correlation between labels and ignores the local manifold structure of the original data, which reduces classification accuracy. In addition, in modeling label correlation, the structural relationship between features and labels, as well as the inherent causal relationships between labels, are often overlooked. To address these issues, this study proposes a multi-label learning algorithm based on double Laplace regularization and causal inference. Linear regression models are used to establish a basic multi-label classification framework, which is combined with causal learning to explore the inherent causal relationships between labels and thereby mine the essential connections between them. To fully utilize the structural relationship between features and labels, double Laplace regularization is added to mine local label association information and effectively preserve the local manifold structure of the original data. The effectiveness of the proposed algorithm is verified on public multi-label datasets. The experimental results show that, compared to algorithms such as LLSF, ML-KNN, and LIFT, the proposed algorithm achieved an average performance improvement of 8.82%, 4.98%, 9.43%, 16.27%, 12.19%, and 3.35% in terms of Hamming Loss (HL), Average Precision (AP), One Error (OE), Ranking Loss (RL), Coverage, and AUC, respectively.
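The building block behind Laplacian regularization can be shown concretely: from a symmetric similarity matrix W over instances (or labels), form the graph Laplacian L = D - W, whose quadratic form fᵀLf = ½ Σᵢⱼ Wᵢⱼ(fᵢ - fⱼ)² penalizes predictions that differ across similar items and so preserves local manifold structure. This is a minimal sketch; the paper's full model and its causal-inference term are not shown.

```python
def laplacian(W):
    """Graph Laplacian L = D - W of a symmetric similarity matrix W."""
    n = len(W)
    D = [sum(row) for row in W]  # degree of each node
    return [[(D[i] if i == j else 0) - W[i][j] for j in range(n)]
            for i in range(n)]

def quadratic_form(L, f):
    """Smoothness penalty f^T L f."""
    n = len(f)
    return sum(f[i] * L[i][j] * f[j] for i in range(n) for j in range(n))

W = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]  # a 3-node path graph
L = laplacian(W)
print(quadratic_form(L, [1.0, 1.0, 1.0]))  # 0.0: constant f is unpenalized
print(quadratic_form(L, [1.0, 0.0, 1.0]))  # 2.0: disagreement across edges
```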

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2021
Time Synchronization Algorithm of Underwater Sensor Based on Doppler Velocity Measurement

YAO Yujie, LIU Guangzhong, KONG Weiquan

Time synchronization is a key technology of underwater sensor networks. Due to the high propagation delay and Doppler frequency shift of underwater acoustic communication in the ocean, land-based time synchronization algorithms using radio-frequency communication cannot be directly applied to the underwater environment. Based on the principle of Doppler velocity measurement and the mobility of underwater nodes, this paper proposes a new time synchronization algorithm, CD-Sync. A cluster model with clustering characteristics is used to select a reasonable cluster-head node, which synchronizes with the water-surface beacon node within the cluster. During synchronization, the synchronizing node uses the Doppler principle to estimate the relative moving speed between nodes and thereby calculate the propagation delay between them. Experimental results show that, compared with the clustering-based MU-Sync algorithm and the distributed NU-Sync algorithm, the proposed algorithm can shorten the distance between nodes and accelerate the convergence speed of synchronization, while effectively improving time synchronization accuracy.
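The physical idea behind the Doppler step can be sketched as follows (CD-Sync's actual protocol messages are not given in the abstract): the Doppler shift of an acoustic signal yields the relative speed between two moving nodes, v = c·(f_rx - f_tx)/f_tx, which can then refine the one-way propagation-delay estimate instead of assuming the nodes are static.

```python
SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s

def relative_speed(f_tx_hz, f_rx_hz, c=SOUND_SPEED):
    """Closing speed (m/s); positive when the nodes approach each other."""
    return c * (f_rx_hz - f_tx_hz) / f_tx_hz

def propagation_delay(range_m, v_ms, elapsed_s, c=SOUND_SPEED):
    """Delay over a range that shrank by v*elapsed since it was measured."""
    return (range_m - v_ms * elapsed_s) / c

v = relative_speed(20_000.0, 20_020.0)   # 1.5 m/s closing speed
d = propagation_delay(1500.0, v, 10.0)   # range is now 1485 m
print(f"v = {v:.2f} m/s, delay = {d:.4f} s")
```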

Computer engineering. Computer hardware, Computer software

Page 28 of 407609