Results for "Instruments and machines"

Showing 20 of ~192,951 results · from arXiv, DOAJ, Semantic Scholar

S2 Open Access 2005
Genome sequencing in microfabricated high-density picolitre reactors

M. Margulies, M. Egholm, William E. Altman et al.

The proliferation of large-scale DNA-sequencing projects in recent years has driven a search for alternative methods to reduce time and cost. Here we describe a scalable, highly parallel sequencing system with raw throughput significantly greater than that of state-of-the-art capillary electrophoresis instruments. The apparatus uses a novel fibre-optic slide of individual wells and is able to sequence 25 million bases, at 99% or better accuracy, in one four-hour run. To achieve an approximately 100-fold increase in throughput over current Sanger sequencing technology, we have developed an emulsion method for DNA amplification and an instrument for sequencing by synthesis using a pyrosequencing protocol optimized for solid support and picolitre-scale volumes. Here we show the utility, throughput, accuracy and robustness of this system by shotgun sequencing and de novo assembly of the Mycoplasma genitalium genome with 96% coverage at 99.96% accuracy in one run of the machine.

7987 citations en Medicine, Biology
S2 Open Access 2015
High-Luminosity Large Hadron Collider (HL-LHC): Preliminary Design Report

G. Apollinari, I. B. Alonso, O. Brüning et al.

The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor of ten. The LHC is already a highly complex and exquisitely optimised machine, so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cavities for beam rotation with ultra-precise phase control, new technology and physical processes for beam collimation, and 300-metre-long high-power superconducting links with negligible energy dissipation. The present document describes the technologies and components that will be used to realise the project and is intended to serve as the basis for the detailed engineering design of HL-LHC.

630 citations en Computer Science
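For orientation, the two luminosity notions in this abstract are related by the standard accelerator-physics definitions (background, not quoted from the report):

```latex
% Event rate for a process of cross-section \sigma, and integrated
% luminosity accumulated over a run (standard definitions):
\[
\frac{dN}{dt} = \mathcal{L}\,\sigma,
\qquad
\mathcal{L}_{\mathrm{int}} = \int \mathcal{L}(t)\,\mathrm{d}t ,
\]
% so the five-fold gain in \mathcal{L} and the ten-fold gain in
% \mathcal{L}_{\mathrm{int}} differ because the latter also reflects
% total running time and machine availability.
```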
S2 Open Access 2019
The State of the Art of Data Science and Engineering in Structural Health Monitoring

Y. Bao, Zhicheng Chen, Shiyin Wei et al.

Structural health monitoring (SHM) is a multi-discipline field that involves the automatic sensing of structural loads and response by means of a large number of sensors and instruments, followed by a diagnosis of the structural health based on the collected data. Because an SHM system implemented into a structure automatically senses, evaluates, and warns about structural conditions in real time, massive data are a significant feature of SHM. The techniques related to massive data are referred to as data science and engineering, and include acquisition techniques, transmission techniques, management techniques, and processing and mining algorithms for massive data. This paper provides a brief review of the state of the art of data science and engineering in SHM as investigated by these authors, and covers the compressive sampling-based data-acquisition algorithm, the anomaly data diagnosis approach using a deep learning algorithm, crack identification approaches using computer vision techniques, and condition assessment approaches for bridges using machine learning algorithms. Future trends are discussed in the conclusion.

417 citations en Computer Science
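The compressive sampling mentioned in this abstract can be sketched generically: acquire far fewer random projections than signal samples, then exploit sparsity for recovery. The toy example below (my construction, not the authors' algorithm) uses orthogonal matching pursuit from scikit-learn:

```python
# Generic compressive-sampling sketch (not the paper's algorithm):
# take m << n random projections of a sparse signal, then recover it
# with orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 512, 128, 10                     # signal length, measurements, sparsity

x = np.zeros(n)                            # k-sparse "sensor" signal
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                  # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
omp.fit(A, y)
x_hat = omp.coef_
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```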
S2 Open Access 2018
Deep hedging

H. Buhler, Lukas Gonon, J. Teichmann et al.

We present a framework for hedging a portfolio of derivatives in the presence of market frictions such as transaction costs, liquidity constraints or risk limits using modern deep reinforcement machine learning methods. We discuss how standard reinforcement learning methods can be applied to non-linear reward structures, i.e. in our case convex risk measures. As a general contribution to the use of deep learning for stochastic processes, we also show in Section 4 that the set of constrained trading strategies used by our algorithm is large enough to ε-approximate any optimal solution. Our algorithm can be implemented efficiently even in high-dimensional situations using modern machine learning tools. Its structure does not depend on specific market dynamics, and generalizes across hedging instruments including the use of liquid derivatives. Its computational performance is largely invariant in the size of the portfolio as it depends mainly on the number of hedging instruments available. We illustrate our approach by an experiment on the S&P500 index and by showing the effect on hedging under transaction costs in a synthetic market driven by the Heston model, where we outperform the standard ‘complete-market’ solution.

415 citations en Mathematics, Economics
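The core training loop is easy to sketch: simulate price paths, accumulate the P&L of a neural hedging strategy net of proportional costs, and minimize a convex risk measure of the result. The sketch below uses the entropic risk measure, GBM paths, and a plain feed-forward network; it illustrates the idea only and is not the authors' architecture or market model:

```python
# Toy "deep hedging" sketch: train a small network delta(t, S) to hedge a
# short European call on simulated GBM paths, minimizing the entropic risk
# of terminal P&L under proportional transaction costs.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
n_paths, n_steps, dt = 4096, 30, 1.0 / 30
S0, sigma, K, cost = 100.0, 0.2, 100.0, 1e-3   # cost = proportional friction
lam = 1.0                                       # entropic risk aversion

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):
    z = torch.randn(n_paths, n_steps)
    S = S0 * torch.exp(torch.cumsum(
        (-0.5 * sigma**2) * dt + sigma * dt**0.5 * z, dim=1))
    S = torch.cat([torch.full((n_paths, 1), S0), S], dim=1)

    pnl = -torch.clamp(S[:, -1] - K, min=0.0)        # short call payoff
    prev = torch.zeros(n_paths)
    for t in range(n_steps):
        feat = torch.stack([torch.full((n_paths,), t * dt), S[:, t] / S0], 1)
        delta = net(feat).squeeze(-1)
        pnl = pnl + delta * (S[:, t + 1] - S[:, t])           # hedge gains
        pnl = pnl - cost * S[:, t] * (delta - prev).abs()     # trading costs
        prev = delta

    # Empirical entropic risk: (1/lam) * log E[exp(-lam * PnL)]
    loss = (torch.logsumexp(-lam * pnl, dim=0) - math.log(n_paths)) / lam
    opt.zero_grad(); loss.backward(); opt.step()
```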
S2 Open Access 2022
Biological Magnetic Resonance Data Bank

J. Hoch, Kumaran Baskaran, Harrison Burr et al.

The Biological Magnetic Resonance Data Bank (BMRB, https://bmrb.io) is the international open data repository for biomolecular nuclear magnetic resonance (NMR) data. Comprising both empirical and derived data, BMRB has applications in the study of biomacromolecular structure and dynamics, biomolecular interactions, drug discovery, intrinsically disordered proteins, natural products, biomarkers, and metabolomics. Advances including GHz-class NMR instruments, national and trans-national NMR cyberinfrastructure, hybrid structural biology methods, and machine learning are driving increases in the amount, type, and applications of NMR data in the biosciences. BMRB is a Core Archive and member of the World-wide Protein Data Bank (wwPDB).

204 citations en Medicine, Computer Science
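BMRB data can be retrieved programmatically. The minimal sketch below assumes a public REST endpoint of the form https://api.bmrb.io/v2/entry/<id>; treat the exact URL and the response layout as assumptions to verify against the current BMRB API documentation:

```python
# Hypothetical fetch of a BMRB entry over its public REST API.
# The endpoint URL and JSON layout are assumptions; check the current
# API documentation at bmrb.io before relying on them.
import requests

entry_id = "15000"                                  # example entry ID
url = f"https://api.bmrb.io/v2/entry/{entry_id}"    # assumed endpoint
resp = requests.get(url, timeout=30)
resp.raise_for_status()
data = resp.json()
print(type(data), list(data)[:5])   # inspect top-level keys before parsing
```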
S2 Open Access 2022
A Review of Machine Learning for Near-Infrared Spectroscopy

Wenwen Zhang, L. L. C. Kasun, Qi-Jie Wang et al.

Near-infrared spectroscopy is a non-invasive measurement technique for the analysis of substances. Although the main objective of this study is to review the machine learning (ML) algorithms that have been reported for analyzing near-infrared (NIR) spectroscopy, from traditional machine learning methods to deep network architectures, we also cover NIR measurement modes, instruments, signal preprocessing methods, and related topics. Firstly, four different measurement modes available in NIR are reviewed, different types of NIR instruments are compared, and a summary of NIR data analysis methods is provided. Secondly, public NIR spectroscopy datasets are briefly discussed, with links provided. Thirdly, the widely used data preprocessing and feature selection algorithms that have been reported for NIR spectroscopy are presented. Then, the majority of the traditional machine learning methods and deep network architectures that are commonly employed are covered. Finally, we conclude that integrating a variety of machine learning algorithms in an efficient and lightweight manner is a significant direction for future research.

187 citations en Computer Science, Medicine
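A typical NIR chemometrics baseline combining the preprocessing and modeling steps surveyed here, sketched on synthetic data (standard normal variate correction followed by partial least squares; my example, not taken from the review):

```python
# Generic NIR baseline on synthetic spectra: standard normal variate (SNV)
# scatter correction per spectrum, then partial least squares regression.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 700))          # 200 spectra x 700 wavelengths
y = X[:, 100:110].mean(axis=1) + 0.1 * rng.normal(size=200)

def snv(spectra):
    """Center and scale each spectrum individually (SNV)."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

pls = PLSRegression(n_components=10)
scores = cross_val_score(pls, snv(X), y, cv=5, scoring="r2")
print("CV R^2:", scores.mean().round(3))
```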
arXiv Open Access 2026
Causal Effect Estimation with Learned Instrument Representations

Frances Dean, Jenna Fields, Radhika Bhalerao et al.

Instrumental variable (IV) methods mitigate bias from unobserved confounding in observational causal inference but rely on the availability of a valid instrument, which can often be difficult or infeasible to identify in practice. In this paper, we propose a representation learning approach that constructs instrumental representations from observed covariates, which enable IV-based estimation even in the absence of an explicit instrument. Our model (ZNet) achieves this through an architecture that mirrors the structural causal model of IVs; it decomposes the ambient feature space into confounding and instrumental components, and is trained by enforcing empirical moment conditions corresponding to the defining properties of valid instruments (i.e., relevance, exclusion restriction, and instrumental unconfoundedness). Importantly, ZNet is compatible with a wide range of downstream two-stage IV estimators of causal effects. Our experiments demonstrate that ZNet can (i) recover ground-truth instruments when they already exist in the ambient feature space and (ii) construct latent instruments in the embedding space when no explicit IVs are available. Our work suggests when ZNet can be used as a module for causal inference in general observational settings.

en stat.ML, cs.LG
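The abstract stresses compatibility with standard downstream two-stage estimators. A minimal two-stage least squares sketch shows where a learned instrument representation would plug in; here `z` is a toy explicit instrument standing in for ZNet's output, and the data are simulated:

```python
# Minimal two-stage least squares (2SLS), the kind of downstream estimator
# a learned instrument representation would feed into. Toy data; this is
# not ZNet itself.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
u = rng.normal(size=n)                      # unobserved confounder
z = rng.normal(size=n)                      # instrument (relevant, excluded)
d = 0.8 * z + u + rng.normal(size=n)        # endogenous treatment
y = 2.0 * d + u + rng.normal(size=n)        # true causal effect = 2.0

# Stage 1: project the treatment onto the instrument.
Z1 = np.column_stack([np.ones(n), z])
d_hat = Z1 @ np.linalg.lstsq(Z1, d, rcond=None)[0]

# Stage 2: regress the outcome on the fitted treatment.
X2 = np.column_stack([np.ones(n), d_hat])
beta = np.linalg.lstsq(X2, y, rcond=None)[0]
print("2SLS estimate:", beta[1])            # ~2.0; naive OLS is biased up
```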
arXiv Open Access 2026
On the falsification of instrumental variable models for heterogeneous treatment effects

Ricardo E. Miranda

In this paper I derive a set of testable implications for econometric models defined by three assumptions: (i) the existence of strictly exogenous discrete instruments, (ii) restrictions on how the instruments affect adoption of a finite number of treatment types (such as monotonicity), and (iii) the assumption that the instruments only affect outcomes through their effect on treatment adoption (i.e. an exclusion restriction). The testable implications aggregate (via integration) an otherwise potentially infinite set of inequalities that must hold for every measurable subset of the outcome's support. For binary instruments the testable implications are sharp. Furthermore, I propose an implementation that links restrictions on latent response types to a generalization of first-order stochastic dominance and random utility models, making it possible to distinguish violations of the exclusion restriction from violations of monotonicity-type assumptions. The testable implications extend naturally to the many-instruments case.

en econ.EM
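For the binary-instrument, binary-treatment special case, the classic sharp testable implications take the following form (Kitagawa-style background, stated for orientation rather than quoted from this paper):

```latex
% Sharp testable implications of IV validity with binary Z and binary D
% (background; the paper generalizes far beyond this case):
\[
\begin{aligned}
P(Y \in B,\ D=1 \mid Z=1) &\ \ge\ P(Y \in B,\ D=1 \mid Z=0),\\
P(Y \in B,\ D=0 \mid Z=0) &\ \ge\ P(Y \in B,\ D=0 \mid Z=1),
\end{aligned}
\qquad \text{for every measurable } B .
\]
```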
DOAJ Open Access 2026
A cascaded classification approach using transfer learning and feature engineering for improved breast cancer classification

Chokri Ferkous, Ouissal Fadel, Abderrahmane Kefali et al.

The primary objective of this study is to design a cascaded classification framework that integrates deep-learning representations with handcrafted and clinical features to enhance the reliability and accuracy of breast cancer detection in mammographic screening. A multi-source mammography dataset comprising four databases was used to ensure diversity and reduce bias. The proposed system operates in two stages. In the first stage, transfer learning models (VGG16, ResNet50, and EfficientNet_B0) were evaluated using ROC-AUC, PR-AUC, calibration curves, and bootstrap confidence intervals. EfficientNet_B0, which achieved the best balance between discrimination and calibration, was selected as the feature extractor. In the second stage, the malignancy probability was combined with Haralick texture features, patient age, and breast density, and classified using SVM, Random Forest, MLP, Decision Tree, and Logistic Regression. Model robustness was verified through multi-run experiments (five random seeds) and subgroup analyses by age and density. Among the CNN models, EfficientNet_B0 yielded the best performance (accuracy = 0.9438, ROC-AUC = 0.944, PR-AUC = 0.960). In the second stage, although Random Forest achieved the highest accuracy (0.9556 ± 0.002), SVM obtained the highest mean ROC-AUC (0.980 ± 0.001) with stable accuracy (0.9539 ± 0.001) and the most significant p-values, indicating superior robustness and generalization. The proposed cascaded framework effectively combines deep, handcrafted, and clinical features to improve mammogram classification performance. The SVM-based model demonstrates strong calibration, stability, and subgroup consistency, highlighting its potential for deployment in computer-aided mammography screening systems that assist radiologists in early breast cancer detection.

Electronic computers. Computer science
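The second-stage design, a CNN malignancy probability concatenated with handcrafted texture and clinical features and fed to a classical classifier, can be sketched schematically; the arrays below are placeholders, not the paper's data or pipeline:

```python
# Schematic second-stage classifier in the spirit of the cascade above:
# concatenate a stage-1 CNN probability with handcrafted texture and
# clinical features, then train an SVM. All arrays are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
cnn_prob = rng.uniform(size=(n, 1))        # stage-1 malignancy probability
haralick = rng.normal(size=(n, 13))        # texture features (illustrative)
age = rng.uniform(30, 80, size=(n, 1))
density = rng.integers(1, 5, size=(n, 1))  # BI-RADS-style density category
y = (cnn_prob[:, 0] + 0.1 * rng.normal(size=n) > 0.5).astype(int)

X = np.hstack([cnn_prob, haralick, age, density])
clf = make_pipeline(StandardScaler(), SVC(probability=True))
print("CV ROC-AUC:",
      cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean().round(3))
```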
DOAJ Open Access 2026
Measuring perceived physical fidelity in virtual reality and virtual environments

Bree McEwan, Clarice Wu, Harris Yang et al.

As communication scholars become increasingly interested in studying virtual reality (VR) as a communication channel, it will be important to establish useful measures related to perceptual variables in virtual environments. One such variable is physical fidelity: the degree to which virtual environments replicate or resemble places in the physical world. Often in computer science and other fields interested in VR, this variable is measured as reaction time within the system. However, for social scientific VR scholars, it can be important to understand how much the user perceives the environment to have physical fidelity. In the existing literature, when physical fidelity is measured as a perceptual variable, it is often conflated with measures of immersion or spatial presence. This paper presents a confirmatory factor analysis approach to establishing a well-fitting scale of perceptual physical fidelity over three separate samples, as well as delineating the conceptual and operational differences between physical fidelity, immersion, and spatial presence.

Electronic computers. Computer science
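A confirmatory factor analysis of this kind can be specified in lavaan-style syntax from Python, for instance with the semopy package; the three-factor model string, item names, and data file below are illustrative placeholders, not the paper's instrument:

```python
# Illustrative CFA with semopy (pip install semopy). The model string and
# item names are placeholders; `df` is assumed to hold one column per item.
import pandas as pd
import semopy

model_desc = """
fidelity  =~ fid1 + fid2 + fid3
immersion =~ imm1 + imm2 + imm3
presence  =~ pre1 + pre2 + pre3
"""

df = pd.read_csv("vr_survey.csv")      # hypothetical item-level responses
model = semopy.Model(model_desc)
model.fit(df)
print(semopy.calc_stats(model))        # fit indices (CFI, RMSEA, ...)
```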
DOAJ Open Access 2026
Gaze-adaptive neural pre-correction for mitigating spatially varying optical aberrations in near-eye displays

Yi Jiang, Ye Bi, Yinng Li et al.

Near-eye display (NED) technology constitutes a fundamental component of head-mounted display (HMD) systems. The compact form factor required by HMDs imposes stringent constraints on optical design, often resulting in pronounced wavefront aberrations that significantly degrade visual fidelity. In addition, natural eye movements dynamically induce varying blur that further compromises image quality. To mitigate these challenges, a gaze-contingent neural network framework has been developed to compensate for aberrations within the foveal region. The network is trained in an end-to-end manner to minimize the discrepancy between the optically degraded system output and the corresponding ground truth image. A forward imaging model is employed, in which the network output is convolved with a spatially varying point spread function (PSF) to accurately simulate the degradation introduced by the optical system. To accommodate dynamic changes in gaze direction, a foveated attention-guided module is incorporated to adaptively modulate the pre-correction process, enabling localized compensation centered on the fovea. Additionally, an end-to-end trainable architecture has been designed to integrate gaze-informed blur priors. Both simulation and experimental validations confirm that the proposed method substantially reduces gaze-dependent aberrations and enhances retinal image clarity within the foveal region, while maintaining high computational efficiency. The presented framework offers a practical and scalable solution for improving visual performance in aberration-sensitive NED systems.

Computer engineering. Computer hardware, Electronic computers. Computer science
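The training idea, that the network's pre-corrected output should match the ground truth after being blurred by the optical PSF, can be sketched with a fixed Gaussian PSF; the paper's PSF is spatially varying and gaze-dependent, a complication this toy version omits:

```python
# Minimal sketch of PSF-based pre-correction training: the network output,
# convolved with the optical PSF, should match the ground-truth image.
# A fixed Gaussian PSF stands in for the spatially varying, gaze-dependent
# PSF used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_psf(size=11, sigma=2.0):
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(ax**2) / (2 * sigma**2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

psf = gaussian_psf()
net = nn.Sequential(                       # tiny pre-correction network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

target = torch.rand(8, 1, 64, 64)          # stand-in ground-truth images
for step in range(200):
    pre = net(target)                      # pre-corrected display image
    observed = F.conv2d(pre, psf, padding=psf.shape[-1] // 2)
    loss = F.mse_loss(observed, target)    # match what the eye should see
    opt.zero_grad(); loss.backward(); opt.step()
```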
arXiv Open Access 2025
Constructing an Instrument as a Function of Covariates

Moses Stewart

Researchers often use instrumental variables (IV) models to investigate the causal relationship between an endogenous variable and an outcome while controlling for covariates. When an exogenous variable is unavailable to serve as the instrument for an endogenous treatment, a recurring empirical practice is to construct one from a nonlinear transformation of the covariates. We investigate how reliable these estimates are under mild forms of misspecification. Our main result shows that for instruments constructed from covariates, the IV estimand can be arbitrarily biased under mild forms of misspecification, even when imposing constant linear treatment effects. We perform a semi-synthetic exercise by calibrating data to alternative models proposed in the literature and estimating the average treatment effect. Our results show that IV specifications that use instruments constructed from covariates are non-robust to nonlinearity in the true structural function.

en econ.EM
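The paper's warning is easy to reproduce in a toy simulation (functional forms below are mine): build the "instrument" as a nonlinear transform of a covariate, control for that covariate with a misspecified linear functional form, and the IV estimate drifts from the true effect:

```python
# Toy illustration of the paper's point: with an instrument constructed
# from a covariate and a misspecified outcome model in that covariate,
# the IV estimate is biased even under a constant linear treatment effect.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)                      # observed covariate
u = rng.normal(size=n)                      # unobserved confounder
z = x**2                                    # "instrument" built from x
d = 0.5 * z + u + rng.normal(size=n)        # treatment
y = 1.0 * d + np.abs(x) + u + rng.normal(size=n)   # true effect = 1.0
# The researcher controls for x linearly; the truth depends on |x|.

def resid(v, W):
    return v - W @ np.linalg.lstsq(W, v, rcond=None)[0]

W = np.column_stack([np.ones(n), x])        # linear-in-x controls
dz, yz, zz = resid(d, W), resid(y, W), resid(z, W)
beta_iv = (zz @ yz) / (zz @ dz)             # just-identified IV estimate
print("IV estimate:", beta_iv, "(true effect is 1.0)")
```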
arXiv Open Access 2025
Managing Comprehensive Research Instrument Descriptions within a Scholarly Knowledge Graph

Muhammad Haris, Sören Auer, Markus Stocker

In research, measuring instruments play a crucial role in producing the data that underpin scientific discoveries. Information about instruments is essential in data interpretation and, thus, knowledge production. However, if at all available and accessible, such information is scattered across numerous data sources. Relating the relevant details, e.g. instrument specifications or calibrations, to associated research assets (data, but also operating infrastructures) is challenging. Moreover, understanding the (possible) use of instruments is essential for researchers in experiment design and execution. To address these challenges, we propose a Knowledge Graph (KG) based approach for representing, publishing, and using information, extracted from various data sources, about instruments and associated scholarly artefacts. The resulting KG serves as a foundation for exploring and gaining a deeper understanding of the use and role of instruments in research, discovering relations between instruments and associated artefacts (articles and datasets), and opens the possibility of quantifying the impact of instruments in research.

en cs.DL
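The kind of instrument description proposed here maps naturally to RDF triples. A minimal rdflib sketch with a made-up namespace (the actual knowledge graph described in the paper would use its own schema):

```python
# Minimal RDF sketch of an instrument linked to a dataset and an article.
# The namespace and property names are invented for illustration only.
from rdflib import Graph, Literal, Namespace, RDF, RDFS, URIRef

EX = Namespace("https://example.org/instruments/")
g = Graph()
g.bind("ex", EX)

nmr = EX["spectrometer_42"]
g.add((nmr, RDF.type, EX.Instrument))
g.add((nmr, RDFS.label, Literal("1.2 GHz NMR spectrometer")))
g.add((nmr, EX.fieldStrengthTesla, Literal(28.2)))
g.add((nmr, EX.producedDataset, URIRef("https://example.org/datasets/d1")))
g.add((nmr, EX.describedInArticle,
       URIRef("https://doi.org/10.0000/example")))

print(g.serialize(format="turtle"))
```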
arXiv Open Access 2025
Hyperbolic absolute instruments

Tao Hou, Huanyang Chen

As a lens capable of sending images of deep sub-wavelength objects to the far field, the hyperlens has garnered significant attention for its super-resolution and magnification capabilities. However, traditional hyperlenses require extreme permittivity ratios and fail to achieve geometrically perfect imaging, significantly constraining their practical applications. In this paper, we introduce general versions of hyperbolic absolute instruments from the perspective of dispersion and fundamental optical principles. These instruments enable the formation of closed orbits in geometric optics, allowing hyperlenses to achieve aberration-free, perfect imaging. This development not only provides a flexible and practical tool for enhancing the performance of traditional hyperlenses, but also opens new possibilities for optoelectronic applications based on hyperbolic ray dynamics.

en physics.optics
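For background, the "hyperbolic" regime refers to the extraordinary-wave isofrequency surface of a uniaxial medium whose permittivity components have opposite signs (standard material, not quoted from this paper):

```latex
% Extraordinary-wave dispersion in a uniaxial medium with optic axis z
% (standard background):
\[
\frac{k_x^2 + k_y^2}{\varepsilon_z} + \frac{k_z^2}{\varepsilon_x}
  = \frac{\omega^2}{c^2},
\]
% which becomes a hyperboloid, supporting arbitrarily large transverse
% wavevectors, when \varepsilon_x and \varepsilon_z have opposite signs;
% this is what lets a hyperlens carry deep sub-wavelength detail.
```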
DOAJ Open Access 2025
Empirical Evaluation of Invariances in Deep Vision Models

Konstantinos Keremis, Eleni Vrochidou, George A. Papakostas

The ability of deep learning models to maintain consistent performance under image transformations, termed invariance, is critical for reliable deployment across diverse computer vision applications. This study presents a comprehensive empirical evaluation of modern convolutional neural networks (CNNs) and vision transformers (ViTs) concerning four fundamental types of image invariance: blur, noise, rotation, and scale. We analyze a curated selection of thirty models across three common vision tasks (object localization, recognition, and semantic segmentation), using benchmark datasets including COCO, ImageNet, and a custom segmentation dataset. Our experimental protocol introduces controlled perturbations to test model robustness and employs task-specific metrics such as mean Intersection over Union (mIoU) and classification accuracy (Acc) to quantify performance degradation. Results indicate that while ViTs generally outperform CNNs under blur and noise corruption in recognition tasks, both model families exhibit significant vulnerabilities to rotation and extreme scale transformations. Notably, segmentation models demonstrate higher resilience to geometric variations, with SegFormer and Mask2Former emerging as the most robust architectures. These findings challenge prevailing assumptions regarding model robustness and provide actionable insights for designing vision systems capable of withstanding real-world input variability.

Photography, Computer applications to medicine. Medical informatics
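The evaluation protocol, applying controlled perturbations and tracking metric degradation, is straightforward to sketch with torchvision; the model and data here are placeholders, and only the recognition-task accuracy metric is shown:

```python
# Sketch of the perturbation protocol: apply controlled rotations and blur,
# then track top-1 accuracy degradation on a batch of labeled images.
import torch
import torchvision.transforms.functional as TF

@torch.no_grad()
def accuracy_under(model, images, labels, perturb):
    preds = model(perturb(images)).argmax(dim=1)
    return (preds == labels).float().mean().item()

def evaluate_invariances(model, images, labels):
    results = {"clean": accuracy_under(model, images, labels, lambda x: x)}
    for angle in (15, 45, 90):
        results[f"rot{angle}"] = accuracy_under(
            model, images, labels, lambda x, a=angle: TF.rotate(x, a))
    for sigma in (1.0, 2.0):
        results[f"blur{sigma}"] = accuracy_under(
            model, images, labels,
            lambda x, s=sigma: TF.gaussian_blur(x, kernel_size=9, sigma=s))
    return results

# Usage (placeholders): model = torchvision.models.resnet50(weights="DEFAULT")
# images, labels = next(iter(val_loader))
# print(evaluate_invariances(model.eval(), images, labels))
```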
DOAJ Open Access 2025
Learning metal microstructural heterogeneity through spatial mapping of diffraction latent space features

Mathieu Calvat, Chris Bean, Dhruv Anjaria et al.

To leverage advancements in machine learning for metallic materials design and property prediction, it is crucial to develop a data-reduced representation of metal microstructures that surpasses the limitations of current physics-based discrete microstructure descriptors. This need is particularly relevant for metallic materials processed through additive manufacturing, which exhibit complex hierarchical microstructures that cannot be adequately described using the conventional metrics typically applied to wrought materials. Furthermore, capturing the spatial heterogeneity of microstructures at different scales is necessary within such a framework to accurately predict their properties. To address these challenges, we propose the physical spatial mapping of metal diffraction latent space features. This approach integrates (i) point diffraction data encoding via variational autoencoders or contrastive learning and (ii) the physical mapping of the encoded values. Together, these steps offer a method to comprehensively describe metal microstructures. We demonstrate this approach on a wrought and an additively manufactured alloy, showing that it effectively encodes microstructural information and enables direct identification of microstructural heterogeneity not directly possible with physics-based models. This data-reduced microstructure representation opens the door to applying machine learning models to accelerate metallic materials design and accurately predict material properties.

Materials of engineering and construction. Mechanics of materials, Computer software
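Step (i), encoding point diffraction data into a low-dimensional latent space, can be sketched with a minimal variational autoencoder; the sizes are illustrative and the contrastive-learning alternative mentioned in the abstract is omitted:

```python
# Minimal VAE sketch for encoding point diffraction patterns into a
# low-dimensional latent space. Architecture sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffractionVAE(nn.Module):
    def __init__(self, n_pix=4096, n_latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_pix, 256), nn.ReLU())
        self.mu = nn.Linear(256, n_latent)
        self.logvar = nn.Linear(256, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, 256), nn.ReLU(),
                                 nn.Linear(256, n_pix))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparam.
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = F.mse_loss(recon, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + 1e-3 * kld   # small KL weight, a common practical choice

# The fitted mu for each diffraction pattern becomes the latent feature
# that is then mapped back onto the physical scan grid.
```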

Page 11 of 9648