A. Pickering
Results for "Instruments and machines"
Showing 20 of ~631,561 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
Leland C. Clark, C. Lyons
J. Wilhelm, A. Pingoud
A. Holmes-Siedle, L. Adams
Peter-Paul Verbeek
Meenakshi Sudarvizhi Seenipeyathevar, Prasath Palaniappan, Vijayakumar Arumugam et al.
ABSTRACT This study presents an integrated experimental and machine-learning framework for wear estimation in functionally graded composites made from recycled magnesium machining chips, using low-cost ceramic fibers as reinforcement with a radial modeling technique. The primary challenge addressed is the accurate prediction of wear behavior in spatially graded magnesium matrix composites while avoiding extensive experimental testing. Wear performance was experimentally assessed under applied loads of 4.4 to 39 N, sliding speeds of 0.45 to 4.5 m/s, and sliding distances of 500 to 4500 m. Results demonstrate a 26.26% hardness increment in the outer region compared to the inner region, while wear resistance was enhanced by 19.8% in the outer zone due to the grading of ceramic fibers. A limited experimental dataset of wear measurements from the inner, middle, and outer zones of the composite was used to develop and validate four machine-learning models for wear rate prediction. Tree-based ensemble methods significantly outperformed deep-learning strategies, with the LightGBM model providing the best prediction performance across all zones, optimized with a maximum tree depth of 5, 480 leaves, and a feature fraction of 0.05. Zone-specific XGBoost models were also developed, with customized learning rates and minimal loss reduction parameters to raise prediction accuracy. The proposed machine-learning framework thus provides a pathway for rapid and reliable wear rate estimation for ceramic fiber-reinforced magnesium composites, substantially reducing the experimental burden. The results highlight that recycled magnesium waste, combined with ceramic reinforcement, can be used to produce sustainable and economically viable materials with improved wear resistance, particularly for automotive and industrial applications.
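The tree-ensemble approach in this abstract can be illustrated with a minimal gradient-boosting sketch in pure Python. Everything below is invented for illustration: the synthetic wear "law", the dataset, and the stump-based booster are not taken from the paper (which used LightGBM and XGBoost on measured data); only the feature ranges (load, speed, distance) mirror the reported experimental conditions.

```python
import random

def stump_fit(X, residuals):
    """Fit a depth-1 regression tree (stump): best (feature, threshold, means)."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X})[1:]:
            left = [r for row, r in zip(X, residuals) if row[f] < t]
            right = [r for row, r in zip(X, residuals) if row[f] >= t]
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((r - lm) ** 2 for r in left)
                   + sum((r - rm) ** 2 for r in right))
            if best is None or sse < best[0]:
                best = (sse, f, t, lm, rm)
    return best[1:]

def stump_predict(stump, row):
    f, t, lm, rm = stump
    return lm if row[f] < t else rm

def fit_boosted(X, y, rounds=30, lr=0.3):
    """Gradient boosting on squared error: each stump fits current residuals."""
    base = sum(y) / len(y)
    pred = [base] * len(y)
    stumps = []
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        s = stump_fit(X, residuals)
        stumps.append(s)
        pred = [pi + lr * stump_predict(s, row) for pi, row in zip(pred, X)]
    return base, lr, stumps

def predict(model, row):
    base, lr, stumps = model
    return base + lr * sum(stump_predict(s, row) for s in stumps)

random.seed(0)
X = [[random.uniform(4.4, 39.0),      # load (N)
      random.uniform(0.45, 4.5),      # sliding speed (m/s)
      random.uniform(500.0, 4500.0)]  # sliding distance (m)
     for _ in range(60)]
y = [0.02 * l + 0.5 * s + 0.001 * d for l, s, d in X]  # invented wear-rate law
model = fit_boosted(X, y)
mse = sum((predict(model, row) - yi) ** 2 for row, yi in zip(X, y)) / len(y)
var = sum((yi - sum(y) / len(y)) ** 2 for yi in y) / len(y)
```

The training MSE drops well below the target variance, which is the same qualitative behavior the abstract reports for its tuned tree ensembles; real use would of course rely on the LightGBM/XGBoost libraries and held-out validation.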
M. Caruthers
Dennys C. A. Mallqui, R. Fernandes
Abstract Bitcoin is the most widely accepted cryptocurrency in the world, which makes it attractive to investors and traders. However, the challenge in predicting the Bitcoin exchange rate is its high volatility, so predicting its behavior is of great importance for financial markets. Recent studies have therefore examined which internal and/or external Bitcoin information is relevant to its prediction. The increased use of machine learning techniques to predict time series and the acceptance of cryptocurrencies as financial instruments motivated the present study to seek more accurate predictions for the Bitcoin exchange rate. In the first stage of the proposed methodology, different feature selection techniques were evaluated to obtain the most relevant attributes for the predictions. Next, the behavior of Artificial Neural Networks (ANN), Support Vector Machines (SVM), and Ensemble algorithms (based on Recurrent Neural Networks and the k-Means clustering method) was analyzed for price direction predictions. Likewise, the ANN and SVM were employed for regression of the maximum, minimum, and closing prices of Bitcoin, and the regression results were also used as inputs in an attempt to improve the price direction predictions. The results showed that the selected attributes and the best machine learning model improved accuracy for the price direction predictions by more than 10% with respect to the state-of-the-art papers using the same period of information. For the maximum, minimum, and closing Bitcoin price regressions, Mean Absolute Percentage Errors between 1% and 2% were obtained. These results demonstrate the efficacy of the proposed methodology compared to other studies.
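The first stage described above, selecting the most relevant attributes before training predictors, can be sketched with a simple correlation-based filter. This is only one of many feature selection techniques the paper may have evaluated (it does not name a specific one here), and the "market" features and target below are synthetic, invented for illustration.

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rank_features(rows, target):
    """Rank feature indices by absolute correlation with the target."""
    scores = []
    for f in range(len(rows[0])):
        col = [r[f] for r in rows]
        scores.append((abs(pearson(col, target)), f))
    return [f for _, f in sorted(scores, reverse=True)]

random.seed(1)
# Synthetic data: feature 0 drives the target; features 1-2 are pure noise.
rows = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]
target = [r[0] * 2 + random.gauss(0, 0.1) for r in rows]
ranking = rank_features(rows, target)
```

A filter like this would feed only the top-ranked attributes to the ANN/SVM/Ensemble stage; wrapper or embedded selection methods are common alternatives.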
Florian Soyka, Peter Nickel, Francisco Rebelo et al.
Biprateep Dey, David Zhao, Brett H Andrews et al.
Key science questions, such as galaxy distance estimation and weather forecasting, often require knowing the full predictive distribution of a target variable Y given complex inputs X. Despite recent advances in machine learning and physics-based models, it remains challenging to assess whether an initial model is calibrated for all x, and when needed, to reshape the densities of y toward 'instance-wise' calibration. This paper introduces the local amortized diagnostics and reshaping of conditional densities (LADaR) framework and proposes a new computationally efficient algorithm (Cal-PIT) that produces interpretable local diagnostics and provides a mechanism for adjusting conditional density estimates (CDEs). Cal-PIT learns a single interpretable local probability-probability map from calibration data that identifies where and how the initial model is miscalibrated across feature space, which can be used to morph CDEs such that they are well-calibrated. We illustrate the LADaR framework on synthetic examples, including probabilistic forecasting from image sequences, akin to predicting storm wind speed from satellite imagery. Our main science application involves estimating the probability density functions of galaxy distances given photometric data, where Cal-PIT achieves better instance-wise calibration than all 11 other literature methods in a benchmark data challenge, demonstrating its utility for next-generation cosmological analyses.
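The probability integral transform (PIT) values that give Cal-PIT its name underpin this kind of diagnostic: if a conditional density model is calibrated, its CDF evaluated at the observed outcomes is uniform, and the empirical probability-probability (P-P) map lies on the diagonal. The sketch below checks this for a deliberately well-specified Gaussian model on synthetic draws; it is a toy version of the global diagnostic, not the paper's local, amortized algorithm.

```python
import random
from statistics import NormalDist

random.seed(2)
model = NormalDist(0.0, 1.0)             # the predictive density being diagnosed
ys = [random.gauss(0.0, 1.0) for _ in range(2000)]  # outcomes from the same law

# PIT value for each outcome: F_model(y). Calibrated model -> Uniform(0, 1).
pit = [model.cdf(y) for y in ys]

# Empirical P-P map: for each coverage level a, the fraction of PIT values < a.
levels = [0.1 * i for i in range(1, 10)]
pp = [sum(p < a for p in pit) / len(pit) for a in levels]
```

Here the model matches the data-generating law, so every P-P point should sit near the diagonal; a miscalibrated model would show systematic departures, which Cal-PIT localizes per instance x rather than globally.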
Yu Wang, Shu Xu, Zenghui Ding et al.
Background/Objectives: Knowledge Graphs (KGs) are often incomplete, which can significantly impact the performance of downstream applications. Manual completion of KGs is time-consuming and costly, emphasizing the importance of developing automated methods for knowledge graph completion (KGC). Link prediction serves as a fundamental task in this domain. The semantic correlation among entity features plays a crucial role in determining the effectiveness of link-prediction models. Notably, the human brain can often infer information using a limited set of salient features. Methods: Inspired by this cognitive principle, this paper proposes a lightweight bi-level routing attention mechanism specifically designed for link-prediction tasks. The proposed module explores a theoretically grounded and lightweight structural design aimed at enhancing the semantic recognition capability of language models without altering their core parameters, improving the model's ability to attend to feature regions with high semantic relevance. With only a marginal increase of approximately one million parameters, the mechanism effectively captures the most semantically informative features. Results: It replaces the original feature-extraction module within the KGML framework and is evaluated on the publicly available WN18RR and FB15K-237 datasets. Conclusions: Experimental results demonstrate consistent improvements in standard evaluation metrics, including Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits@10, confirming the effectiveness of the proposed approach.
Somporn Sahachaiseree, Takashi Oguchi
ABSTRACT Reinforcement learning (RL) is a promising machine-learning solution to traffic signal control problems, which have been extensively studied. However, previous studies proposing RL-based controllers have predominantly employed variants of non-linear, deep artificial neural network (ANN) function approximators (FAs), leaving a significant interpretability issue due to their black-box nature. In this work, the use of a linear FA for a value-based RL agent in traffic signal control problems is investigated along with the least-squares Q-learning method, abbreviated LSTDQ. The interpretable linear FA was found to be adequate for the RL agent to learn an optimal policy, leading to the proposal to replace a non-linear ANN FA with its linear counterpart and thus resolve the interpretability issue. Moreover, the LSTDQ learning method shows superior convergence behaviour compared to a gradient descent method. In a low-intensity arrival pattern scenario, control by the RL agent roughly halves the average delay produced by pretimed control. Owing to the conciseness of the linear FA, a direct interpretation analysis of the converged linear-FA parameters is presented. Lastly, two online relearning tests of the agents under non-stationary arrivals demonstrate the online performance of LSTDQ. In conclusion, the linear-FA specification and the LSTDQ method are together proposed for their interpretability, superior convergence quality, and lack of hyperparameters.
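The core of LSTDQ is that, with a linear Q-function Q(s,a) = w·φ(s,a), the fixed point of temporal-difference learning can be obtained in one shot by solving a small linear system A w = b accumulated over a batch of transitions, instead of by gradient descent. The sketch below shows that computation on an invented one-step, one-state task with a trivial two-dimensional feature map; the traffic-signal state features and policies in the paper are far richer.

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def phi(s, a):
    """Invented feature map: one-hot over the two actions (state ignored)."""
    return [1.0 if a == 0 else 0.0, 1.0 if a == 1 else 0.0]

def lstdq(transitions, policy, gamma=0.9):
    """Accumulate A = sum f (f - gamma f')^T and b = sum r f, then solve."""
    k = 2
    A = [[0.0] * k for _ in range(k)]
    b = [0.0] * k
    for s, a, r, s2, done in transitions:
        f = phi(s, a)
        f2 = [0.0] * k if done else phi(s2, policy(s2))
        for i in range(k):
            for j in range(k):
                A[i][j] += f[i] * (f[j] - gamma * f2[j])
            b[i] += f[i] * r
    return A and solve2(A, b)

# Invented batch: one state, episodes end immediately; action 1 pays 1, action 0 pays 0.
batch = [(0, 1, 1.0, 0, True), (0, 0, 0.0, 0, True)] * 20
w = lstdq(batch, policy=lambda s: 1)
```

Because the solve is exact for the batch, there is no learning rate to tune, which is one concrete sense in which the abstract's "lack of hyperparameters" claim holds; the recovered weights are directly readable as per-action values, illustrating the interpretability argument.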
Md Eimran Hossain Eimon, Ashan Perera, Juan Merlos et al.
Modern video codecs have been extensively optimized to preserve perceptual quality, leveraging models of the human visual system. However, in split inference systems, where intermediate features from a neural network are transmitted instead of pixel data, these assumptions no longer apply. Intermediate features are abstract, sparse, and task-specific, making perceptual fidelity irrelevant. In this paper, we investigate the use of Versatile Video Coding (VVC) for compressing such features under the MPEG-AI Feature Coding for Machines (FCM) standard. We perform a tool-level analysis to understand the impact of individual coding components on compression efficiency and downstream vision task accuracy. Based on these insights, we propose three lightweight essential VVC profiles: Fast, Faster, and Fastest. The Fast profile provides a 2.96% BD-Rate gain while reducing encoding time by 21.8%. Faster achieves a 1.85% BD-Rate gain with a 51.5% speedup. Fastest reduces encoding time by 95.6% with only a 1.71% BD-Rate loss.
Cyrill Scheidegger, Zijian Guo, Peter Bühlmann
We introduce a new instrumental variable (IV) estimator for heterogeneous treatment effects in the presence of endogeneity. Our estimator is based on double/debiased machine learning (DML) and uses efficient machine learning instruments (MLIV) and kernel smoothing. We prove consistency and asymptotic normality of our estimator and also construct confidence sets that are more robust towards weak IV. Along the way, we also provide an accessible discussion of the corresponding estimator for the homogeneous treatment effect with efficient machine learning instruments. The methods are evaluated on synthetic and real datasets and an implementation is made available in the R package IVDML.
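The intuition behind any IV estimator, including the DML variants discussed here, can be shown with the basic instrumental-variable moment: when an unobserved confounder biases ordinary regression of Y on D, an exogenous instrument Z still identifies the effect via Cov(Z,Y)/Cov(Z,D). The sketch below uses sample means as the only "nuisance" estimates and an invented data-generating process; it omits the paper's machine-learned instruments, sample splitting, kernel smoothing, and heterogeneity, all of which are the actual contribution.

```python
import random

random.seed(3)
theta_true = 2.0
n = 5000
Z = [random.gauss(0, 1) for _ in range(n)]            # exogenous instrument
U = [random.gauss(0, 1) for _ in range(n)]            # unobserved confounder
D = [z + u + random.gauss(0, 0.5) for z, u in zip(Z, U)]      # endogenous treatment
Y = [theta_true * d + u + random.gauss(0, 0.5) for d, u in zip(D, U)]

mz, my, md = sum(Z) / n, sum(Y) / n, sum(D) / n

# IV estimate: Cov(Z, Y) / Cov(Z, D) -- unbiased despite the confounder U.
num = sum((z - mz) * (y - my) for z, y in zip(Z, Y))
den = sum((z - mz) * (d - md) for z, d in zip(Z, D))
theta_iv = num / den

# Naive OLS of Y on D for comparison -- biased upward because U moves both.
theta_ols = (sum((d - md) * (y - my) for d, y in zip(D, Y))
             / sum((d - md) ** 2 for d in D))
```

DML-style estimators replace the sample means with cross-fitted machine-learned predictions of Y, D, and Z given covariates X, which is what makes the moment robust in high dimensions.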
Al Amin Biswas
Nowadays, artificial intelligence (AI) is used in several domains of the healthcare sector. Despite its effectiveness in healthcare settings, widespread adoption remains limited by a lack of transparency, which is considered a significant obstacle. To earn the trust of end users, it is necessary to explain the AI models' output, and explainable AI (XAI) has emerged as a potential solution by providing transparent explanations of that output. The primary aim of this review paper is to survey articles on machine learning (ML) or deep learning (DL) based human disease diagnosis in which the model's decision-making process is explained by XAI techniques. To this end, two journal databases (Scopus and the IEEE Xplore Digital Library) were thoroughly searched using a few predetermined relevant keywords. The PRISMA guidelines were followed to determine the papers for the final analysis, and studies that did not meet the requirements were eliminated. Finally, 90 Q1 journal articles covering several XAI techniques were selected for in-depth analysis. The findings are then summarized, and responses to the proposed research questions are outlined. In addition, several challenges related to XAI in human disease diagnosis and future research directions in this sector are presented.
Mathias Richerzhagen, Matthias Seidel, Leander Mehrgan et al.
A new detector controller, NGCII, is in development for the first-generation instruments of the ELT as well as new instruments for the VLT. Building on experience with previous ESO detector controllers, a modular system based on the MicroTCA.4 industrial standard is being designed to control a variety of infrared and visible-light scientific and wavefront-sensor detectors. This article presents the early development stages of NGCII hardware and firmware, from the decision to start an all-new design to first tests with detectors and ROICs.
Taha Yasseri
This Chapter examines the dynamics of conflict and collaboration in human-machine systems, with a particular focus on large-scale, internet-based collaborative platforms. While these platforms represent successful examples of collective knowledge production, they are also sites of significant conflict, as diverse participants with differing intentions and perspectives interact. The analysis identifies recurring patterns of interaction, including serial attacks, reciprocal revenge, and third-party interventions. These microstructures reveal the role of experience, cultural differences, and topic sensitivity in shaping human-human, human-machine, and machine-machine interactions. The chapter further investigates the role of algorithmic agents and bots, highlighting their dual nature: they enhance collaboration by automating tasks but can also contribute to persistent conflicts with both humans and other machines. We conclude with policy recommendations that emphasize transparency, balance, cultural sensitivity, and governance to maximize the benefits of human-machine synergy while minimizing potential detriments.
Joanikij Chulev
Musical instrument classification, a key area in Music Information Retrieval, has gained considerable interest due to its applications in education, digital music production, and consumer media. Recent advances in machine learning, specifically deep learning, have enhanced the capability to identify and classify musical instruments from audio signals. This study applies various machine learning methods, including Naive Bayes, Support Vector Machines, Random Forests, Boosting techniques like AdaBoost and XGBoost, as well as deep learning models such as Convolutional Neural Networks and Artificial Neural Networks. The effectiveness of these methods is evaluated on the NSynth dataset, a large repository of annotated musical sounds. By comparing these approaches, the analysis aims to showcase the advantages and limitations of each method, providing guidance for developing more accurate and efficient classification systems. Additionally, hybrid model testing and discussion are included. This research aims to support further studies in instrument classification by proposing new approaches and future research directions.
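One of the simpler classifiers this study compares, Naive Bayes, can be sketched in a few lines: each class is modeled by per-feature Gaussians and classification picks the class with the highest log-posterior. The two "instrument" classes, their feature values, and the class names below are all invented for illustration; the study itself works on NSynth audio features.

```python
import math
import random

def fit_gnb(X, y):
    """Per-class prior plus per-feature (mean, variance) estimates."""
    model = {}
    for c in set(y):
        rows = [x for x, yi in zip(X, y) if yi == c]
        n = len(rows)
        stats = []
        for f in range(len(rows[0])):
            col = [r[f] for r in rows]
            m = sum(col) / n
            v = sum((a - m) ** 2 for a in col) / n + 1e-6  # avoid zero variance
            stats.append((m, v))
        model[c] = (n / len(y), stats)
    return model

def log_gauss(x, m, v):
    return -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)

def predict_gnb(model, x):
    def score(c):
        prior, stats = model[c]
        return math.log(prior) + sum(
            log_gauss(xi, m, v) for xi, (m, v) in zip(x, stats))
    return max(model, key=score)

random.seed(4)
# Two invented classes, well separated in a made-up 2-D feature space
# (think spectral centroid and attack time, purely hypothetical here).
X = ([[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(100)]
     + [[random.gauss(3, 1), random.gauss(3, 1)] for _ in range(100)])
y = ["flute"] * 100 + ["guitar"] * 100
model = fit_gnb(X, y)
acc = sum(predict_gnb(model, x) == yi for x, yi in zip(X, y)) / len(y)
```

On real audio, the SVM, forest, boosting, and deep models the study evaluates would replace this baseline, but the fit/score structure stays the same, which is what makes Naive Bayes a useful reference point in such comparisons.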
V. Shiltsev, F. Zimmermann
Since the initial development of charged particle colliders in the middle of the 20th century, these advanced scientific instruments have been at the forefront of scientific discoveries in high energy physics. Collider accelerator technology and beam physics have progressed immensely and modern facilities now operate at energies and luminosities many orders of magnitude greater than the pioneering colliders of the early 1960s. In addition, the field of colliders remains extremely dynamic and continues to develop many innovative approaches. Indeed, several novel concepts are currently being considered for designing and constructing even more powerful future colliders. In this paper, we first review the colliding beam method and the history of colliders, and then present the major achievements of operational machines and the key features of near-term collider projects that are currently under development. We conclude with an analysis of numerous proposals and studies for far-future colliders. The evaluation of their respective potentials reveals tantalizing prospects for further significant breakthroughs in the collider field.
Anderson de Andrade, Alon Harell, Yalda Foroutan et al.
We present methods for conditional and residual coding in the context of scalable coding for humans and machines. Our focus is on optimizing the rate-distortion performance of the reconstruction task using the information available in the computer vision task. We include an information analysis of both approaches to provide baselines and also propose an entropy model suitable for conditional coding with increased modelling capacity and similar tractability as previous work. We apply these methods to image reconstruction, using, in one instance, representations created for semantic segmentation on the Cityscapes dataset, and in another instance, representations created for object detection on the COCO dataset. In both experiments, we obtain similar performance between the conditional and residual methods, with the resulting rate-distortion curves contained within our baselines.
Page 2 of 31579