Quality Control to Reduce Appearance Defects at PT. Musical Instrument
Dikka Safriyanto, Fibi Eko Putra, Putri Anggun Sari
This research was conducted at PT. Musical Instrument and aims to analyze quality control to reduce appearance defects in piano products on the assembling production line. The problem faced by the company is a high level of product defects, which lowers quality and customer satisfaction. The research method used is Six Sigma with a DMAIC (Define, Measure, Analyze, Improve, Control) approach. This is a quantitative study, with data collected in the form of the number of production defects in pianos. To analyze the causes of defects, a fishbone diagram with 4M + 1E factors is used, namely Man, Machine, Method, Material, and Environment. The results of the analysis show that the main factors causing appearance defects in piano products include non-compliance with work methods, lack of worker training, use of non-standard materials, suboptimal jig conditions, and an unsupportive working environment. Based on these findings, improvement proposals are given in the form of improved standard operating procedures, regular training for workers, the use of high-quality materials, regular maintenance and calibration of jigs, and improvement of work environment conditions. Implementation of these proposals is expected to reduce the number of appearance defects in piano products, improve product quality, and meet the quality standards expected by PT. Musical Instrument.
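As background for the Measure phase of DMAIC, the defect level in studies of this kind is typically summarised as defects per million opportunities (DPMO) and converted to a sigma level; the formula below is the standard textbook definition, shown for orientation only, and none of the figures reported by the study are reproduced here.

\[
\mathrm{DPMO} = \frac{\text{number of defects found}}{\text{units inspected} \times \text{defect opportunities per unit}} \times 10^{6}
\]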
Sentinel-2 Data for Land Cover/Use Mapping: A Review
Darius Phiri, Matamyo Simwanda, Serajis Salekin
et al.
The advancement in satellite remote sensing technology has revolutionised the approaches to monitoring the Earth’s surface. The development of the Copernicus Programme by the European Space Agency (ESA) and the European Union (EU) has contributed to the effective monitoring of the Earth’s surface by producing the Sentinel-2 multispectral products. Sentinel-2 satellites are the second constellation of the ESA Sentinel missions and carry onboard multispectral scanners. The primary objective of the Sentinel-2 mission is to provide high-resolution satellite data for land cover/use monitoring, climate change and disaster monitoring, as well as complementing other satellite missions such as Landsat. Since the launch of the Sentinel-2 multispectral instruments in 2015, there have been many studies on land cover/use classification which use Sentinel-2 images. However, no review studies have been dedicated to the application of ESA Sentinel-2 to land cover/use monitoring. Therefore, this review focuses on two aspects: (1) assessing the contribution of ESA Sentinel-2 to land cover/use classification, and (2) exploring the performance of Sentinel-2 data in different applications (e.g., forest, urban area and natural hazard monitoring). The present review shows that Sentinel-2 has a positive impact on land cover/use monitoring, specifically in the monitoring of crops, forests, urban areas, and water resources. The contemporary high adoption and application of Sentinel-2 can be attributed to its higher spatial resolution (10 m) compared with other medium-spatial-resolution images, its high temporal resolution of 5 days, and the availability of the red-edge bands with multiple applications. The ability to integrate Sentinel-2 data with other remotely sensed data, as part of data analysis, improves the overall accuracy (OA) when working with Sentinel-2 images. The free access policy drives the increasing use of Sentinel-2 data, especially in developing countries where financial resources for the acquisition of remotely sensed data are limited. The literature also shows that the use of Sentinel-2 data produces high accuracies (>80%) with machine-learning classifiers such as support vector machine (SVM) and random forest (RF). However, other classifiers such as maximum likelihood analysis are also common. Although Sentinel-2 offers many opportunities for land cover/use classification, there are challenges, which include mismatches with Landsat 8 OLI data, a lack of thermal bands, and the differences in spatial resolution among the bands of Sentinel-2. Sentinel-2 data show promise and have the potential to contribute significantly towards land cover/use monitoring.
738 citations
en
Environmental Science, Computer Science
Climate policies that achieved major emission reductions: Global evidence from two decades
Annika Stechemesser, Nicolas Koch, E. Mark
et al.
Meeting the Paris Agreement’s climate targets necessitates better knowledge about which climate policies work in reducing emissions at the necessary scale. We provide a global, systematic ex post evaluation to identify policy combinations that have led to large emission reductions out of 1500 climate policies implemented between 1998 and 2022 across 41 countries from six continents. Our approach integrates a comprehensive climate policy database with a machine learning–based extension of the common difference-in-differences approach. We identified 63 successful policy interventions with total emission reductions between 0.6 billion and 1.8 billion metric tonnes CO2. Our insights on effective but rarely studied policy combinations highlight the important role of price-based instruments in well-designed policy mixes and the policy efforts necessary for closing the emissions gap. Editor’s summary: It is easy for countries to say they will reduce their emissions of greenhouse gases, but these statements do not mean that the policies they adopt will be effective. Stechemesser et al. evaluated 1500 climate policies that have been implemented over the past 25 years and identified the 63 most successful ones. Some of those successes involved rarely studied policies and unappreciated combinations. This work illustrates the kinds of policy efforts that are needed to close the emissions gaps in various economic sectors. —Jesse Smith
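For orientation, the "common difference-in-differences approach" mentioned above is usually written as a two-way fixed-effects regression of the form below; this is a textbook schematic, not the machine-learning extension developed in the paper.

\[
e_{it} = \alpha_i + \lambda_t + \delta\, D_{it} + \varepsilon_{it},
\]

where \(e_{it}\) is the emissions outcome of unit \(i\) in year \(t\), \(\alpha_i\) and \(\lambda_t\) are unit and time fixed effects, \(D_{it}\) indicates that a policy is in force, and \(\delta\) is the estimated effect.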
Analyzing Compost Fermentation Accuracy Through Fuzzy Logic and R-Square Techniques
Reza Firmansyah Putranto, Novita Kurnia Ningrum
The accumulation of unmanaged organic waste remains a critical environmental issue, highlighting the need for technological support to improve composting efficiency and monitoring. This study proposes an Internet of Things (IoT)-based system for monitoring compost fermentation conditions using temperature and humidity sensors, combined with Fuzzy Logic and R-square (R²) analysis to evaluate fermentation quality. The system employs a DHT11 sensor integrated with an ESP8266 microcontroller to collect temperature and humidity data in real time over a 20-day observation period, resulting in 1,008 data points. Fuzzy Logic is applied through fuzzification, rule-based inference, and defuzzification to classify compost conditions into four categories: poor, good, very good, and cooling needed. The model’s performance is further validated using multiple linear regression, with temperature and humidity as independent variables and average temperature as the dependent variable. The results show that compost temperature ranged between 28–32°C and humidity between 50–87%, indicating that the fermentation process was predominantly in the mesophilic or early composting phase. The fuzzy inference results demonstrate that most conditions fell within the “good” category, while the R² value of 0.87 indicates a strong relationship between the observed variables. These findings confirm that the integration of IoT, Fuzzy Logic, and statistical analysis is effective as a real-time monitoring and decision support system for compost management, while also highlighting the need for additional parameters to achieve a more comprehensive compost quality assessment.
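To make the fuzzification, rule-based inference, and R² steps concrete, the sketch below uses hypothetical membership functions, rule boundaries, and the category names from the abstract; the study's actual rules, thresholds, and regression setup are not given here, and every numeric value in this snippet is an assumption.

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function; the breakpoints used below are assumptions."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_compost_category(temp_c, humidity_pct):
    """Toy fuzzification and rule evaluation over the four categories named above."""
    temp_meso = tri(temp_c, 25, 32, 42)        # mesophilic range (assumed)
    temp_thermo = tri(temp_c, 40, 55, 68)      # thermophilic range (assumed)
    temp_hot = tri(temp_c, 60, 70, 80)         # overheating (assumed)
    hum_ok = tri(humidity_pct, 45, 65, 90)     # acceptable moisture (assumed)
    strengths = {
        "good": min(temp_meso, hum_ok),
        "very good": min(temp_thermo, hum_ok),
        "cooling needed": temp_hot,
    }
    strengths["poor"] = 1.0 - max(strengths.values())
    # Defuzzification simplified to a max-membership decision for brevity.
    return max(strengths, key=strengths.get)

def r_squared(y_true, y_pred):
    """Coefficient of determination, as used to validate the regression model."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print(fuzzy_compost_category(30, 65))   # -> "good" under these toy rules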
Electronic computers. Computer science
Liver Disease Prediction Using Machine Learning
Madugula Anjaneyulu, G. Karuna, Bharadwaj Vedadri Yoganand
et al.
Numerous causes contribute to the prevalence of liver disease, which is a major global health concern. Improved patient outcomes and prompt intervention can be facilitated by early detection and precise prediction of liver disease. Based on clinical and demographic characteristics, we present a machine learning method for predicting liver disease in this work. Patient medical histories, a range of laboratory test results, and demographic data make up the dataset used in this investigation. We use a variety of machine learning algorithms, such as support vector machines, decision trees, random forests, and logistic regression, to create prediction models. The most relevant features contributing to the prediction of liver disease are identified using feature selection approaches. Metrics including accuracy, precision, recall, and F1-score are used to evaluate performance. The findings show the effectiveness of the proposed machine learning models in correctly identifying whether liver disease is present. Better patient care and the efficient use of healthcare resources are made possible by this research's contribution to the development of trustworthy diagnostic instruments for the early identification and treatment of liver disease.
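A minimal sketch of the kind of pipeline the abstract describes, assuming a tabular dataset with a binary liver-disease label; the file name, column names, feature count, and the choice of random forest are placeholders rather than details taken from the paper.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Hypothetical CSV with lab results, demographics, and a 0/1 "disease" label
df = pd.read_csv("liver_patients.csv")          # placeholder file name
X, y = df.drop(columns="disease"), df["disease"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Feature selection followed by one of the classifiers mentioned in the abstract
model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=8)),    # k is an arbitrary choice here
    ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
])
model.fit(X_train, y_train)

# Accuracy, precision, recall and F1, as in the evaluation described above
print(classification_report(y_test, model.predict(X_test)))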
Advancements in remote sensing for active fire detection: A review of datasets and methods
Songxi Yang, Qunying Huang, Manzhu Yu
This study comprehensively and critically reviews active fire detection advancements in remote sensing from 1975 to the present, focusing on two main perspectives: datasets and corresponding instruments, and detection algorithms. The study highlights the increasing role of machine learning, particularly deep learning techniques, in active fire detection. Looking forward, the review outlines current challenges and future research opportunities in remote sensing for active fire detection. These include exploring data quality management and multi-modal learning, developing spatiotemporally explicit models, investigating self-supervised learning models, improving explainable and interpretable models, integrating physical models with machine learning, and building digital twins for data analysis. The review aims to serve as a valuable resource for informing natural resource management and enhancing environmental protection efforts through the application of remote sensing technology.
Contrastive timbre representations for musical instrument and synthesizer retrieval
Gwendal Le Vaillant, Yannick Molle
Efficiently retrieving specific instrument timbres from audio mixtures remains a challenge in digital music production. This paper introduces a contrastive learning framework for musical instrument retrieval, enabling direct querying of instrument databases using a single model for both single- and multi-instrument sounds. We propose techniques to generate realistic positive/negative pairs of sounds for virtual musical instruments, such as samplers and synthesizers, addressing limitations in common audio data augmentation methods. The first experiment focuses on instrument retrieval from a dataset of 3,884 instruments, using single-instrument audio as input. Contrastive approaches are competitive with previous works based on classification pre-training. The second experiment considers multi-instrument retrieval with a mixture of instruments as audio input. In this case, the proposed contrastive framework outperforms related works, achieving 81.7% top-1 and 95.7% top-5 accuracies for three-instrument mixtures.
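The abstract does not specify the exact loss, so the snippet below shows a generic NT-Xent-style contrastive objective over a batch of positive pairs, only to make the positive/negative-pair idea concrete; the encoder, the pair-generation techniques, and the temperature value are assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def nt_xent_loss(z_a, z_b, temperature=0.1):
    """Generic contrastive (NT-Xent) loss over a batch of embedding pairs.

    z_a, z_b: (N, D) embeddings of two views of the same instrument
    (e.g. different notes or velocities of one synthesizer patch);
    the other items in the batch act as negatives.
    """
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetric cross-entropy: each row/column should match its own pair
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))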
AI-Instruments: Embodying Prompts as Instruments to Abstract & Reflect Graphical Interface Commands as General-Purpose Tools
Nathalie Riche, Anna Offenwanger, Frederic Gmeiner
et al.
Chat-based prompts respond with verbose linear-sequential texts, making it difficult to explore and refine ambiguous intents, back up and reinterpret, or shift directions in creative AI-assisted design work. AI-Instruments instead embody "prompts" as interface objects via three key principles: (1) Reification of user-intent as reusable direct-manipulation instruments; (2) Reflection of multiple interpretations of ambiguous user-intents (Reflection-in-intent) as well as the range of AI-model responses (Reflection-in-response) to inform design "moves" towards a desired result; and (3) Grounding to instantiate an instrument from an example, result, or extrapolation directly from another instrument. Further, AI-Instruments leverage LLMs to suggest, vary, and refine new instruments, enabling a system that goes beyond hard-coded functionality by generating its own instrumental controls from content. We demonstrate four technology probes, applied to image generation, and qualitative insights from twelve participants, showing how AI-Instruments address challenges of intent formulation, steering via direct manipulation, and non-linear iterative workflows to reflect and resolve ambiguous intents.
Quasiperiodicity Protects Quantized Transport in Disordered Systems Without Gaps
Emmanuel Gottlob, Dan S. Borgnia, Robert-Jan Slager
et al.
The robustness of topological properties, such as quantized currents, generally depends on the existence of gaps surrounding the relevant energy levels or on symmetry-forbidden transitions. Here, we observe quantized currents that survive the addition of bounded local disorder beyond the closing of the relevant instantaneous energy gaps in a driven Aubry-André-Harper chain, a prototypical model of quasiperiodic systems. We explain the robustness using a local picture in configuration space based on Landau-Zener transitions, which rests on the Anderson localization of the eigenstates. Moreover, we propose a protocol, directly realizable in, for instance, cold atoms or photonic experiments, that leverages this stability to prepare topological many-body states with high Chern numbers and opens new experimental avenues for the study of both the integer and fractional quantum Hall effects.
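For reference, the Aubry-André-Harper chain mentioned above is commonly written as the tight-binding Hamiltonian below (a schematic form; the paper's specific driving protocol and disorder term are not reproduced here):

\[
H(t) = \sum_n \Big[ J\, c_n^\dagger c_{n+1} + \mathrm{h.c.} \Big] + \lambda \sum_n \cos\!\big(2\pi\beta n + \phi(t)\big)\, c_n^\dagger c_n ,
\]

with \(\beta\) irrational and the phase \(\phi(t)\) swept over a driving cycle; bounded local disorder enters as an additional on-site term.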
Physics, Computer software
Quantum causal inference with extremely light touch
Xiangjing Liu, Yixian Qiu, Oscar Dahlsten
et al.
We give a causal inference scheme using quantum observations alone for a case with both temporal and spatial correlations: a bipartite quantum system with measurements at two times. The protocol determines compatibility with five causal structures distinguished by the direction of causal influence and whether there are initial correlations. We derive and exploit a closed-form expression for the spacetime pseudo-density matrix (PDM) for many times and qubits. This PDM can be determined by light-touch coarse-grained measurements alone. We prove that if there is no signalling between two subsystems, the reduced state of the PDM cannot have negativity, regardless of initial spatial correlations. In addition, the protocol exploits the time asymmetry of the PDM to determine the temporal order. The protocol succeeds for a state with coherence undergoing a fully decohering channel. Thus coherence in the channel is not necessary for the quantum advantage of causal inference from observations alone.
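As background, the pseudo-density matrix referred to above is commonly defined, for \(n\) single-qubit events each measured in the Pauli bases, as

\[
R = \frac{1}{2^{n}} \sum_{i_1,\dots,i_n=0}^{3} \big\langle \{\sigma_{i_1},\dots,\sigma_{i_n}\} \big\rangle\; \sigma_{i_1} \otimes \cdots \otimes \sigma_{i_n},
\]

where \(\langle \{\cdot\} \rangle\) denotes the expectation value of the product of the measurement outcomes; the closed-form expression derived in the paper itself is not reproduced here.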
Physics, Electronic computers. Computer science
A Wearable Human–Machine Interactive Instrument for Controlling a Wheelchair Robotic Arm System
Zilin Lu, Yajun Zhou, Li Hu
et al.
The limitations of traditional manual human–machine interactive instruments (HMIs) prevent individuals with severe motor disabilities from effectively controlling wheelchair robotic arm systems, thereby impacting their independence and quality of life. Addressing this issue, this study developed a wearable hybrid HMI to meet the practical needs of individuals with severe motor disabilities. By recruiting ten healthy participants to complete three experiments—a blink detection test, wheelchair control test, and wheelchair robotic arm system test—the effectiveness of the proposed HMI was validated. In the first experiment, the average accuracy of blink detection in the system reached 97.29%, with an average response time of 1.47 s, and the system generated 0.07 errors per minute in the idle state. The second experiment demonstrated an average response time of 1.03 s for wheelchair turning and 1.48 s for wheelchair stopping. In the hybrid control condition, the wheelchair navigated obstacles along the specified route in an average time of 1.14 min, and participants executed a minimum of 25.20 commands. In the third experiment, all participants successfully completed the mobile self-drinking task by controlling the wheelchair robotic arm system, with an average workload of 30.2 on the NASA Task Load Index (NASA-TLX) scale. The study revealed that the proposed HMI offers a promising solution for nonmanual control in complex rehabilitation assistive systems. It has the potential to assist a wider range of individuals with motor disabilities, improving their daily life experiences.
23 citations
en
Computer Science
Geometric Error Measurement of Rotary Axes on Five-Axis Machine Tools: A Review
Yu-Ta Chen, Chien-Sheng Liu, Wen-Chi Shiau
et al.
When Process Analysis Technology Meets Transfer Learning: A Model Transfer Strategy Between Different Spectrometers for Quantitative Analysis
Yan Yu, Meibao Yao, Jipeng Huang
et al.
With the increase in the number of types of spectrometers in use, calibration models cannot be shared among different instruments; however, this problem can be solved via calibration transfer (CT). In this study, a variety of modern process analysis technology (PAT) data are taken as the research object. After preprocessing the spectra data using principal component analysis (PCA) and cubic spline interpolation, the TrAdaBoost algorithm in transfer learning combined with extreme learning machine (ELM), i.e., TrAdaBoost-ELM, is used to transfer the master model to slave instruments and to make comparisons with the transfer via an extreme learning machine auto-encoder method (TEAM) and the semisupervised parameter-free framework for calibration enhancement (SS-PFCE) method. After the master model is transferred by the TrAdaBoost-ELM algorithm for the prediction dataset of slave instruments, the mean coefficient of determination of prediction (R²p) increases from 0.7843 to 0.8707, and the mean root-mean-square error of prediction (RMSEP) decreases from 2.7508 to 2.3112. Furthermore, variable combination population analysis (VCPA) in combination with a genetic algorithm (VCPA-IGA) was used to select characteristic wavelengths in molecular and atomic spectra, respectively. For the same type of laser-induced breakdown spectroscopy (LIBS) instruments K1 and K2, after processing by the VCPA-IGA algorithm, the LIBS calibration model established on K1 was transferred successfully to K2, and for the major elements, the mean R²p = 0.9563 and the mean RMSEP = 1.3796. After processing by the VCPA algorithm, the near-infrared (NIR) model for instrument L was transferred to a different instrument J, and the prediction results were R²p = 0.9110 and RMSEP = 0.4044 °Brix. The results demonstrated that an appropriate variable selection method combined with the TrAdaBoost-ELM algorithm can be effectively used for CT for spectrometers of the same and different types, thus achieving model sharing between different spectrometers.
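The extreme learning machine used as the base learner in TrAdaBoost-ELM is simple enough to sketch; the snippet below is a generic single-hidden-layer ELM regressor with arbitrary hyperparameters, shown only to make that building block concrete. The TrAdaBoost instance-reweighting loop and the authors' preprocessing are omitted.

import numpy as np

class ELMRegressor:
    """Minimal extreme learning machine: random hidden layer, output weights
    solved in closed form. Hyperparameters here are illustrative, not the
    authors' configuration."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)           # random hidden-layer features
        self.beta = np.linalg.pinv(H) @ y          # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta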
15 citations
en
Computer Science
Which policy instruments to induce clean innovating?
R. Veugelers
Instrumental variables: A non-asymptotic viewpoint
Eric Xia, Martin J. Wainwright, Whitney Newey
We provide a non-asymptotic analysis of the linear instrumental variable estimator allowing for the presence of exogenous covariates. In addition, we introduce a novel measure of the strength of an instrument that can be used to derive non-asymptotic confidence intervals. For strong instruments, these non-asymptotic intervals match the asymptotic ones exactly up to higher order corrections; for weaker instruments, our intervals involve adaptive adjustments to the instrument strength, and thus remain valid even when asymptotic predictions break down. We illustrate our results via an analysis of the effect of PM2.5 pollution on various health conditions, using wildfire smoke exposure as an instrument. Our analysis shows that exposure to PM2.5 pollution leads to statistically significant increases in incidence of health conditions such as asthma, heart disease, and strokes.
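For readers less familiar with the setting, the linear instrumental variable estimator analysed above is, in its standard two-stage least squares form (exogenous covariates suppressed for brevity; the paper's non-asymptotic strength measure and confidence intervals are not reproduced),

\[
Y = X\beta + \varepsilon, \qquad \mathbb{E}\big[Z^\top \varepsilon\big] = 0, \qquad
\hat{\beta}_{\mathrm{2SLS}} = \big(X^\top P_Z X\big)^{-1} X^\top P_Z Y,
\quad P_Z = Z\big(Z^\top Z\big)^{-1} Z^\top .
\]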
A Compact Anomaly Detection Solution for Science Instruments
Alfonso Lagares de Toledo, Christopher E. Carr
Small, low-cost instruments enable new and exciting mission opportunities, yet their constrained volume and limited budgets make them especially susceptible to anomalies during flight. Radiation effects, as well as sensor or actuator failures, can pose a serious threat to the continued collection of scientific data and cause the partial or complete loss of a mission's science payload. Onboard anomaly detection could allow instruments to recover from such events, but its ad hoc development typically falls outside the mission timeline or budget constraints. Here we describe a compact solution for implementing onboard anomaly detection in space science missions. The device is designed to be interoperable with a broad range of instruments, using easily accessible power and logic signals to monitor the state of peripherals and actuators without disrupting their functionality. By leveraging a commercially available microcontroller with a radiation-hardened alternative package, the device can be inexpensively sourced and assembled with minimal work, enabling instrument characterization on an expedited timeline. The system can then be exchanged for a radiation-hardened version, ensuring that anomalies observed during instrument operations can be replicated in a laboratory environment. We also present currently implemented anomaly detection algorithms, which enable the system to detect anomalies in instruments with varying failure modes and allow mission designers to choose the detection approach that best fits the specific needs of their instrument. Finally, we showcase an example application of this system in the detection of anomalies during the operation of a lysis motor designed for use in biological space instruments.
en
physics.ins-det, eess.SP
Corrigendum to “Effective and scalable black-box fuzzing approach for modern web applications” [J. King Saud Univ. Comp. Info. Sci. 34(10) (2022) 10068–10078]
Aseel Alsaedi, Abeer Alhuzali, Omaimah Bamasag
Electronic computers. Computer science
Using LLMs for Augmenting Hierarchical Agents with Common Sense Priors
Bharat Prakash, Tim Oates, Tinoosh Mohsenin
Solving long-horizon, temporally-extended tasks using Reinforcement Learning (RL) is challenging, compounded by the common practice of learning without prior knowledge (or tabula rasa learning). Humans can generate and execute plans with temporally-extended actions and quickly learn to perform new tasks because we almost never solve problems from scratch. We want autonomous agents to have this same ability. Recently, LLMs have been shown to encode a tremendous amount of knowledge about the world and to perform impressive in-context learning and reasoning. However, using LLMs to solve real world problems is hard because they are not grounded in the current task. In this paper we exploit the planning capabilities of LLMs while using RL to provide learning from the environment, resulting in a hierarchical agent that uses LLMs to solve long-horizon tasks. Rather than relying on LLMs alone, we use them to guide a high-level policy, making learning significantly more sample efficient. This approach is evaluated in simulation environments such as MiniGrid, SkillHack, and Crafter, and on a real robot arm in block manipulation tasks. We show that agents trained using our approach outperform other baseline methods and, once trained, do not need access to LLMs during deployment.
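As a rough illustration of the architecture described above, the toy sketch below has an LLM-style prior propose candidate subgoals while a simple tabular high-level policy learns from environment reward which subgoal to issue; the environment, subgoal names, and hyperparameters are invented placeholders, not the authors' implementation.

import random

class ToyEnv:
    """Stand-in for a MiniGrid/SkillHack-style task; purely illustrative."""
    ORDER = ["find_key", "open_door", "reach_goal"]
    def reset(self):
        self.stage = 0
        return self.stage
    def execute_skill(self, skill):
        # A low-level controller is assumed to carry out the chosen subgoal.
        if skill == self.ORDER[self.stage]:
            self.stage += 1
        done = self.stage == len(self.ORDER)
        return self.stage, (1.0 if done else 0.0), done

def llm_suggest_subgoals(task_description):
    """Placeholder for an LLM query that returns plausible subgoals."""
    return ["find_key", "open_door", "reach_goal"]

def choose_subgoal(state, subgoals, q, eps=0.1):
    """High-level policy: epsilon-greedy over the LLM-proposed subgoals."""
    if random.random() < eps:
        return random.choice(subgoals)
    return max(subgoals, key=lambda g: q.get((state, g), 0.0))

def train(env, episodes=200, alpha=0.1, gamma=0.99):
    subgoals, q = llm_suggest_subgoals("toy three-step task"), {}
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            g = choose_subgoal(state, subgoals, q)
            nxt, reward, done = env.execute_skill(g)
            best = 0.0 if done else max(q.get((nxt, s), 0.0) for s in subgoals)
            old = q.get((state, g), 0.0)
            q[(state, g)] = old + alpha * (reward + gamma * best - old)
            state = nxt
    return q

q_table = train(ToyEnv())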
Technology, Electronic computers. Computer science
Madvex: Instrumentation-based Adversarial Attacks on Machine Learning Malware Detection
Nils Loose, Felix Mächtle, Claudius Pott
et al.
WebAssembly (Wasm) is a low-level binary format for web applications, which has found widespread adoption due to its improved performance and compatibility with existing software. However, the popularity of Wasm has also led to its exploitation for malicious purposes, such as cryptojacking, where malicious actors use a victim's computing resources to mine cryptocurrencies without their consent. To counteract this threat, machine learning-based detection methods aiming to identify cryptojacking activities within Wasm code have emerged. It is well-known that neural networks are susceptible to adversarial attacks, where inputs to a classifier are perturbed with minimal changes that result in a crass misclassification. While applying changes in image classification is easy, manipulating binaries in an automated fashion to evade malware classification without changing functionality is non-trivial. In this work, we propose a new approach to include adversarial examples in the code section of binaries via instrumentation. The introduced gadgets allow for the inclusion of arbitrary bytes, enabling efficient adversarial attacks that reliably bypass state-of-the-art machine learning classifiers such as the CNN-based Minos recently proposed at NDSS 2021. We analyze the cost and reliability of instrumentation-based adversarial example generation and show that the approach works reliably at minimal size and performance overheads.
The First-stage F Test with Many Weak Instruments
Zhenhong Huang, Chen Wang, Jianfeng Yao
A widely adopted approach for detecting weak instruments is to use the first-stage $F$ statistic. While this method was developed with a fixed number of instruments, its performance with many instruments remains insufficiently explored. We show that the first-stage $F$ test exhibits distorted sizes for detecting many weak instruments, regardless of the choice of pretested estimators or Wald tests. These distortions occur due to the inadequate approximation using classical noncentral Chi-squared distributions. As a byproduct of our main result, we present an alternative approach to pre-test many weak instruments with the corrected first-stage $F$ statistic. An empirical illustration with Angrist and Krueger (1991)'s returns to education data confirms its usefulness.
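For reference, the statistic under discussion is the $F$ test of joint instrument relevance in the first-stage regression, which under homoskedasticity (and without covariates) takes the form

\[
x_i = z_i^\top \pi + v_i, \qquad
F = \frac{\hat{\pi}^\top Z^\top Z\, \hat{\pi} / K}{\hat{\sigma}_v^{2}},
\]

where \(K\) is the number of instruments and \(\hat{\sigma}_v^{2}\) is the estimated first-stage error variance; the corrected statistic proposed by the authors is not reproduced here.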