The aim of this study is to investigate the decisions and reasoning of undergraduate students when choosing simple measurement instruments in an introductory physics laboratory course. For this study, we developed a questionnaire and implemented it in a pre-/post-test manner to analyze the influence of lab instruction on both students' decisions and reasoning. To characterize students' justifications, we inductively developed a coding manual that captures the nuances of students' reasoning when choosing an instrument. It shows that students consider different aspects in their decisions, such as data quality and practical and personal considerations. We also found that laboratory instruction influenced both students' decisions and justifications, leading to a stronger emphasis on data quality. In fact, after instruction, the majority of students chose the instrument with lower uncertainty and based their justifications mainly on the aim of reducing uncertainties and avoiding systematic effects or mistakes in the instrument reading, and less often than before instruction on personal experience and intuition. These findings suggest that dedicating specific laboratory instruction sessions to measurements and data quality, and having students choose between different instruments and justify their decision, can positively impact students' habits in the laboratory and encourage them to base their choices on evidence rather than intuition.
Electrical machines are at the centre of most engineering processes, with rotating electrical machines, in particular, becoming increasingly important in recent history due to their growing applications in electric vehicles and renewable energy. Although the landscape of condition monitoring in electrical machines has evolved over the past 50 years, the intensification of engineering efforts towards sustainability, reliability, and efficiency, coupled with breakthroughs in computing, has prompted a data-driven paradigm shift. This paper explores the evolution of condition monitoring of rotating electrical machines in the context of maintenance strategy, focusing on the emergence of this data-driven paradigm. Due to the broad and varying nature of condition monitoring practices, a framework is also offered here, along with other essential terms of reference, to provide a concise overview of recent developments and to highlight the modern challenges and opportunities within this area. The paper is purposefully written as a tutorial-style overview for the benefit of practising engineers and researchers who are new to the field or not familiar with the wider intricacies of modern condition monitoring systems.
In industrial surface Quality Control (QC) scenarios, deep classification neural networks are widely used to classify product images for qualified/unqualified judgment or quality grading. However, surface QC equipment equipped with deep classification neural networks must meet Attribute Reproducibility and Repeatability (AR&R) assessment requirements. Perturbations in product images, caused by assembly tolerances, equipment vibrations, and other factors, lead to variations in position, angle, brightness, and blurring. These perturbations result in inconsistent classification outputs, causing the surface QC equipment to fail the AR&R assessment, a problem referred to as the network output reproducibility issue. To address this issue, this study proposes a training method for classification neural networks based on Siamese networks. The Siamese primary network is trained on original samples with supervised learning to learn the correct classification categories. The Siamese secondary network copies the weights of the primary network via exponential smoothing and generates feature embeddings of the perturbed samples corresponding to the original ones. These embeddings are used for contrastive training of the primary network, enabling it to output consistent classification probabilities for both original and perturbed inputs. During inference, only the primary network is retained for product defect classification. The results show that classification accuracy reaches 99.3462%, with a classification probability variance of 0.001016. The described method effectively improves the output reproducibility of deep classification neural networks for industrial product image classification by reducing classification probability variance and enhancing accuracy.
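The scheme described above resembles a mean-teacher consistency setup. Below is a minimal, hypothetical PyTorch sketch of one training step under that reading: the secondary (EMA) network scores the perturbed copy of each sample, and the primary network is trained toward both the true label and the secondary's output. All names, the loss weighting, and the use of output probabilities in place of feature embeddings are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def ema_update(primary, secondary, decay=0.99):
    """Exponentially smooth the primary weights into the secondary network."""
    with torch.no_grad():
        for p_s, p_p in zip(secondary.parameters(), primary.parameters()):
            p_s.mul_(decay).add_(p_p, alpha=1.0 - decay)

def training_step(primary, secondary, x, x_perturbed, labels, optimizer, beta=1.0):
    # secondary is initialized once as a deep copy of primary (outside this step)
    optimizer.zero_grad()
    logits = primary(x)                                       # supervised branch on originals
    with torch.no_grad():
        target = F.softmax(secondary(x_perturbed), dim=1)     # secondary scores perturbed copies
    loss_cls = F.cross_entropy(logits, labels)                # learn the correct category
    loss_cons = F.mse_loss(F.softmax(logits, dim=1), target)  # keep outputs consistent
    loss = loss_cls + beta * loss_cons
    loss.backward()
    optimizer.step()
    ema_update(primary, secondary)                            # secondary tracks primary
    return loss.item()
```

The consistency term directly penalizes differences between the probabilities assigned to original and perturbed inputs, which is what shrinks the classification probability variance reported above.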
Rahul Karmakar, Akhil Kumar Das, Debapriya Sarkar
et al.
This paper explores the application of stacking models for breast cancer detection, integrating key techniques such as data balancing, hyperparameter tuning, and feature selection. We implemented five different stacking configurations. Initially, Logistic Regression (LR) was used as the meta-classifier, while the base estimators included Decision Tree (DT), Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest (RF) classifiers. In the second configuration, we reversed the roles: DT acted as the meta-classifier, with SVM, KNN, RF, and LR serving as the base estimators. In a third setup, SVM was used as the meta-classifier, with DT, LR, KNN, and RF as the base learners. Fourth, we implemented KNN as the meta-classifier, with LR, DT, SVM, and RF as the base estimators. Finally, in the fifth configuration, RF was the meta-classifier, supported by LR, DT, KNN, and SVM as base learners. The evaluation of the stacking models was conducted in five phases: a baseline with no adjustments, data balancing alone, data balancing with hyperparameter tuning, Chi-square feature selection with data balancing, and correlation-based feature selection with data balancing, systematically including or excluding each element to analyze its individual impact. Among all configurations, the stacking model with LR delivers the best performance, achieving an accuracy of 97.63%, precision of 97.68%, recall of 97.63%, and an F-measure of 97.63%, showcasing its exceptional reliability and balanced effectiveness. All models were evaluated using 10-fold cross-validation.
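For readers unfamiliar with stacking, here is a minimal scikit-learn sketch of the first configuration (LR as meta-classifier over DT/SVM/KNN/RF) evaluated with 10-fold cross-validation. The dataset and hyperparameters are placeholders rather than the study's exact pipeline; no balancing, tuning, or feature selection is applied here.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
base_estimators = [
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("rf", RandomForestClassifier(random_state=0)),
]
stack = StackingClassifier(
    estimators=base_estimators,
    final_estimator=LogisticRegression(max_iter=1000),  # LR as meta-classifier
    cv=5,  # internal CV used to build the meta-features
)
scores = cross_val_score(stack, X, y, cv=10, scoring="accuracy")  # 10-fold evaluation
print(f"mean accuracy: {scores.mean():.4f}")
```

Swapping `final_estimator` and the entries of `base_estimators` reproduces the other four configurations the abstract enumerates.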
Sepideh Shafaei, Rahim Mohammad-Rezaei, Balal Khalilzadeh
et al.
In this study, an efficient and ultrasensitive biosensor based on platinum nanoparticles decorated on MXene nanosheets (NSs) is reported for the detection of the CA15–3 breast cancer biomarker. To increase the accuracy and reproducibility of the fabricated biosensor, MXene NSs and platinum nanoparticles (PtNPs) were electrochemically deposited on the surface of a glassy carbon electrode (MXene-Pt/GCE). The synergistic effect of MXene and PtNPs led to increased conductivity, fast electron transfer, amplified sensitivity, and improved stabilization of streptavidin and the CA15–3 antibody on the electrode surface. According to the electrochemical results, the electroactive surface area of MXene-Pt/GCE was 0.1345 cm², remarkably larger than that of the bare GCE. The detection limit and linear range of the developed CA15–3 biosensor were 1 nU and 1 to 100 nU, respectively. Based on the data, the proposed biosensor exhibited excellent reproducibility, stability, and selectivity, and can differentiate between serum samples from healthy individuals and from patients suffering from breast cancer.
Talita Santos de Arruda, Rayssa Bruna Holanda Lima, Karla Luciana Magnani Seki
et al.
Ultrasound has become an important tool that offers clinical and practical benefits in the intensive care unit (ICU). Its real-time imaging provides immediate information to support prognostic evaluation and clinical decision-making. This study used ultrasound assessment to investigate the impact of hospitalization on muscle properties in neurocritical patients and analyze the relationship between peripheral muscle changes and motor sequelae. A total of 43 neurocritical patients admitted to the ICU were included. The inclusion criteria were patients with acute brain injuries with or without motor sequelae. Muscle ultrasonography assessments were performed during ICU admission and hospital discharge. Measurements included muscle thickness, cross-sectional area, and echogenicity of the biceps brachii, quadriceps femoris, and rectus femoris. Statistical analyses were used to compare muscle properties between time points (hospital admission vs. discharge) and between groups (patients with vs. without motor sequelae). Significance was set at 5%. Hospitalization had a significant effect on muscle thickness, cross-sectional area, and echogenicity in patients with and without motor sequelae (p < 0.05, effect sizes between 0.104 and 0.475). Patients with motor sequelae exhibited greater alterations in muscle echogenicity than those without (p < 0.05, effect sizes between 0.182 and 0.211). Changes in muscle thickness and cross-sectional area were similar between the groups (p > 0.05). Neurocritical patients experience significant muscle deterioration during hospitalization. Future studies should explore why echogenicity is more markedly affected than muscle thickness and cross-sectional area in patients with motor sequelae compared to those without.
Remanufacturing has become a mainstream sustainable manufacturing paradigm for energy conservation and environmental protection. Disassembly and reprocessing operations are the two main activities in remanufacturing. This work proposes multiobjective integrated scheduling of disassembly and reprocessing operations considering product structures and random processing times. First, a stochastic programming model is developed to minimize maximum completion time and total tardiness. Second, a reinforcement learning-based multiobjective evolutionary algorithm is devised that incorporates problem-specific knowledge. Three search strategy combinations are formed: crossover and mutation, crossover and key-product-based iterated local search, and mutation and key-product-based iterated local search. At each iteration, a Q-learning method intelligently chooses a premium strategy combination, as sketched below. A stochastic simulation is incorporated to evaluate the objective values of the searched solutions. Finally, the formulated model and method are compared with an exact solver, CPLEX, and three well-known metaheuristics from the literature on a set of test instances. The results confirm the excellent competitiveness of the developed model and algorithm for solving the considered problem.
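The following Python fragment is a minimal sketch of such an adaptive strategy-selection layer: a tabular Q-learning agent chooses among the three strategy combinations named above. The state discretization and reward signal here are simplified placeholders, not the paper's design.

```python
import random

STRATEGIES = ["crossover+mutation", "crossover+local_search", "mutation+local_search"]

class StrategySelector:
    def __init__(self, n_states=3, alpha=0.1, gamma=0.9, eps=0.2):
        self.q = [[0.0] * len(STRATEGIES) for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def choose(self, state):
        if random.random() < self.eps:          # explore a random combination
            return random.randrange(len(STRATEGIES))
        row = self.q[state]                     # exploit the best-known combination
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[next_state])     # standard Q-learning target
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

# usage per iteration: the reward could be, e.g., the improvement of the Pareto front
# a = selector.choose(s); apply STRATEGIES[a]; selector.update(s, a, r, s_next)
```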
Nowadays, tens of satellites carry hyperspectral spectrometers. Such instruments decompose the light that exits the top of the atmosphere into hundreds to thousands of contiguous spectral channels. By analyzing the spectral distribution of this light, and in particular the depths of selected absorption lines, researchers and meteorological agencies can retrieve the composition and thermodynamic state of the atmosphere. To obtain a global view of the Earth, several instruments are generally operated synergistically, so a harmonized calibration must be achieved between them. To cross-calibrate two spectrometers, a common practice is to analyze an ensemble of collocated measurements, meaning acquisitions performed at the same time and under the same geometry. Such analysis, however, always faces the issue of setting appropriate temporal and geometric thresholds when defining the collocations, trading off statistics against quality, and collocation mismatches may have a substantial impact on the cross-calibration results. This manuscript therefore describes in detail the inclusion of collocation errors in the mathematical description and presents an application designed on purpose to be robust to such errors; a toy illustration of threshold-based collocation selection follows below. Furthermore, knowledge of the spectral sensitivity of each channel to the incoming light, called the spectral response function (SRF), is key to exploiting the acquisitions. In that context, the authors have studied and designed a novel methodology to retrieve relative SRFs between two or more spectrometers, within a single instrument or between instruments carried on different platforms. The objective of the methodology is to characterize discrepancies between the responses of flying spectrometers, track long-term evolutions, and harmonize their responses through post-processing when necessary.
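As a toy illustration of the statistics-vs-quality trade-off, the sketch below selects collocated acquisition pairs between two instruments using time and viewing-geometry thresholds. The synthetic data, field names, and thresholds are invented for illustration and are unrelated to the manuscript's actual instruments.

```python
import numpy as np

rng = np.random.default_rng(2)
t_a, t_b = np.sort(rng.uniform(0, 1e5, 500)), np.sort(rng.uniform(0, 1e5, 500))
vza_a, vza_b = rng.uniform(0, 60, 500), rng.uniform(0, 60, 500)  # viewing zenith angles

def collocate(dt_max, dvza_max):
    """Keep pairs within a time window and a viewing-angle window."""
    pairs = []
    for i, t in enumerate(t_a):
        j = np.searchsorted(t_b, t)               # nearest-in-time candidates
        for k in (j - 1, j):
            if 0 <= k < len(t_b) and abs(t_b[k] - t) < dt_max \
                    and abs(vza_b[k] - vza_a[i]) < dvza_max:
                pairs.append((i, k))
    return pairs

for dt in (60.0, 600.0):      # tighter thresholds -> fewer, but better, matches
    print(f"dt_max={dt:6.0f} s -> {len(collocate(dt, 5.0))} collocations")
```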
Xiduo Chen, Xingdong Feng, Antonio F. Galvao
et al.
Obtaining valid treatment effect inference remains a challenging problem when dealing with numerous instruments and non-sparse control variables. In this paper, we propose a novel ridge regularization-based instrumental variables method for estimation and inference in the presence of both high-dimensional instrumental variables and high-dimensional control variables. The method is applicable both with and without sparsity assumptions. To remove the estimation bias, we introduce a two-step procedure employing a ridge regression coupled with data splitting in the first step, and a ridge-style projection matrix with a simple least squares regression in the second. We establish statistical properties of the estimator, including consistency and asymptotic normality. Furthermore, we develop practical statistical inference procedures by providing a consistent estimator for the asymptotic variance of the estimator. The finite sample performance of the proposed methods is evaluated through numerical simulations. Results indicate that the new estimator consistently outperforms existing sparsity-based approaches across various settings, offering valuable insights for complex scenarios. Finally, we provide an empirical application estimating the causal effect of schooling on earnings, addressing potential endogeneity through the use of high-dimensional instrumental variables and high-dimensional covariates.
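To make the two-step idea concrete, here is a highly simplified numpy sketch: a ridge first stage fit on one half of the sample, and an IV-style least-squares second stage on the held-out half. Control variables, the ridge-style projection matrix, and the variance estimator are omitted, and the data-generating process is invented, so this is a caricature of the method rather than the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 200                        # many instruments
Z = rng.normal(size=(n, p))
pi = rng.normal(size=p) / np.sqrt(p)    # non-sparse first stage
u = rng.normal(size=n)
D = Z @ pi + u                          # endogenous treatment
Y = 2.0 * D + u + rng.normal(size=n)    # true effect = 2

half = n // 2
A, B = slice(0, half), slice(half, n)
lam = 10.0
# step 1: ridge first stage on split A
ridge_coef = np.linalg.solve(Z[A].T @ Z[A] + lam * np.eye(p), Z[A].T @ D[A])
D_hat = Z[B] @ ridge_coef               # predicted treatment on held-out split B
# step 2: least squares using the ridge prediction as instrument on split B
beta = (D_hat @ Y[B]) / (D_hat @ D[B])  # splitting removes the first-step bias
print(f"estimated effect: {beta:.3f}")  # should be close to 2
```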
Multi-stack machines and Turing machines can simulate each other. In this note, we give a succinct definition of multi-stack machines, from which it is clearly seen that pushdown automata and deterministic finite automata are special cases of multi-stack machines. Moreover, under this mode of definition, pushdown automata and deterministic pushdown automata are equivalent and recognize all context-free languages. In addition, we are motivated to formulate concise definitions of quantum pushdown automata and quantum stack machines.
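To make the containment concrete, here is a toy Python simulation of the one-stack special case, a pushdown automaton recognizing a^n b^n; the transition encoding is illustrative and is not the note's formal definition.

```python
# Toy pushdown automaton (one-stack machine) for the language { a^n b^n : n >= 0 }.
def run_pda(word):
    stack = ["$"]                        # bottom-of-stack marker
    state = "q_a"                        # q_a: reading a's, q_b: reading b's
    for ch in word:
        top = stack.pop()
        if state == "q_a" and ch == "a":
            stack += [top, "A"]          # push the top back, then one counter symbol
        elif state in ("q_a", "q_b") and ch == "b" and top == "A":
            state = "q_b"                # pop one A per b; never return to q_a
        else:
            return False                 # no transition defined -> reject
    return stack == ["$"]                # accept iff every A was matched

assert run_pda("aaabbb") and run_pda("") and not run_pda("aabbb")
```

A multi-stack machine generalizes this by letting each transition inspect and rewrite the tops of several stacks at once, which is what lifts the model to Turing power with two or more stacks.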
Lukasz Scislo, Davide Astolfi, Francesco Castellani
Vibration analysis and monitoring are currently required in various fields of industry, from automotive and aeronautics to manufacturing and quality control, and from machining and maintenance to civil engineering [...]
We propose to use a quantum spin chain as a device to store and release energy coherently and we investigate the interplay between its internal correlations and outside decoherence. We employ the quantum Ising chain in a transverse field and our charging protocol consists of a sudden global quantum quench in the external field to take the system out of equilibrium. Interactions with the environment and decoherence phenomena can dissipate part of the work that the chain can supply after being charged, measured by the ergotropy. We find that overall, the system shows remarkably better performance, in terms of resilience, charging time, and energy storage, when topological frustration is introduced by setting antiferromagnetic interactions with an odd number of sites and periodic boundary conditions. Moreover, we show that in a simple discharging protocol to an external spin, only the frustrated chain can transfer work and not just heat.
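As a small illustration of the charging protocol, the exact-diagonalization sketch below prepares the ground state of a transverse-field Ising ring and computes the energy injected by a sudden field quench. The chain length, couplings, and field values are toy choices; ergotropy, decoherence, and the discharging protocol are not modelled here.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_at(op, site, n):
    """Embed a single-site operator at `site` in an n-spin Hilbert space."""
    mats = [I2] * n
    mats[site] = op
    return reduce(np.kron, mats)

def ising_ring(n, J, h):
    """H = J * sum_i sx_i sx_{i+1} - h * sum_i sz_i, periodic boundaries."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        H += J * op_at(sx, i, n) @ op_at(sx, (i + 1) % n, n)
        H -= h * op_at(sz, i, n)
    return H

n = 5                               # odd ring + antiferromagnetic J -> topological frustration
H0 = ising_ring(n, J=+1.0, h=0.5)   # pre-quench Hamiltonian
H1 = ising_ring(n, J=+1.0, h=1.5)   # sudden global quench of the transverse field
E0, V = np.linalg.eigh(H0)
psi0 = V[:, 0]                      # pre-quench ground state
stored = np.real(psi0.conj() @ H1 @ psi0) - np.linalg.eigvalsh(H1)[0]
print(f"energy injected by the quench: {stored:.4f}")
```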
Carlos Alberto Espinosa-Pinos, Paúl Bladimir Acosta-Pérez, Camila Alessandra Valarezo-Calero
This article investigates the factors that affect the job satisfaction of university teachers, for which a stratified sample of 400 teachers from four institutions (public and private) in Ecuador was surveyed, yielding a total of 1600 data points collected through online forms. The research was quantitative with a cross-sectional design and used machine learning techniques for classification and prediction to analyze variables such as ethnic identity, field of knowledge, gender, number of children, job burnout, perceived stress, and occupational risk. The results indicate that the best classification model is a neural network with a precision of 0.7304; the most significant variables for predicting the job satisfaction of university teachers are the number of children, the scores for perceived stress, occupational risk, and burnout, and the province and city in which the surveyed teacher works. This contrasts with marital status, which does not contribute to the prediction. These findings highlight the need for inclusive policies and effective strategies to improve teacher well-being in the university academic environment.
Many instruments for astroparticle physics are primarily geared towards multi-messenger astrophysics, the study of the origin of cosmic rays (CR), and the understanding of high-energy astrophysical processes. Since these instruments observe the Universe at extreme energies and in kinematic ranges not accessible at accelerators, these experiments also provide unique and complementary opportunities to search for particles and physics beyond the standard model of particle physics. In particular, the reach of IceCube, Fermi, and KATRIN to search for and constrain Dark Matter, axions, heavy Big Bang relics, sterile neutrinos, and Lorentz Invariance Violation (LIV) will be discussed. The contents of this article are based on material presented at the Humboldt-Kolleg "Clues to a mysterious Universe - exploring the interface of particle, gravity and quantum physics" in June 2022.
Beerend G. A. Gerats, Jelmer M. Wolterink, Seb P. Mol
et al.
Laparoscopic video tracking primarily focuses on two target types: surgical instruments and anatomy. The former can be used for skill assessment, while the latter is necessary for the projection of virtual overlays. While instrument and anatomy tracking have often been considered two separate problems, in this paper we propose a method for jointly tracking all structures simultaneously. Based on a single 2D monocular video clip, we train a neural field to represent a continuous spatiotemporal scene, which is used to create 3D tracks of all surfaces visible in at least one frame. Because instruments are small, they generally cover only a small part of the image, resulting in decreased tracking accuracy. We therefore propose enhanced class weighting to improve the instrument tracks. We evaluate tracking on video clips from laparoscopic cholecystectomies, where we find mean tracking accuracies of 92.4% for anatomical structures and 87.4% for instruments. Additionally, we assess the quality of depth maps obtained from the method's scene reconstructions. We show that these pseudo-depths have comparable quality to a state-of-the-art pre-trained depth estimator. On laparoscopic videos in the SCARED dataset, the method predicts depth with an MAE of 2.9 mm and a relative error of 9.2%. These results show the feasibility of using neural fields for monocular 3D reconstruction of laparoscopic scenes.
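The core of the class-weighting idea can be sketched independently of the neural field: classes with a small pixel footprint, such as instruments, are up-weighted in the per-pixel loss. The PyTorch fragment below is a minimal, hypothetical illustration with invented shapes, class ids, and weights, not the paper's training objective.

```python
import torch
import torch.nn.functional as F

def weighted_segmentation_loss(logits, target, instrument_class=1, boost=10.0):
    """logits: (B, C, H, W); target: (B, H, W) integer class labels per pixel."""
    n_classes = logits.shape[1]
    weights = torch.ones(n_classes, device=logits.device)
    weights[instrument_class] = boost      # counteract the small pixel footprint
    return F.cross_entropy(logits, target, weight=weights)

# usage with toy shapes: 2 images, 3 classes, 64x64 pixels
logits = torch.randn(2, 3, 64, 64)
target = torch.randint(0, 3, (2, 64, 64))
print(weighted_segmentation_loss(logits, target).item())
```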
José Manuel Porras, Juan Alfonso Lara, Cristóbal Romero
et al.
Predicting student dropout is a crucial task in online education. Traditionally, each educational entity (institution, university, faculty, department, etc.) creates and uses its own prediction model built from its own data. However, that approach is not always feasible or advisable, as it depends on the availability of data, local infrastructure, and resources. In those cases, there are various machine learning approaches for sharing data and/or models between educational entities, using a classical centralized machine learning approach or more advanced approaches such as transfer learning or federated learning. In this paper, we used data from three different LMS Moodle servers representing homogeneous educational entities of different sizes. We tested the performance of the different machine learning approaches for the problem of predicting student dropout with multiple educational entities involved, using a deep learning algorithm as the predictive classifier. Our preliminary findings provide useful information on the benefits and drawbacks of each approach, as well as suggestions for enhancing performance when multiple institutions are involved. In our case, repurposed transfer learning, stacked transfer learning, and centralized approaches produced similar or better results than the locally trained models for most of the entities.
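A minimal sketch of the transfer-learning setup, under the assumption of a simple feed-forward dropout classifier: a model trained on one entity's data is reused for another entity, fine-tuning only the final layer. The architecture, feature count, and freezing policy are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

def make_model(n_features=20):
    return nn.Sequential(
        nn.Linear(n_features, 64), nn.ReLU(),
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, 2),                          # dropout vs. completion
    )

source_model = make_model()
# ... train source_model on the source entity's Moodle activity features ...

target_model = make_model()
target_model.load_state_dict(source_model.state_dict())  # transfer the weights
for layer in list(target_model.children())[:-1]:
    for p in layer.parameters():
        p.requires_grad = False                    # freeze the shared representation
optimizer = torch.optim.Adam(
    (p for p in target_model.parameters() if p.requires_grad), lr=1e-3
)
# ... fine-tune only the classification head on the target entity's data ...
```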
The adequacy of entity information directly affects applications that depend on textual entity information, while conventional entity recognition models can only identify entities that are present. The task of entity missing detection, defined as a sequence labeling task, aims to find the locations where entities are missing. To construct training datasets, three corresponding methods are proposed. We introduce an entity missing detection method that combines a convolutional neural network with a gating mechanism and a pre-trained language model. Experiments show that the F1 scores of this model are 80.45% for PER entities, 83.02% for ORG entities, and 86.75% for LOC entities, exceeding the performance of other LSTM-based named entity recognition models. We also find a correlation between the accuracy of the model and the word frequency of the annotated characters.
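A minimal PyTorch sketch of a gated convolutional tagging head in the spirit of the model above, operating on embeddings from a pre-trained language model; the layer sizes, tag set, and overall architecture are illustrative assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class GatedConvTagger(nn.Module):
    def __init__(self, emb_dim=768, hidden=256, n_tags=3):   # e.g. missing-PER/ORG/LOC slots
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.gate = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.out = nn.Linear(hidden, n_tags)

    def forward(self, embeddings):                  # (B, T, emb_dim) from a pre-trained LM
        x = embeddings.transpose(1, 2)              # (B, emb_dim, T) for Conv1d
        h = self.conv(x) * torch.sigmoid(self.gate(x))  # gated linear unit
        return self.out(h.transpose(1, 2))          # per-token tag logits

logits = GatedConvTagger()(torch.randn(2, 16, 768))
print(logits.shape)                                 # torch.Size([2, 16, 3])
```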
Coded mask instruments have been used in high-energy astronomy for the last forty years, and designs for future hard X-ray/low gamma-ray telescopes are still based on this technique when they need to reach moderate angular resolutions over large fields of view, particularly for observations dedicated to the now-flourishing field of time-domain astrophysics. However, these systems are somewhat unfamiliar to general astronomers, as they are actually two-step imaging devices in which the recorded picture is very different from the imaged object and the data processing plays a crucial part in the reconstruction of the sky image. Here we present the concepts of these optical systems applied to high-energy astronomy, the basic reconstruction methods including some useful formulae, and the trend of the expected and observed performances as a function of the system design. We review the historical developments and recall the space-borne coded mask instruments flown to date, along with descriptions of a few relevant examples of major successful implementations and of future projects in space astronomy.
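The two-step character of coded mask imaging can be shown in a few lines of numpy: the detector records the sky convolved with the mask pattern (the shadowgram), and the sky is reconstructed by cross-correlating the shadowgram with a decoding array. The random mask below stands in for a real URA/MURA pattern, and the cyclic geometry is a deliberate simplification.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 31
mask = (rng.random((n, n)) < 0.5).astype(float)   # open/closed mask elements
G = 2.0 * mask - 1.0                              # balanced decoding array

sky = np.zeros((n, n))
sky[8, 20] = 100.0                                # a single point source

# step 1: detector shadowgram = sky convolved with the mask (cyclic toy geometry)
detector = np.real(np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(mask)))
# step 2: reconstruction = cross-correlation of the shadowgram with the decoding array
recon = np.real(np.fft.ifft2(np.fft.fft2(detector) * np.conj(np.fft.fft2(G))))
print("recovered peak:", np.unravel_index(recon.argmax(), recon.shape))  # (8, 20)
```

Note how the shadowgram itself bears no visual resemblance to the sky; only the correlation step recovers the source, which is exactly why the data processing is integral to these instruments.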
YU Zhichao, LI Yangzhong, LIU Lei, FENG Shengzhong
Through superposition and entanglement, quantum computing displays significant advantages over classical computers in dealing with problems that require large-scale parallel processing capabilities. At present, physical quantum computers are limited in scalability, coherence time, and the precision of quantum gate operations, so it is practical to simulate quantum computing on classical computers for studying quantum advantage and quantum algorithms. However, the computational resources required for quantum computing simulation grow exponentially with the number of qubits. Therefore, it is of great importance to study how to reduce the resources required for large-scale simulation while ensuring computational accuracy, precision, and efficiency. This paper describes the basic principles and background knowledge of quantum computing, including qubits, quantum gates, quantum circuits, and quantum operating systems. It also summarizes classical-computer-based methods for simulating quantum computing and analyzes their design ideas, advantages, and disadvantages. Some commonly used simulators are also listed. On this basis, the paper discusses the communication overhead of quantum computing simulation and presents supercomputer-based methods for optimizing it, from the two aspects of node analysis and communication optimization.
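The exponential resource growth is easy to see in a minimal state-vector simulator sketch: each added qubit doubles the amplitude array, and gates are applied by reshaping the state and contracting one axis. This toy illustrates the principle only; production simulators add distribution, compression, or tensor-network techniques on top.

```python
import numpy as np

def apply_1q(state, gate, qubit, n):
    """Apply a 2x2 gate to `qubit` of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))  # contract the target axis
    psi = np.moveaxis(psi, 0, qubit)                    # restore the axis order
    return psi.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)            # Hadamard gate
n = 20                                # ~16 MB of complex amplitudes; one more qubit doubles it
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                        # |00...0>
for q in range(n):
    state = apply_1q(state, H, q, n)  # uniform superposition over all 2^n basis states
print(abs(state[0])**2)               # each amplitude now carries probability 2^-n
```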