Results for "Instruments and machines"

Showing 20 of ~633,002 results · from DOAJ, Semantic Scholar, CrossRef

S2 Open Access 2018
KEVM: A Complete Formal Semantics of the Ethereum Virtual Machine

Everett Hildenbrandt, Manasvi Saxena, Nishant Rodrigues et al.

A developing field of interest for the distributed systems and applied cryptography communities is that of smart contracts: self-executing financial instruments that synchronize their state, often through a blockchain. One such smart contract system that has seen widespread practical adoption is Ethereum, which has grown to a market capitalization of 100 billion USD and clears in excess of 500,000 transactions daily. Unfortunately, the rise of these technologies has been marred by a series of costly bugs and exploits. Increasingly, the Ethereum community has turned to formal methods and rigorous program analysis tools. This trend holds great promise due to the relative simplicity of smart contracts and the bounded-time deterministic execution inherent to the Ethereum Virtual Machine (EVM). Here we present KEVM, an executable formal specification of the EVM's stack-based bytecode language built with the K Framework, designed to serve as a solid foundation for further formal analyses. We empirically evaluate the correctness and performance of KEVM using the official Ethereum test suite. To demonstrate its usability, several extensions of the semantics are presented, and two different-language implementations of the ERC20 Standard Token are verified against the ERC20 specification. These results are encouraging for the executable-semantics approach to language prototyping and specification.

371 citations · en · Computer Science
S2 Open Access 2019
2017 Robotic Instrument Segmentation Challenge

M. Allan, Alexey A. Shvets, T. Kurmann et al.

In mainstream computer vision and machine learning, public datasets such as ImageNet, COCO and KITTI have helped drive enormous improvements by enabling researchers to understand the strengths and limitations of different algorithms via performance comparison. However, this type of approach has had limited translation to problems in robotic assisted surgery as this field has never established the same level of common datasets and benchmarking methods. In 2015 a sub-challenge was introduced at the EndoVis workshop where a set of robotic images were provided with automatically generated annotations from robot forward kinematics. However, there were issues with this dataset due to the limited background variation, lack of complex motion and inaccuracies in the annotation. In this work we present the results of the 2017 challenge on robotic instrument segmentation which involved 10 teams participating in binary, parts and type based segmentation of articulated da Vinci robotic instruments.

293 citations · en · Computer Science
S2 Open Access 2019
Alternative data mining/machine learning methods for the analytical evaluation of food quality and authenticity - A review.

A. M. Jiménez-Carvelo, A. González-Casado, M. Bagur-González et al.

In recent years, the variety and volume of data acquired by modern analytical instruments in order to conduct a better authentication of food has dramatically increased. Several pattern recognition tools have been developed to deal with the large volume and complexity of available trial data. The most widely used methods are principal component analysis (PCA), partial least squares-discriminant analysis (PLS-DA), soft independent modelling by class analogy (SIMCA), k-nearest neighbours (kNN), parallel factor analysis (PARAFAC), and multivariate curve resolution-alternating least squares (MCR-ALS). Nevertheless, there are alternative data treatment methods, such as support vector machine (SVM), classification and regression tree (CART) and random forest (RF), that show a great potential and more advantages compared to conventional ones. In this paper, we explain the background of these methods and review and discuss the reported studies in which these three methods have been applied in the area of food quality and authenticity. In addition, we clarify the technical terminology used in this particular area of research.

280 citations · en · Medicine, Computer Science
S2 Open Access 2021
Vibration Analysis for Machine Monitoring and Diagnosis: A Systematic Review

M. H. Mohd Ghazali, W. Rahiman

Untimely machinery breakdown will incur significant losses, especially to a manufacturing company, as it affects production rates. During operation, machines generate vibrations, and unwanted vibrations will disrupt the machine system, resulting in faults such as imbalance, wear, and misalignment. Thus, vibration analysis has become an effective method to monitor the health and performance of a machine. The vibration signatures of machines contain important information regarding machine condition, such as the source of failure and its severity. Operators are also provided with an early warning for scheduled maintenance. Numerous approaches for analyzing the vibration data of machinery have been proposed over the years, and each approach has its characteristics, advantages, and disadvantages. This manuscript presents a systematic review of up-to-date vibration analysis for machine monitoring and diagnosis. It involves data acquisition (instruments applied, such as analyzers and sensors), feature extraction, and fault recognition techniques using artificial intelligence (AI). This manuscript aims to answer several research questions (RQs). A combination of time-domain statistical features and deep learning approaches is expected to be widely applied in the future, where fault features can be automatically extracted from raw vibration signals. The presence of various sensors and communication devices in emerging smart machines will present a new and huge challenge in vibration monitoring and diagnosis.

211 citations · en
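The time-domain statistical features the review highlights can be computed in a few lines. The sketch below is a generic illustration of common vibration-monitoring features (RMS, crest factor, kurtosis, skewness), not the specific feature set any reviewed paper uses; the feature names and the synthetic signal are assumptions for demonstration.

```python
import numpy as np

def time_domain_features(signal: np.ndarray) -> dict:
    """Common time-domain statistical features used in vibration-based
    machine condition monitoring (an illustrative selection)."""
    rms = np.sqrt(np.mean(signal ** 2))
    peak = np.max(np.abs(signal))
    mean = np.mean(signal)
    std = np.std(signal)
    return {
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms,  # impulsiveness indicator
        # Kurtosis rises sharply for impulsive faults such as bearing defects
        "kurtosis": np.mean((signal - mean) ** 4) / (std ** 4),
        "skewness": np.mean((signal - mean) ** 3) / (std ** 3),
    }

# Example: synthetic vibration signal with one injected impulse
rng = np.random.default_rng(0)
sig = rng.normal(size=4096)
sig[2048] += 25.0  # simulated bearing impact
feats = time_domain_features(sig)
```

A downstream fault-recognition model (classical or deep) would consume such feature vectors, or, as the review anticipates, learn features directly from the raw signal.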
DOAJ Open Access 2026
THE ROLE OF ARTIFICIAL INTELLIGENCE IN AUTOMATING SCIENTIFIC RESEARCH: FROM LITERATURE ANALYSIS TO HYPOTHESIS GENERATION

Анастас К.В.

Modern scientific activity is characterized by exponential growth in the volume of publications and data, which creates significant difficulties in systematizing, analyzing, and interpreting information. Under these conditions, artificial intelligence (AI) technologies are becoming a key instrument for automating research processes. The article reviews current approaches to applying machine learning methods, deep neural networks, and natural language processing (NLP) to analyzing scientific literature, uncovering hidden patterns, generating hypotheses, and planning experimental work. Particular attention is paid to practical examples of AI applications in bioinformatics, chemistry, medicine, physics, and computer science, as well as to an analysis of limitations related to model interpretability, reliability of conclusions, and compliance with ethical norms. Prospects for developing hybrid systems that enable joint human-AI work, and opportunities for strengthening researchers' analytical competencies amid the digitalization of science, are discussed.

Electronic computers. Computer science, Cybernetics
S2 Open Access 2018
DeepLOB: Deep Convolutional Neural Networks for Limit Order Books

Zihao Zhang, Stefan Zohren, Stephen J. Roberts

We develop a large-scale deep learning model to predict price movements from limit order book (LOB) data of cash equities. The architecture utilizes convolutional filters to capture the spatial structure of the LOBs as well as long short-term memory modules to capture longer time dependencies. The proposed network outperforms all existing state-of-the-art algorithms on the benchmark LOB dataset [A. Ntakaris, M. Magris, J. Kanniainen, M. Gabbouj, and A. Iosifidis, “Benchmark dataset for mid-price prediction of limit order book data with machine learning methods,” J. Forecasting, vol. 37, no. 8, 852–866, 2018]. In a more realistic setting, we test our model by using one-year market quotes from the London Stock Exchange, and the model delivers a remarkably stable out-of-sample prediction accuracy for a variety of instruments. Importantly, our model translates well to instruments that were not part of the training set, indicating the model's ability to extract universal features. In order to better understand these features and to go beyond a “black box” model, we perform a sensitivity analysis to understand the rationale behind the model predictions and reveal the components of LOBs that are most relevant. The ability to extract robust features that translate well to other instruments is an important property of our model, which has many other applications.

259 citations · en · Computer Science, Economics
DOAJ Open Access 2025
The Future of Work: Digitalisation of Sub-Saharan Africa Labour Markets

Cheryl Akinyi Genga

Digital transformation is reshaping global operations by integrating technology into business, fundamentally changing how value is delivered. In Sub-Saharan Africa, this shift is altering work processes and job content, impacting the demand for skills and leading to the displacement of certain roles across all industries. Understanding the effects of digital technologies on the future of work in the region is essential for developing effective strategies. It is important to recognise how these changes will affect labour markets and workers' ability to transition to new opportunities. While technology can create new paths and improve access, it also exacerbates existing inequalities. This study aimed to explore the challenges shaping the future of work in Sub-Saharan Africa. A qualitative research approach and inductive thematic analysis were utilised for this study. The findings highlight that the major challenges affecting the future of work are, first, digital skills; second, diversity, equity, and inclusion (DEI) issues, namely the digital divide, gender inequality and discrimination, and a lack of DEI initiatives; and finally, workforce issues, namely unemployment and an inadequately skilled workforce. In conclusion, while the future of work in Africa presents significant challenges, it also offers great promise. Realising this potential depends on bold and proactive decisions by policymakers, educational institutions, and businesses. Strategic investments made today can empower the next generation of African workers, innovators, and entrepreneurs to thrive in an increasingly digital and competitive global economy.

Mathematics, Electronic computers. Computer science
DOAJ Open Access 2025
Fusion of Deep Features of Wavelet Transform for Wildfire Detection

Akbar Asgharzadeh-Bonab, Salar Ghamati, Farid Ahmadi et al.

Forests uniquely deliver different vital resources, particularly oxygen and carbon dioxide purification. Wildfire is the leading cause of deforestation, and massive forest areas are lost annually due to the failure to identify and predict forest fires. Accordingly, early detection of wildfires is crucial to inform operational and firefighting teams so that fires can be prevented from advancing. This study analyzes images taken by unmanned aerial vehicles for wildfire detection. For this purpose, the two-dimensional discrete wavelet transform was first applied to the images. Next, due to its superior ability, a convolutional neural network was utilized to extract deep features from the wavelet transform sub-bands. The features obtained from each sub-band were then merged to create the final feature vector. Afterward, multidimensional scaling was employed to discard non-useful features from the extracted set. Ultimately, the presence or absence of wildfire locations in the images was detected using suitable classifiers. The proposed method reaches an accuracy and F1 score of 0.9684 and 0.9672, respectively, on the images of the FLAME dataset, indicating its efficiency in detecting the presence of wildfire locations. Thus, this method can significantly contribute to on-time and prompt firefighting operations and prevent extensive damage to forests.

Electronic computers. Computer science
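The front end of the pipeline described above (a single-level 2-D discrete wavelet transform producing four sub-bands, whose per-sub-band features are then fused) can be sketched in plain NumPy with a Haar filter. This is a minimal generic sketch, not the authors' implementation; the CNN feature extractor, multidimensional scaling, and classifier stages are omitted.

```python
import numpy as np

def haar_dwt2(image: np.ndarray):
    """Single-level 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands.
    A minimal NumPy sketch; a real system would use a wavelet library."""
    img = image[: image.shape[0] // 2 * 2, : image.shape[1] // 2 * 2].astype(float)
    # Row transform: pairwise averages (low-pass) and differences (high-pass)
    lo_r = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi_r = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Column transform on each row-transformed half
    LL = (lo_r[0::2, :] + lo_r[1::2, :]) / 2.0
    LH = (lo_r[0::2, :] - lo_r[1::2, :]) / 2.0
    HL = (hi_r[0::2, :] + hi_r[1::2, :]) / 2.0
    HH = (hi_r[0::2, :] - hi_r[1::2, :]) / 2.0
    return LL, LH, HL, HH

# Each sub-band would be fed to a CNN; the per-sub-band feature vectors
# are then concatenated ("fused") before dimensionality reduction.
img = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = haar_dwt2(img)
fused = np.concatenate([b.ravel() for b in (LL, LH, HL, HH)])
```

Each sub-band is a quarter-size image: LL carries the smoothed content, while LH/HL/HH carry horizontal, vertical, and diagonal detail that can expose smoke and flame texture.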
DOAJ Open Access 2025
Virtual reality in skill development through user experience and technology advancements

Mochammad Hannats Hanafi Ichsan, Cecilia Sik-Lanyi, Tibor Guzsvinecz

New technologies such as Virtual Reality (VR) / Virtual Environments (VE), which focus on User Experience (UX) to provide more engaging and immersive experiences, can help people grow their skills. Technological advancement is also an essential component of VR development. However, the literature still lacks studies on using VR as an assistive tool for skill development. This study aims to explore the impact of VR technological advancements on skill development through UX design taxonomies using a Systematic Literature Review (SLR). Skill development was classified based on social, emotional, and behavioral (SEB) aspects. The selected studies that met the eligibility criteria were examined and synthesized. The study's findings highlight the necessity of technological development for VR to deliver the UX needed for skill development, allowing users to become more self-sufficient. This research can benefit researchers and VR developers, particularly software, hardware, and artificial intelligence (AI) experts. More research should be conducted on the long-term use of VR as an assistive device, particularly for those seeking skill improvement to improve their quality of life.

Electronic computers. Computer science
DOAJ Open Access 2024
Heart disease prediction using autoencoder and DenseNet architecture

Norah Saleh Alghamdi, Mohammed Zakariah, Achyut Shankar et al.

Heart disease continues to be a prominent cause of death globally, emphasizing the critical requirement for precise prediction techniques and prompt therapies. This research presents a new method that utilizes the collective capabilities of autoencoder and DenseNet architectures to predict heart disease. Our study is based on the Heart Disease UCI Cleveland dataset, which includes 13 variables that cover clinical and demographic parameters such as age, sex, cholesterol levels, and exercise-induced angina. The dataset presents issues due to its varied attribute types, including categorical and numerical variables. Our approach tackles these difficulties by utilizing a dense autoencoder model, which produced exceptional outcomes. The model attained a mean accuracy of 99.67% on the Heart Disease UCI Cleveland dataset. Further testing showed it was resilient, with a test accuracy of 99.99%. In addition, the model demonstrated outstanding macro precision, macro recall, and macro F1 score, with percentages of 99.98%, 99.97%, and 99.96%, respectively. Our results indicate that combining autoencoder and DenseNet designs shows potential for predicting cardiac disease, with substantial enhancements in accuracy and performance metrics compared to current approaches. This methodology can improve clinical decision-making and patient outcomes in cardiovascular care by accurately finding and defining complex patterns within the data. Notwithstanding these encouraging outcomes, our investigation has constraints. The specific attributes of the dataset utilized may limit the applicability of our findings. Subsequent studies could examine the suitability of our method for various datasets and analyze supplementary variables that may improve forecast precision. Furthermore, prospective validation studies are necessary to evaluate our strategy's practical effectiveness in clinical environments.

Electronic computers. Computer science
DOAJ Open Access 2024
Examining differences in time to appointment and no-show rates between rural telehealth users and non-users

Kristin Pullyblank, Nicole Krupa, Melissa Scribani et al.

Background: Telehealth has undergone widespread implementation since 2020 and is considered an invaluable tool to improve access to healthcare, particularly in rural areas. However, telehealth's applicability may be limited for certain populations, including those who live in rural, medically underserved communities. While broadband access is a recognized barrier, other important factors including age and education influence a person's ability or preference to engage with telehealth via video telehealth or a patient portal. It remains unclear the degree to which these digital technologies lead to disparities in access to care. Purpose: The purpose of this analysis is to determine if access to healthcare differs for telehealth users compared with non-users. Methods: Using electronic health record data, we evaluated differences in "time to appointment" and "no-show rates" between telehealth users and non-users within an integrated healthcare network between August 2021 and January 2022. We limited analysis to patient visits in endocrinology or outpatient behavioral health departments. We analyzed new patients and established patients separately. Results: Telehealth visits were associated with shorter time to appointment for new and established patients in endocrinology and established patients in behavioral health, as well as with lower no-show rates for established patients in both departments. Conclusions: The findings suggest that those who are unwilling or unable to engage with telehealth may have more difficulty accessing timely care.

Medicine, Public aspects of medicine
S2 Open Access 2023
Ghosting the Machine: Judicial Resistance to a Recidivism Risk Assessment Instrument

Dasha Pruss

Recidivism risk assessment instruments are presented as an ‘evidence-based’ strategy for criminal justice reform – a way of increasing consistency in sentencing, replacing cash bail, and reducing mass incarceration. In practice, however, AI-centric reforms can simply add another layer to the sluggish, labyrinthine machinery of bureaucratic systems and are met with internal resistance. Through a community-informed interview-based study of 23 criminal judges and other criminal legal bureaucrats in Pennsylvania, I find that judges overwhelmingly ignore a recently-implemented sentence risk assessment instrument, which they disparage as “useless,” “worthless,” “boring,” “a waste of time,” “a non-thing,” and simply “not helpful.” I argue that this algorithm aversion cannot be accounted for by individuals’ distrust of the tools or automation anxieties, per the explanations given by existing scholarship. Rather, the instrument’s non-use is the result of an interplay between three organizational factors: county-level norms about pre-sentence investigation reports; alterations made to the instrument by the Pennsylvania Sentencing Commission in response to years of public and internal resistance; and problems with how information is disseminated to judges. These findings shed new light on the important role of organizational influences on professional resistance to algorithms, which helps explain why algorithm-centric reforms can fail to have their desired effect. This study also contributes to an empirically-informed argument against the use of risk assessment instruments: they are resource-intensive and have not demonstrated positive on-the-ground impacts.

28 citations · en · Computer Science
DOAJ Open Access 2023
Ranking-Based Case Retrieval with Graph Neural Networks in Process-Oriented Case-Based Reasoning

Maximilian Hoffmann, Ralph Bergmann

In Process-Oriented Case-Based Reasoning (POCBR), experiential knowledge from previous problem-solving situations is retrieved from a case base to be reused for upcoming problems. The task of retrieval is approached in previous work by using Graph Neural Networks (GNNs) to learn workflow similarities which are, in turn, used to find similar workflows w.r.t. a query workflow. This paper is motivated by the fact that these GNNs are mostly used for predicting the similarity between two workflows (query and case), while retrieval in CBR is only concerned with the ranking of the most similar workflows from the case base w.r.t. the query. Thus, we propose a novel approach to extend GNN-based workflow retrieval with a Learning-to-Rank (LTR) component where rankings instead of similarities between cases are predicted. The main contribution of this paper addresses the changes to the GNNs from previous work, such that their model architecture predicts pairwise preferences between cases w.r.t. a query and that they can be trained using labeled preference data. In order to transform these preferences into a case ranking, we also describe rank aggregation methods with different levels of computational complexity. The experimental evaluation compares different models for predicting similarities and rankings in case retrieval scenarios. The results indicate the potential of our ranking-based approach in significantly improving retrieval quality with only a small impact on performance.

Technology, Electronic computers. Computer science
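The rank-aggregation step mentioned in the abstract can be illustrated independently of the GNN: given pairwise preference scores between candidate cases, a simple Copeland-style win count turns them into a ranking. This is a generic sketch of one low-complexity aggregation method, not the authors' exact procedure; the preference matrix here is hypothetical.

```python
import numpy as np

def aggregate_ranking(pref: np.ndarray) -> list:
    """pref[i, j] ~ predicted probability that case i is preferred over
    case j w.r.t. a fixed query. Returns case indices, best first.
    Copeland-style aggregation: score each case by its pairwise wins."""
    n = pref.shape[0]
    wins = (pref > 0.5).sum(axis=1)  # diagonal is 0.5, so self-pairs don't count
    strength = pref.sum(axis=1)       # tie-breaker: total preference mass
    return sorted(range(n), key=lambda i: (wins[i], strength[i]), reverse=True)

# Hypothetical preferences for 3 candidate cases: case 2 beats both others.
pref = np.array([
    [0.5, 0.8, 0.1],
    [0.2, 0.5, 0.3],
    [0.9, 0.7, 0.5],
])
ranking = aggregate_ranking(pref)
```

More expensive aggregation schemes (e.g., minimizing pairwise disagreements) trade this O(n²) pass for better consistency when the learned preferences are not transitive.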
DOAJ Open Access 2022
Verification Of Student Diplomas Based On Qr Code

Citra Widya Herawati

Diploma verification is still done manually: a diploma is verified by presenting the original document. Because the campus does not match diplomas against its existing archives, diploma falsification is possible. IAIN Bukittinggi uses manual methods to verify the authenticity of diplomas. The goal of this research is to create a QR-code-based system for verifying the authenticity of diplomas. The research type employed is Research and Development (R&D). The system development model employs a waterfall approach within the System Development Life Cycle (SDLC). The validity test results are valid, with an average of 0.90; the average practicality test result is 92, indicating that the product is very practical; and the effectiveness test results average 0.90, which is very high.

Electronic computers. Computer science
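One common way to make a QR-code diploma tamper-evident, sketched below, is to embed an HMAC tag alongside the diploma fields so that a scan can be checked against a campus-held secret. This is a generic illustration, not the system described in the paper (which does not detail its payload format); the field names and key are hypothetical, and the actual QR image encoding step is omitted.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"campus-secret-key"  # hypothetical; would be kept server-side

def make_qr_payload(diploma: dict) -> str:
    """Serialize diploma data with an HMAC-SHA256 tag. The returned string
    is what would be encoded into the QR code image."""
    body = json.dumps(diploma, sort_keys=True)
    tag = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"body": body, "tag": tag})

def verify_qr_payload(payload: str) -> bool:
    """Recompute the tag from a scanned payload; any tampering with the
    diploma fields invalidates the tag."""
    data = json.loads(payload)
    expected = hmac.new(SECRET_KEY, data["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, data["tag"])

payload = make_qr_payload({"name": "A. Student", "number": "2022-001", "gpa": 3.7})
```

A verification portal would decode the scanned QR string and call `verify_qr_payload`; archive lookup by diploma number can then confirm the record exists.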
DOAJ Open Access 2022
Multi-Strategy Improved Sparrow Search Algorithm and Application

Xiangdong Liu, Yan Bai, Cunhui Yu et al.

The sparrow search algorithm (SSA) is a metaheuristic algorithm developed based on the foraging and anti-predatory behavior of sparrow populations. Like other metaheuristic algorithms, SSA suffers from poor population diversity, weak global search ability, and a tendency to fall into local optima. To address the problems whereby the sparrow search algorithm tends to fall into local optima and population diversity decreases in the later stage of the search, an improved sparrow search algorithm (PGL-SSA) is proposed, fusing piecewise chaotic mapping, Gaussian difference variation, and a linearly decreasing differential inertia weight. Firstly, we analyze how six chaotic mappings affect the overall performance of the sparrow search algorithm, and we ultimately adopt piecewise chaotic mapping to initialize the population, increasing initial population richness and improving initial solution quality. Secondly, we introduce Gaussian difference variation in the individual iterative update, perturbing individuals to generate diversity so that the algorithm converges quickly and avoids falling into local optima. Finally, a linearly decreasing differential inertia weight is introduced globally so that the algorithm fully traverses the solution space with larger weights in early iterations, avoiding local optima, while smaller weights in later iterations enhance local search ability and improve the accuracy of the optimal solution. The results show that the proposed algorithm has a faster convergence speed and higher search accuracy than the comparison algorithms, its global search capability is significantly enhanced, and it escapes local optima more easily.
The improved algorithm is also applied to control optimization of Heating, Ventilation and Air Conditioning (HVAC) systems, where it is used to tune the parameters of the HVAC system's Proportional-Integral-Derivative (PID) controller. The results show that the PID controller optimized by the improved algorithm achieves higher control accuracy and system stability, which verifies the feasibility of the improved algorithm in practical engineering applications.

Applied mathematics. Quantitative methods, Mathematics
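Two of the ingredients named in the abstract, piecewise chaotic initialization and a linearly decreasing inertia weight, are small enough to sketch directly. The forms below are common textbook versions under stated assumptions (map parameter p = 0.4, weight bounds 0.9 to 0.4); the paper's exact formulas may differ, and the full SSA update loop is omitted.

```python
import numpy as np

def piecewise_chaotic_map(x: float, p: float = 0.4) -> float:
    """Piecewise chaotic map on [0, 1], used to spread the initial
    population more evenly than uniform random sampling."""
    if 0 <= x < p:
        return x / p
    if p <= x < 0.5:
        return (x - p) / (0.5 - p)
    if 0.5 <= x < 1 - p:
        return (1 - p - x) / (0.5 - p)
    return (1 - x) / p

def init_population(n: int, dim: int, lb: float, ub: float, seed: float = 0.7):
    """Chaotic initialization: iterate the map per coordinate, then scale
    each value into the search bounds [lb, ub]."""
    pop = np.empty((n, dim))
    x = seed
    for i in range(n):
        for j in range(dim):
            x = piecewise_chaotic_map(x)
            pop[i, j] = lb + x * (ub - lb)
    return pop

def inertia_weight(t: int, t_max: int, w_max: float = 0.9, w_min: float = 0.4):
    """Linearly decreasing inertia weight: broad exploration early in the
    search, finer local search late (a common schedule)."""
    return w_max - (w_max - w_min) * t / t_max

pop = init_population(n=20, dim=5, lb=-10.0, ub=10.0)
```

In the SSA update, each position change would be scaled by `inertia_weight(t, t_max)` at iteration `t`, shrinking step sizes as the search converges.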
DOAJ Open Access 2021
Applications of Rough Sets in Big Data Analysis: An Overview

Pięta Piotr, Szmuc Tomasz

Big data, artificial intelligence and the Internet of things (IoT) are still very popular areas in current research and industrial applications. Processing massive amounts of data generated by the IoT and stored in distributed space is not a straightforward task and may cause many problems. During the last few decades, scientists have proposed many interesting approaches to extract information and discover knowledge from data collected in database systems or other sources. We observe a permanent development of machine learning algorithms that support each phase of the data mining process, ensuring achievement of better results than before. Rough set theory (RST) delivers a formal insight into information, knowledge, data reduction, uncertainty, and missing values. This formalism, formulated in the 1980s and developed by several researchers, can serve as a theoretical basis and practical background for dealing with ambiguities, data reduction, building ontologies, etc. Moreover, as a mature theory, it has evolved into numerous extensions and has been transformed through various incarnations, which have enriched the expressiveness and applicability of the related tools. The main aim of this article is to present an overview of selected applications of RST in big data analysis and processing. Thousands of publications on rough sets have been published; therefore, we focus on papers from the last few years. The applications of RST are considered from two main perspectives: direct use of the RST concepts and tools, and joint use with other approaches, i.e., fuzzy sets, probabilistic concepts, and deep learning. The latter hybrid idea seems to be very promising for developing new methods and related tools as well as extensions of the application area.

Mathematics, Electronic computers. Computer science
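RST's core notions of lower and upper approximations, which the overview credits with handling uncertainty and missing values, reduce to a few set operations over a partition of the universe. The sketch below is a generic textbook illustration with a toy partition, not an example from the reviewed applications.

```python
def approximations(equiv_classes, target):
    """Rough-set lower and upper approximations of a target concept.
    equiv_classes: a partition of the universe induced by an
    indiscernibility relation (objects with equal attribute values)."""
    target = set(target)
    lower, upper = set(), set()
    for block in equiv_classes:
        block = set(block)
        if block <= target:   # block certainly inside the concept
            lower |= block
        if block & target:    # block possibly inside the concept
            upper |= block
    return lower, upper

# Toy universe {1..6} partitioned into indiscernibility classes
blocks = [{1, 2}, {3, 4}, {5, 6}]
target = {1, 2, 3}            # the concept to approximate
lower, upper = approximations(blocks, target)
# The boundary region upper - lower = {3, 4} captures the uncertainty:
# objects 3 and 4 are indiscernible, yet only 3 belongs to the concept.
```

A concept is "rough" exactly when this boundary region is non-empty; hybrid fuzzy or probabilistic extensions replace the crisp subset tests with graded memberships.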

Page 12 of 31,651