Results for "Cybernetics"
Showing 20 of ~134,541 results · from arXiv, DOAJ, Semantic Scholar, CrossRef
Yumi Iwashita, Haakon Moe, Yang Cheng et al.
As global efforts to explore the Moon intensify, the need for high-quality 3D lunar maps becomes increasingly critical, particularly for long-distance missions such as NASA's Endurance mission concept, in which a rover aims to traverse 2,000 km across the South Pole-Aitken basin. Kaguya TC (Terrain Camera) images, though globally available at 10 m/pixel, suffer from altitude inaccuracies caused by stereo matching errors and JPEG-based compression artifacts. This paper presents a method to improve the quality of 3D maps generated from Kaguya TC images, focusing on mitigating the effects of compression-induced noise in disparity maps. We analyze the compression behavior of Kaguya TC imagery and identify systematic disparity noise patterns, especially in darker regions. We propose an approach to enhance 3D map quality by reducing residual noise in disparity images derived from compressed images. Our experimental results show that the proposed approach effectively reduces elevation noise, enhancing the safety and reliability of terrain data for future lunar missions.
Nigar Alishzade, Gulchin Abdullayeva
This study presents a systematic comparative analysis of recurrent and attention-based neural architectures for isolated sign language recognition. We implement and evaluate two representative models, ConvLSTM and Vanilla Transformer, on the Azerbaijani Sign Language Dataset (AzSLD) and the Word-Level American Sign Language (WLASL) dataset. Our results demonstrate that the attention-based Vanilla Transformer consistently outperforms the recurrent ConvLSTM in both Top-1 and Top-5 accuracy across datasets, achieving up to 76.8% Top-1 accuracy on AzSLD and 88.3% on WLASL. The ConvLSTM, while more computationally efficient, lags in recognition accuracy, particularly on smaller datasets. These findings highlight the complementary strengths of each paradigm: the Transformer excels in overall accuracy and signer independence, whereas the ConvLSTM offers advantages in computational efficiency and temporal modeling. The study provides a nuanced analysis of these trade-offs, offering guidance for architecture selection in sign language recognition systems depending on application requirements and resource constraints.
Vasiliy Znamenskiy, Rafael Niyazov, Joel Hernandez
This paper presents a new educational framework for integrating generative artificial intelligence (GenAI) platforms such as ChatGPT, Claude, and Gemini into laboratory activities aimed at developing critical thinking and digital literacy among undergraduate students. Recognizing the limitations and risks of uncritical reliance on large language models (LLMs), the proposed pedagogical model reframes GenAI as a research subject and cognitive tool. Students formulate discipline-specific prompts and evaluate GenAI-generated responses in text, image, and video modalities. A pilot implementation in a general astronomy course for non-science majors demonstrated high levels of engagement and critical reflection, with many students continuing the activity after class and presenting results at a research symposium. The results highlight the importance of structured AI interactions in education and suggest that GenAI can improve learning outcomes when combined with reflective assessment methods. The study proposes a replicable model for interdisciplinary AI-integrated lab work, adaptable to scientific disciplines. See the guide to learning activities based on generative AI platforms: https://doi.org/10.5281/zenodo.15555802
T. E. Romanenko, A. V. Razgulin, N. G. Iroshnikov et al.
The problem of wavefront reconstruction from its slopes, related to the phase recovery of a light wave based on Shack-Hartmann sensor data, is considered. A reconstruction method based on the application of physics-informed neural networks to slope measurement data on both regular and irregular grids, in two modifications WRPINN and WRRADPINN, is proposed. A comparison is given with the reconstruction method based on the variational approach combined with the projection method using a fractional smoothness stabilizer, on typical smooth, nonsmooth, and discontinuous wavefronts defined on a regular grid. The results of the method's performance on irregular grids and with partially missing data are analyzed, leading to the conclusion that it is effective in handling such data.
Leo Thomas Ramos, Edmundo Casas, Francklin Rivas-Echeverría
This research presents the K-Pipelines dataset, a pioneering synthetic image collection designed specifically for the classification of corrosion in oil and gas pipelines. Instead of training custom generative architectures, our research used an online image generation tool powered by Stable Diffusion. This choice leveraged the platform's robust capability to quickly produce a high volume of diverse and detailed images, saving significant time and resources. The dataset was carefully constructed using a sequence of refined prompts, derived from a review of pipeline characteristics including material types, environments, and corrosion forms. K-Pipelines consists of 600 PNG images of 512 × 512 resolution. Furthermore, an augmented version was developed, totaling 1080 images. Our evaluation employed state-of-the-art deep learning classifiers, specifically VGG16, ResNet50, EfficientNet, InceptionV3, MobileNetV2, and ConvNeXt-base, to test the integrity of the K-Pipelines dataset. These models showcased its robustness by consistently achieving accuracies around the 90% mark, illustrating the dataset's substantial promise as a resource for both AI research and real-world applications in the oil and gas industry. The dataset is publicly available for access and use within the scientific community.
María Teresa García-Ordás, Héctor Alaiz-Moretón, José-Luis Casteleiro-Roca et al.
This work addresses the performance comparison between four clustering techniques, with the objective of building strong hybrid models for supervised learning tasks. A real dataset was collected from a bio-climatic house named Sotavento, placed on an experimental wind farm in Xermade (Lugo), Galicia, Spain. The authors chose the thermal solar generation system in order to study how several clustering methods, each followed by a regression technique, perform in predicting the output temperature of the system. To assess the quality of each clustering method, two approaches were implemented: the first is based on three unsupervised learning metrics (Silhouette, Calinski-Harabasz, and Davies-Bouldin), while the second employs the most common error measurements for a regression algorithm such as the Multi-Layer Perceptron.
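The three cluster-quality metrics named in this abstract are available in scikit-learn. Below is a minimal sketch, assuming k-means as the clustering step and synthetic blobs as a stand-in for the Sotavento data (which is not reproduced here):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

# Illustrative data standing in for the real measurements.
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# The three unsupervised quality metrics from the abstract:
# higher is better for Silhouette and Calinski-Harabasz,
# lower is better for Davies-Bouldin.
scores = {
    "silhouette": silhouette_score(X, labels),
    "calinski_harabasz": calinski_harabasz_score(X, labels),
    "davies_bouldin": davies_bouldin_score(X, labels),
}
print(scores)
```

On real data one would compute these scores for each of the four clustering techniques and compare, exactly as the first evaluation approach in the abstract describes.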
Gewei Zuo, Mengmou Li, Lijun Zhu
In this paper, we address distributed prescribed-time convex optimization (DPTCO) for a class of networked Euler-Lagrange systems under undirected connected graphs. By utilizing the position-dependent measured gradient value of the local objective function and local information interactions among neighboring agents, a set of auxiliary systems is constructed to cooperatively seek the optimal solution. The DPTCO problem is then converted to the prescribed-time stabilization problem of an interconnected error system. A prescribed-time small-gain criterion is proposed to characterize prescribed-time stabilization of the system, offering a novel approach that goes beyond existing asymptotic or finite-time stabilization of interconnected systems. Under the criterion and auxiliary systems, adaptive prescribed-time local tracking controllers are designed for the subsystems. The prescribed-time convergence relies on the introduction of time-varying gains which increase to infinity as time tends to the prescribed time. A Lyapunov function together with a prescribed-time mapping is used to prove the prescribed-time stability of the closed-loop system as well as the boundedness of internal signals. Finally, the theoretical results are verified by a numerical example.
Ruimin Peng, Jiayu An, Dongrui Wu
Electroencephalogram (EEG)-based seizure subtype classification enhances clinical diagnosis efficiency. Source-free semi-supervised domain adaptation (SF-SSDA), which transfers a pre-trained model to a new dataset with no source data and limited labeled target data, can be used for privacy-preserving seizure subtype classification. This paper considers two challenges in SF-SSDA for EEG-based seizure subtype classification: 1) How to effectively fuse both raw EEG data and expert knowledge in classifier design? 2) How to align the source and target domain distributions for SF-SSDA? We propose a Knowledge-Data Fusion based SF-SSDA approach, KDF-MutualSHOT, for EEG-based seizure subtype classification. In source model training, KDF uses Jensen-Shannon Divergence to facilitate mutual learning between a feature-driven Decision Tree-based model and a data-driven Transformer-based model. To adapt KDF to a new target dataset, an SF-SSDA algorithm, MutualSHOT, is developed, which features a consistency-based pseudo-label selection strategy. Experiments on the public TUSZ and CHSZ datasets demonstrated that KDF-MutualSHOT outperformed other supervised and source-free domain adaptation approaches in cross-subject seizure subtype classification.
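The Jensen-Shannon divergence that drives the mutual learning between the two source models in this abstract can be sketched in a few lines; the class-probability vectors below are hypothetical, not taken from the paper:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    Symmetric and bounded by ln(2); eps guards against log(0).
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Mutual-learning loss term: each model is nudged toward the other's
# class-probability output (hypothetical predictions for one EEG window).
tree_probs = [0.7, 0.2, 0.1]         # feature-driven model
transformer_probs = [0.5, 0.3, 0.2]  # data-driven model
print(round(js_divergence(tree_probs, transformer_probs), 4))
```

Because the divergence is symmetric, the same term serves as the mutual-learning penalty for both the tree-based and the Transformer-based model.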
Andrea Cerroni
With the 1973 Chilean coup there was a total change in the economic order, from Allende’s socialist experiment (original in its “balancing” of center and periphery) to the full adoption of neoliberalism, which was accompanied by a simultaneous transition in the IT field. In fact, Allende’s experiment was closely linked to the development of the first futuristic cybernetic project for the governance of the technological-information infrastructure (Cybersyn), which was promptly scrapped after the coup. A similar sudden defeat of cybernetics had actually occurred during the Prague Spring. Later, as neoliberalism spread, cybernetics faded in favor of a rival approach to information science, so-called Artificial Intelligence. This article will attempt to answer the following questions. (1) Were there fundamental cultural divergences between cybernetics and artificial intelligence, such that they really resonated with two alternative policy perspectives? (2) Can one, moreover, delineate a kind of symbolic universe that would extend from the (then prospective) artificial intelligence to the paradigms later established in many disciplines?
Bayram Ibrahimov, Elshan Hashimov, Togrul Ismayılov
The noise immunity indicators of functioning telecommunication systems in the presence of interference sources are analyzed based on the architectural concept of next-generation and future networks. The object of study is the optimal demodulator signal receiver with matched filters. The relevance of this research area is shown. Based on a study of the operating algorithms of a demodulator with matched filters, a new approach to constructing a mathematical model for assessing the noise immunity characteristics of receiving traffic messages is proposed. The developed mathematical model takes into account the demodulator synthesis algorithm and effective modulation and coding methods in the detector receiver. The subject of the research is a mathematical model for assessing the noise immunity indicators of functioning multiservice telecommunication networks. Based on a study of the reliability of transmitting traffic messages, a block diagram of an optimal demodulator signal receiver with matched filters is proposed. The purpose of the research is to develop a new approach to creating a mathematical model for assessing the communication quality and noise immunity characteristics of telecommunication systems when receiving message traffic packets in a complex signal-noise environment. Based on this mathematical model, important analytical expressions for further research were obtained. As a result, the main conclusions of the study were obtained, which can be implemented and used in multi-service fixed and mobile communication networks to calculate the noise immunity indicators of public telecommunication systems. The rationale for the proposed main stages of the study is given, and the results of analytical research and simulation modeling are presented, confirming the validity of the theoretical conclusions.
Long Chen, Siyu Teng, Bai Li et al.
Growing interest in autonomous driving (AD) and intelligent vehicles (IVs) is fueled by their promise for enhanced safety, efficiency, and economic benefits. While previous surveys have captured progress in this field, a comprehensive and forward-looking summary is needed. Our work fills this gap through three distinct articles. The first part, a "Survey of Surveys" (SoS), outlines the history, surveys, ethics, and future directions of AD and IV technologies. The second part, "Milestones in Autonomous Driving and Intelligent Vehicles Part I: Control, Computing System Design, Communication, HD Map, Testing, and Human Behaviors" delves into the development of control, computing system, communication, HD map, testing, and human behaviors in IVs. The third part, presented here, reviews perception and planning in the context of IVs. Aiming to provide a comprehensive overview of the latest advancements in AD and IVs, this work caters to both newcomers and seasoned researchers. By integrating the SoS and Part I, we offer unique insights and strive to serve as a bridge between past achievements and future possibilities in this dynamic field.
Anton Novianto, Mila Desi Anasanti
Autism Spectrum Disorder (ASD) is a developmental disorder that impairs the development of behaviors, communication, and learning abilities. Early detection of ASD helps patients get better training to communicate and interact with others. In this study, we identified ASD and non-ASD individuals using machine learning (ML) approaches. We used Gaussian naive Bayes (NB), k-nearest neighbors (KNN), random forest (RF), logistic regression (LR), support vector machine (SVM) with a linear basis function, and decision tree (DT). We preprocessed the data using imputation methods, namely linear regression, MiceForest, and MissForest. We selected the important features using the simultaneous perturbation feature selection and ranking (SpFSR) technique from all 21 ASD features of three combined datasets (N=1,100 individuals) from the University of California Irvine (UCI) repository. We evaluated the discrimination, calibration, and clinical utility of each method using stratified 10-fold cross-validation. We achieved the highest accuracy using SVM with the 10 most important features selected. We observed that the integration of imputation using linear regression, SpFSR, and SVM was the most effective model, with an accuracy rate of 100%, outperforming previous studies in ASD prediction.
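The reported pipeline shape (impute, select 10 of 21 features, linear SVM, stratified 10-fold cross-validation) can be approximated with scikit-learn. In this sketch, SelectKBest stands in for SpFSR, and the synthetic data is only a placeholder for the UCI ASD datasets:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 21-feature, 1,100-subject ASD screening data.
X, y = make_classification(n_samples=1100, n_features=21,
                           n_informative=10, random_state=0)

# Pipeline: scale, keep 10 features (SelectKBest substitutes for SpFSR),
# then a linear-kernel SVM, scored with stratified 10-fold CV.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=10),
                    SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(clf, X, y, cv=cv).mean()
print(f"mean CV accuracy: {acc:.3f}")
```

Putting the feature selector inside the pipeline matters: it ensures the selection is refit on each training fold, avoiding the leakage that would inflate cross-validated accuracy.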
Chanseung Lee, Chang-Hwan Lee
A variety of social dilemma scenarios are studied within the context of the prisoner’s dilemma, one of the most well-known concepts in modern game theory, and its variants. In the prisoner’s dilemma, studies typically emphasize the priority of maximizing the gain of each individual. In this paper, however, we focus on maximizing the benefit of the larger group, not each individual. It is worth noting that regardless of individual strategies in the prisoner’s dilemma, there is always a certain level of defection. These individual defections can be analyzed in a collective group setting from the perspective of game theory. We look into how much defection is required, if necessary, in order to optimize a group’s advantages, and analytically identify the specific effects of defection for the purpose of maximizing group benefit.
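The group-benefit question in the abstract above can be made concrete with a well-mixed-population calculation. The payoff values below are illustrative, not taken from the paper; they are chosen so that T + S > 2R, the regime in which some defection can raise the group total:

```python
import numpy as np

# Standard prisoner's dilemma payoffs with T > R > P > S.
# Illustrative values satisfying T + S > 2R (mixed pairs out-earn
# mutual cooperation on average).
T, R, P, S = 6.0, 2.0, 1.0, 0.0

def group_payoff(c):
    """Expected per-capita payoff in a well-mixed population with
    cooperator fraction c, under random pairwise interactions."""
    coop = c * R + (1 - c) * S     # a cooperator's expected payoff
    defect = c * T + (1 - c) * P   # a defector's expected payoff
    return c * coop + (1 - c) * defect

cs = np.linspace(0.0, 1.0, 1001)
best_c = cs[np.argmax([group_payoff(c) for c in cs])]
print(f"group payoff is maximized at cooperator fraction {best_c:.2f}")
```

For these payoffs the per-capita payoff is the quadratic -3c² + 4c + 1, maximized at c = 2/3, i.e. a defection level of one third beats full cooperation, which is the kind of effect the paper analyzes.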
Yuan Wang, Chunyuan Zhang, Tianzong Yu et al.
As an important algorithm in deep reinforcement learning, advantage actor critic (A2C) has achieved wide success in both discrete and continuous control tasks with raw pixel inputs, but its sample efficiency still needs improvement. In traditional reinforcement learning, actor-critic algorithms generally use recursive least squares (RLS) technology to update the parameters of linear function approximators to accelerate their convergence. However, A2C algorithms seldom use this technology to train deep neural networks (DNNs) to improve their sample efficiency. In this paper, we propose two novel RLS-based A2C algorithms and investigate their performance. Both proposed algorithms, called RLSSA2C and RLSNA2C, use the RLS method to train the critic network and the hidden layers of the actor network. The main difference between them lies in the policy learning step. RLSSA2C uses an ordinary first-order gradient descent algorithm and the standard policy gradient to learn the policy parameters. RLSNA2C uses the Kronecker-factored approximation, the RLS method, and the natural policy gradient to learn the compatible parameters and the policy parameters. In addition, we analyze the complexity and convergence of both algorithms, and present three tricks for further improving their convergence speed. Finally, we demonstrate the effectiveness of both algorithms on 40 games in the Atari 2600 environment and 11 tasks in the MuJoCo environment. The experimental results show that both of our algorithms have better sample efficiency than vanilla A2C on most games or tasks, and higher computational efficiency than two other state-of-the-art algorithms.
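The RLS machinery this abstract applies to network layers reduces, for a single linear function approximator, to the classic recursive update; a minimal sketch, with an illustrative forgetting factor and problem size (not the paper's settings):

```python
import numpy as np

def rls_step(w, P, x, target, lam=0.99):
    """One recursive-least-squares update of a linear model w @ x.

    w: weight vector; P: inverse-correlation matrix;
    lam: forgetting factor (lam = 1 means no forgetting).
    """
    Px = P @ x
    k = Px / (lam + x @ Px)        # gain vector
    err = target - w @ x           # a-priori prediction error
    w = w + k * err
    P = (P - np.outer(k, Px)) / lam
    return w, P

# Fit a known linear target online to show convergence.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
w = np.zeros(3)
P = np.eye(3) * 100.0              # large initial P = weak prior
for _ in range(500):
    x = rng.normal(size=3)
    w, P = rls_step(w, P, x, true_w @ x)
print(np.round(w, 3))
```

Per step this costs O(d²) for d inputs instead of a full matrix solve, which is the efficiency property that makes RLS attractive for updating critic and hidden-layer weights online.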