Results for "Cybernetics"
Showing 20 of ~120,924 results · from CrossRef, arXiv, DOAJ
Pandula Thennakoon, Mario De Silva, M. Mahesha Viduranga et al.
Computational disease modeling plays a crucial role in understanding and controlling the transmission of infectious diseases. While agent-based models (ABMs) provide detailed insights into individual dynamics, accurately replicating human motion remains challenging due to its complex, multi-factorial nature. Most existing frameworks fail to model realistic human motion, leading to oversimplified and less realistic behavior modeling. Furthermore, many current models rely on synthetic assumptions and fail to account for realistic environmental structures, transportation systems, and behavioral heterogeneity across occupation groups. To address these limitations, we introduce AVSim, an agent-based simulation framework designed to model airborne and vector-borne disease dynamics under realistic conditions. A distinguishing feature of AVSim is its ability to accurately model the dual nature of human mobility (both the destinations individuals visit and the duration of their stay) by utilizing GPS traces from real-world participants, characterized by occupation. This enables a significantly more granular and realistic representation of human movement compared to existing approaches. Furthermore, spectral clustering combined with graph-theoretic analysis is used to uncover latent behavioral patterns within occupations, enabling fine-grained modeling of agent behavior. We validate the synthetic human mobility patterns against ground-truth GPS data and demonstrate AVSim's capabilities via simulations of COVID-19 and dengue. The results highlight AVSim's capacity to trace infection pathways, identify high-risk zones, and evaluate interventions such as vaccination, quarantine, and vector control with occupational and geographic specificity.
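A minimal sketch of the kind of location-based transmission step such an agent-based framework performs each tick. The agent structure, location names, and infection probability are illustrative, not AVSim's actual API:

```python
import random

def transmission_step(agents, beta=0.3, rng=None):
    """One simulation tick: susceptible agents sharing a location with an
    infectious agent become infected with probability beta."""
    rng = rng or random.Random(0)
    # Group agents by their current location (driven by mobility traces in AVSim).
    by_loc = {}
    for a in agents:
        by_loc.setdefault(a["location"], []).append(a)
    newly_infected = []
    for occupants in by_loc.values():
        if any(a["state"] == "I" for a in occupants):
            for a in occupants:
                if a["state"] == "S" and rng.random() < beta:
                    newly_infected.append(a)
    for a in newly_infected:
        a["state"] = "I"
    return len(newly_infected)

agents = [
    {"id": 0, "location": "market", "state": "I"},
    {"id": 1, "location": "market", "state": "S"},
    {"id": 2, "location": "office", "state": "S"},
]
n = transmission_step(agents, beta=1.0)  # only the co-located susceptible is exposed
```

In the full framework, where each agent moves would come from the occupation-conditioned GPS-derived mobility model rather than a fixed location field.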
Hidaka Asai, Tomoyuki Noda, Jun Morimoto
Human motor control remains agile and robust despite limited sensory information for feedback, a property attributed to the body's ability to perform morphological computation through muscle coordination with variable impedance. However, it remains unclear how such low-level mechanical computation reduces the control requirements of the high-level controller. In this study, we implement a hierarchical controller consisting of a high-level neural network trained by reinforcement learning and a low-level variable-impedance muscle coordination model with mono- and biarticular muscles in a monoped locomotion task. We systematically restrict the high-level controller by varying the control frequency and by introducing biologically inspired observation conditions: delayed, partial, and substituted observation. Under these conditions, we evaluate how the low-level variable-impedance muscle coordination contributes to the learning process of the high-level neural network. The results show that variable-impedance muscle coordination enables stable locomotion even at low control frequencies and under limited observation conditions. These findings demonstrate that the morphological computation of muscle coordination effectively offloads high-frequency feedback from the high-level controller and provide a design principle for controllers in motor control.
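The low-level layer described above can be illustrated by the standard variable-impedance torque law, a spring toward a reference angle plus viscous damping, whose stiffness and damping a high-level controller can modulate. Parameter values here are purely illustrative:

```python
def impedance_torque(theta, theta_dot, theta_ref, k, b):
    """Joint torque from a variable-impedance law: a spring pulling toward
    the reference angle plus viscous damping. k (stiffness) and b (damping)
    are the impedance parameters the high-level controller can modulate."""
    return -k * (theta - theta_ref) - b * theta_dot

# Stiffer impedance produces a larger restoring torque for the same angle error.
soft = impedance_torque(theta=0.2, theta_dot=0.0, theta_ref=0.0, k=5.0, b=0.5)
stiff = impedance_torque(theta=0.2, theta_dot=0.0, theta_ref=0.0, k=50.0, b=0.5)
```

Because this restoring behavior is computed mechanically by the muscles, the neural controller only needs to update `k`, `b`, and `theta_ref` at a low rate, which is the offloading effect the study quantifies.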
Anmolika Singh, Yuhang Diao
Effective item categorization is vital for businesses, enabling the transformation of unstructured datasets into organized categories that streamline inventory management. Despite its importance, item categorization remains highly subjective and lacks a uniform standard across industries and businesses. The United Nations Standard Products and Services Code (UNSPSC) provides a standardized system for cataloguing inventory, yet employing UNSPSC categorizations often demands significant manual effort. This paper investigates the deployment of Large Language Models (LLMs) to automate the classification of inventory data into UNSPSC codes based on Item Descriptions. We evaluate the accuracy and efficiency of LLMs in categorizing diverse datasets, exploring their language processing capabilities and their potential as a tool for standardizing inventory classification. Our findings reveal that LLMs can substantially diminish the manual labor involved in item categorization while maintaining high accuracy, offering a scalable solution for businesses striving to enhance their inventory management practices.
Bhavith Chandra Challagundla, Chakradhar Peddavenkatagari
Automatic text summarization (TS) plays a pivotal role in condensing large volumes of information into concise, coherent summaries, facilitating efficient information retrieval and comprehension. This paper presents a novel framework for abstractive TS of single documents, which integrates three dominant aspects: structural, semantic, and neural-based approaches. The proposed framework merges machine learning and knowledge-based techniques to achieve a unified methodology. The framework consists of three main phases: pre-processing, machine learning, and post-processing. In the pre-processing phase, a knowledge-based Word Sense Disambiguation (WSD) technique is employed to generalize ambiguous words, enhancing content generalization. Semantic content generalization is then performed to address out-of-vocabulary (OOV) or rare words, ensuring comprehensive coverage of the input document. Subsequently, the generalized text is transformed into a continuous vector space using neural language processing techniques. A deep sequence-to-sequence (seq2seq) model with an attention mechanism is employed to predict a generalized summary based on the vector representation. In the post-processing phase, heuristic algorithms and text similarity metrics are utilized to refine the generated summary further. Concepts from the generalized summary are matched with specific entities, enhancing coherence and readability. Experimental evaluations conducted on prominent datasets, including Gigaword, DUC 2004, and CNN/DailyMail, demonstrate the effectiveness of the proposed framework. Results indicate significant improvements in handling rare and OOV words, outperforming existing state-of-the-art deep learning techniques. The proposed framework presents a comprehensive and unified approach toward abstractive TS, combining the strengths of structural, semantic, and neural-based methodologies.
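The OOV-generalization idea in the pre-processing phase can be sketched as replacing words rarer than a frequency threshold with a placeholder token, so the seq2seq model sees a closed vocabulary. The threshold and the `<UNK>` token name are illustrative choices, not the paper's exact procedure:

```python
from collections import Counter

def generalize_oov(tokens, vocab_counts, min_count=2, placeholder="<UNK>"):
    """Replace words rarer than min_count (or unseen) with a placeholder,
    a simple form of semantic content generalization for OOV words."""
    return [t if vocab_counts[t] >= min_count else placeholder for t in tokens]

corpus = "the cat sat on the mat the cat ran".split()
counts = Counter(corpus)  # Counter returns 0 for unseen words
out = generalize_oov("the cat sat on the zyzzyva".split(), counts)
# → ["the", "cat", "<UNK>", "<UNK>", "the", "<UNK>"]
```

The paper's knowledge-based variant would substitute a semantic hypernym rather than a flat placeholder, and the post-processing phase then maps generalized concepts back to specific entities.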
Shuntaro Tanaka, Hidetoshi Matsui
Screening methods are useful tools for variable selection in regression analysis when the number of predictors is much larger than the sample size. Factor analysis is used to eliminate multicollinearity among predictors, which improves the variable selection performance. We propose a new method, called Truncated Preconditioned Profiled Independence Screening (TPPIS), that better selects the number of factors to eliminate multicollinearity. The proposed method improves the variable selection performance by truncating unnecessary parts from the information obtained by factor analysis. We confirmed the superior performance of the proposed method in variable selection through analysis using simulation data and real datasets.
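As background to the abstract above, plain correlation-based independence screening, which TPPIS refines by preconditioning with a truncated factor analysis, can be sketched in a few lines. The data and the `keep` parameter are illustrative:

```python
import math

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def screen(X, y, keep=2):
    """Rank predictors (columns of X) by |correlation| with y and retain
    the top `keep` -- the basic screening step that TPPIS improves on."""
    scores = [(abs(pearson(col, y)), j) for j, col in enumerate(zip(*X))]
    scores.sort(reverse=True)
    return sorted(j for _, j in scores[:keep])

# Column 0 tracks y, column 1 is anti-correlated with y, column 2 is noise-like.
X = [[1, -1, 0.3], [2, -2, 0.1], [3, -3, 0.4], [4, -4, 0.2]]
y = [1.1, 2.0, 2.9, 4.2]
kept = screen(X, y, keep=2)  # → [0, 1]
```

The failure mode motivating TPPIS is visible here: columns 0 and 1 carry the same (collinear) information, and marginal screening keeps both; factor-analysis preconditioning is meant to remove such shared variation before ranking.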
Vladimir Norkin, Anton Kozyriev
Shor's r-algorithm (Shor and Zhurbenko (1971), Shor (1979)) with space stretching in the direction of the difference of two adjacent subgradients is a competitive method of nonsmooth optimization. However, the original r-algorithm is designed to minimize convex ravine functions without constraints. The standard technique for solving constrained problems with this algorithm is to use exact nonsmooth penalty functions (Eremin (1967), Zangwill (1967)). At the same time, a (sufficiently large) penalty coefficient must be chosen in this method. In Norkin (2020, 2022) and Galvan et al. (2021), the so-called projective exact penalty function method is proposed, which does not formally require an exact choice of the penalty coefficient. In this paper, a nonsmooth optimization problem with convex constraints is first transformed into a constraint-free problem by the projective penalty function method, and then the r-algorithm is applied to solve the transformed problem. We present the results of testing this approach on problems with linear constraints using a program implemented in MATLAB.
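The classical exact-penalty idea referenced above (turning a constraint g(x) ≤ 0 into the nonsmooth term M·max(0, g(x))) can be sketched with plain subgradient descent instead of the r-algorithm. The penalty coefficient, step size, and test problem are illustrative, and this is the standard penalty, not the projective variant:

```python
def subgrad_penalty_min(f_grad, g, g_grad, x0, M=10.0, steps=500, lr=0.01):
    """Minimize f(x) + M*max(0, g(x)) by subgradient descent. The exact
    nonsmooth penalty folds the constraint g(x) <= 0 into the objective;
    M must be large enough for the penalized minimizer to be feasible."""
    x = x0
    for _ in range(steps):
        grad = f_grad(x)
        if g(x) > 0:  # constraint violated: add the penalty subgradient
            grad += M * g_grad(x)
        x -= lr * grad
    return x

# Minimize f(x) = (x - 3)^2 subject to x <= 1, i.e. g(x) = x - 1 <= 0.
# The unconstrained minimum is x = 3; the constrained optimum is x = 1.
x_star = subgrad_penalty_min(
    f_grad=lambda x: 2 * (x - 3),
    g=lambda x: x - 1,
    g_grad=lambda x: 1.0,
    x0=0.0,
)
```

The iterate hovers near the active constraint at x = 1 (subgradient methods oscillate rather than converge exactly), which illustrates why choosing M and handling the ravine-shaped penalized objective motivate the r-algorithm and the projective penalty.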
Amir Najafi, Sedigeh Soleimanpur, Zoheir Morady
Information technology is a key factor in eliminating the limitations posed by time and location, giving much better and faster access to information. In other words, technology has revolutionized work methods: paper, once the substrate on which work was written, has been replaced by an electronic one. The changes created by technology include the use of blockchain, IoT (Internet of Things), cloud accounting, and big data to automate accounting. The statistical sample of this study, determined by Cochran's formula, consists of 171 financial managers, accountants, and auditors from the city of Tehran. Empirical results show that the use of information technology plays a very significant role in the quality of accounting information, and all these factors influence the quality of the accounting system.
M. V. Sprindzuk, A. S. Vladyko, L. P. Titov et al.
The novel coronavirus pandemic has stimulated the scientific activity of virology and interdisciplinary sciences: medical cybernetics and bioinformatics. The article focuses on the study of algorithms for processing bioinformatic data of genomic origin, primarily for the purposes of immunoinformatics and computational vaccinology. The schemes of algorithms developed by the authors for the analysis of bioinformatic data are presented. The algorithms for processing genomic information, developed by the authors on the basis of the available literature and many years of experience in computational and laboratory experiments, can be used not only for the design and analysis of epitope vaccine components, but also for other tasks of computational virology and microbiology. In silico experiments on the analysis of bioinformatic data are relatively low-cost and highly informative, but they require highly qualified scientists with extensive experience, interdisciplinary training, and, accordingly, a wide range of knowledge and skills. However, for the complete analysis and implementation of, for example, epitope vaccines, subsequent validation by laboratory and in vivo experiments is required.
Markus Bayer, Marc-André Kaufhold, Björn Buchhold et al.
In many cases of machine learning, research suggests that the development of training data might matter more than the choice and modelling of the classifiers themselves. Thus, data augmentation methods have been developed to improve classifiers with artificially created training data. In NLP, the challenge is establishing universal rules for text transformations that provide new linguistic patterns. In this paper, we present and evaluate a text generation method suitable for increasing the performance of classifiers for long and short texts. We achieved promising improvements when evaluating short as well as long text tasks enhanced by our text generation method. Especially with regard to small data analytics, additive accuracy gains of up to 15.53% and 3.56% are achieved within a constructed low data regime, compared to the no-augmentation baseline and another data augmentation technique. As such constructed regimes are not universally applicable, we also show major improvements in several real-world low data tasks (up to +4.84 F1-score). Since we evaluate the method from many perspectives (11 datasets in total), we also observe situations where the method might not be suitable. We discuss implications and patterns for the successful application of our approach on different types of datasets.
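For context, the simplest class of rule-based text augmentation the paper contrasts itself with can be sketched as a random word swap that creates a new, label-preserving training example. This is a generic baseline, not the paper's generation method:

```python
import random

def augment_swap(text, n_swaps=1, seed=0):
    """Create a new training example by randomly swapping word pairs,
    a lightweight label-preserving text-augmentation baseline."""
    rng = random.Random(seed)
    words = text.split()
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(words)), 2)  # two distinct positions
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

original = "data augmentation creates new training examples"
aug = augment_swap(original, n_swaps=1)
```

Such surface-level transformations preserve the bag of words but only reorder it, which is exactly the limitation (no genuinely new linguistic patterns) that motivates generative augmentation methods like the one evaluated above.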
Xiwen Qu, Hao Che, Jun Huang et al.
Multi-label image classification (MLIC) is a fundamental and practical task, which aims to assign multiple possible labels to an image. In recent years, many deep convolutional neural network (CNN) based approaches have been proposed which model label correlations to discover semantics of labels and learn semantic representations of images. This paper advances this research direction by improving both the modeling of label correlations and the learning of semantic representations. On the one hand, besides the local semantics of each label, we propose to further explore global semantics shared by multiple labels. On the other hand, existing approaches mainly learn the semantic representations at the last convolutional layer of a CNN. But it has been noted that the image representations of different layers of CNN capture different levels or scales of features and have different discriminative abilities. We thus propose to learn semantic representations at multiple convolutional layers. To this end, this paper designs a Multi-layered Semantic Representation Network (MSRN) which discovers both local and global semantics of labels through modeling label correlations and utilizes the label semantics to guide the semantic representations learning at multiple layers through an attention mechanism. Extensive experiments on four benchmark datasets including VOC 2007, COCO, NUS-WIDE, and Apparel show a competitive performance of the proposed MSRN against state-of-the-art models.
Bai Yan, Qi Zhao, Jin Zhang et al.
Gridless methods show great superiority in line spectral estimation. These methods need to solve an atomic $l_0$ norm (i.e., the continuous analog of the $l_0$ norm) minimization problem to estimate frequencies and model order. Since this problem is NP-hard to compute, relaxations of the atomic $l_0$ norm, such as the nuclear norm and the reweighted atomic norm, have been employed to promote sparsity. However, the relaxations give rise to a resolution limit, subsequently leading to biased model order and convergence error. To overcome these shortcomings of relaxation, we propose a novel idea of simultaneously estimating the frequencies and model order by means of the atomic $l_0$ norm. To accomplish this idea, we build a multiobjective optimization model. The measurement error and the atomic $l_0$ norm are taken as the two optimization objectives. The proposed model directly exploits the model order via the atomic $l_0$ norm, thus breaking the resolution limit. We further design a variable-length evolutionary algorithm to solve the proposed model, which includes two innovations. One is a variable-length coding and search strategy. It flexibly codes and interactively searches diverse solutions with different model orders. These solutions act as stepping stones that help fully explore the variable, open-ended frequency search space and provide extensive potential toward the optima. The other innovation is a model order pruning mechanism, which heuristically prunes less contributive frequencies within the solutions, thus significantly enhancing convergence and diversity. Simulation results confirm the superiority of our approach in both frequency estimation and model order selection.
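The core comparison in any such two-objective model (here, measurement error versus model order, both minimized) is Pareto dominance; a sketch of non-dominated filtering with illustrative candidate values:

```python
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly
    better in at least one (both objectives minimized)."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(points):
    """Keep the solutions not dominated by any other: the trade-off
    curve between fit error and sparsity (model order)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (measurement error, model order) pairs for candidate solutions.
cands = [(0.9, 2), (0.5, 3), (0.3, 5), (0.6, 4), (0.3, 6)]
front = pareto_front(cands)  # → [(0.9, 2), (0.5, 3), (0.3, 5)]
```

The paper's evolutionary algorithm searches over candidates whose frequency lists have different lengths (hence "variable-length coding") while using exactly this kind of dominance relation to drive selection.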
Mohammad Minhazul Alam, Md Gazuruddin, Nahian Ahmed et al.
One of the challenges of training artificial intelligence models for classifying satellite images is the presence of label noise in the datasets, which are sometimes crowd-sourced and, as a result, somewhat error prone. In our work, we have utilized three labeled satellite image datasets, namely SAT-6, SAT-4, and EuroSAT. The combined dataset consists of over 900,000 image patches, each belonging to a land cover class. We have applied standard pixel-based feature extraction algorithms to extract features from the images and then trained various machine learning algorithms on those features. In our experiment, three types of artificial label noise are injected into the training datasets: Noise Completely at Random (NCAR), Noise at Random (NAR), and Noise Not at Random (NNAR). The noisy data are used to train the algorithms, and the effect of noise on algorithm performance is measured against noise-free test sets. From our study, the Random Forest and the back-propagation neural network classifiers are found to be the least sensitive to label noise. As label noise is a common scenario in human-labeled image datasets, the current research initiative will help the development of noise-robust classification methods for various relevant applications.
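The simplest of the three noise models, NCAR, can be sketched as flipping each label with a fixed probability to a uniformly chosen different class. Class names and the noise rate are illustrative:

```python
import random

def inject_ncar_noise(labels, classes, rate=0.1, seed=0):
    """Noise Completely At Random: each label is independently flipped
    with probability `rate` to a different class chosen uniformly.
    (NAR would condition the flip on the true class; NNAR also on the
    features.)"""
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < rate:
            noisy.append(rng.choice([c for c in classes if c != y]))
        else:
            noisy.append(y)
    return noisy

clean = ["forest"] * 50 + ["water"] * 50
noisy = inject_ncar_noise(clean, ["forest", "water", "urban"], rate=0.2)
flips = sum(a != b for a, b in zip(clean, noisy))
```

Training on `noisy` while evaluating on a clean test set is exactly the protocol the study uses to rank classifiers by noise sensitivity.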
J.D. Arango, Y.A. Vélez, V.H. Aristizabal et al.
The response of fiber specklegram sensors (FSSs) is given as a function of variations in the intensity distribution of the modal interference pattern, or speckle pattern, induced by external disturbances. In the present work, the behavior of an FSS sensing scheme under thermal perturbations is studied by means of computational simulations of the speckle patterns. These simulations are generated by applying the finite element method (FEM) to the modal interference in optical fibers as a function of the thermal disturbance and the length of the sensing zone. A correlation analysis is performed on the images generated in the simulations to evaluate the dependence between the changes in the speckle pattern grains and the intensity of the applied disturbance. The numerical simulation shows how a design characteristic such as the length of the sensing zone, combined with image processing, can be manipulated to control the metrological performance of the sensors.
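A common way to quantify how a speckle pattern changes between frames is zero-mean normalized cross-correlation; a sketch with tiny illustrative intensity images (the paper does not specify this exact metric):

```python
import math

def zncc(img_a, img_b):
    """Zero-mean normalized cross-correlation of two equal-size intensity
    images: 1.0 for identical patterns, dropping toward -1.0 as the
    speckle pattern decorrelates under perturbation."""
    a = [p for row in img_a for p in row]
    b = [p for row in img_b for p in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

ref = [[0.1, 0.9], [0.8, 0.2]]          # reference speckle frame
same = zncc(ref, ref)                    # unperturbed: correlation ~1.0
shifted = zncc(ref, [[0.9, 0.1], [0.2, 0.8]])  # inverted pattern: ~-1.0
```

Plotting such a correlation coefficient against the applied thermal disturbance, for different sensing-zone lengths, is the kind of analysis the simulations above enable.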
Arkadiusz Kowalski, Robert Waszkowski
The transport of the winning in deep mines using the room-and-pillar mining system is most often performed with bucket loaders and haul trucks. In an era of attempts to stop rapid climate change, it is crucial to choose the means of transporting the winning not only in terms of efficiency and cost-effectiveness but also with regard to environmental impact. Permissible levels of pollutant emissions in exhaust gases for this type of transport are defined by the EU Stage standards. There is a discernible need for a multi-criteria method supporting the decision-making process that rewards loaders and haul trucks meeting more stringent emission standards. The article proposes an innovative approach that takes environmental aspects into account when selecting loaders and haul trucks for excavated-material transport tasks, so that the amount of pollutants they emit in exhaust gases, e.g., the sum of hydrocarbons and nitrogen oxides (HC+NO<sub>x</sub>), is also considered when assigning means of transport to particular tasks. Based on simulation studies for a specific case, it was found that a 20% reduction of HC+NO<sub>x</sub> emissions is possible with only a 2% increase in the transport costs of the winning. For this purpose, an objective function was formulated on the basis of two criteria: minimization of the transport cost of the winning and of the level of pollutant emissions in the exhaust gases. Since dozens of mining machines operate continuously in deep mines of non-ferrous metal ores, applying the proposed method would significantly reduce the emission of pollutants in the used air coming out of ventilation shafts.
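A greatly simplified sketch of such a bi-criteria objective: score each truck by a weighted sum of trip cost and HC+NOx emissions and assign tasks greedily. The weights, truck names, and figures are hypothetical, and the paper's actual method is a full multi-criteria assignment, not this greedy rule:

```python
def assign_trucks(tasks, trucks, w_cost=1.0, w_emis=0.5):
    """Greedy assignment: for each task pick the truck minimizing the
    bi-criteria objective w_cost*cost + w_emis*emissions (HC+NOx)."""
    assignment = {}
    for task in tasks:
        best = min(trucks, key=lambda t: w_cost * t["cost_per_trip"]
                                         + w_emis * t["hc_nox_per_trip"])
        assignment[task] = best["name"]
    return assignment

trucks = [
    {"name": "older_stage3", "cost_per_trip": 100, "hc_nox_per_trip": 40},
    {"name": "newer_stage5", "cost_per_trip": 105, "hc_nox_per_trip": 10},
]
plan = assign_trucks(["haul_A", "haul_B"], trucks)
```

With a nonzero emissions weight, the slightly costlier but cleaner truck wins both tasks; setting `w_emis=0` recovers the cost-only assignment, which is the trade-off (about 2% cost for 20% emissions) quantified in the study.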
Hanna Tsvietkova, Olena Beskorsa, Liudmyla Pryimenko
The article's primary objective is to provide the first comprehensive retrospective examination of the history of Canadian media education. Based on the theoretical findings of Canadian media educators, the authors substantiate a periodization, identifying its criteria, the main trends, and three principal periods in the establishment and development of Canadian media education in the context of socio-political and socio-pedagogical determinants. The historical foundations for the development of media education in Canada are made clear. According to pedagogical theory and practice, the core of media education is the study of theory and the development of practical skills for mastering contemporary mass media, a body of knowledge seen as a distinct, independent field. The authors conclude that media education is connected to all forms of media, including the information and communication tools each person uses daily: printed media (newspapers, magazines), auditory media (radio, audio), and screen media (movies, television, video, multimedia, the Internet, etc.); they also define the fundamental elements of media education and conclude that it is a subset of media literacy and media culture. The article describes the evolution of media education associations, approaches, and programs. Drawing on the experience of Canadian media theorists and practitioners, the authors argue that positive Canadian experience should be used to address the challenges of implementing media education in Ukraine and thereby improve and humanize that country's educational system.
Page 32 of 6047