Minimizing Control Attention: The Linear Gauss-Markov Paradigm
Ralph Sabbagh, Asmaa Eldesoukey, Mahmoud Abdelgalil
et al.
We revisit the concept of `attention' as a technical term to quantify the effort in calibrating control action based on available data. While Wiener, in his work on Cybernetics, anticipated key principles of prioritizing task-relevant signals, it was not until the late 1990s that Brockett first formulated pertinent optimization problems, which have inspired subsequent work as well as the present one. `Attention,' as a technical term, is defined so as to quantify the dependence of the control law on the time and space/state coordinates; a control law that is independent of time and space, assuming it meets specifications, requires vanishing attention. In the present work we focus on linear Markovian dynamics with Gaussian state uncertainty, so as to analyze the structure of minimal-attention control schemes that steer the dynamics between terminal states with Gaussian uncertainty profiles.
Digital-physical testbed for ship autonomy studies in the Marine Cybernetics Laboratory basin
Emir Cem Gezer, Mael Korentin Ivan Moreau, Anders Sandneseng Høgden
et al.
The algorithms developed for Maritime Autonomous Surface Ships (MASS) are often challenging to test on actual vessels due to high operational costs and safety considerations. Simulations offer a cost-effective alternative and eliminate risks, but they may not accurately represent real-world dynamics for the given tasks. Utilizing small-scale model ships and robotic vessels in conjunction with a laboratory basin provides an accessible testing environment for the early stages of validation processes. However, designing and developing a model vessel for a single test can be costly and cumbersome, and researchers often lack access to such infrastructure. To address these challenges and enable streamlined testing, we have developed an in-house testbed that facilitates the development, testing, verification, and validation of MASS algorithms in a digital-physical laboratory. This infrastructure includes a set of small-scale model vessels, a simulation environment for each vessel, a comprehensive testbed environment, and a digital twin in Unity. With this, we aim to establish a full design and verification pipeline that starts with high-fidelity simulation models of each model vessel, proceeds to model-scale testing in the laboratory basin, and opens possibilities for moving towards semi-full-scale validation with R/V milliAmpere1 and full-scale validation with R/V Gunnerus. In this work, we present our progress on the development of this testbed environment and its components, demonstrating its effectiveness in enabling ship guidance, navigation, and control (GNC), including autonomy.
Position Paper: Towards Open Complex Human-AI Agents Collaboration Systems for Problem Solving and Knowledge Management
Ju Wu, Calvin K. L. Or
We propose a technology-agnostic, collaboration-ready stance for Human-AI Agents Collaboration Systems (HAACS) that closes long-standing gaps in prior stages (automation; flexible autonomy; agentic multi-agent collectives). Reading empirical patterns through a seven-dimension collaboration spine and human-agent contrasts, we identify missing pieces: principled budgeting of initiative; instantaneous and auditable reconfiguration; a system-wide knowledge backbone with an epistemic promotion gate; capacity-aware human interfaces; and, as a prerequisite to all of the above, unified definitions of agent and formal collaborative dynamics. We respond with (i) a boundary-centric ontology of agenthood synthesized with cybernetics; (ii) a Petri net family (colored and interpreted) that models ownership, cross-boundary interaction, concurrency, guards, and rates with collaboration transitions; and (iii) a three-level orchestration (meta, agent, execution) that governs behavior families via guard flips. On the knowledge side, we ground collaborative learning in Conversation Theory and SECI with teach-back gates and an evolving backbone; on the problem-solving side, we coordinate routine MEA-style control with practice-guided open-ended discovery. The result is the Hierarchical Exploration-Exploitation Net (HE2-Net): a policy-controlled stance that splits provisional from validated assets, promotes only after tests and peer checks, and budgets concurrent probing while keeping reuse fast and safe. We show interoperability with emerging agent protocols without ad hoc glue and sketch bio-cybernetic extensions (autopoiesis, autogenesis, evolving boundaries, synergetics, etc.). Altogether, the framework keeps humans central to setting aims, justifying knowledge, and steering theory-practice dynamics, while scaling agents as reliable collaborators within audited governance.
GAN-ViT-CMFD: A novel framework integrating generative adversarial networks and vision transformers for enhanced copy-move forgery detection and classification with spectral clustering
Jyothsna Ravula, Nilu Singh
Copy-move forgery detection (CMFD) is a critical task in digital forensics to ensure the authenticity of visual content, as the prevalence of advanced editing tools has made it increasingly easy to tamper with images. Such forgeries can have severe implications in fields like journalism, legal evidence, and cybersecurity. The motivation for adopting a hybrid Generative Adversarial Network (GAN)-Vision Transformer (ViT) approach arises from the need for robust models capable of handling the complexities of forgery patterns while ensuring high detection accuracy. This study proposes a hybrid framework, GAN-ViT-CMFD, integrating GANs and ViTs to address these challenges. GANs are employed to generate realistic forged images, creating an augmented dataset that enhances model robustness. ViTs extract powerful feature representations, leveraging their ability to capture long-range dependencies and intricate patterns in image data. Spectral clustering is then applied to the feature space to separate forged and original image features, which are subsequently fed into a Convolutional Neural Network (CNN)-based classifier for forgery detection and classification. The proposed model demonstrates superior performance, achieving a training accuracy of 99.62% and a validation accuracy of 99.0%, with training and validation losses of 0.0352 and 0.0269, respectively. Evaluation metrics further affirm its effectiveness, with an accuracy of 99.02%, precision of 97.92%, recall of 99.89%, and F1-score of 98.95%. Additionally, the model achieves an exceptional ROC-AUC score of 99.88%. These outcomes underscore the capability of the GAN-ViT approach for CMFD, highlighting its potential impact in reinforcing the reliability of image authenticity verification across various domains.
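The clustering stage of the pipeline above can be illustrated with a minimal sketch: spectral clustering applied to feature vectors to split them into two groups, as the abstract describes for ViT embeddings of original and forged images. The features below are synthetic stand-ins, not actual ViT outputs, and the hyperparameters are illustrative assumptions rather than the paper's settings.

```python
# Sketch of the clustering stage only: spectral clustering separating two
# groups of feature vectors, standing in for ViT embeddings of original
# vs. forged images. Synthetic data; hyperparameters are assumptions.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Two well-separated synthetic "embedding" clouds (original vs. forged).
originals = rng.normal(loc=0.0, scale=0.3, size=(50, 8))
forged = rng.normal(loc=2.0, scale=0.3, size=(50, 8))
features = np.vstack([originals, forged])

labels = SpectralClustering(
    n_clusters=2, affinity="rbf", random_state=0
).fit_predict(features)

# With clean separation, the first group should land almost entirely in
# one cluster; "purity" measures that agreement.
purity = max(np.mean(labels[:50] == labels[0]),
             np.mean(labels[:50] != labels[0]))
print(f"cluster purity on first group: {purity:.2f}")
```

In the full framework the cluster assignments would feed the CNN classifier; here they simply show the separation step in isolation.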
Cybernetics, Electronic computers. Computer science
Thermal and mechanical degradation mechanisms in heterostructural field-effect transistors based on gallium nitride
Vadim M. Minnebaev
Objectives. Gallium nitride heterostructural field-effect transistors (GaN HFETs) are among the most promising semiconductor devices for power and microwave electronics. Over the past 10–15 years, GaN HFETs have firmly established their position in radio-electronic equipment for transmitting, receiving, and processing information, as well as in power electronics products, due to their significant advantages in terms of energy and thermal parameters. At the same time, issues associated with ensuring their reliability are no less acute than for devices based on other semiconductor materials. The aim of the study is to review the thermal and mechanical mechanisms of degradation in GaN HFETs arising from the physicochemical characteristics of the materials used, as well as their corresponding growth and post-growth processes. Methods for preventing or reducing these mechanisms during development, production, and operation are evaluated.
Methods. The main research method is an analytical review of published results from a wide range of specialists in the fields of semiconductor physics, production technology of heteroepitaxial structures and active devices based on them, and the modeling and design of modules and equipment with regard to their reliable operation.
Results. In addition to describing the problems of GaN HFET quality degradation caused by thermal overheating and mechanical degradation, as well as problems with hot electrons and phonons in gallium nitride, the article provides an overview of research into these phenomena and of methods for reducing their impact on transistor technical parameters and quality indicators.
Conclusions. The results of the study show that strong electric fields and high specific thermal loading of high-power GaN HFETs can cause physical, polarization, piezoelectric, and thermal phenomena that lead to redistribution of mechanical stresses in the active region, degradation of electrical characteristics, and a decrease in the reliability of the transistor as a whole. It is shown that the presence of a field plate and a passivating SiN layer reduces mechanical stress in the gate area by a factor of 1.3–1.5. The effects of thermal degradation in class AB amplifiers are more pronounced than the effects of strong fields in class E amplifiers; moreover, the mean time to failure drops sharply at GaN HFET active zone temperatures above 320–350°C.
Packing Soft Polygons in a Minimum Height Rectangular Target Domain
Oksana Melashenko, Tetyana Romanova, Oleksandr Pankratov
et al.
The paper studies packing polygons of variable shape, controlled by a stretching coefficient, in a rectangular target domain of minimum height. Packing objects of variable shape has a wide spectrum of applications, e.g., in biology, materials science, mechanics, land allocation, and logistics. Interest in these problems is also driven by the modeling of the structures of porous media under pressure, e.g., for creating test models of artificial digital cores. Elements of porous media can be deformed under the influence of an external force, but the mass of each particle remains unchanged, which corresponds to conservation of area in the two-dimensional case. Polygonal objects must be completely contained within the target domain (containment constraint) and must not overlap (non-overlapping constraint), while allowing free translations, continuous rotations, and stretch transformations, and conserving their area. The phi-function technique is used for an analytical description of the placement constraints for variable-shape polygons. Quasi-phi-functions are defined for describing the non-overlapping constraints, and phi-functions for describing the containment constraints. The packing problem is formulated as a nonlinear programming model. A solution strategy is proposed, consisting of the following stages: generation of feasible starting points; search for local minima of the soft-polygon packing problem from each starting point using a decomposition algorithm; and selection of the best local minimum found at the previous stage. To find smart starting arrangements, an optimization algorithm for packing the original polygons under homothetic transformations is applied. The decomposition of the variable-shape polygon packing problem is based on an iterative procedure that reduces a large-scale problem to a sequence of smaller nonlinear programming problems (linear in the number of objects).
Numerical examples are provided for oriented rectangles and non-oriented regular polygons.
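The phi-function technique underlying the placement constraints can be illustrated in its simplest textbook form with two circles: the phi-function is nonnegative exactly when the objects do not overlap. The polygon phi- and quasi-phi-functions in the paper are more elaborate, but follow the same sign convention; this toy case is not the paper's variable-shape formulation.

```python
# Minimal illustration of the phi-function idea: for two circles, a
# standard phi-function is the centre distance minus the sum of radii.
# phi >= 0 exactly when the circle interiors do not overlap.
import math

def phi_circles(c1, r1, c2, r2):
    """Phi-function for two circles: >= 0 iff interiors do not overlap."""
    dist = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
    return dist - (r1 + r2)

print(phi_circles((0, 0), 1.0, (3, 0), 1.0))  # 1.0 -> separated
print(phi_circles((0, 0), 1.0, (1, 0), 1.0))  # -1.0 -> overlapping
```

In a nonlinear programming model, constraints of the form phi >= 0 over all object pairs encode the non-overlapping requirement analytically, which is what makes local optimization over placements tractable.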
Analysis of the structural reliability of communication networks supporting protective switching mechanisms for one protected section and one backup section
K. A. Batenkov, A. B. Fokin
Objectives. The service level agreement is an important tool for building reasonable relations between subscribers and operators of telecommunication networks, including the quality of services provided. One key component is reliability, as assessed by the availability factor. The most suitable model for assessing the reliability of the service provided is a random graph model based on the service contour, i.e., the set of technical resources involved in the provision of the service. In this formulation, the assessment of service reliability is based on the reliability of the elements which constitute the telecommunications network (graph): nodes (vertices) and communication lines (edges). At the same time, the availability factors of nodes and lines are determined by the design features of the distribution environment, as well as the technical means used to organize them. The purpose of this work is to develop an approach to analyzing the reliability of telecommunication networks which support protective switching mechanisms for one protected section and one backup section.
Methods. The following methods are used: the theories of random graphs, matrices, and probabilities, as well as computer modeling.
Results. The elements of the route, both basic and reserving, are divided into three groups. The first indicates permanent, unchangeable parts of the paths, the second identifies the reserved sections, and the third indicates the reserving sections. Each of the reserved and reserving sections is formed on the basis of specified preferences, usually aimed at increasing the resulting reliability, although other rules may be used. For protective switching schemes with one protected section and one backup section, a variant of forming the routes used for further calculation of the reliability indicator is shown.
Conclusions. Using the example of a backbone network, the study shows that protective switching mechanisms for the case of one required transmission route yield a significant increase in reliability, with the exception of protective switching on individual sections. This is primarily due to the topology features of the network under consideration.
The Past and Future of High Technology
This interview was given in 2008 by Arkady Zakrevsky (1928–2014), Corresponding Member of the National Academy of Sciences of Belarus (1972), Doctor of Technical Sciences (1967), and Professor (1969). He stood at the origins of cybernetics in the Soviet Union. He proposed LYaPAS, a programming language for logical problems, on the basis of which a series of computer-aided design systems for discrete devices was created, as well as methods for implementing parallel algorithms for the logical control of interacting processes. Some monographs: LYaPAS: A Programming Language for Logic and Coding Algorithms (N.-Y., L.: Academic Press, 1969; with M. A. Gavrilov); Boolesche Gleichungen: Theorie, Anwendung, Algorithmen (Berlin: VEB Verlag Technik, 1984; with Dieter Bochmann and Christian Posthoff); Combinatorial Algorithms of Discrete Mathematics (Tallinn: TUT Press, 2008; with Yu. Pottosin, L. Cheremisinova); Optimization in Boolean Space (Tallinn: TUT Press, 2009; with Yu. Pottosin, L. Cheremisinova); Design of Logical Control Devices (Tallinn: TUT Press, 2009; with Yu. Pottosin, L. Cheremisinova); Combinatorial Calculations in Many-Dimensional Boolean Space (Tallinn: TUT Press, 2012); Solving Large Systems of Logical Equations (Tallinn: TUT Press, 2013).
Compensation of dry friction impact on the accuracy and stability of electromechanical systems with cable-block transmissions
Sergey A. Gayvoronskiy
The paper considers electromechanical systems with cable-block transmissions whose blocks have small mass and can be treated as inertialess. In such transmissions, as a rule, one block is connected to the electric drive and serves as a drive block for transmitting forces, while the remaining movable and stationary transmitting blocks are passive. In this case, dry friction moments with previously unknown values act in all blocks. The author analyzes the effect of dry friction in blocks of both types on the accuracy and stability of electromechanical systems. As a result of the mathematical description, under the accepted assumptions, it is established that the effect of dry friction in the drive block is equivalent to the action of a relay β-link with a leading hysteresis characteristic, while the effect of dry friction in passive inertialess blocks corresponds to the effect of backlashes with lagging hysteresis characteristics. A conclusion is drawn about the negative effect of dry friction in blocks of both types on the accuracy of electromechanical systems, and about the deterioration of their stability as a result of dry friction in passive blocks. The author developed control algorithms that reduce this negative effect of dry friction on the dynamics of electromechanical systems. The first, adaptive algorithm accelerates the start of motion of an electromechanical system from a state of rest under an uncertain moment of dry friction, as well as when the direction of rotation of the blocks changes, by rapidly overcoming the total moment of dry friction. The second, robust algorithm implements a pseudo-linear law of correction of the phase-frequency characteristic of the system and creates in it a constant, frequency-independent phase advance of the required value.
Cybernetics, Computer software
The Future of Artificial Intelligence (AI) and Machine Learning (ML) in Landscape Design: A Case Study in Coastal Virginia, USA
Zihao Zhang, Ben Bowes
There have been theory-based endeavours that directly engage with AI and ML in the landscape discipline. By presenting a case that uses machine learning techniques to predict variables in a coastal environment, this paper provides empirical evidence of the forthcoming cybernetic environment, in which designers are conceptualized not as authors but as choreographers, catalyst agents, and conductors among many other intelligent agents. Drawing ideas from posthumanism, this paper argues that, to truly understand the cybernetic environment, we have to take on posthumanist ethics and overcome human exceptionalism.
Networks of Classical Conditioning Gates and Their Learning
Shun-ichi Azuma, Dai Takakura, Ryo Ariizumi
et al.
Chemical AI is chemically synthesized artificial intelligence that has the ability of learning in addition to information processing. A research project on chemical AI, called the Molecular Cybernetics Project, was launched in Japan in 2021 with the goal of creating a molecular machine that can learn a type of conditioned reflex through the process called classical conditioning. If the project succeeds in developing such a molecular machine, the next step would be to configure a network of such machines to realize more complex functions. With this motivation, this paper develops a method for learning a desired function in the network of nodes each of which can implement classical conditioning. First, we present a model of classical conditioning, which is called here a classical conditioning gate. We then propose a learning algorithm for the network of classical conditioning gates.
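The abstract does not reproduce the formal definition of a classical conditioning gate, so the following is only a toy sketch, assuming a standard Rescorla-Wagner-style update as the conditioning model: pairing a conditioned stimulus (CS) with an unconditioned stimulus (US) strengthens the CS-response association until the CS alone triggers the response. The class name, learning rate, and threshold are all illustrative assumptions.

```python
# Toy classical-conditioning model (Rescorla-Wagner-style update), used
# here only to illustrate the behavior a "classical conditioning gate"
# is meant to capture; not the paper's formal gate definition.
class ConditioningGate:
    def __init__(self, rate=0.3, threshold=0.5):
        self.w = 0.0            # association strength of the CS
        self.rate = rate        # learning rate
        self.threshold = threshold

    def step(self, cs, us):
        """Present stimuli for one trial; return whether a response fires."""
        if cs:  # move w toward the observed US outcome (1 or 0)
            self.w += self.rate * ((1.0 if us else 0.0) - self.w)
        return us or (cs and self.w > self.threshold)

gate = ConditioningGate()
before = gate.step(cs=True, us=False)   # untrained: CS alone -> no response
for _ in range(10):                     # training: pair CS with US
    gate.step(cs=True, us=True)
after = gate.step(cs=True, us=False)    # trained: CS alone -> response
print(before, after)                    # False True
```

A network of such units, as the paper proposes, would wire one gate's response into another gate's stimulus inputs; the learning problem is then choosing which pairings to present so the network realizes a desired overall function.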
Life-inspired Interoceptive Artificial Intelligence for Autonomous and Adaptive Agents
Sungwoo Lee, Younghyun Oh, Hyunhoe An
et al.
Building autonomous -- i.e., choosing goals based on one's needs -- and adaptive -- i.e., surviving in ever-changing environments -- agents has been a holy grail of artificial intelligence (AI). A living organism is a prime example of such an agent, offering important lessons about adaptive autonomy. Here, we focus on interoception, a process of monitoring one's internal environment to keep it within certain bounds, which underwrites the survival of an organism. To develop AI with interoception, we need to factorize the state variables representing internal environments from external environments and adopt life-inspired mathematical properties of internal environment states. This paper offers a new perspective on how interoception can help build autonomous and adaptive agents by integrating the legacy of cybernetics with recent advances in theories of life, reinforcement learning, and neuroscience.
An Experience Mapping Method for Delayed Understanding in STEM Education
Masaaki Kunigami, Takamasa Kikuchi, Takao Terano
This study introduces a novel experience-mapping methodology designed to alleviate the challenge of delayed comprehension in education. Education often entails a delayed understanding of its content and value, and this comprehension lag often results in discrepancies between learners and educational content, potentially leading to setbacks in the learning process. In response, we present a mapping model that delineates the essential structure of educational content and the positioning of the learner relative to that content. This model serves as a guiding roadmap, enabling learners to navigate the complexities of educational content through a pair of constructed semantic networks. These networks reflect insights from recent brain science and educational experience studies. This study then examines the application of the method to STEM (Science, Technology, Engineering, Mathematics) education. Furthermore, we discuss the potential of experience mapping within the spheres of curriculum design and faculty development. Through these applications, this research contributes to the development of educational practice.
Information technology, Communication. Mass media
Optimal by the Order of Accuracy Cubature Formula for the Approximate Calculation of Triple Integrals from Fast Oscillating Functions in General View
Olesia Nechuiviter, Serhii Ivanov
Introduction. The rapid development of digital technologies encourages scientists to create new, or improve existing, mathematical models of technical processes, and to develop mathematical models that work with different types of data. In tasks of digital signal and image processing, the approximate calculation of integrals of rapidly oscillating functions using new information operators makes it possible to build cubature formulas using different types of information: the values of functions on planes, lines, and points can serve as data.
The purpose is to present and investigate an optimal cubature formula for the approximate calculation of the triple integral of rapidly oscillating functions of the general form on a class of differentiable functions. Information about the functions is given by their traces on systems of mutually perpendicular planes.
Results. The study of digital signal and image processing problems is continued using the example of the numerical integration of triple integrals of rapidly oscillating functions of the general form. The values of functions on systems of mutually perpendicular planes are used to construct the cubature formula. The research focuses mainly on obtaining error estimates. The proposed cubature formula for the approximate calculation of the triple integral of rapidly oscillating functions of the general form is optimal in order of accuracy on the class of differentiable functions. A numerical experiment confirmed the theoretical results.
Conclusions. The obtained results make it possible to build new and improve existing mathematical models of processes with different types of input information. New information operators are a powerful tool in the development of such models. Cubature formulas for the approximate calculation of integrals of rapidly oscillating functions of many variables have been created; their construction uses traces of the function on planes, lines, and points. In this work, a cubature formula for the approximate calculation of the triple integral of a rapidly oscillating function of the general form, optimal in order of accuracy, is constructed and investigated on the class of differentiable functions. A feature of the proposed formula is the use of the values of functions on systems of mutually perpendicular planes as data.
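Why rapidly oscillating integrands call for specialized formulas can be seen in a generic one-dimensional illustration (not the paper's triple-integral construction): a midpoint rule whose step ignores the oscillation frequency badly misses the integral of sin(ωx), while a rule that resolves the oscillations recovers it. The frequency and grid sizes below are arbitrary choices for the demonstration.

```python
# Generic 1D illustration of the difficulty of oscillatory integration:
# integrate sin(omega * x) on [0, 1] with the composite midpoint rule.
import math

def midpoint(f, a, b, n):
    """Composite midpoint rule with n subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

omega = 200.0                                 # ~32 oscillations on [0, 1]
f = lambda x: math.sin(omega * x)
exact = (1.0 - math.cos(omega)) / omega       # from the antiderivative

coarse = midpoint(f, 0.0, 1.0, 10)            # far too few points per period
fine = midpoint(f, 0.0, 1.0, 20000)           # resolves the oscillations
print(abs(coarse - exact), abs(fine - exact))
```

Specialized formulas of the kind the abstract describes aim to achieve high accuracy without the brute-force refinement the second call needs, by exploiting the known oscillatory factor and the available function traces.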
Field-Controlled Microrobots Fabricated by Photopolymerization
Xiyue Liang, Zhuo Chen, Yan Deng
et al.
Field-controlled microrobots have attracted extensive research interest in the biological and medical fields due to their prominent characteristics, including high flexibility, small size, strong controllability, remote manipulation, and minimal damage to living organisms. However, the fabrication of field-controlled microrobots with complex, high-precision 2- or 3-dimensional structures remains challenging. Photopolymerization technology is often chosen to fabricate field-controlled microrobots due to its fast printing speed, high accuracy, and high surface quality. This review categorizes the photopolymerization technologies utilized in the fabrication of field-controlled microrobots into stereolithography, digital light processing, and 2-photon polymerization. Furthermore, the photopolymerized microrobots actuated by different field forces and their functions are introduced. Finally, we discuss the future development and potential applications of photopolymerization for the fabrication of field-controlled microrobots.
HYBRID APPROACH FOR DATA FILTERING AND MACHINE LEARNING INSIDE CONTENT MANAGEMENT SYSTEM
Oleh Poliarush, Svitlana Krepych, Iryna Spivak
The object of research is the processes of data filtering and machine learning in content management systems. The subject of research is the development of a hybrid approach to data filtering based on a combination of supervised and unsupervised machine learning. The article explores machine learning approaches to content management and how they can change the way we organize, categorize, and derive value from vast amounts of data. The main goal is to develop and use a hybrid approach to data filtering and training that helps optimize resource consumption and performs supervised training for better categorization in the future. This approach combines elements of supervised and unsupervised learning based on the BERT architecture, in a flow that helps reduce resource usage and adapts the algorithm to perform better in a specific domain. As a result, thanks to this approach, the intelligent system was able to optimize itself for a specific field of use and reduce resource costs. Conclusion. After applying the hybrid approach of data filtering and machine learning to existing data streams, we obtain a performance increase of up to 5%, and this percentage grows with the running time of the application.
Computer software, Information theory
Fine-Grained Population Mobility Data-Based Community-Level COVID-19 Prediction Model
Pengyue Jia, Ling Chen, Dandan Lyu
Predicting the number of infections during an epidemic is extremely beneficial to the government in developing anti-epidemic strategies, especially at fine-grained geographic units. Previous works focus on low-spatial-resolution prediction, e.g., county-level, and preprocess data to the same geographic level, which loses some useful information. In this paper, we propose a fine-grained population mobility data-based model (FGC-COVID) utilizing data at two geographic levels for community-level COVID-19 prediction. We use the population mobility data between Census Block Groups (CBGs), a finer-grained geographic level than the community, to build a graph and capture the dependencies between CBGs using graph neural networks (GNNs). To mine patterns at as fine a granularity as possible for prediction, a spatial weighted aggregation module is introduced to aggregate the embeddings of CBGs to the community level based on their geographic affiliation and spatial autocorrelation. Extensive experiments on 300 days of Los Angeles COVID-19 data indicate that our model outperforms existing forecasting models on community-level COVID-19 prediction.
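The spatial weighted aggregation step can be sketched as a weighted sum of CBG-level embeddings into community-level embeddings. The normalized affiliation matrix below is an illustrative assumption standing in for the paper's weights derived from geographic affiliation and spatial autocorrelation.

```python
# Sketch of weighted aggregation from CBG-level embeddings to
# community-level embeddings. The weight matrix is illustrative, not the
# paper's exact scheme.
import numpy as np

cbg_embeddings = np.array([
    [1.0, 0.0],    # CBG 0
    [0.0, 1.0],    # CBG 1
    [2.0, 2.0],    # CBG 2
])
# affiliation[i, j] = weight of CBG j within community i; each row is
# normalized so a community's weights sum to 1.
affiliation = np.array([
    [0.5, 0.5, 0.0],    # community A: CBGs 0 and 1
    [0.0, 0.25, 0.75],  # community B: CBGs 1 and 2
])
community_embeddings = affiliation @ cbg_embeddings
print(community_embeddings)
```

In the full model the CBG embeddings would come from the GNN over the mobility graph, and the community embeddings would feed the prediction head.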
Proceedings of the Fourth International Conference on Applied Category Theory
Kohei Kishida
The Fourth International Conference on Applied Category Theory took place at the Computer Laboratory of the University of Cambridge on 12--16 July 2021. It was a hybrid event, with physical attendees present in Cambridge and other participants taking part online. All the talks were recorded and the videos have been posted online, links to which can be found on the conference website (https://www.cl.cam.ac.uk/events/act2021/). Continuing the trend in the previous meetings of ACT, the contributions to ACT 2021 ranged from pure to applied and represented a great variety of categorical techniques and application topics, including: graphical calculi; lenses; differential categories; categorical probability theory; machine learning; game theory; cybernetics; natural language semantics and processing; cryptography; and finite model theory. This proceedings volume contains about half of the papers that were presented as talks at ACT 2021. This selection is a reflection of the authors' choice as to whether to publish their papers in this volume or elsewhere.
Making decisions in the dark basement of the brain: A look back at the GPR model of action selection and the basal ganglia
Mark D. Humphries, Kevin Gurney
How does your brain decide what you will do next? Over the past few decades compelling evidence has emerged that the basal ganglia, a collection of nuclei in the fore- and mid-brain of all vertebrates, are vital to action selection. Gurney, Prescott, and Redgrave published an influential computational account of this idea in Biological Cybernetics in 2001. Here we take a look back at this pair of papers, outlining the "GPR" model contained therein, the context of that model's development, and the influence it has had over the past twenty years. Tracing its lineage into models and theories still emerging now, we are encouraged that the GPR model is that rare thing, a computational model of a brain circuit whose advances were directly built on by others.
Whiteness in and through data protection: an intersectional approach to anti-violence apps and #MeToo bots
Renee Shelby, Jenna Harb, Kathryn Henne
This article analyses apps and artificial intelligence chatbots designed to offer survivors of sexual violence emergency assistance, education, and a means to report and build evidence against perpetrators. Demonstrating how these technologies both confront and constitute forms of oppression, this analysis complicates assumptions about data protection through an intersectional feminist examination of these digital tools. In surveying different anti-violence apps, we interrogate how the racial formation of whiteness manifests in ways that can be understood as the political, representational, and structural intersectional dimensions of data protection.
Cybernetics, Information theory