André Artelt, Stelios G. Vrachimis, Demetrios G. Eliades
et al.
The increasing penetration of information and communication technologies in the design, monitoring, and control of water systems enables the use of algorithms for detecting and identifying unanticipated events (such as leakages or water contamination) using sensor measurements. However, data-driven methodologies do not always give accurate results and are often not trusted by operators, who may prefer to use their engineering judgment and experience to deal with such events. In this work, we propose a framework for interpretable event diagnosis — an approach that assists the operators in associating the results of algorithmic event diagnosis methodologies with their own intuition and experience. This is achieved by providing contrasting (i.e., counterfactual) explanations of the results provided by fault diagnosis algorithms; their aim is to improve the operators' understanding of the algorithm's inner workings, thus enabling them to make a more informed decision by combining the results with their personal experience. Specifically, we propose counterfactual event fingerprints, a representation of the difference between the current event diagnosis and the closest alternative explanation, which can be presented in a graphical way. The proposed methodology is applied and evaluated on a realistic use case using the L-Town benchmark.
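The idea of a counterfactual fingerprint can be illustrated with a minimal sketch: given a diagnosis model, search for the closest sensor reading that would have produced the alternative diagnosis, and report the difference. The threshold model, feature values, and step size below are all hypothetical stand-ins, not the paper's actual diagnosis algorithm.

```python
import numpy as np

# Hypothetical stand-in for a fault-diagnosis model: flags a "leak" when a
# weighted combination of two pressure-drop features crosses a threshold.
def score(x):
    return 0.7 * x[0] + 0.3 * x[1]

def diagnose(x):
    return "leak" if score(x) > 1.0 else "normal"

def counterfactual(x, step=0.02, max_iter=1000):
    """Greedy search: nudge the reading one small step at a time until the
    diagnosis flips, yielding the closest alternative explanation."""
    x = np.asarray(x, dtype=float)
    orig = diagnose(x)
    z = x.copy()
    directions = [np.array(d, dtype=float)
                  for d in ([1, 0], [-1, 0], [0, 1], [0, -1])]
    for _ in range(max_iter):
        if diagnose(z) != orig:
            return z
        # move in the direction that changes the decision score fastest
        z = min((z + step * d for d in directions),
                key=lambda m: score(m) if orig == "leak" else -score(m))
    return None

x0 = np.array([1.2, 0.9])      # currently diagnosed as "leak"
cf = counterfactual(x0)        # closest reading diagnosed as "normal"
fingerprint = cf - x0          # the counterfactual "fingerprint": what would
                               # have to change for the alternative diagnosis
```

The fingerprint (here a small decrease in the first feature) is exactly the kind of contrast an operator can compare against their own intuition about the network.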
This paper investigates the strategic behavior of validators in blockchain systems utilizing the Proof-of-Stake (PoS) consensus mechanism through the application of game theory. A mathematical model of a non-cooperative game with complete information is proposed, where validators act as rational agents aiming to maximize their expected payoff by choosing between honest validation and malicious actions, specifically a double-spending attack. The model incorporates key economic parameters of the system: block and attestation rewards, transaction fees, operational costs, slashing penalties, and the probability of detecting protocol violations. Utility functions for two primary strategies – honest and attacking – are formalized, and conditions for the existence of Nash equilibrium, the central solution concept in game theory, are analyzed.
The analysis demonstrates that under effective punishment mechanisms, the "all-honest" equilibrium is stable: an individual validator has no incentive to deviate from protocol-compliant behavior, as potential losses from penalties significantly outweigh any gains from a failed attack. Conversely, the "all-attackers" equilibrium, while theoretically possible, is practically unattainable due to the prohibitively high cost of acquiring a majority stake, rendering such a strategy economically infeasible. A quantitative example based on a hypothetical network of 1000 validators confirms these findings and highlights the critical importance of balancing incentives for honest behavior with strong disincentives for malicious actions.
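The equilibrium argument reduces to comparing two expected utilities: honest validation earns rewards and fees minus operational cost, while an attack earns the double-spend proceeds only if undetected and forfeits the stake otherwise. The sketch below uses purely illustrative parameter values (not drawn from any real network) to show the comparison.

```python
# Illustrative, hypothetical parameters of a PoS validator's decision.
R_block  = 2.0    # block/attestation reward per epoch
fees     = 0.5    # transaction fees earned when validating honestly
cost     = 0.3    # operational cost per epoch
stake    = 32.0   # validator's stake, slashed if a violation is detected
gain     = 10.0   # proceeds of a successful double-spend
p_detect = 0.9    # probability the protocol violation is detected

def payoff_honest():
    return R_block + fees - cost

def payoff_attack():
    # Expected utility: an undetected attack keeps the gain,
    # detection forfeits the stake (slashing penalty).
    return (1 - p_detect) * gain - p_detect * stake - cost

# Honest behaviour is a best response whenever
# payoff_honest() > payoff_attack(); here 2.2 vs. about -28.1.
honest, attack = payoff_honest(), payoff_attack()
```

With any reasonable detection probability and slashing penalty, the inequality holds by a wide margin, which is what makes the "all-honest" profile a stable Nash equilibrium.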
The study emphasizes the crucial role of economic security in PoS systems, where stability is ensured not only by technical safeguards but also by carefully designed economic mechanisms. The developed model can be used by blockchain protocol designers to calibrate consensus parameters, thereby promoting decentralization, resilience, and long-term network reliability. Future research can extend the model by incorporating heterogeneous validators, repeated games, and the analysis of other attack vectors.
Bipartite networks provide a major insight into the organisation of many real-world systems. One of the most relevant issues encountered when modelling a bipartite network is that of facing the information shortage concerning intra-layer linkages. In the present contribution, we propose an unsupervised algorithm to obtain statistically validated projections of bipartite signed networks, according to which any two nodes sharing a statistically significant number of concordant (discordant) relationships are connected by a positive (negative) edge. Our algorithm outputs a matrix of link-specific p values, from which a validated projection can be obtained upon running a multiple-hypothesis testing procedure. After testing our method on synthetic configurations output by a fully controllable generative model, we apply it to several real-world configurations: in all cases, we detect non-trivial mesoscopic structures induced by relationships that cannot be traced back to the constraints defining the employed benchmarks, hence revealing genuine traces of self-organisation.
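The core step can be sketched as follows: for two nodes of the same layer, count how many shared neighbours they agree on in sign, and compute a one-sided p value for that count. The sketch below uses a simplified i.i.d. coin-flip null in place of the paper's entropy-based benchmark, so it illustrates the validation logic rather than the actual null model.

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of at least k concordant
    sign pairs under a coin-flip null (a simplification of the benchmark)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def concordance_pvalue(signs_u, signs_v):
    """signs_u, signs_v: dicts mapping shared-layer nodes to +1/-1 links.
    Returns (#concordant pairs, #shared neighbours, one-sided p value)."""
    shared = set(signs_u) & set(signs_v)
    n = len(shared)
    k = sum(1 for w in shared if signs_u[w] == signs_v[w])
    return k, n, binom_sf(k, n)

# Toy example: two users rating six shared items with +1/-1.
u = {1: +1, 2: +1, 3: -1, 4: +1, 5: -1, 6: +1}
v = {1: +1, 2: +1, 3: -1, 4: +1, 5: -1, 6: -1}
k, n, p = concordance_pvalue(u, v)   # 5 of 6 concordant, p = 7/64
```

In the full method these p values are collected into a matrix and a positive edge is drawn only for pairs surviving a multiple-hypothesis correction.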
Information theory, Electronic computers. Computer science
Shahabodin Vahidi Mehrajardi, Mohammad Meftah, Amir Hossein Meftah
SUBJECT & OBJECTIVES: Throughout history, the concept of human identity has been a challenging topic in philosophy, with the responses of philosophers influencing various branches of science. In contemporary Western philosophy, there has been a shift towards viewing humans solely as physical beings. However, Islamic philosophy takes a different approach, rooted in rationality and the teachings of Islam. The Misbah Yazdi is a prominent Muslim philosopher who greatly contributed to the field and shed light on many philosophical issues. He derived the concept of human identity from Islamic philosophy, enriching it with Islamic teachings and offering a solution to the puzzle of human identity. He defined human beings based on their soul, rather than simply considering humans as a combination of body and soul.METHOD & FINDING: This research applies a qualitative approach with a critical analysis method. The primary finding of this article is to elucidate the key differences between Islamic and Western philosophy, focusing on Misbah Yazdi's perspective.CONCLUSION: The fundamental disparity between Islamic and Western philosophy concerning human identity lies in the concept of the soul. According to Islamic philosophy, the soul defines an individual's uniqueness, providing them with a clear and stable personality and identity. In contrast, Western philosophy leaves the identity of human beings shrouded in deep ambiguity. Misbah Yazdi presents a distinctive approach to this topic, offering insights and solutions that warrant further exploration.
Kana Banno, Filipe Marcel Fernandes Gonçalves, Clara Sauphar
et al.
During the production of salmonids in aquaculture, it is common to observe growth-stunted individuals. The cause of the so-called “loser fish syndrome” is unclear and needs further investigation. Here, we present and compare computer vision systems for the automatic detection and classification of loser fish in Atlantic salmon images taken in sea cages. We evaluated two end-to-end approaches (combined detection and classification) based on YoloV5 and YoloV7, and a two-stage approach based on transfer learning for detection and an ensemble of classifiers (e.g., linear perceptron, Adaline, C-support vector machine, K-nearest neighbours, and multi-layer perceptron) for classification. To our knowledge, an ensemble of consolidated classifiers from the literature has not been applied to this problem before. Classification entailed assigning every fish to either a healthy or a loser class. The results of the automatic classification were compared to the reliability of human classification. The best-performing computer vision approach was based on YoloV7, which reached a precision score of 86.30%, a recall score of 71.75%, and an F1 score of 78.35%. YoloV5 presented a precision of 79.7%, while the two-stage approach reached a precision of 66.05%. Human classification had a substantial agreement strength (Fleiss’ Kappa score of 0.68), highlighting that evaluation by a human is subjective. Our proposed automatic detection and classification system will enable farmers and researchers to follow the abundance of losers throughout the production period. We provide our dataset of annotated salmon images for further research.
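The ensemble stage combines the votes of several already-fitted classifiers into a single healthy/loser decision. A minimal majority-vote sketch is below; the threshold rules stand in for the fitted perceptron, SVM, and k-NN models, and the feature values are invented for illustration.

```python
from collections import Counter

class MajorityVoteEnsemble:
    """Minimal majority-vote ensemble over already-fitted classifiers,
    each exposing a predict(x) method that returns a class label."""
    def __init__(self, classifiers):
        self.classifiers = classifiers

    def predict(self, x):
        votes = Counter(clf.predict(x) for clf in self.classifiers)
        return votes.most_common(1)[0][0]

class ThresholdRule:
    """Hypothetical stand-in for a fitted classifier: thresholds a single
    morphometric feature (e.g. relative body depth) of the detected fish."""
    def __init__(self, idx, thr):
        self.idx, self.thr = idx, thr

    def predict(self, x):
        return "loser" if x[self.idx] < self.thr else "healthy"

ensemble = MajorityVoteEnsemble([ThresholdRule(0, 0.4),
                                 ThresholdRule(1, 0.5),
                                 ThresholdRule(2, 0.6)])
label = ensemble.predict([0.3, 0.7, 0.5])   # two of three rules vote "loser"
```

A real pipeline would feed the crops produced by the detection stage through each trained classifier before voting; the combination rule itself stays this simple.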
Real-time Hand Gesture Recognition (HGR) has emerged as a vital technology in human-computer interaction, offering intuitive and natural ways for users to interact with computer-vision systems. This comprehensive review explores the advancements, challenges, and future directions in real-time HGR. Various HGR-related technologies have also been investigated, including sensors and vision technologies, which are utilized as a preliminary step in acquiring data in HGR systems. This paper discusses different recognition approaches, from traditional handcrafted feature methods to state-of-the-art deep learning techniques. Learning paradigms such as supervised, unsupervised, transfer, and adaptive learning have been analyzed in the context of HGR. A wide range of applications has been covered, from sign language recognition to healthcare and security systems. Despite significant developments in the computer vision domain, challenges remain in areas such as environmental robustness, gesture complexity, computational efficiency, and user adaptability. Lastly, this paper concludes by highlighting potential solutions and future research directions aimed at developing more robust, efficient, and user-friendly real-time HGR systems.
Lev Raskin, Larysa Sukhomlyn, Dmytro Sokolov
et al.
This work considers a technology for assessing the information value of a system's controlled parameters in the task of identifying its state. The purpose of the study is to improve the standard methodology for assessing the information value of controlled parameters. The proposed method is based on analysing the probabilities that the values of the controlled parameters fall into subintervals of the interval of possible values for different states of the system. When the value of the controlled parameter falls into the left or right boundary subinterval of the compatibility interval for any state of the object, the conclusion about its state is made taking into account the possible errors of the first or second kind. When the controlled parameter value falls into the central subinterval, useful information appears only if the corresponding probabilities for the states H1 and H2 differ significantly. Thus, it is shown that taking into account the probabilities of fuzzy values of the controlled parameter falling into the compatibility interval for various states of the object significantly increases its informational value.
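The point about the central subinterval can be made concrete with a small numeric sketch: measure the information carried by an observation as the absolute log-likelihood ratio between the two states. The subinterval probabilities below are invented for illustration only.

```python
from math import log2

# Illustrative probabilities that the controlled parameter falls into the
# left, central, and right subintervals under each hypothesised state.
p_H1 = {"left": 0.70, "central": 0.25, "right": 0.05}
p_H2 = {"left": 0.10, "central": 0.30, "right": 0.60}

def info_value(sub):
    """Information (in bits) carried by observing the parameter in `sub`,
    taken as the absolute log-likelihood ratio between H1 and H2."""
    return abs(log2(p_H1[sub] / p_H2[sub]))

# The boundary subintervals discriminate strongly between H1 and H2
# (about 2.8 and 3.6 bits); the central one, where the probabilities
# are close, carries little information (about 0.26 bits).
values = {sub: info_value(sub) for sub in p_H1}
```

This matches the abstract's observation: an observation in the central subinterval is informative only when the state-conditional probabilities differ significantly.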
To enhance the efficacy of intermittent hypoxia training in sports, this study presents an intelligent training model that utilizes a graph neural network. The model incorporates the particle filter method to establish a real-time processing system for physiological signals generated during intermittent hypoxia training, enabling frequency tracking and network sorting. Additionally, an ARMA model is utilized to facilitate real-time carrier frequency estimation and time-hopping detection of physiological signals. An enhanced frequency tracking method is proposed based on the Graph Neural Network (GNN) and ARMA model to improve the accuracy of frequency tracking while minimizing algorithm complexity. The experimental results indicate that the fusion of the GNN and the proposed intermittent hypoxia training model can effectively enhance the effects of intermittent hypoxia training in sports.
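As a much-simplified stand-in for the ARMA-based carrier-frequency estimation, a single sinusoidal physiological signal satisfies the AR(2) recursion x[n] = 2·cos(w)·x[n-1] − x[n-2], so its frequency can be recovered by least squares. The sampling rate and signal below are hypothetical.

```python
import math

def ar2_frequency(x, dt=1.0):
    """Estimate the dominant frequency of a (noise-free) sinusoid from the
    AR(2) identity x[n] + x[n-2] = 2*cos(w)*x[n-1], via least squares."""
    num = sum(x[n - 1] * (x[n] + x[n - 2]) for n in range(2, len(x)))
    den = sum(2 * x[n - 1] ** 2 for n in range(2, len(x)))
    w = math.acos(num / den)       # radians per sample
    return w / (2 * math.pi * dt)  # cycles per sample (Hz if dt is seconds)

# Synthetic signal at 0.05 cycles per sample.
samples = [math.sin(2 * math.pi * 0.05 * n) for n in range(200)]
freq = ar2_frequency(samples)      # ≈ 0.05
```

A full ARMA tracker additionally models noise and lets the coefficients evolve over time; this sketch shows only the frequency-from-coefficients step that such trackers rely on.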
Lorenzo Bianchi, Daniele Carnevale, Fabio DelFrate
et al.
A novel distributed control architecture for unmanned aircraft systems (UASs) based on the new Robot Operating System (ROS) 2 middleware is proposed, endowed with industrial‐grade tools that establish a novel standard for high‐reliability distributed systems. The architecture has been developed for an autonomous quadcopter to design an inclusive solution ranging from low‐level sensor management and soft real‐time operating system setup and tuning to perception, exploration, and navigation modules orchestrated by a finite‐state machine. The architecture proposed in this study builds on ROS 2 with its scalability and soft real‐time communication functionalities, while including security and safety features, optimised implementations of localisation algorithms, and integrating an innovative and flexible path planner for UASs. Finally, experimental results have been collected during tests carried out both in the laboratory and in a realistic environment, showing the effectiveness of the proposed architecture in terms of reliability, scalability, and flexibility.
Objective: The purpose of this study is to identify financing strategies in the central libraries of Tehran's public universities. Methods: In terms of implementation, this study is first a library study and then a survey; correspondingly, the data collection tools are textual sources and a researcher-made questionnaire. First, the current funding status of the libraries is discussed from the point of view of their heads, deputies, or representatives. Then questionnaires were distributed among managers and librarians to capture their viewpoints on financing strategies. To measure the current situation, the study population consisted of all managers and heads of 11 academic universities, and the attitude assessment was conducted as a census. A total of 156 librarians responded to the questionnaires. The findings were analysed with SPSS. Findings: The findings show that the strategies currently used to finance the central libraries of Tehran's academic universities are not appropriate to the current situation. Besides, it was found that there is a significant relationship between the main variable of the research, namely financing strategies, and its dimensions; income-generating activities, financing methods, and training in financing, respectively, have the greatest impact on financing strategies in university central libraries. Originality: This study examines, for the first time, the approach to financing in Tehran's public university libraries and, using the perspectives of managers and librarians, offers some guidelines for financing in academic libraries.
Information theory, Bibliography. Library science. Information resources
This study investigates the relationship between the phonemic content of texts in English and the emotional valence they inspire. The sublexical content is represented in terms of biphones composed of one vowel and one consonant. The statistical analysis of a vast corpus of emotionally evaluated sentences reveals a strong correlation between this sublexical representation and the evaluations of valence provided by the readers. An initial test performed with other valence-rated prose texts suggests that the feature observed within the corpus can be useful for the emotion classification of texts.
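Extracting such vowel-consonant pairs can be sketched in a few lines. The version below works on orthography (letters rather than phonemes) as a crude proxy; the real study would use a grapheme-to-phoneme conversion before pairing.

```python
import re

VOWELS = set("aeiou")  # crude orthographic proxy for vowel phonemes

def biphones(text):
    """Extract adjacent vowel-consonant letter pairs as a rough stand-in
    for phonemic biphones (a real analysis would first convert the text
    to phonemes)."""
    pairs = []
    for word in re.findall(r"[a-z]+", text.lower()):
        for a, b in zip(word, word[1:]):
            if (a in VOWELS) != (b in VOWELS):  # exactly one vowel
                pairs.append(a + b)
    return pairs

print(biphones("calm sea"))  # → ['ca', 'al', 'se']
```

The frequency distribution of these pairs over a sentence is the sublexical feature vector that is then correlated with the readers' valence ratings.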
Objective: The aim of the present study is to analyse the mediating effects of knowledge management components in improving human resource performance, based on the knowledge management building blocks model of Probst et al. (2000). Method: This research is survey-analytical in nature and applied in terms of purpose. The statistical population consists of 370 employees of petrochemical companies in the South Pars Special Economic Energy Zone. A questionnaire was used to collect the data. The data were analysed with SPSS 21 and LISREL 8 using descriptive and inferential statistical tests (Kolmogorov-Smirnov, Spearman, and stepwise regression). Findings: The correlation coefficient at the error level (p < 0.05
Rodrigue Tchamna, Moonyong Lee, Iljoong Youn
et al.
The linear quadratic regulator (LQR) is a powerful technique for the control design of any linear system, or of a nonlinear system after linearization around an operating point. For small systems with few state variables, transforming the performance index from scalar to matrix form can be straightforward. As the system grows to many state variables and controllers, however, appropriate design and notation should be defined so that the technique can be implemented automatically for any large system, without redesigning from scratch each time a new system is required. Dealing with this issue is the main aim of this article, which shows how to automatically obtain the matrix form of the performance index from its scalar version. Control of a full vehicle in cornering is taken as a case study.
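The scalar-to-matrix step can be sketched generically: diagonal weights on squared states and inputs go on the diagonals of Q and R, and any cross term x_i·x_k is split symmetrically between Q[i,k] and Q[k,i]. The function name and example weights below are illustrative, not the article's notation.

```python
import numpy as np

def performance_index_matrices(state_weights, input_weights, cross_weights=None):
    """Assemble Q and R for J = integral of (x'Qx + u'Ru) dt from scalar weights.

    state_weights[i]      -> coefficient of x_i**2
    input_weights[j]      -> coefficient of u_j**2
    cross_weights[(i, k)] -> coefficient of x_i*x_k (split symmetrically)
    """
    Q = np.diag(np.asarray(state_weights, dtype=float))
    R = np.diag(np.asarray(input_weights, dtype=float))
    for (i, k), w in (cross_weights or {}).items():
        Q[i, k] += w / 2.0   # symmetric split keeps x'Qx equal to the
        Q[k, i] += w / 2.0   # original scalar cross term
    return Q, R

# e.g. J = integral of (4*x1**2 + x2**2 + 2*x1*x2 + 0.5*u**2) dt
Q, R = performance_index_matrices([4.0, 1.0], [0.5], {(0, 1): 2.0})
# Q = [[4, 1], [1, 1]],  R = [[0.5]]
```

Because the assembly is purely mechanical, the same function scales to a full-vehicle model with many states and inputs without any redesign.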