Semonti Banik, Sajal Chandra Banik, Sarker Safat Mahmud
ABSTRACT The essential factor in developing multi-robot systems is the generation of an optimal path for task completion by multiple robots. To support effective path planning, this paper surveys recent publications and provides a detailed review of path-planning approaches for collision avoidance in uncertain environments. Path-planning approaches for multiple robots are categorized primarily into classical, heuristic, and artificial intelligence-based methods. Among the heuristic approaches, bio-inspired methods are most often employed to optimize classical approaches and enhance their adaptability. The articles are analyzed with respect to static and dynamic scenarios, real-time experiments, and simulations involving hybrid solutions. An increasing focus on hybrid approaches in dynamic environments is found mostly in papers employing heuristic and AI-based methods, and AI-based approaches are implemented in real-time applications far more often than heuristic and classical ones. The findings of this review, highlighting the strengths and drawbacks of each algorithm, can help researchers select the appropriate approach and overcome the limitations in designing efficient multi-robot systems.
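For orientation on the classical category surveyed above, a minimal single-robot A* grid planner can be sketched as follows; the grid, heuristic, and connectivity are illustrative choices, not taken from any of the reviewed papers:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle, 0 = free).

    A classical single-robot baseline; multi-robot planners typically
    extend such searches with time-indexed or priority-based schemes.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no collision-free path exists
```

With an admissible heuristic such as the Manhattan distance, the first path returned is optimal, which is why A* remains the standard baseline that heuristic and AI-based planners are compared against.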
Philemon Uten Emmoh, Christopher Ifeanyi Eke, Timothy Moses
Selection of important features is vital in machine learning tasks involving high-dimensional datasets with many features, as it reduces the dimensionality of a dataset and improves model performance. Most feature selection techniques, however, are restricted in the kinds of datasets to which they can be applied. This study proposes a feature selection technique based on the statistical lift measure for selecting important features from a dataset. The proposed technique is generic and can be used on any binary classification dataset. It was tested on a lung cancer dataset and a happiness classification dataset, where it successfully determined the most important feature subsets. Its effectiveness in selecting important feature subsets was evaluated and compared with three existing techniques, namely Chi-Square, Pearson Correlation, and Information Gain. Both the proposed and the existing techniques were evaluated on five machine learning models using four standard evaluation metrics: accuracy, precision, recall, and F1-score. On the lung cancer dataset, logistic regression, decision tree, AdaBoost, gradient boost, and random forest produced predictive accuracies of 0.919, 0.935, 0.919, 0.935, and 0.935, respectively; on the happiness classification dataset, random forest, k-nearest neighbor, decision tree, gradient boost, and CatBoost produced predictive accuracies of 0.758, 0.689, 0.724, 0.655, and 0.689, respectively, outperforming the existing techniques.
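The statistical lift underlying such a technique can be sketched for binary features as lift(f, c) = P(c | f) / P(c): features whose lift deviates most from 1 carry the most information about the class. A minimal illustration follows; the scoring rule and data are hypothetical stand-ins, not the authors' exact procedure:

```python
def lift(feature, labels):
    """lift = P(y=1 | x=1) / P(y=1) for binary feature/label vectors."""
    p_pos = sum(labels) / len(labels)
    pos_given_f = [y for x, y in zip(feature, labels) if x == 1]
    if not pos_given_f or p_pos == 0:
        return 0.0  # feature never fires (or no positives): no signal
    return (sum(pos_given_f) / len(pos_given_f)) / p_pos

def rank_features(columns, labels):
    """Rank features by how far their lift is from 1 (independence)."""
    scores = {name: abs(lift(col, labels) - 1.0) for name, col in columns.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A feature that perfectly predicts the positive class in a balanced dataset has lift 2, while an irrelevant feature has lift 1, so ranking by distance from 1 separates them immediately.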
An effective energy management strategy (EMS) is essential to optimize the energy efficiency of electric vehicles (EVs). With the advent of advanced machine learning techniques, the focus on developing sophisticated EMS for EVs is increasing. Here, we introduce LearningEMS: a unified framework and open-source benchmark designed to facilitate rapid development and assessment of EMS. LearningEMS is distinguished by its ability to support a variety of EV configurations, including hybrid EVs, fuel cell EVs, and plug-in EVs, offering a general platform for the development of EMS. The framework enables detailed comparisons of several EMS algorithms, encompassing imitation learning, deep reinforcement learning (RL), offline RL, model predictive control, and dynamic programming. We rigorously evaluated these algorithms across multiple perspectives: energy efficiency, consistency, adaptability, and practicability. Furthermore, we discuss state, reward, and action settings for RL in EV energy management, introduce a policy extraction and reconstruction method for learning-based EMS deployment, and conduct hardware-in-the-loop experiments. In summary, we offer a unified and comprehensive framework that comes with three distinct EV platforms, over 10 000 km of EMS policy data set, ten state-of-the-art algorithms, and over 160 benchmark tasks, along with three learning libraries. Its flexible design allows easy expansion for additional tasks and applications. The open-source algorithms, models, data sets, and deployment processes foster additional research and innovation in EV and broader engineering domains.
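The state, reward, and action settings discussed for RL-based EMS typically trade off instantaneous fuel or energy cost against battery state-of-charge (SOC) regulation. A toy reward of that shape is sketched below; the weights, units, and names are illustrative assumptions, not LearningEMS defaults:

```python
def ems_reward(fuel_rate_gps, soc, soc_ref=0.6, w_fuel=1.0, w_soc=50.0):
    """Toy EMS reward: negative cost of fuel use plus SOC drift penalty.

    fuel_rate_gps: instantaneous fuel consumption [g/s]
    soc:           battery state of charge in [0, 1]
    soc_ref:       SOC reference the policy should hold near
    """
    return -(w_fuel * fuel_rate_gps + w_soc * (soc - soc_ref) ** 2)
```

An RL agent maximizing the cumulative sum of such a reward is pushed toward fuel-efficient power splits that keep the battery near its reference charge, which is the basic objective all the benchmarked EMS algorithms share.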
Jiunn-Jye Sheu, Jui-Ning Yeh, Pei-Hsun Sung
et al.
This study tested the hypotheses that ITRI Biofilm prevents adhesion within the chest cavity and that combined extracorporeal shock wave (ECSW) + bone marrow-derived autologous endothelial progenitor cell (EPC) therapy is superior to either monotherapy for improving heart function (left ventricular ejection fraction [LVEF]) in minipigs with ischemic cardiomyopathy (IC) induced by an ameroid constrictor applied to the mid-left anterior descending artery. The minipigs (n = 30) were equally divided into group 1 (sham-operated control), group 2 (IC), group 3 (IC + EPCs, directly implanted into the left ventricular [LV] myocardium; 3 with/3 without ITRI Biofilm), group 4 (IC + ECSW; 3 with/3 without ITRI Biofilm), and group 5 (IC + EPCs + ECSW; 3 with/3 without ITRI Biofilm). EPC/ECSW therapy was administered by day 90, and the animals were euthanized and their hearts harvested by day 180. In vitro studies demonstrated that cell viability, angiogenesis, cell migratory ability, and mitochondrial concentration were upregulated in EPCs treated with ECSW compared with untreated EPCs (all p < 0.001). By day 180, the LVEF was highest in group 1, lowest in group 2, and significantly higher in group 5 than in groups 3 and 4 (all p < 0.0001), with no difference between groups 3 and 4. The adhesion score was remarkably lower in animals that received ITRI Biofilm treatment than in those that did not (all p < 0.01). The protein expressions of oxidative stress (NOX-1/NOX-2/oxidized protein), apoptotic (mitochondrial Bax/caspase 3/PARP), fibrotic (TGF-β/Smad3), and DNA/mitochondria damage (γ-H2AX/cytosolic cytochrome C/p-DRP1) biomarkers, as well as heart failure/pressure-overload biomarkers (BNP [brain natriuretic peptide]/β-MHC [beta myosin heavy chain]), displayed a pattern opposite to that of LVEF among the groups (all p < 0.0001), whereas the protein expression of endothelial biomarkers (CD31/vWF) and small-vessel density displayed a pattern similar to that of LVEF (all p < 0.0001).
ITRI Biofilm treatment prevented chest cavity adhesion, and combined EPC-ECSW therapy was superior to either EPC or ECSW therapy alone in restoring IC-related LV dysfunction.
Data classification and grading is the foundation for ensuring the safe circulation of data and promoting the release of data value. This paper focuses on the key task of government data classification and grading in digital reform. Using a theoretical case-study method and drawing on plans publicly released by various provincial governments and ministries, the implementation of government data classification and grading in China is systematically reviewed and quantitatively analyzed. The paper summarizes four key processes and five characteristics of this implementation. Given the particular complexity of classifying and grading government data, it identifies four problems in China's implementation, namely unclear overall target positioning, inconsistent classification and grading objects, separated classification and grading relations, and divergent security grading standards, and proposes corresponding solutions. Based on the classification and grading practice of a national ministry, the paper verifies the soundness and effectiveness of the solutions and provides a reference for constructing a unified national government data classification and grading system.
This paper presents a novel approach to enhancing the security and reliability of drone communications through the integration of Quantum Random Number Generators (QRNG) in Frequency Hopping Spread Spectrum (FHSS) systems. We propose a multi-drone framework that leverages QRNG technology to generate truly random frequency hopping sequences, significantly improving resistance against jamming and interception attempts. Our method introduces a concurrent access protocol for multiple drones to share a QRNG device efficiently, incorporating robust error handling and a shared memory system for random number distribution. The implementation includes secure communication protocols, ensuring data integrity and confidentiality through encryption and Hash-based Message Authentication Code (HMAC) verification. We demonstrate the system’s effectiveness through comprehensive simulations and statistical analyses, including spectral density, frequency distribution, and autocorrelation studies of the generated frequency sequences. The results show a significant enhancement in the unpredictability and uniformity of frequency distributions compared to traditional pseudo-random number generator-based approaches. Specifically, the frequency distributions of the drones exhibited a relatively uniform spread across the available spectrum, with minimal discernible patterns in the frequency sequences, indicating high unpredictability. Autocorrelation analyses revealed a sharp peak at zero lag and linear decrease to zero values for other lags, confirming a general absence of periodicity or predictability in the sequences, which enhances resistance to predictive attacks. Spectral analysis confirmed a relatively flat power spectral density across frequencies, characteristic of truly random sequences, thereby minimizing vulnerabilities to spectral-based jamming. 
Statistical tests, including Chi-squared and Kolmogorov-Smirnov, further confirm the unpredictability of the frequency sequences generated by QRNG, supporting enhanced security measures against predictive attacks. While some short-term correlations were observed, suggesting areas for improvement in QRNG technology, the overall findings confirm the potential of QRNG-based FHSS systems in significantly improving the security and reliability of drone communications. This work contributes to the growing field of quantum-enhanced wireless communications, offering substantial advancements in security and reliability for drone operations. The proposed system has potential applications in military, emergency response, and secure commercial drone operations, where enhanced communication security is paramount.
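The autocorrelation property reported above (a sharp peak at zero lag, near-zero values elsewhere) can be checked on any candidate hopping sequence. The sketch below uses a seeded PRNG as a stand-in source; an actual QRNG device would replace the `random` calls, and the channel count is an assumed example value:

```python
import random

def autocorr(seq, max_lag):
    """Normalized autocorrelation of a frequency-hopping sequence."""
    n = len(seq)
    mean = sum(seq) / n
    var = sum((x - mean) ** 2 for x in seq) / n
    out = []
    for lag in range(max_lag + 1):
        cov = sum((seq[i] - mean) * (seq[i + lag] - mean)
                  for i in range(n - lag)) / n
        out.append(cov / var if var else 0.0)
    return out

random.seed(7)
hops = [random.randrange(64) for _ in range(4096)]  # 64 assumed channels
ac = autocorr(hops, 20)  # ac[0] == 1.0; other lags should hover near 0
```

For a sequence suitable for FHSS, every lag beyond zero should stay within roughly ±1/√n of zero; persistent excursions indicate the short-term correlations the paper flags as an area for QRNG improvement.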
H. M. Rezk, Wedad Albalawi, H. A. Abd El-Hamid
et al.
In this article, some fractional Hardy-Leindler-type inequalities are established by utilizing the chain rule, Hölder's inequality, and integration by parts on fractional time scales. As a consequence, some classical integral inequalities are obtained, and a variety of well-known dynamic inequalities follow as special cases of our results when α = 1.
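For orientation, a representative classical inequality of this family is Hardy's integral inequality; the special cases actually recovered by the authors may differ in form:

```latex
\int_0^\infty \left( \frac{1}{x} \int_0^x f(t)\,dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f^{p}(x)\,dx,
\qquad p > 1,\ f \ge 0 .
```

Leindler-type results generalize such inequalities by replacing the measure and the averaging kernel with weighted analogues, which is the structure the fractional time-scale versions extend.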
Monib Ahmad, Abraiz Khattak, Abdul Kashif Janjua
et al.
The globally soaring energy prices and electricity shortfall are major hurdles in the economic development of Pakistan. To cope with periodic power outages, small and medium enterprise (SME) business owners have to fall back on alternate power sources such as backup generators and uninterruptible power supplies (UPS), which further increase the per-kWh cost of electricity, power quality issues, and greenhouse gas (GHG) emissions. By contrast, grid-tied solar photovoltaic (PV) systems are not only economical and sustainable but also support the national power grid in mitigating environmental emissions. This study investigates and compares the techno-economic viability of grid-connected solar PV power plants for the manufacturing SME sector in four districts of Punjab, Pakistan. Based on technical, financial, and environmental indicators, a detailed techno-economic, sensitivity, and GHG emission analysis is conducted using RETScreen Expert software. The findings clearly show that the proposed solar PV projects are technically, financially, and environmentally viable at all four locations; Sargodha, however, is the most feasible site, with the highest capacity factor (17.8 %), the highest internal rate of return (14.9 %), the shortest payback period (7.7 years), and the lowest levelized cost of electricity (8.5 ¢/kWh). For validation, the simulation results are compared with performance metrics from PV plants installed in various parts of the world. Applying the same approach to the whole industrial sector of Punjab suggests adding 13,469 MW of PV capacity to satisfy the industry's annual energy consumption of 20,446.21 GWh and to cut emissions by 9,017,581 t of CO2 per year. This work presents guidelines for researchers to evaluate the feasibility of suitable PV technologies for the SME sector, thereby helping investors gain a holistic view of potential investment zones.
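The financial indicators quoted (payback period, levelized cost of electricity) follow standard discounted-cash-flow definitions; a minimal sketch is given below with purely hypothetical inputs, not the RETScreen parameters used in the study:

```python
def lcoe(capex, annual_opex, annual_energy_kwh, rate, years):
    """Levelized cost of electricity = discounted costs / discounted energy."""
    disc_cost = capex + sum(annual_opex / (1 + rate) ** t
                            for t in range(1, years + 1))
    disc_energy = sum(annual_energy_kwh / (1 + rate) ** t
                      for t in range(1, years + 1))
    return disc_cost / disc_energy  # currency units per kWh

def simple_payback(capex, annual_savings):
    """Undiscounted years to recover the initial investment."""
    return capex / annual_savings
```

With a zero discount rate LCOE collapses to total cost divided by total energy, which is a convenient sanity check before applying a realistic rate.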
Background: Inflammation proteins, including interleukins (ILs), have been reported to be related to obstructive sleep apnea (OSA). The aims of this study were to estimate the levels of several key interleukins in OSA and the causal effects between them.
Methods: In the meta-analysis section, the weighted mean difference (WMD) was used to compare the expression of interleukins between OSA and control groups and the changes in levels during OSA treatments. A two-sample Mendelian randomization (MR) was used to estimate the causal directions and effect sizes between OSA risk and interleukins. Inverse-variance weighting (IVW) was used as the primary method, followed by several other MR methods as sensitivity analyses, including MR-Egger, weighted median, and MR Robust Adjusted Profile Score.
Results: Nine interleukins (IL-1β, IL-2, IL-4, IL-6, IL-8, IL-12, IL-17, IL-18, and IL-23) were elevated in OSA compared with control to varying degrees, ranging from 0.82 to 100.14 pg/ml, and one interleukin, IL-10, was decreased by 0.77 pg/ml. Increased IL-1β, IL-6, and IL-8, but not IL-10, can be reduced in OSA by effective treatments. Further, the MR analysis with the IVW method found no significant evidence to support causal relationships between OSA and nine interleukins (IL-1β, IL-2, IL-4, IL-5, IL-6, IL-8, IL-10, IL-17, and IL-18). Among them, the causal effect of OSA on IL-5 was borderline significant [estimate: 0.267 (−0.030, 0.564), p = 0.078]. These results were consistent in the sensitivity analyses.
Conclusions: Although IL-1β, IL-2, IL-4, IL-6, IL-8, IL-12, IL-17, IL-18, and IL-23 were increased and IL-10 was decreased in OSA, no significant causal relationships between OSA and the interleukins were observed by MR analysis. Further research is needed to test the causality of OSA risk on elevated IL-5 levels.
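The inverse-variance-weighted (IVW) estimate used as the primary MR method above combines per-variant causal estimates weighted by their precision. A fixed-effect sketch follows; the numbers in the usage note are illustrative, not the study's summary statistics:

```python
import math

def ivw(betas, ses):
    """Fixed-effect inverse-variance-weighted causal estimate.

    betas: per-variant Wald ratio estimates (outcome beta / exposure beta)
    ses:   their standard errors
    Returns (pooled estimate, pooled standard error).
    """
    weights = [1.0 / se ** 2 for se in ses]
    est = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se
```

With equal standard errors the IVW estimate reduces to a plain average of the Wald ratios, while more precise variants pull the pooled estimate toward themselves; the pooled standard error shrinks as variants are added, which is what gives IVW its power relative to single-variant MR.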
The generation of electricity through renewable energy sources increases every day, with solar energy being one of the fastest growing. The emergence of information technologies such as Digital Twins (DT) in the field of the Internet of Things and Industry 4.0 enables substantial development in automatic diagnostic systems. The objective of this work is to obtain the DT of a Photovoltaic Solar Farm (PVSF) with a deep-learning (DL) approach. To build such a DT, sensor-based time series are analyzed and processed, and the resulting data are used to train a DL model (e.g., an autoencoder) to detect anomalies of the physical system in its DT. Results show a reconstruction error around 0.1, a recall score of 0.92, and an Area Under the Curve (AUC) of 0.97. This paper therefore demonstrates that the DT can both reproduce the behavior of the physical system and efficiently detect its anomalies.
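The detection principle behind such a DT is to flag samples whose reconstruction error exceeds a bound learned on normal data. The sketch below substitutes the paper's DL autoencoder with a rank-1 linear reconstruction (the leading principal component, found by power iteration) purely for brevity, and uses synthetic two-sensor data rather than PVSF measurements:

```python
import random

def first_pc(data, iters=200):
    """Leading principal component of row-vector data via power iteration."""
    dim = len(data[0])
    means = [sum(row[j] for row in data) / len(data) for j in range(dim)]
    centered = [[row[j] - means[j] for j in range(dim)] for row in data]
    v = [1.0] * dim
    for _ in range(iters):
        xv = [sum(r[j] * v[j] for j in range(dim)) for r in centered]
        v = [sum(centered[i][j] * xv[i] for i in range(len(centered)))
             for j in range(dim)]
        norm = sum(c * c for c in v) ** 0.5
        v = [c / norm for c in v]
    return means, v

def recon_error(x, means, v):
    """Error of reconstructing x from its projection on the leading PC."""
    xc = [x[j] - means[j] for j in range(len(x))]
    proj = sum(xc[j] * v[j] for j in range(len(x)))
    return sum((xc[j] - proj * v[j]) ** 2 for j in range(len(x))) ** 0.5

random.seed(0)
train = [(t / 10, t / 10 + random.gauss(0, 0.05)) for t in range(21)]  # normal data
means, v = first_pc(train)
threshold = max(recon_error(x, means, v) for x in train)  # bound on normal error
is_anomaly = recon_error((1.0, -1.0), means, v) > threshold  # off-manifold sample
```

A nonlinear autoencoder generalizes this idea: the encoder/decoder learn the manifold of normal operation, and any sample that cannot be reconstructed within the learned error bound is reported as an anomaly of the physical system.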
Abstract Face hallucination is a super-resolution technique specially designed to reconstruct high-resolution faces from low-resolution ones. Most state-of-the-art algorithms leverage position-patch prior knowledge of human faces to better super-resolve face images; however, most of them assume the training face dataset is sufficiently large, well cropped, or aligned. This paper proposes a novel example-based face hallucination method based on cluster-consistent dictionary learning, under the assumption that human faces share similar facial structures. In this method, the paired face image patches are first labelled as face areas, including eyes, nose, mouth, and other parts, as well as non-face areas, without requiring the training face images to be cropped and aligned. The training patches are then clustered according to their labels and textures, and a cluster-consistent dictionary is learned to represent the low-resolution and high-resolution patches. Finally, the high-resolution patches of the input low-resolution face image are efficiently generated using adjusted anchored neighbourhood regression. By utilizing prior knowledge of the labelled facial parts, the proposed method reconstructs more details. Experimental results demonstrate that the authors' algorithm outperforms many state-of-the-art face hallucination techniques on different datasets.
Abstract Functional encryption (FE) is a novel encryption paradigm that allows tremendous flexibility in accessing encrypted information. In FE, a user can learn a specific function of encrypted messages through a restricted functional key and reveal nothing else about the messages. Inner product encryption (IPE) is a special type of functional encryption in which the decryption algorithm, given a ciphertext related to a vector x and a secret key related to a vector y, computes the inner product x·y. In this paper, we construct an efficient private-key FE scheme for inner products with simulation-based security, which is much stronger than indistinguishability-based security, under the External Decisional Linear assumption in the standard model. Compared with existing schemes, our construction is faster in encryption and decryption, and the master secret key, secret keys, and ciphertexts are shorter.
Solving the challenge of occupancy prediction is crucial for designing efficient and sustainable office spaces and for automating lighting, heating, and air circulation in these facilities. In office spaces where large areas need to be observed, multiple sensors must be used for full coverage. In these cases, it is normally important to keep costs low while ensuring that the privacy of the people who use such environments is preserved. Low-cost and low-resolution heat (thermal) sensors can be very useful for building solutions that address these concerns. However, they are extremely sensitive to noise artifacts, which might be caused by heat prints of people who have left the space or by other objects that are either using electricity or exposed to sunlight. Some earlier solutions for occupancy prediction employ low-resolution heat sensors; however, they have not addressed or compensated for such heat artifacts. Therefore, in this paper, we present a low-cost, low-energy smart-space implementation to predict the number of people in the environment based on whether their activity is static or dynamic in time. We used a low-resolution (8 × 8), non-intrusive heat sensor to collect data from an actual meeting room, and we propose two novel workflows to predict occupancy: one based on computer vision and one based on machine learning. Besides comparing the advantages and disadvantages of these workflows, we used several state-of-the-art explainability methods to provide a detailed analysis of the algorithm parameters and of how the image properties influence the resulting performance. Furthermore, we analyzed the noise sources that affect the heat sensor data.
The experiments show that the feature-classification-based method gives high accuracy when the data are free of noise artifacts. When noise artifacts are present, however, the computer-vision-based method can compensate for them and provide robust results. Because the computer-vision-based method requires an empty-room recording, the feature-classification-based method should be chosen either when no noise artifacts are expected in the data or when no empty-room recording is available. We hope that our analysis sheds light on how to handle very low-resolution heat images in these environments. The presented workflows could be used in various domains and applications beyond smart offices where occupancy prediction is essential, e.g., elderly care.
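The empty-room compensation described for the computer-vision workflow amounts to background subtraction on the 8 × 8 frames followed by counting connected warm regions. A minimal sketch is shown below; the frames, temperatures, and threshold are synthetic assumptions, not the paper's recordings or tuned parameters:

```python
def count_people(frame, background, delta=1.5):
    """Count connected warm blobs after subtracting an empty-room frame.

    frame, background: 8x8 grids of temperatures (deg C)
    delta: temperature rise marking a pixel as 'warm'
    """
    rows, cols = len(frame), len(frame[0])
    warm = [[frame[r][c] - background[r][c] > delta for c in range(cols)]
            for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if warm[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]  # flood-fill one 4-connected blob
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and warm[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return blobs
```

Subtracting the empty-room frame is what cancels static heat prints from electronics or sunlit surfaces, which is exactly the artifact class the paper reports the vision workflow handling better than pure feature classification.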
In this paper, we introduce the notion of the commutator of two elements in a specific NeutroGroup. We then define the notion of a NeutroNilpotentGroup and study some of its properties. Moreover, we show that the intersection of two NeutroNilpotentGroups is a NeutroNilpotentGroup and that the quotient of a NeutroNilpotentGroup is a NeutroNilpotentGroup. In particular, using NeutroHomomorphisms, we prove that NeutroNilpotency is preserved under homomorphic images.
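For reference, the classical commutator whose NeutroGroup analogue is introduced above is, in standard group-theoretic notation,

```latex
[a, b] = a^{-1} b^{-1} a b ,
```

so that $[a,b]$ is the identity exactly when $a$ and $b$ commute; the NeutroGroup version relaxes the underlying axioms while keeping this formal shape.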
Alfamart, owned by PT. Sumber Alfaria Trijaya, Tbk, is a growing retail company and one of the biggest retail chains in Indonesia. To survive in intense business competition, Alfamart must pursue the best marketing strategy and keep innovating for customer satisfaction. One strategy to improve marketing is the arrangement of product displays in stores, known as planograms. A planogram is a concept used in planning the arrangement and placement of products in certain categories, based on consumer spending habits, with the aim of increasing retail sales. This research was conducted to create a web-based planogram master application using the Flask framework with the Python programming language. The method used in this study is the RESTful API, an implementation of web services that communicate over HTTP. The research produces a web-based master data application that users can employ to enter the data needed to build a planogram.
Keywords: RESTful API, Python Flask, Planogram
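A RESTful master-data service of the kind described maps HTTP verbs onto create/read operations over planogram records. A framework-agnostic sketch of that resource layer follows; the class, field names, and route comments are illustrative assumptions, and the Flask routes in the actual application would simply delegate to such a store:

```python
class PlanogramStore:
    """In-memory master data for planogram records (product, shelf, position)."""

    def __init__(self):
        self._records = {}
        self._next_id = 1

    def create(self, product, shelf, position):        # POST /planograms
        rec = {"id": self._next_id, "product": product,
               "shelf": shelf, "position": position}
        self._records[self._next_id] = rec
        self._next_id += 1
        return rec

    def get(self, rec_id):                             # GET /planograms/<id>
        return self._records.get(rec_id)

    def list_all(self):                                # GET /planograms
        return list(self._records.values())
```

Keeping the resource logic separate from the Flask view functions makes the master-data layer testable without an HTTP server and easy to swap for a database-backed implementation later.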
This article compares the two most prominent Advanced Placement (AP) computer science courses offered in grades 9-12 in the U.S. The structure, guidelines, components, and exam formats of the traditional AP Computer Science A course and the relatively newer AP Computer Science Principles course were compared to examine differences in content and emphasis. A depth-of-learning analysis employing Bloom's Revised Taxonomy was conducted to examine potential differences in the rigor and challenge represented by the two options, particularly as they relate to acquiring computer programming proficiency. The analyses suggest that structural differences in both course content and end-of-course exam components likely result in less depth and rigor in the new Computer Science Principles course than in the Computer Science A course. A lower minimum standard for learning programming skills was observed in the Computer Science Principles course, making it a less viable option for students looking to acquire skills transferable to future computer science study or employment. The potential implications for students choosing the new course over the traditional offering, as well as for schools adopting the new course as their sole or primary offering, are discussed.
Roberto Sánchez-Cabrero, Óscar Costa-Román, Francisco Javier Pericacho-Gómez
et al.
This study describes the social and demographic profile of the first generation of users of commercially available virtual reality (VR) viewers in Spain and then assesses their interest in VR as a learning tool. For that purpose, an online questionnaire created ad hoc was administered to a sample of 117 participants. The relationships between twelve variables were analysed by comparing means through Snedecor's F distribution and by examining the contingency tables through the Chi-squared test and Somers' D. Among other findings, the current VR user profile corresponds to a person older than 36, mainly male, with higher education, who acquired the viewer no more than one year ago. Regarding VR as a learning tool, only a few users currently employ it for this purpose, but most show interest in using VR as a learning method and are optimistic about the future use of this technology in education. This is not the case among users of video game consoles (PSVR), who are mainly men currently uninterested in using VR as a learning tool. Finally, the current use of VR as a learning tool among teachers and students is occasional and occurs preferably via smartphones.
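The Chi-squared test applied to the contingency tables above compares observed counts with the counts expected under independence. A minimal 2 × 2 sketch follows; the counts are hypothetical, not the survey data:

```python
def chi_square_2x2(table):
    """Chi-squared statistic for a 2x2 contingency table [[a, b], [c, d]]."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total  # count under independence
            stat += (table[i][j] - expected) ** 2 / expected
    return stat
```

A table whose cells exactly match the independence expectation yields a statistic of 0, while strongly associated variables (e.g., all counts on the diagonal) yield large values that are then compared against the Chi-squared distribution with one degree of freedom.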
Vrettos Moulos, George Chatzikyriakos, Vassilis Kassouras
et al.
In modern societies, the rampant growth of the Internet, on both the technological and the social level, has created fertile ground for the emergence of new types of risk. On top of that, it enhances pre-existing threats by offering new means for accessing and exploiting Critical Infrastructures. As the kinds of potential threats evolve, the security, safety, and resilience of these infrastructures must be updated accordingly, both at the prevention level and at the level of real-time confrontation. Our research approaches the security of these infrastructures with a focus on the data and the utilization of every possible piece of information that derives from this ecosystem. Such a task is quite daunting, since the quantity of data that requires processing resides in the Big Data space. To address this, we introduce a new well-defined Information Life Cycle in order to properly model and optimise the way information flows through a modern security system. This life cycle covers all the possible stages, starting from the collection phase up until the exploitation of information intelligence, ensuring the efficiency of data processing and filtering while increasing both the veracity and the validity of the final outcome. In addition, an agile Framework is introduced that is optimised to take full advantage of the Information Life Cycle. As a result, it exploits the generated knowledge, taking the correct sequence of actions to successfully address possible threats. This Framework leverages every possible data source that could provide vital information to Critical Infrastructures by performing analysis and data fusion, coping with data variety and variability. At the same time, it orchestrates the pre-existing processes and resources of these infrastructures. Through rigorous testing, it was found that response time against hazards was dramatically decreased.
As a result, this Framework is an ideal candidate for strengthening and shielding the infrastructures’ resilience while improving management of the resources used.