A Systematic Literature Review on Modern Cryptographic and Authentication Schemes for Securing the Internet of Things
Tehseen Hussain, Fraz Ahmad, Dr. Zia Ur Rehman
The rapid integration of the Internet of Things (IoT) into healthcare ecosystems has revolutionized patient monitoring and data accessibility; however, it has simultaneously expanded the cyber-attack surface, leaving sensitive medical data vulnerable to sophisticated breaches. This systematic literature review (SLR) addresses the critical challenge of balancing high-level security with the severe resource constraints of medical sensors and edge devices. By synthesizing evidence from 80 high-impact studies, including 18 primary research articles published between 2022 and 2025, this paper evaluates the quality and efficacy of emerging cryptographic frameworks. The methodology applies a rigorous quality assessment framework to categorize research into "Strong," "Moderate," and "Weak" tiers. Key findings reveal a significant paradigm shift toward lightweight symmetric ciphers, such as GIFT and PRESENT, and certificateless authentication protocols such as ELWSCAS, which reduce communication overhead in narrow-band environments. The analysis further explores the role of blockchain-assisted decentralization and DNA-based encryption in mitigating single-point-of-failure risks and providing high entropy. While decentralized models significantly enhance data integrity, they frequently encounter a scalability wall in transaction latency. The review also assesses quantum readiness, noting that although lattice-based standards are being ported to microcontrollers, memory footprints remain a barrier for simpler sensors. Ultimately, this SLR maps the current technical frontiers and provides a strategic roadmap for future research, emphasizing the transition toward lightweight, quantum-resistant architectures as the next essential step in securing the global healthcare IoT infrastructure.
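To make the "lightweight symmetric cipher" finding concrete, the sketch below is a minimal, reference-style Python implementation of PRESENT-80 encryption (64-bit block, 80-bit key, 31 rounds) following the cipher's published specification; it is an illustration for this review, not a constant-time or production implementation.

```python
# Minimal reference-style sketch of PRESENT-80 encryption (Bogdanov et al., CHES 2007).
# Illustrative only: not constant-time and not hardened for deployment.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def round_keys_80(key):
    """Derive the 32 round keys from the 80-bit key register."""
    keys, k = [], key
    for i in range(1, 33):
        keys.append(k >> 16)                                    # leftmost 64 bits
        k = ((k << 61) | (k >> 19)) & ((1 << 80) - 1)           # rotate left by 61
        k = (SBOX[k >> 76] << 76) | (k & ((1 << 76) - 1))       # S-box on top nibble
        k ^= i << 15                                            # XOR counter into bits 19..15
    return keys

def present80_encrypt(plaintext, key):
    """Encrypt one 64-bit block under an 80-bit key."""
    rk = round_keys_80(key)
    state = plaintext
    for r in range(31):
        state ^= rk[r]                                          # addRoundKey
        state = sum(SBOX[(state >> (4 * i)) & 0xF] << (4 * i)   # sBoxLayer
                    for i in range(16))
        out = 0
        for i in range(64):                                     # pLayer: bit i -> 16*i mod 63
            out |= ((state >> i) & 1) << (63 if i == 63 else (i * 16) % 63)
        state = out
    return state ^ rk[31]

# The paper's all-zeros test vector should yield 0x5579C1387B228445.
print(hex(present80_encrypt(0x0, 0x0)))
```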
Conflict of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Funding
The research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Data Fabrication/Falsification Statement
The author(s) declare that no data has been fabricated, falsified, or manipulated in this study.
Participant Consent
The authors confirm that informed consent was obtained from all participants, and confidentiality was duly maintained.
Copyright and Licensing
For all articles published in the NIJEC journal, copyright (c) remains with the author(s).
Systems engineering, Engineering design
Correction to: IGFM: An Enhanced Graph Similarity Computation Method with Fine‑Grained Analysis
Min Pei, Jianke Yu, Chen Chen
et al.
Information technology, Electronic computers. Computer science
Optimizing Energy Consumption of Edge-Cloud Environments: A Comparative Study Between PPO and PSO
Alejandro Espinosa, Xavier Samos, Daniel Ulied
et al.
As the usage of the edge-cloud continuum increases, Kubernetes presents itself as a solution that allows easy control and deployment of applications in these highly distributed and heterogeneous environments. In this context, Artificial Intelligence methods have been proposed to aid the task allocation process and optimize different aspects of the system, such as application execution time, load balancing, or energy consumption. In this paper, we present a comparative study focused on optimizing energy consumption through dynamic task allocation in a realistic V2X application scenario. We evaluate and compare two methods representing the most common algorithmic families for resource allocation: Particle Swarm Optimization (PSO) and Proximal Policy Optimization (PPO). Our methodology includes the design of a custom Kubernetes Operator to enforce the models' node recommendations, allowing for rigorous, real-world validation against the base Kubernetes scheduler. Experiments demonstrate that while both the PSO and PPO models successfully reduce energy consumption, PSO delivers the highest savings, reducing energy use by up to 20%. Crucially, our study highlights a key trade-off: although PSO achieves greater energy savings, the PPO model remains a faster and more computationally lightweight option that can run on virtually any device, even one with limited resources.
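As a sketch of how a PSO-based allocator of the kind compared here might work, the snippet below minimizes a hypothetical per-node energy model over a continuous relaxation of task-to-node assignments. The cost model, node counts, and PSO parameters are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

# Hypothetical energy model: each node has an idle power and a per-utilization
# slope; tasks carry CPU demands. All numbers below are illustrative assumptions.
rng = np.random.default_rng(0)
N_NODES, N_TASKS = 4, 12
idle_w = np.array([10.0, 12.0, 8.0, 15.0])     # idle power per node (W)
slope_w = np.array([30.0, 25.0, 40.0, 20.0])   # power per unit utilization (W)
demand = rng.uniform(0.05, 0.2, N_TASKS)       # CPU demand per task

def energy(position):
    """Decode a continuous particle into task->node choices and return total power."""
    assign = np.clip(position, 0, N_NODES - 1e-9).astype(int)
    util = np.array([demand[assign == n].sum() for n in range(N_NODES)])
    penalty = 1e3 * np.maximum(util - 1.0, 0).sum()   # penalize overloaded nodes
    return (idle_w * (util > 0) + slope_w * util).sum() + penalty

# Standard global-best PSO loop.
P, ITERS, W, C1, C2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(0, N_NODES, (P, N_TASKS))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([energy(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, N_NODES - 1e-9)
    f = np.array([energy(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print("best assignment:", gbest.astype(int), "power:", round(energy(gbest), 2))
```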
Electronic computers. Computer science
A comprehensive construction of deep neural network‐based encoder–decoder framework for automatic image captioning systems
Md Mijanur Rahman, Ashik Uzzaman, Sadia Islam Sami
et al.
This study introduces a novel encoder–decoder framework based on deep neural networks and provides a thorough investigation of automatic image captioning systems. The proposed model uses a long short-term memory (LSTM) decoder for word prediction and sentence construction, and a convolutional neural network encoder adept at object recognition and spatial information retention. The LSTM network functions as a sequence processor, generating a fixed-length output vector for final predictions, while the VGG-19 model serves as the image feature extractor. For both training and testing, the study uses a variety of images from open-access datasets, including Flickr8k, Flickr30k, and MS COCO. Implementation is in Python, using Keras on a TensorFlow backend. The experimental findings, assessed with the bilingual evaluation understudy (BLEU) metric, demonstrate the effectiveness of the proposed methodology in automatically captioning images. By addressing spatial relationships in images and producing coherent, contextually relevant captions, the paper advances image captioning technology. The discussion of difficulties encountered during experimentation yields insights for future research directions. By establishing a strong neural network architecture for automatic image captioning, this study creates opportunities for further advancement and improvement in the area.
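The description corresponds closely to the widely used "merge" encoder-decoder layout; a minimal Keras sketch of that layout is shown below. The 4096-dimensional feature input matches VGG-19's fc2 layer, while the vocabulary size and caption length are placeholder assumptions, not the paper's settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, MAXLEN, FEAT = 8000, 34, 4096   # placeholder sizes; FEAT matches VGG-19 fc2

# Image branch: a precomputed VGG-19 feature vector projected into decoder space.
img_in = layers.Input(shape=(FEAT,))
img_vec = layers.Dense(256, activation="relu")(layers.Dropout(0.5)(img_in))

# Text branch: the partial caption so far, embedded and run through an LSTM.
seq_in = layers.Input(shape=(MAXLEN,))
seq_emb = layers.Embedding(VOCAB, 256, mask_zero=True)(seq_in)
seq_vec = layers.LSTM(256)(layers.Dropout(0.5)(seq_emb))

# Merge both branches and predict the next word of the caption.
merged = layers.add([img_vec, seq_vec])
out = layers.Dense(VOCAB, activation="softmax")(
    layers.Dense(256, activation="relu")(merged))

model = Model(inputs=[img_in, seq_in], outputs=out)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.summary()
```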
Photography, Computer software
Data recommendation algorithm of network security event based on knowledge graph
Xianwei ZHU, Wei LIU, Zihao LIU
et al.
To address the difficulty network security operations and maintenance personnel face in identifying required data accurately and in time during network security event analysis, a knowledge-graph-based recommendation algorithm for network security events was proposed. The algorithm used the ATT&CK threat framework to construct an ontology model and built a network threat knowledge graph on that model. It extracted relevant security data, such as attack techniques, vulnerabilities, and defense measures, into interconnected security knowledge within the knowledge graph. Entity data was extracted from the knowledge graph, and entity vectors were obtained with the TransH algorithm; these vectors were then used to calculate data similarity between entities in the network threat data. Disposal behaviors were extracted from the literature on network security event handling and treated as network security data entities. A disposal behavior matrix was constructed, enabling a vector representation of network threat data, and the similarity of network threat data entities was calculated from disposal behaviors. Finally, the similarity between network threat data and the threat data under network security event handling behavior was fused to generate a data recommendation list for network security events, establishing correlations between network threat domains based on user behavior. Experimental results demonstrate that the algorithm performs best with fusion weight α = 7 and recommended data volume K = 5, achieving a recall of 62.37% and an accuracy of 68.23%. By incorporating disposal behavior similarity in addition to data similarity, the algorithm better reflects factual disposal behavior. Compared with other algorithms, it shows significant advantages in recall and accuracy, particularly when the recommended data volume is less than 10.
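A minimal sketch of the final fusion step, under the assumption that the recommendation score is a weighted combination of TransH-embedding similarity and disposal-behavior similarity; the paper's exact fusion formula is not reproduced here, and the embeddings and behavior matrix below are random stand-ins (α and K follow the reported best settings).

```python
import numpy as np

def cosine_sim(A):
    """Pairwise cosine similarity between row vectors."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    return A @ A.T

rng = np.random.default_rng(1)
n = 50
entity_emb = rng.normal(size=(n, 64))            # stand-in for TransH entity vectors
behavior = rng.integers(0, 2, (n, 20)).astype(float)  # stand-in disposal-behavior matrix

sim_e = cosine_sim(entity_emb)
sim_b = cosine_sim(behavior + 1e-9)              # epsilon guards all-zero rows

# Assumed fusion: weight embedding similarity by alpha, then add behavior
# similarity; the paper reports alpha = 7 and K = 5 as optimal.
alpha, K = 7, 5
score = alpha * sim_e + sim_b
np.fill_diagonal(score, -np.inf)                 # never recommend an item to itself
query = 0
topk = np.argsort(score[query])[::-1][:K]
print("top-K recommendations for entity 0:", topk)
```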
Electronic computers. Computer science
Paddy Yield Prediction in Tamilnadu Delta Region Using MLR-LSTM Model
Sathya P, Gnanasekaran P
Crop yield forecasting has been well studied in recent decades and is significant for protecting food security. Crop growth is a complex phenomenon that depends on various factors. Machine learning and deep learning have emerged as important innovations in this field. We propose to utilize crop, weather, and soil data from agricultural datasets to evaluate yield prediction behavior. Paddy, a staple food crop in India, is chosen for this research. In this paper, we propose a hybrid architecture for paddy yield prediction, namely MLR-LSTM, which combines Multiple Linear Regression and Long Short-Term Memory to exploit their complementary nature. The results are compared with traditional machine learning methods: support vector machines, long short-term memory, and random forests. Evaluation metrics such as coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), mean square error (MSE), F1 score, recall, and precision are used to evaluate the hybrid method and the traditional models. The results indicate that the hybrid model delivers better R2, RMSE, MAE, and MSE values of 0.93, 0.1549, 0.199, and 0.024, respectively.
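One common way to combine MLR and LSTM is to let the linear model capture the covariate-driven component and train the LSTM on its residuals; the sketch below illustrates that pattern on synthetic data. Whether the paper couples the two models exactly this way is an assumption — the snippet only shows the general hybrid idea.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
T, LAG = 300, 6
X = rng.normal(size=(T, 3))                     # weather/soil-style covariates
y = X @ np.array([1.5, -0.8, 0.4]) + np.sin(np.arange(T) / 8) + 0.1 * rng.normal(size=T)

# Stage 1: multiple linear regression via least squares captures the linear part.
Xb = np.c_[X, np.ones(T)]
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
resid = y - Xb @ beta

# Stage 2: an LSTM learns the remaining temporal structure from lagged residuals.
seqs = np.stack([resid[i:i + LAG] for i in range(T - LAG)])[..., None]
targets = resid[LAG:]
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(LAG, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer="adam")
model.fit(seqs, targets, epochs=5, verbose=0)

# Hybrid prediction = linear component + LSTM-predicted residual.
y_hat = Xb[LAG:] @ beta + model.predict(seqs, verbose=0).ravel()
print("hybrid MSE:", round(float(np.mean((y[LAG:] - y_hat) ** 2)), 4))
```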
Electronic computers. Computer science, Cybernetics
An ensemble approach for imbalanced multiclass malware classification using 1D-CNN
Binayak Panda, Sudhanshu Shekhar Bisoyi, Sidhanta Panigrahy
Our dependence on the internet and computer programs demonstrates their significance in our day-to-day lives. Such demand motivates malware developers to create more malware, in both quantity and variety. Defenders constantly face hurdles when attempting to protect against potential hazards and risks because malware authors use code obfuscation techniques: metamorphic and polymorphic variants easily elude the widely used signature-based detection procedures. Researchers are therefore more interested in deep learning than in classical machine learning techniques for analyzing the behavior of such a vast number of malware variants. Beyond classifying malware against benign programs, researchers have also been drawn to classifying malware into families to examine the behavioral differences among them. To investigate the relationships among application programming interface (API) calls within API sequences and classify them, this work uses a one-dimensional convolutional neural network (1D-CNN) to solve a multiclass classification problem. On API sequences, feature vectors for distinct APIs are created using the Word2Vec word embedding approach with the skip-gram model. The one-vs.-rest approach is used to train the 1D-CNN models to categorize malware, and their outputs are then combined with a proposed ModifiedSoftVoting algorithm to improve classification. On the open benchmark dataset Mal-API-2019, the proposed ensembled 1D-CNN architecture achieves improved evaluation scores, with an accuracy of 0.90, a weighted average F1-score of 0.90, and an AUC score above 0.96 for all malware classes.
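A compact sketch of the one-vs.-rest ensemble idea: one binary 1D-CNN per class over embedded API-call sequences, combined by soft voting. The data shapes are synthetic stand-ins, and plain max-probability voting stands in for the paper's ModifiedSoftVoting, whose exact rule is not reproduced here.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
N_CLASSES, SEQ_LEN, EMB = 3, 100, 32          # small placeholder sizes
X = rng.normal(size=(300, SEQ_LEN, EMB))      # stand-in for Word2Vec API embeddings
y = rng.integers(0, N_CLASSES, 300)

def binary_cnn():
    """One-vs.-rest head: a 1D convolution over the API-call sequence."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv1D(64, 5, activation="relu", input_shape=(SEQ_LEN, EMB)),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

# Train one binary model per class on class-vs-rest labels.
heads = []
for c in range(N_CLASSES):
    m = binary_cnn()
    m.compile(loss="binary_crossentropy", optimizer="adam")
    m.fit(X, (y == c).astype(float), epochs=2, verbose=0)
    heads.append(m)

# Plain soft voting: stack each head's probability and take the argmax.
probs = np.hstack([m.predict(X, verbose=0) for m in heads])
pred = probs.argmax(axis=1)
print("train accuracy of the voted ensemble:", round(float((pred == y).mean()), 3))
```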
Electronic computers. Computer science
A metaheuristic with a neural surrogate function for Word Sense Disambiguation
Azim Keshavarzian Nodehi, Nasrollah Moghadam Charkari
Word Sense Disambiguation (WSD) is one of the earliest problems in natural language processing; it aims to determine the correct sense of words in context. The semantic information provided by WSD systems is highly beneficial to many tasks, such as machine translation, information extraction, and semantic parsing. In this work, a new approach for WSD is proposed which uses a neural network as a surrogate fitness function in a metaheuristic algorithm. A new method for simultaneously training word and sense embeddings is also proposed. Accordingly, the node2vec algorithm is employed on the WordNet graph to generate sequences containing both words and senses. These sequences are then used, along with paragraphs from Wikipedia, in the word2vec algorithm to generate embeddings for words and senses at the same time. To address data imbalance in this task, sense probability distributions extracted from the training corpus are used in the search process of the proposed simulated annealing algorithm. Furthermore, we introduce a new approach for clustering and mapping senses in the WordNet graph, which considerably improves the accuracy of the proposed method. In this approach, nodes in the WordNet graph are clustered under the condition that no two senses of the same word may be present in one cluster. Then, repeatedly, all nodes in each cluster are mapped to a randomly selected node from that cluster, so the representative node can take advantage of the training instances of all the other nodes in the cluster. The proposed method is trained on the SemCor dataset, with SemEval-2015 used as the validation set. The final evaluation of the system is performed on SensEval-2, SensEval-3, SemEval-2007, SemEval-2013, SemEval-2015, and the concatenation of all five datasets. Performance is also evaluated on the four content-word categories: nouns, verbs, adjectives, and adverbs. Experimental results show that the proposed method achieves accuracies ranging from 74.8 to 84.6 percent across the ten evaluation categories, close to, and in some cases better than, the state of the art for this task.
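A minimal sketch of the core loop: simulated annealing over candidate sense assignments where the fitness of each state comes from a surrogate, and neighbor sampling is biased by per-word sense-probability distributions. The surrogate here is a stub and the probabilities are synthetic stand-ins; in the paper both come from a trained neural network and the training corpus.

```python
import math
import random

random.seed(0)
N_WORDS, N_SENSES = 8, 4

# Stand-in for corpus-derived sense probability distributions (rows sum to 1).
raw = [[random.random() for _ in range(N_SENSES)] for _ in range(N_WORDS)]
sense_p = [[p / sum(row) for p in row] for row in raw]

def surrogate_fitness(state):
    """Stand-in for the neural surrogate: scores a full sense assignment.
    In the paper this is a trained network over word/sense embeddings."""
    return sum(sense_p[w][s] for w, s in enumerate(state))

def biased_neighbor(state):
    """Resample one word's sense, weighted by its sense distribution."""
    w = random.randrange(N_WORDS)
    new = state[:]
    new[w] = random.choices(range(N_SENSES), weights=sense_p[w])[0]
    return new

state = [random.randrange(N_SENSES) for _ in range(N_WORDS)]
f = surrogate_fitness(state)
temp = 1.0
for step in range(2000):
    cand = biased_neighbor(state)
    fc = surrogate_fitness(cand)
    # Accept improvements always, worse moves with Boltzmann probability.
    if fc >= f or random.random() < math.exp((fc - f) / temp):
        state, f = cand, fc
    temp *= 0.995                               # geometric cooling schedule
print("best sense assignment:", state, "fitness:", round(f, 3))
```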
Cybernetics, Electronic computers. Computer science
Agent-based multi-tier SLA negotiation for intercloud
Lin Li, Li Liu, Shalin Huang
et al.
The evolving intercloud enables idle resources to be traded among cloud providers to optimize utilization and improve the cost-effectiveness of the service for cloud consumers. However, this multi-tier dynamic market raises several challenges, as cloud providers not only compete for consumer requests but also cooperate with each other. To establish a healthier and more efficient intercloud ecosystem, this paper proposes a multi-tier agent-based fuzzy constraint-directed negotiation (AFCN) model for a fully distributed negotiation environment without a broker to coordinate the negotiation process. The novelty of AFCN is its use of a fuzzy membership function to represent the imprecise preferences of an agent, which not only reveals the opponent's behavioral preference but also specifies the extent to which feasible solutions suit the agent's behavior. Moreover, this information can guide each tier of negotiation toward a more favorable proposal. Thus, the multi-tier AFCN can improve negotiation performance and integrated solution capacity in the intercloud. The experimental results show that the proposed multi-tier AFCN model outperforms other agent negotiation models in terms of the level of satisfaction, the ratio of successful negotiations, the average revenue of the cloud provider, and the buying price of a unit cloud resource, demonstrating the efficiency and scalability of the intercloud.
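To illustrate the fuzzy-membership idea in isolation, the sketch below scores counter-offers with triangular membership functions over two issues and accepts when the aggregated satisfaction clears a concession threshold. The membership shapes, issues, and thresholds are illustrative assumptions, not the AFCN model itself.

```python
def tri_membership(x, lo, peak, hi):
    """Triangular fuzzy membership: 0 outside [lo, hi], 1 at the peak."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

def satisfaction(offer):
    """Aggregate per-issue memberships with a min (conjunctive) operator."""
    price_ok = tri_membership(offer["price"], 2.0, 3.5, 6.0)
    latency_ok = tri_membership(offer["latency_ms"], 0.0, 10.0, 50.0)
    return min(price_ok, latency_ok)

# A consumer agent screens provider counter-offers against a threshold that
# relaxes over negotiation rounds (concession).
offers = [{"price": 5.5, "latency_ms": 30}, {"price": 4.0, "latency_ms": 15}]
threshold = 0.6
for rnd, offer in enumerate(offers, start=1):
    s = satisfaction(offer)
    print(f"round {rnd}: satisfaction={s:.2f}",
          "accept" if s >= threshold else "reject")
    threshold *= 0.9   # concede a little each round
```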
Computer engineering. Computer hardware, Electronic computers. Computer science
Braid-DB: Toward AI-Driven Science with Machine Learning Provenance
J. Wozniak, Zhengchun Liu, Rafael Vescovi
et al.
7 citations
Computer Science
Ontology Based Governance for Employee Services
Eleftherios Tzagkarakis, Haridimos Kondylakis, George Vardakis
et al.
Advances in computers and communications have significantly changed almost every aspect of our daily activity. Amid this maze of change, governments around the world cannot remain indifferent. Public administration is evolving and taking on a new form through e-government. A large number of organizations have set up websites, establishing an online interface with the citizens and businesses with which they interact. However, most organizations, especially the decentralized agencies of ministries and local authorities, do not offer their information electronically, despite providing many information services that are not integrated with other e-government services. Moreover, these services mainly focus on serving citizens and businesses and less on providing services to employees. In this paper, we describe the process of developing an ontology to support the administrative procedures of decentralized government organizations. Finally, we describe the development of an e-government portal that provides employees with services that are processed online, using this ontology for modeling and data management.
Industrial engineering. Management engineering, Electronic computers. Computer science
Evaluation of a calibration transfer between a bench top and portable Mid-InfraRed spectrometer for cocaine classification and quantification
J. Eliaerts, N. Meert, P. Dardenne
et al.
A portable Fourier Transform Mid-InfraRed (FT-MIR) spectrometer with Attenuated Total Reflectance (ATR) sampling is used for daily routine screening of seized powders. Earlier work showed that ATR-FT-MIR combined with Support Vector Machine (SVM) algorithms significantly improved the screening method, yielding a reliable and straightforward classification and quantification tool for both cocaine and levamisole. However, can this tool be transferred to new (hand-held) devices without losing the extensive dataset? The objective of this study was to perform a calibration transfer between a newly purchased bench-top (BT) spectrometer and a portable (P) spectrometer with existing calibration models. Both instruments are from the same brand and have identical characteristics and acquisition parameters (FT instrument, resolution of 4 cm-1, and wavenumber range 4000 to 500 cm-1). The original SVM classification model (n = 515) and SVM quantification model (n = 378) were considered for the transfer trial. Three calibration transfer strategies were assessed: 1) adjustment of slope and bias; 2) correction of spectra from the new instrument BT to P using Piecewise Direct Standardization (PDS); and 3) building a new mixed-instrument model with spectra from both instruments. For each approach, additional cocaine powders were measured (n = 682) and the results were compared with GC-MS and GC-FID. Developing a mixed-instrument model was the most successful approach in terms of performance. A mixed-model strategy allows models developed in the laboratory to be applied to portable instruments used on-site, and vice versa, and offers opportunities to exchange data within a network of forensic laboratories using other FT-MIR spectrometers.
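Strategy 1 (slope-and-bias adjustment) is simple enough to show directly: fit a line mapping the portable instrument's predictions onto the reference instrument's and apply it at prediction time. The concentrations and instrument distortions below are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
true_conc = rng.uniform(10, 90, 40)            # % cocaine in the transfer samples

# Stand-ins: the bench-top (BT) model is assumed accurate on its own spectra,
# while the portable (P) instrument shows a systematic slope/bias distortion.
pred_bt = true_conc + rng.normal(0, 1.0, 40)
pred_p = 0.92 * true_conc + 4.0 + rng.normal(0, 1.5, 40)

# Strategy 1: least-squares fit of BT predictions against P predictions.
A = np.c_[pred_p, np.ones_like(pred_p)]
(slope, bias), *_ = np.linalg.lstsq(A, pred_bt, rcond=None)

corrected = slope * pred_p + bias
rmse_before = np.sqrt(np.mean((pred_p - true_conc) ** 2))
rmse_after = np.sqrt(np.mean((corrected - true_conc) ** 2))
print(f"slope={slope:.3f} bias={bias:.2f} "
      f"RMSE before={rmse_before:.2f} after={rmse_after:.2f}")
```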
21 citations
Chemistry, Medicine
Data pre-localization for 3D tunnel modeling: developments and evaluations
Christophe Heinkelé, Pierre Charbonnier, Philippe Foucher
et al.
This article describes the implementation of a method for decimeter-level pre-localization of images within large data volumes acquired in navigable and road tunnels. It relies on a simplified visual odometry technique, which makes it fast and easy to deploy. The method structures the data so as to improve subsequent processing, such as 3D reconstruction by photogrammetry. It is evaluated on localization accuracy by comparison with more conventional localization techniques. The data structuring that results from localizing the images within the structure is the most important aspect of the work presented here.
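As a sketch of the kind of visual odometry described, the OpenCV snippet below estimates relative camera motion between consecutive tunnel frames from ORB feature matches and the essential matrix. The camera intrinsics are placeholder assumptions, and the paper's specific simplifications are not reproduced.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate relative rotation/translation between two frames (ORB + essential matrix)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:500]
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t  # t is up to scale; odometry needs an external scale reference

# Placeholder intrinsics; in practice these come from camera calibration.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
# Usage: R, t = relative_pose(cv2.imread("f0.png", 0), cv2.imread("f1.png", 0), K)
```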
Instruments and machines, Applied optics. Photonics
Potential applications of peptide nucleic acid in biomedical domain
Kshitij RB Singh, Parikipandla Sridevi, Ravindra Pratap Singh
Peptide nucleic acids (PNAs) are synthetic DNA/RNA analogs with a 2-([2-aminoethyl]amino)acetic acid backbone. They possess unique antisense and antigene properties owing to their inhibitory effect on transcription and translation, and they bind complementary RNA/DNA with high affinity and specificity. Hence, many methods utilizing PNA have been designed and developed to date for the diagnosis and treatment of various diseases, including cancer, AIDS, and human papillomavirus. PNAs are widely used in polymerase chain reaction modulation/mutation analysis, fluorescence in-situ hybridization, and as microarray probes; they are also utilized in many in-vitro and in-vivo assays and in developing micro- and nano-sized biosensor/chip/array technologies. Earlier reviews focused only on PNA properties, structure, and modifications related to diagnostics and therapeutics; our review emphasizes PNA properties and synthesis along with potential applications in diagnosis and therapeutics. Furthermore, prospects for biomedical applications of PNAs are discussed in depth.
Engineering (General). Civil engineering (General), Electronic computers. Computer science
Can Artificial Entities Assert?
O. Freiman, Boaz Miller
There is an ongoing debate over whether technological instruments, devices, or machines can assert or testify. A standard view in epistemology is that only humans can testify. However, the notion of quasi-testimony acknowledges that technological devices can assert or testify under some conditions, while maintaining that humans and machines are not the same. Indeed, there are four relevant differences between humans and instruments. First, unlike human assertion, machine assertion is not imaginative or playful. Second, machine assertion is pre-scripted and context-restricted: computers currently cannot easily switch contexts or make meaningful, relevant assertions in contexts for which they were not programmed. Third, while both humans and computers make errors, they do so in different ways. Computers are very sensitive to small errors in input, which may cause them to make big errors in output; moreover, automatic error control is based on finding irregularities in data without trying to establish whether they make sense. Fourth, testimony is produced by a human with moral worth, while quasi-testimony is not. Ultimately, the notion of quasi-testimony can serve as a bridge between different philosophical fields that deal with instruments and testimony as sources of knowledge, allowing them to converse and agree on a shared description of reality while maintaining their distinct conceptions and ontological commitments about knowledge, humans, and nonhumans.
The Epistemic Importance of Technology in Computer Simulation and Machine Learning
M. Resch, Andrea R. Kaminski
18 citations
Computer Science
Digitalization of Manufacturing Processes: Proposal and Experimental Results
R. P. Rolle, Vinícius de O. Martucci, E. P. Godoy
Industry 4.0 is a new industrial paradigm that aims to fulfill the need for more reliable, flexible, and efficient industrial processes by implementing digital technology on the shop floor. The development of smart devices, new software tools, and communication protocols makes it possible to connect real machines and instruments to the virtual space, enabling more sophisticated control and even future predictions. Digital twins are an approach for intercommunicating physical and virtual machines or systems, whose main goal is to improve performance of the real system by using information from virtual tools that simulate its physical parts. Typical applications involve performance analysis, bottleneck detection, failure prediction, and others. The ambitious aim of replicating the behavior of whole machines or systems also brings many challenges: modeling and simulating complex systems at acceptable computational cost, ensuring real-time communication, and developing methods for deep analysis are among the goals for researchers and vendors. This paper presents an architecture proposal for the practical implementation of digital twins, based on an open-source tool for process control, lightweight communication protocols, and flexible tools for modeling and 3D visualization. The implementation is meant to make the platform as general as possible, so that a myriad of machines and production systems can be modeled and represented on the digital twin architecture.
15 citations
Computer Science
TRIST: TREE RECOGNITION INTELLIGENT SYSTEM
Laura ONAC
Plant recognition is a challenging computer vision problem due to the great variation in shape and texture among plant organs within the same species. This paper proposes a lightweight but reasonably deep convolutional neural network architecture able to carry out this classification task. Multiple experiments were conducted with the proposed network architecture on the MEW2012 and Swedish leaf datasets. The experiments showed promising results, outperforming current state-of-the-art systems that rely exclusively on a convolutional network for plant classification.
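A minimal Keras sketch of a lightweight-but-deep convolutional classifier of the general kind described; the depth, filter counts, input size, and class count are illustrative assumptions rather than the TRIST architecture itself.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_SPECIES, IMG = 15, 128   # placeholder class count and input resolution

def conv_block(x, filters):
    """Two 3x3 convolutions followed by downsampling, a common lightweight motif."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling2D()(x)

inp = layers.Input(shape=(IMG, IMG, 3))
x = inp
for f in (32, 64, 128, 256):          # reasonably deep, yet few parameters
    x = conv_block(x, f)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
out = layers.Dense(NUM_SPECIES, activation="softmax")(x)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```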
Electronic computers. Computer science
Botnet detection using graph-based feature clustering
Sudipta Chowdhury, Mojtaba Khanzadeh, Ravi Akula
et al.
Detecting botnets in a network is crucial because bots affect numerous areas such as cyber security, finance, health care, and law enforcement. Botnets are becoming more sophisticated and dangerous day by day, and most existing rule-based and flow-based detection methods may not be capable of detecting bot activities efficiently and effectively. Hence, designing a robust and fast botnet detection method is highly significant. In this study, we propose a novel botnet detection methodology based on topological features of nodes within a graph: in-degree, out-degree, in-degree weight, out-degree weight, clustering coefficient, node betweenness, and eigenvector centrality. A self-organizing map clustering method is applied to establish clusters of nodes in the network based on these features. Our method is capable of isolating bots in small clusters while containing the majority of normal nodes in one large cluster, so bots can be detected by searching a limited number of nodes. A filtering procedure is also developed to further enhance efficiency by removing inactive nodes from consideration. The methodology is verified using the CTU-13 datasets and benchmarked against a classification-based detection method. The results show that the proposed method can efficiently detect bots despite their varying behaviors.
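A sketch of the feature-extraction half of the pipeline using networkx, followed by self-organizing map clustering via the MiniSom library. The graph here is random stand-in traffic, and MiniSom is one plausible SOM implementation, not necessarily the authors' tooling.

```python
import numpy as np
import networkx as nx
from minisom import MiniSom   # pip install minisom

# Stand-in communication graph; in the paper this comes from flow records.
G = nx.gnp_random_graph(200, 0.03, seed=0, directed=True)
for u, v in G.edges:
    G[u][v]["weight"] = np.random.default_rng(u * 211 + v).uniform(1, 10)

und = G.to_undirected()
clust = nx.clustering(und)
btw = nx.betweenness_centrality(G)
eig = nx.eigenvector_centrality(und, max_iter=1000)

# The seven topological features from the paper, one row per node.
feats = np.array([[G.in_degree(n), G.out_degree(n),
                   G.in_degree(n, weight="weight"), G.out_degree(n, weight="weight"),
                   clust[n], btw[n], eig[n]] for n in G.nodes])
feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-9)

# Self-organizing map: nodes mapped to the same grid cell form a cluster;
# bots are expected to land in the small cells.
som = MiniSom(4, 4, feats.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(feats, 1000)
cells = [som.winner(f) for f in feats]
sizes = {c: cells.count(c) for c in set(cells)}
print("cluster sizes:", sorted(sizes.values(), reverse=True))
```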
Computer engineering. Computer hardware, Information technology