The rapid integration of the Internet of Things (IoT) into healthcare ecosystems has revolutionized patient monitoring and data accessibility; however, it has simultaneously expanded the cyber-attack surface, leaving sensitive medical data vulnerable to sophisticated breaches. This systematic literature review (SLR) addresses the critical challenge of balancing high-level security with the severe resource constraints of medical sensors and edge devices. By synthesizing evidence from 80 high-impact studies, including 18 primary research articles published between 2022 and 2025, this paper evaluates the quality and efficacy of emerging cryptographic frameworks. The methodology utilizes a rigorous quality assessment framework to categorize research into "Strong," "Moderate," and "Weak" tiers. Key findings reveal a significant paradigm shift toward lightweight symmetric ciphers, such as GIFT and PRESENT, and certificateless authentication protocols like ELWSCAS, which reduce communication overhead in narrow-band environments. The analysis further explores the role of blockchain-assisted decentralization and DNA-based encryption in mitigating single-point-of-failure risks and providing high entropy. While decentralized models significantly enhance data integrity, they frequently encounter a scalability wall regarding transaction latency. Furthermore, the review assesses quantum readiness, noting that while lattice-based standards are being ported to microcontrollers, memory footprints remain a barrier for simpler sensors. Ultimately, this SLR maps the current technical frontiers and provides a strategic roadmap for future research, emphasizing the transition toward lightweight, quantum-resistant architectures as the next essential step in securing the global healthcare IoT infrastructure.
Conflict of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Funding
The research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Data Fabrication/Falsification Statement
The author(s) declare that no data has been fabricated, falsified, or manipulated in this study.
Participant Consent
The authors confirm that informed consent was obtained from all participants, and confidentiality was duly maintained.
Copyright and Licensing
For all articles published in the NIJEC journal, copyright (c) is retained by the author(s).
Following in the footsteps of the success of Mathlib - the centralised library of formalised mathematics in Lean - CSLib is a rapidly growing centralised library of formalised computer science and software. In this paper, we present its founding technical principles, operation, abstractions, and semantic framework. We contribute reusable semantic interfaces (reduction and labelled transition systems), proof automation, CI/testing support for maintaining automation and compatibility with Mathlib, and the first substantial developments of languages and models.
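CSLib's actual definitions are not reproduced here, but a hypothetical Lean sketch (names and fields invented) illustrates how a labelled transition system interface and its multi-step closure might be phrased:

```lean
-- Hypothetical sketch, not CSLib's actual code: a labelled transition system
-- as a structure, plus its multi-step (trace) closure as an inductive Prop.
structure LTS (State Label : Type) where
  Step : State → Label → State → Prop

inductive LTS.MultiStep {State Label : Type} (M : LTS State Label) :
    State → List Label → State → Prop
  | refl (s : State) : LTS.MultiStep M s [] s
  | step {s t u : State} {l : Label} {ls : List Label} :
      M.Step s l t → LTS.MultiStep M t ls u → LTS.MultiStep M s (l :: ls) u
```

A reduction system would follow the same pattern with an unlabelled `Step : State → State → Prop`.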
Urslla Uchechi Izuazu, Cosmas Ifeanyi Nwakanma, Dong-Seong Kim
et al.
Abstract Deep learning-based intrusion detection systems (DL-IDS) have proven effective in detecting cyber threats. However, their vulnerability to adversarial attacks and environmental noise, particularly in industrial settings, limits practical application. Current IDS models often assume ideal conditions, overlooking noise and adversarial manipulations, leading to degraded performance when deployed in real-world environments. Additionally, the black-box nature of DL models complicates decision-making, especially in industrial control system (ICS) networks, where understanding model behavior is crucial. This paper introduces the eXplainable Cyber-Threat Detection Framework (XC-TDF), a novel solution designed to overcome these challenges. XC-TDF enhances robustness against noise and adversarial attacks using regularization and adversarial training, respectively, and improves transparency through an eXplainable Artificial Intelligence (XAI) module. Simulation results demonstrate its effectiveness, showing resilience to perturbation and achieving accuracies of 100% and 99.4% on the Wustl-IIoT2021 and Edge-IIoT datasets, respectively.
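The paper's model is not reproduced here; as a generic illustration of the adversarial-training idea that XC-TDF builds on, the sketch below trains a one-weight logistic model on both clean inputs and FGSM-perturbed inputs. The model, data, and hyperparameters are all invented for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(w, x, y, eps=0.1):
    # Fast Gradient Sign Method: move the input in the direction that
    # increases the binary cross-entropy of p = sigmoid(w * x)
    grad_x = (sigmoid(w * x) - y) * w
    return x + eps * (1.0 if grad_x > 0 else -1.0)

def adversarial_train(data, epochs=200, lr=0.5, eps=0.1):
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            for xi in (x, fgsm(w, x, y, eps)):       # clean + adversarial input
                w -= lr * (sigmoid(w * xi) - y) * xi  # gradient step on w
    return w

# Toy separable data: positive inputs are class 1, negative inputs class 0
DATA = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]
w = adversarial_train(DATA)
acc = sum((sigmoid(w * x) > 0.5) == bool(y) for x, y in DATA) / len(DATA)
```

Training on the perturbed copies as well as the clean ones is what buys robustness: the decision boundary must stay correct within an eps-ball around each sample.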
Muhammad Attique Khan, Usama Shafiq, Ameer Hamza
et al.
Abstract Deep learning has significantly contributed to medical imaging and computer-aided diagnosis (CAD), providing accurate disease classification and diagnosis. However, challenges such as inter- and intra-class similarities, class imbalance, and computational inefficiencies due to numerous hyperparameters persist. This study aims to address these challenges by presenting a novel deep-learning framework for classifying and localizing gastrointestinal (GI) diseases from wireless capsule endoscopy (WCE) images. The proposed framework begins with dataset augmentation to enhance training robustness. Two novel architectures, Sparse Convolutional DenseNet201 with Self-Attention (SC-DSAN) and CNN-GRU, are fused at the network level using a depth concatenation layer, avoiding the computational costs of feature-level fusion. Bayesian Optimization (BO) is employed for dynamic hyperparameter tuning, and an Entropy-controlled Marine Predators Algorithm (EMPA) selects optimal features. These features are classified using a Shallow Wide Neural Network (SWNN) and traditional classifiers. Experimental evaluations on the Kvasir-V1 and Kvasir-V2 datasets demonstrate superior performance, achieving accuracies of 99.60% and 95.10%, respectively. The proposed framework offers improved accuracy, precision, and computational efficiency compared to state-of-the-art models. The proposed framework addresses key challenges in GI disease diagnosis, demonstrating its potential for accurate and efficient clinical applications. Future work will explore its adaptability to additional datasets and optimize its computational complexity for broader deployment.
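The SC-DSAN and CNN-GRU architectures themselves are not reproduced here; the toy sketch below only illustrates what network-level fusion via a depth-concatenation layer means, with trivial stand-in branches (everything below is invented for illustration).

```python
def branch_a(x):
    # Stand-in for the first network branch: one toy feature map (channel)
    return [[v * 2 for v in row] for row in x]

def branch_b(x):
    # Stand-in for the second network branch: another toy feature map
    return [[v + 1 for v in row] for row in x]

def depth_concat(map_a, map_b):
    # Channels-first layout: depth concatenation just stacks the channel
    # lists, so no feature-level fusion pass over the activations is needed
    return [map_a, map_b]

x = [[1, 2], [3, 4]]
fused = depth_concat(branch_a(x), branch_b(x))
n_channels = len(fused)
```

The point of fusing at the network level is that downstream layers see both branches' evidence at once, while the fusion step itself costs only a memory reshuffle.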
Daisuke Ishihara, Motonobu Kimura, Ryotaro Suetsugu
et al.
In this study, we propose a 2.5-dimensional (2.5-D) structure approach for insect-mimetic flapping-wing air vehicles (FWAVs). The proposed approach includes design and fabrication methods. To the best of our knowledge, this study is the first to develop a flapping system for FWAVs without any post-assembly of structural components. The proposed structure consists of a transmission, a supporting frame, and elastic wings. The transmission transforms the small translational displacement produced by a piezoelectric bimorph into a large rotational displacement of the wings. Its size is reduced using the proposed design method. The 2.5-D structure is then fabricated using the proposed polymer MEMS micromachining method. The presented micro flapping system flaps the wings with a stroke angle and flapping frequency comparable to those of actual small insects by exploiting resonance. The results confirm that the proposed approach can miniaturize FWAVs.
Daniel Apai, Rory Barnes, Matthew M. Murphy
et al.
The search for extraterrestrial life in the Solar System and beyond is a key science driver in astrobiology, planetary science, and astrophysics. A critical step is the identification and characterization of potential habitats, both to guide the search and to interpret its results. However, a well-accepted, self-consistent, flexible, and quantitative terminology and method of assessment of habitability are lacking. Our paper fills this gap based on a three-year study by the NExSS Quantitative Habitability Science Working Group. We reviewed past studies of habitability, but found that the lack of a universally valid definition of life prohibits a universally applicable definition of habitability. A more nuanced approach is needed. We introduce a quantitative habitability assessment framework (QHF) that enables self-consistent, probabilistic assessment of the compatibility of two models: first, a habitat model, which describes the probability distributions of key conditions in the habitat; second, a viability model, which describes the probability that a metabolism is viable given a set of environmental conditions. We provide an open-source implementation of this framework and four examples as a proof of concept: (a) comparison of two exoplanets for observational target prioritization; (b) interpretation of atmospheric O2 detection in two exoplanets; (c) subsurface habitability of Mars; and (d) ocean habitability in Europa. These examples demonstrate that our framework can self-consistently inform astrobiology research over a broad range of questions. The proposed framework is modular so that future work can expand the range and complexity of models available, both for habitats and for metabolisms.
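The QHF's open-source implementation is not reproduced here; the toy Monte Carlo sketch below only illustrates the core compatibility idea: a habitat model (a distribution over an environmental condition) combined with a viability model (the probability a metabolism is viable at that condition). All numbers are invented.

```python
import random

random.seed(0)

def habitat_sample():
    # Habitat model: temperature in kelvin, Gaussian around 280 K
    return random.gauss(280.0, 15.0)

def viability(temp_k):
    # Viability model: toy metabolism viable between 273 K and 320 K
    return 1.0 if 273.0 <= temp_k <= 320.0 else 0.0

def habitability(n=100_000):
    # Monte Carlo estimate of E_habitat[viability(condition)]
    return sum(viability(habitat_sample()) for _ in range(n)) / n

h = habitability()
```

Because both inputs are probabilistic, the output is a probability rather than a binary "habitable/uninhabitable" label, which is what makes comparisons across habitats and metabolisms self-consistent.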
This paper reviews literature pertaining to the development of data science as a discipline, current issues with data bias and ethics, and the role that the discipline of information science may play in addressing these concerns. Information science research and researchers have much to offer for data science, owing to their background as transdisciplinary scholars who apply human-centered and social-behavioral perspectives to issues within natural science disciplines. Information science researchers have already contributed to a humanistic approach to data ethics within the literature and an emphasis on data science within information schools all but ensures that this literature will continue to grow in coming decades. This review article serves as a reference for the history, current progress, and potential future directions of data ethics research within the corpus of information science literature.
User experience can serve as a measure of an application's acceptance. This measurement aims to provide input for evaluating the development, improvement, and maintenance of the Internal Quality Assurance System (SPMI) application of UIN Maulana Malik Ibrahim Malang, based on the level of user experience measured with the User Experience Questionnaire (UEQ). The six UEQ aspects were chosen for their advantages over alternative instruments. User trust can decline, and even turn to frustration, when users encounter failures in the user experience; the concern is that data entered into the SPMI application may be inaccurate or entered carelessly, which could affect the evaluation and improvement of each study program. This research surveyed the SPMI admins of each study program at UIN Maulana Malik Ibrahim Malang. The measurement showed positive, above-average results for perspicuity (1.712), dependability (1.470), stimulation (1.167), and novelty (0.917), while attractiveness (1.626) and efficiency (1.864) fell into the good category. Efforts are still needed, however, to improve the novelty aspect so that future development is more innovative.
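As background on how UEQ aspect scores such as those above are typically derived, the sketch below recentres 1-7 item answers onto the UEQ's -3..+3 scale and averages them per aspect. The item names and responses are invented toy data, not the study's.

```python
import statistics

# Toy data: respondent -> item answers on the raw 1..7 scale
RESPONSES = {
    "r1": {"perspicuity_1": 6, "perspicuity_2": 5},
    "r2": {"perspicuity_1": 7, "perspicuity_2": 5},
}

def aspect_score(responses, prefix):
    # Recentre each answer to -3..+3 (subtract the scale midpoint 4),
    # then average all items belonging to the aspect across respondents
    values = [v - 4 for answers in responses.values()
              for item, v in answers.items() if item.startswith(prefix)]
    return statistics.mean(values)

score = aspect_score(RESPONSES, "perspicuity")
```

On this scale, values above roughly 0.8 are conventionally read as a positive evaluation, which is how scores like 1.712 for perspicuity are interpreted.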
Md Mijanur Rahman, Ashik Uzzaman, Sadia Islam Sami
et al.
Abstract This study introduces a novel encoder–decoder framework based on deep neural networks and provides a thorough investigation into the field of automatic image captioning systems. The suggested model uses a “long short‐term memory” decoder for word prediction and sentence construction, and a “convolutional neural network” as an encoder that is skilled at object recognition and spatial information retention. The long short‐term memory network functions as a sequence processor, generating a fixed‐length output vector for final predictions, while the VGG‐19 model is utilized as an image feature extractor. For both training and testing, the study uses a variety of images from open‐access datasets, such as Flickr8k, Flickr30k, and MS COCO. The Python platform is used for implementation, with Keras and TensorFlow as backends. The experimental findings, which were assessed using the “bilingual evaluation understudy” metric, demonstrate the effectiveness of the suggested methodology in automatically captioning images. By addressing spatial relationships in images and producing logical, contextually relevant captions, the paper advances image captioning technology. Insightful ideas for future research directions emerge from the discussion of the difficulties faced during the experimentation phase. By establishing a strong neural network architecture for automatic image captioning, this study creates opportunities for future advancement and improvement in the area.
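The trained networks are not reproduced here; the sketch below shows only the control flow of such an encoder-decoder captioner, with a stub feature vector standing in for the CNN encoder (VGG-19 in the paper) and a deterministic toy scorer standing in for the LSTM decoder step. Everything below is invented for illustration.

```python
VOCAB = ["<start>", "a", "dog", "runs", "<end>"]

def encode_image(_image):
    # Stand-in for the CNN encoder: a fixed feature vector
    return [0.2, 0.7, 0.1]

def next_word_scores(features, partial_caption):
    # Stand-in for one decoder step: toy scores keyed on the last emitted
    # word (a real LSTM would condition on the full hidden state + features)
    table = {
        "<start>": {"a": 0.9},
        "a": {"dog": 0.8},
        "dog": {"runs": 0.7},
        "runs": {"<end>": 0.9},
    }
    last = partial_caption[-1]
    return {w: table.get(last, {}).get(w, 0.01) for w in VOCAB}

def greedy_caption(image, max_len=10):
    features = encode_image(image)
    caption = ["<start>"]
    for _ in range(max_len):
        scores = next_word_scores(features, caption)
        word = max(scores, key=scores.get)   # greedy decoding
        caption.append(word)
        if word == "<end>":
            break
    return " ".join(caption[1:-1])

caption = greedy_caption(None)
```

Greedy decoding is the simplest policy; captioning systems of this kind often swap in beam search at this step for better sentences.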
Deep learning has enabled major advances across most areas of artificial intelligence research. This remarkable progress extends beyond mere engineering achievements and holds significant relevance for the philosophy of cognitive science. Deep neural networks have made significant strides in overcoming the limitations of older connectionist models that once occupied the centre stage of philosophical debates about cognition. This development is directly relevant to long-standing theoretical debates in the philosophy of cognitive science. Furthermore, ongoing methodological challenges related to the comparative evaluation of deep neural networks stand to benefit greatly from interdisciplinary collaboration with philosophy and cognitive science. The time is ripe for philosophers to explore foundational issues related to deep learning and cognition; this perspective paper surveys key areas where their contributions can be especially fruitful.
Edafetanure-Ibeh Faith, Evah Patrick Tamarauefiye, Mark Uwuoruya Uyi
The aim of attending an educational institution is learning, which is sought for independence of thought and ideology as well as physical and material independence. Physical and material independence comes from working in industry, that is, from joining the country's working population. Students therefore need a way to adapt easily to the real world upon graduation, equipped with the required skills and knowledge. This has been a challenge in some computer science departments, where the gap becomes apparent only after students begin working in industry. The objectives of this project are to design, develop, and evaluate a web-based chat application for the industry and the computer science department. The waterfall system development life cycle is used to establish the system project plan, because it provides an overall list of the processes and sub-processes required to develop a system. The descriptive research method applied in this project is documentary analysis of previous articles. The result of the project is the design, development, and evaluation of a web-based chat application that aids communication between the industry and the computer science department; the application stores the exchanged information for later use. Recommendations include raising awareness of the software among companies and universities, implementing the industry's suggestions in the computer science curriculum, adopting the software in universities across Nigeria, and extending its use beyond computer science to other fields of study.
Kevin Tran-Nguyen, Caroline Berger, Roxanne Bennett
et al.
Background: Postfracture acute pain is often inadequately managed in older adults. Mobile health (mHealth) technologies can offer opportunities for self-management of pain; however, few apps exist for acute pain management after a fracture, and none are designed for an older adult population.
Objective: This study aims to design, develop, and evaluate an mHealth app prototype using a human-centered design approach to support older adults in the self-management of postfracture acute pain.
Methods: This study used a multidisciplinary and user-centered design approach. Overall, 7 stakeholders (ie, 1 clinician-researcher specialized in internal medicine, 2 user experience designers, 1 computer science researcher, 1 clinical research assistant, and 2 pharmacists) from the project team, together with 355 external stakeholders, were involved throughout our user-centered development process that included surveys, requirement elicitation, participatory design workshops, mobile app design and development, mobile app content development, and usability testing. We completed this study in 3 phases. We analyzed data from prior surveys administered to 305 members of the Canadian Osteoporosis Patient Network and 34 health care professionals to identify requirements for designing a low-fidelity prototype. Next, we facilitated 4 participatory design workshops with 6 participants for feedback on content, presentation, and interaction with our proposed low-fidelity prototype. After analyzing the collected data using thematic analysis, we designed a medium-fidelity prototype. Finally, to evaluate our medium-fidelity prototype, we conducted usability tests with 10 participants. The results informed the design of our high-fidelity prototype. Throughout all the phases of this development study, we incorporated input from health professionals to ensure the accuracy and validity of the medical content in our prototypes.
Results: We identified 3 categories of functionalities necessary to include in the design of our initial low-fidelity prototype: the need for support resources, diary entries, and access to educational materials. We then conducted a thematic analysis of the data collected in the design workshops, which revealed 4 themes: feedback on the user interface design and usability, requests for additional functionalities, feedback on medical guides and educational materials, and suggestions for additional medical content. On the basis of these results, we designed a medium-fidelity prototype. All the participants in the usability evaluation tests found the medium-fidelity prototype useful and easy to use. On the basis of the feedback and difficulties experienced by participants, we adjusted our design in preparation for the high-fidelity prototype.
Conclusions: We designed, developed, and evaluated an mHealth app to support older adults in the self-management of pain after a fracture. The participants found our proposed prototype useful for managing acute pain and easy to interact with and navigate. The clinical outcomes and long-term effects of our proposed mHealth app will be evaluated in future work.
Malaria is a disease caused by a parasite named Plasmodium. A total of 250,644 malaria cases were recorded in Indonesia in 2019, with the highest burden in Papua Province, which accounted for 86% of cases (216,380). In Papua Province, malaria affects all age groups, and the months in which patient numbers rise vary widely. This makes it difficult for the health office to group malaria types by patient age and month of occurrence. Previous research has described groupings of malaria types, but has not detailed each group, such as Malaria Tropika, Malaria Tertiana, Malaria Quartana, and Malaria Ovale. The aim of this study is to perform cluster analysis on several malaria types, patient age, and month of occurrence. The clustering methods used are Single Linkage and K-Means, and the two methods are evaluated using the standard deviation: the better method for cluster analysis is the one with the smaller standard deviation. The data used are secondary data obtained from the Papua Province Health Office. The results show that Single Linkage is more accurate than K-Means: of 50 patients, 47 predominantly contracted tertian malaria, in the adolescent and adult age range, in June. It is therefore hoped that the Papua provincial government will provide outreach to the community, especially adolescents and adults, since almost 94% of tertian malaria cases occur in those age groups.
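The study's data are not available here; the sketch below only illustrates the evaluation criterion it describes, clustering toy 1-D data (ages, say) with K-Means and with single linkage and preferring the method whose clusters have the smaller mean within-cluster standard deviation. The data, k=2, and initialisation are invented.

```python
import statistics

DATA = [15, 16, 17, 18, 19, 45, 47, 50, 52]

def kmeans_1d(data, k=2, iters=50):
    centers = [min(data), max(data)]          # simple deterministic init
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:                        # assign to nearest center
            i = min(range(k), key=lambda j: abs(x - centers[j]))
            clusters[i].append(x)
        centers = [statistics.mean(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return clusters

def single_linkage_1d(data, k=2):
    clusters = [[x] for x in sorted(data)]
    while len(clusters) > k:
        # merge the adjacent pair with the smallest inter-cluster gap
        i = min(range(len(clusters) - 1),
                key=lambda j: clusters[j + 1][0] - clusters[j][-1])
        clusters[i] = clusters[i] + clusters.pop(i + 1)
    return clusters

def mean_within_stdev(clusters):
    return statistics.mean(statistics.pstdev(c) for c in clusters)

km = mean_within_stdev(kmeans_1d(DATA))
sl = mean_within_stdev(single_linkage_1d(DATA))
```

On well-separated toy data the two methods agree; on real, noisier data such as the study's, the standard-deviation criterion is what breaks the tie between them.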
In this paper, we derive a novel method as a generalization of LCEs such as E2C. The method develops the idea of learning a locally linear state space by adding a multi-step prediction, thus allowing more explicit control over the curvature. We show that the method outperforms E2C without the drastic model changes that come with other works, such as PCC and P3C. We discuss the relation between E2C and the presented method and derive update equations. We provide empirical evidence suggesting that, by considering the multi-step prediction, our method, ms-E2C, learns much better latent state spaces in terms of curvature and next-state predictability. Finally, we also discuss certain stability challenges we encounter with multi-step predictions and how to mitigate them.
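The authors' model is not reproduced here, but the multi-step idea can be sketched with a scalar locally linear latent transition z' = a*z + b*u: a one-step objective penalises only the first prediction, while the multi-step objective rolls the model forward and penalises every step in the horizon, so errors that compound along the trajectory are made visible to the learner. The dynamics and numbers below are invented for illustration.

```python
def rollout(a, b, z0, controls):
    # Roll the locally linear latent model forward through the control sequence
    zs, z = [], z0
    for u in controls:
        z = a * z + b * u
        zs.append(z)
    return zs

def multistep_loss(a, b, z0, controls, targets):
    # Sum of squared prediction errors over the whole horizon
    preds = rollout(a, b, z0, controls)
    return sum((p - t) ** 2 for p, t in zip(preds, targets))

# "True" dynamics a=0.9, b=0.5 generate a 3-step target trajectory; a slightly
# wrong candidate model a=0.8 is scored on one step versus the full horizon
CONTROLS = [1.0, 0.0, -1.0]
TARGETS = rollout(0.9, 0.5, 1.0, CONTROLS)

one_step = multistep_loss(0.8, 0.5, 1.0, CONTROLS[:1], TARGETS[:1])
multi = multistep_loss(0.8, 0.5, 1.0, CONTROLS, TARGETS)
```

The multi-step loss is strictly larger for the mis-specified model because each rolled-out step inherits the previous step's error, which is exactly the signal a one-step objective discards.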
Word Sense Disambiguation (WSD) is one of the earliest problems in natural language processing which aims to determine the correct sense of words in context. The semantic information provided by WSD systems is highly beneficial to many tasks such as machine translation, information extraction, and semantic parsing. In this work, a new approach for WSD is proposed which uses a neural network as a surrogate fitness function in a metaheuristic algorithm. Also, a new method for simultaneous training of word and sense embeddings is proposed in this work. Accordingly, the node2vec algorithm is employed on the WordNet graph to generate sequences containing both words and senses. These sequences are then used along with paragraphs from Wikipedia in the word2vec algorithm to generate embeddings for words and senses at the same time. In order to address data imbalance in this task, sense probability distribution data extracted from the training corpus is used in the search process of the proposed simulated annealing algorithm. Furthermore, we introduce a new approach for clustering and mapping senses in the WordNet graph, which considerably improves the accuracy of the proposed method. In this approach, nodes in the WordNet graph are clustered on the condition that no two senses of the same word be present in one cluster. Then, repeatedly, all nodes in each cluster are mapped to a randomly selected node from that cluster, meaning that the representative node can take advantage of the training instances of all the other nodes in the cluster. The proposed method is trained on the SemCor dataset, with the SemEval-2015 dataset used as the validation set. The final evaluation of the system is performed on SensEval-2, SensEval-3, SemEval-2007, SemEval-2013, SemEval-2015, and the concatenation of all five mentioned datasets.
The performance of the system is also evaluated on the four content word categories, namely, nouns, verbs, adjectives, and adverbs. Experimental results show that the proposed method achieves accuracies in the range of 74.8 to 84.6 percent in the ten aforementioned evaluation categories which are close to and in some cases better than the state of the art in this task.
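The system described above is not reproduced here; the sketch below illustrates only the shape of its search loop: simulated annealing over candidate sense assignments, scored by a surrogate fitness function. The real system scores assignments with a neural network over sense embeddings; the toy coherence score below merely stands in for it, and all names are invented.

```python
import math, random

random.seed(1)

SENSES = {"bank": ["riverside", "institution"],
          "deposit": ["sediment", "payment"]}

def surrogate_fitness(assignment):
    # Stand-in for the neural surrogate: reward semantically coherent pairs
    coherent = {("institution", "payment"), ("riverside", "sediment")}
    return 1.0 if (assignment["bank"], assignment["deposit"]) in coherent else 0.0

def anneal(steps=200, temp=1.0, cooling=0.97):
    assignment = {w: random.choice(s) for w, s in SENSES.items()}
    score = surrogate_fitness(assignment)
    best, best_score = dict(assignment), score
    for _ in range(steps):
        w = random.choice(list(SENSES))          # propose: re-pick one sense
        candidate = dict(assignment)
        candidate[w] = random.choice(SENSES[w])
        delta = surrogate_fitness(candidate) - score
        # accept improvements always, worsenings with Boltzmann probability
        if delta >= 0 or random.random() < math.exp(delta / temp):
            assignment, score = candidate, score + delta
        if score > best_score:
            best, best_score = dict(assignment), score
        temp *= cooling                          # cool the schedule
    return best, best_score

best, best_score = anneal()
```

Because the surrogate is cheap to evaluate, the annealer can explore many candidate assignments per sentence; the paper's sense-probability priors would bias the proposal step, which this sketch leaves uniform.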
Natália Dal Pizzol, Eduardo Dos Santos Barbosa, Soraia Raupp Musse
This study presents an automated bibliometric analysis of 6,569 research papers published in thirteen Brazilian Computer Science Society (SBC) conferences from 1999 to 2021. Our primary goal was to gather data to understand gender representation in publications in the field of Computer Science. We applied a systematic assignment of gender to 23,573 listed paper authorships, finding that the gender gap is significant, with female authors under-represented in all years of the study.