The widespread adoption of Android has amplified malware risks, creating a need for better detection methods. This article surveys static analysis, which examines applications without executing them by inspecting their code and manifest files. We focus on studies from 2022 to 2025, covering feature extraction, datasets, feature selection, and approaches based on Machine Learning (ML) and Deep Learning (DL). We conclude by identifying the major limitations and research gaps in static-analysis studies and by offering insights for developing efficient, accurate, and lightweight models that improve Android malware detection.
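To make the surveyed pipeline concrete, here is a minimal sketch of one common approach from the static-analysis literature: encoding permissions extracted from an app's manifest as a binary feature vector and training a classical ML classifier. The permission lists and labels below are illustrative placeholders, not data from any surveyed study.

```python
# Minimal sketch: permission-based static features for Android malware detection.
# The permission lists and labels are illustrative placeholders; in practice they
# would be extracted from each APK's AndroidManifest.xml.
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

apps = [
    ["android.permission.INTERNET", "android.permission.SEND_SMS"],
    ["android.permission.INTERNET"],
    ["android.permission.READ_SMS", "android.permission.SEND_SMS"],
    ["android.permission.ACCESS_FINE_LOCATION"],
]
labels = [1, 0, 1, 0]  # 1 = malware, 0 = benign (toy labels)

# One binary column per permission observed in the corpus.
mlb = MultiLabelBinarizer()
X = mlb.fit_transform(apps)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
new_app = mlb.transform([["android.permission.INTERNET", "android.permission.SEND_SMS"]])
print(clf.predict(new_app))  # predicted class for the unseen app
```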
Aleksanteri Hamalainen, Aku Karhinen, Jesse Miettinen
et al.
Rotating machines are extremely common in many industries, and their maintenance involves substantial costs and labor. Most recent studies aiming to automate fault diagnosis have focused on deep learning, but industry adoption has been slow owing to the lack of well-curated datasets and the complexity of the methods. We propose a new method called Rapid Few-shot Condition Monitoring (Rapid-FSCM), which enables the rapid deployment of deep learning-based condition monitoring models and is readily extensible to future advancements in the field. This makes it simpler for industry to conduct machine condition monitoring without the cost of an expert. Rapid-FSCM utilizes few-shot learning and the InceptionTime convolutional neural network to enable training on data from a related base domain that is more readily available than data from the target domain. In addition, the prototypical networks method for few-shot learning is modified so that the model can be deployed as an anomaly detector even before any fault samples have been recorded. Once faults have occurred and been recorded, the model can initiate fault diagnosis without further retraining. Validated on three datasets, namely two gear datasets with complex features from a test bench and the CWRU bearing dataset, the model achieved high accuracy in target domains containing unseen faults, sensors, operating conditions, and even entirely new components. The developed method can be used to rapidly deploy a condition monitoring model for any rotating machine without first conducting a large data acquisition process.
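The abstract does not include implementation details, but the prototypical-networks idea it builds on can be sketched briefly: class prototypes are the mean embeddings of the support samples, and a query is either assigned to the nearest prototype (few-shot diagnosis) or flagged as anomalous when it lies too far from the healthy prototype. The embedding, classes, and threshold below are placeholders, not the authors' Rapid-FSCM model.

```python
# Sketch of prototypical-network classification and its anomaly-detection variant.
# Random embeddings stand in for a trained encoder (e.g., InceptionTime); this is
# NOT the authors' Rapid-FSCM model.
import torch

def prototypes(support_emb: torch.Tensor, support_y: torch.Tensor) -> torch.Tensor:
    """Mean embedding per class -> tensor of shape (n_classes, dim)."""
    return torch.stack([support_emb[support_y == c].mean(0)
                        for c in torch.unique(support_y)])

def classify(query_emb: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Assign each query to the nearest prototype (Euclidean distance)."""
    return torch.cdist(query_emb, protos).argmin(dim=1)

def is_anomalous(query_emb, healthy_proto, threshold=2.0):
    """Before any fault samples exist: flag queries far from the healthy prototype."""
    return (query_emb - healthy_proto).norm(dim=1) > threshold

# Toy usage with random embeddings in place of a trained encoder.
emb = torch.randn(20, 64)          # 20 support samples, 64-dim embeddings
y = torch.randint(0, 3, (20,))     # 3 fault classes
q = torch.randn(5, 64)
print(classify(q, prototypes(emb, y)))
```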
As edge computing environments become increasingly dynamic, efficient job scheduling and proactive fault prevention are paramount. In such environments, minimizing machine downtime and maintaining productivity are critical challenges. In this paper, we propose an integrated approach to scheduling optimization that combines deep learning-based fault prediction with Satisfiability Modulo Theories (SMT)-based scheduling. The proposed system predicts fault probabilities for machines in real time by leveraging operational state features such as temperature, vibration, tool wear, and operating hours. These fault predictions are then used as inputs to the SMT solver, which dynamically optimizes job scheduling. The system ensures task completion within deadlines while minimizing fault risks and optimizing resource utilization. To achieve this, the deep learning model continuously updates fault probabilities through a rolling prediction mechanism, allowing the scheduling system to adapt proactively to changing machine conditions. The SMT solver incorporates these predictions into its optimization process, ensuring that the schedule dynamically reflects the latest system state. The proposed method has been evaluated in simulated production-line scenarios, demonstrating significant reductions in machine faults, improved scheduling efficiency, and enhanced overall system reliability. By integrating predictive maintenance with optimization techniques, this research contributes to the development of robust and adaptive scheduling systems for dynamic production environments.
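As a rough illustration of how fault predictions can enter an SMT formulation, the following sketch (using the Z3 Python API, with made-up durations, deadlines, and fault probabilities) assigns jobs to machines so that each machine's load meets the deadline while the summed fault risk is minimized. It is a simplification of the paper's scheduler, not a reproduction of it.

```python
# Simplified SMT scheduling sketch with the Z3 Python API (pip install z3-solver).
# Durations, deadline, and fault probabilities are made-up inputs; the real system
# updates fault probabilities from a deep learning model in a rolling fashion.
from z3 import Int, If, Sum, Optimize, sat

durations = [4, 2, 3, 5]        # processing time per job
fault_prob = [5, 20, 55]        # predicted fault risk per machine (scaled x100)
deadline = 8
n_jobs, n_machines = len(durations), len(fault_prob)

assign = [Int(f"a_{j}") for j in range(n_jobs)]  # machine index for each job
opt = Optimize()
for a in assign:
    opt.add(a >= 0, a < n_machines)

# Each machine's total load must fit within the deadline.
for m in range(n_machines):
    load = Sum([If(assign[j] == m, durations[j], 0) for j in range(n_jobs)])
    opt.add(load <= deadline)

# Minimize the total fault risk incurred by the assignment.
risk = Sum([If(assign[j] == m, fault_prob[m], 0)
            for j in range(n_jobs) for m in range(n_machines)])
opt.minimize(risk)

if opt.check() == sat:
    model = opt.model()
    print([model[a].as_long() for a in assign])  # chosen machine per job
```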
Uttam U. Deshpande, Supriya Shanbhag, Ramesh Koti
et al.
Phone calls are strictly forbidden in certain locations due to potential security threats. Mobile phones' growing capabilities have also increased the risk of their misuse in restricted places such as manufacturing plants. Unauthorized mobile phone use in these environments can lead to significant safety hazards, operational disruptions, and security breaches. There is an urgent need for an intelligent system that can identify both the presence of individuals and cellphone usage. We propose an advanced Artificial Intelligence and Computer Vision-based real-time system to detect mobile phone usage in restricted zones. It combines modern deep learning approaches: YOLOv8 for real-time object detection of cell phone usage, and dense layers of ResNet-50 for image classification. We highlight the critical need for such detection systems in manufacturing settings and discuss the specific challenges encountered. To support this research, we developed a custom dataset of 2,150 images featuring diverse foreground and background elements that reflect real-world conditions. Our experimental results show that YOLOv8 achieves a Mean Average Precision of 49.5% at 0.5 IoU (mAP50) for cellphone detection and an accuracy of 96.03% for prediction tasks. These findings underscore the effectiveness of our AI and CV-based system in detecting unauthorized mobile phone usage in restricted zones.
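For readers unfamiliar with the detection side of such a pipeline, a minimal YOLOv8 inference loop with the ultralytics package looks roughly like the sketch below. It uses an off-the-shelf COCO-pretrained checkpoint (in COCO, "cell phone" is a built-in class) rather than the authors' model trained on their custom dataset.

```python
# Minimal YOLOv8 inference sketch (pip install ultralytics). This uses a generic
# COCO-pretrained checkpoint, NOT the authors' model trained on their 2,150-image
# custom dataset.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # small pretrained checkpoint
results = model("factory_frame.jpg")  # path, array, or video frame

for box in results[0].boxes:
    cls_id = int(box.cls)
    if model.names[cls_id] == "cell phone" and float(box.conf) > 0.5:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"phone detected at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```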
Abstract Background P53-mutated Hepatocellular Carcinoma (HCC) is an aggressive variant associated with vascular endothelial growth factor (VEGF) overexpression and increased microvascular density. This study aimed to develop an MRI-based deep learning model for predicting P53-mutated HCC. Methods A total of 312 HCC patients who underwent gadolinium-enhanced MRI and were pathologically confirmed between January 2018 and December 2023 were retrospectively enrolled. Participants were randomly divided into training and test datasets at an 8:2 ratio. We developed an EfficientNetV2-based deep learning framework, constructing single-sequence models for the arterial phase (AP), portal venous phase (VP), T2-weighted imaging (T2WI), and hepatobiliary phase (HBP), as well as combined models, to predict P53 mutation status. Model performance was evaluated using the area under the curve (AUC), accuracy, sensitivity, specificity, precision, and F1 score. Differences in AUC values were compared using DeLong's test. Results A total of 312 pathologically confirmed HCC patients (age: 56 ± 9 years; male = 240) were included, with a training dataset (n = 249) and a test dataset (n = 63). Among single-sequence models, the HBP model demonstrated superior diagnostic performance (AUC = 0.715) compared to the T2WI, AP, and VP models. The multiphase combined model (T2WI + AP + VP) significantly outperformed the single-sequence models, achieving AUCs of 0.982 (95% CI: 0.959–1.000) in the training dataset and 0.914 (95% CI: 0.819–1.000) in the test dataset. However, incorporating the HBP sequence into the combined model (T2WI + AP + VP + HBP) did not further improve diagnostic performance (P > 0.05). Advances in knowledge The combined model incorporating AP, VP, T2WI, and HBP sequences demonstrated the numerically highest performance in predicting P53-mutated HCC.
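As a rough sketch of the model side, a torchvision EfficientNetV2 backbone can be adapted to a binary P53-status head as below, and the AUC computed from held-out predictions with scikit-learn. This is a generic adaptation of the model family named in the abstract, not the authors' trained multiphase pipeline.

```python
# Generic sketch: EfficientNetV2 adapted to binary classification, with AUC
# evaluation. This is not the authors' trained multiphase pipeline.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

model = models.efficientnet_v2_s(weights="IMAGENET1K_V1")
in_features = model.classifier[1].in_features    # 1280 for efficientnet_v2_s
model.classifier[1] = nn.Linear(in_features, 2)  # P53 wild-type vs. mutated

model.eval()
with torch.no_grad():
    x = torch.randn(8, 3, 224, 224)              # stand-in for MRI slices
    probs = model(x).softmax(dim=1)[:, 1]        # P(mutated)

y_true = [0, 1, 0, 1, 1, 0, 0, 1]                # toy labels
print("AUC:", roc_auc_score(y_true, probs.numpy()))
```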
Urslla Uchechi Izuazu, Cosmas Ifeanyi Nwakanma, Dong-Seong Kim
et al.
Abstract Deep learning-based intrusion detection systems (DL-IDS) have proven effective in detecting cyber threats. However, their vulnerability to adversarial attacks and environmental noise, particularly in industrial settings, limits their practical application. Current IDS models often assume ideal conditions, overlooking noise and adversarial manipulation, which degrades performance when they are deployed in real-world environments. Additionally, the black-box nature of DL models complicates decision-making, especially in industrial control system (ICS) networks, where understanding model behavior is crucial. This paper introduces the eXplainable Cyber-Threat Detection Framework (XC-TDF), a novel solution designed to overcome these challenges. XC-TDF enhances robustness against noise and adversarial attacks through regularization and adversarial training, respectively, and improves transparency through an eXplainable Artificial Intelligence (XAI) module. Simulation results demonstrate its effectiveness: the framework remains resilient to perturbation, achieving accuracies of 100% and 99.4% on the Wustl-IIoT2021 and Edge-IIoT datasets, respectively.
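The abstract names adversarial training as the robustness mechanism; a standard FGSM-style inner step, one common way to implement it, looks like the PyTorch sketch below. The network, data, and epsilon are placeholders, not the XC-TDF configuration.

```python
# Standard FGSM adversarial-training step in PyTorch. Model, data, and epsilon
# are placeholders; the abstract does not specify XC-TDF's exact attack or budget.
import torch
import torch.nn as nn

def fgsm_training_step(model, x, y, optimizer, loss_fn=nn.CrossEntropyLoss(), eps=0.05):
    # Craft an FGSM perturbation of the clean batch.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

    # Train on clean and adversarial examples together.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a tiny MLP over 10 flow features, batch of 32 samples.
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
print(fgsm_training_step(net, torch.randn(32, 10), torch.randint(0, 2, (32,)), opt))
```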
Soybean is a vital crop globally and a key source of food, feed, and biofuel. With advancements in high-throughput technologies, soybean has become a key target for genetic improvement. This comprehensive review explores advances in multi-omics, artificial intelligence, and economic sustainability to enhance soybean resilience and productivity. Genomic advances, including marker-assisted selection (MAS), genomic selection (GS), genome-wide association studies (GWAS), QTL mapping, genotyping-by-sequencing (GBS), CRISPR-Cas9, metagenomics, and metabolomics, have boosted growth and development by enabling stress-resilient soybean varieties. Artificial intelligence (AI) and machine learning approaches are improving the discovery of genetic traits associated with nutritional quality, stress response, and adaptation in soybean. AI-driven technologies such as IoT-based disease detection and deep learning are revolutionizing soybean monitoring, early disease identification, yield prediction, disease prevention, and precision farming. Additionally, the economic viability and environmental sustainability of soybean-derived biofuels are critically evaluated, with a focus on trade-offs and policy implications. Finally, the potential impact of climate change on soybean growth and productivity is explored through predictive modeling and adaptive strategies. This study thus highlights the transformative potential of multidisciplinary approaches in advancing soybean resilience and global utility.
Rieke Löper, Lennart Abels, Daniel Otero Baguer
et al.
Lentigo maligna (LM) is a melanoma in situ associated with high cumulative sun damage. Histological evaluation of resection margins is difficult and time-consuming. Melanocyte density (MD) is a suitable, quantifiable, and reproducible diagnostic criterion. In this retrospective single-centre study, we investigated whether an artificial intelligence (AI) tool can support the assessment of LM. Training and evaluation were based on MD in Sox-10-stained digitised slides. In total, 86 whole slide images (WSIs) from LM patients were annotated and used as the training set. The test set consisted of 177 slides. The tool was trained to detect the epidermis, measure its length, and determine the MD. A cut-off of ≥30 melanocytes per 0.5 mm of epidermis length was defined as positive. Our AI model automatically recognises the epidermis and measures the MD. The model was trained on nuclear immunohistochemical signals and can also be applied to other nuclear stains, such as PRAME or MITF. The WSI is automatically visualised by a three-colour heat map subdivided into low, borderline, and high melanocyte density; the cut-offs can be adjusted individually. Compared to the manually counted ground-truth MD, the AI model achieved high sensitivity (87.84%), specificity (72.82%), and accuracy (79.10%), with an area under the curve (AUC) of 0.818 in the test set. This automated tool can assist (dermato)pathologists by providing a quick overview of the WSI at first glance and by making the time-consuming assessment of resection margins more efficient and more reproducible. The AI model can provide significant benefits in the daily routine workflow.
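The study's quantitative criterion is straightforward to express in code: melanocyte count per 0.5 mm of epidermal length, binned into the three heat-map categories. The sketch below uses the ≥30 cut-off from the abstract; the lower bound of the borderline band is an illustrative assumption, since the abstract does not state its exact range.

```python
# Melanocyte density (MD) per 0.5 mm of epidermis, binned into the three heat-map
# categories. The >=30 positivity cut-off is from the abstract; the lower bound of
# the "borderline" band (25 here) is an illustrative assumption.
def melanocyte_density(n_melanocytes: int, epidermis_length_mm: float) -> float:
    """Melanocytes per 0.5 mm of measured epidermal length."""
    return n_melanocytes / (epidermis_length_mm / 0.5)

def md_category(md: float, cutoff: float = 30.0, borderline_from: float = 25.0) -> str:
    if md >= cutoff:
        return "high"        # positive margin
    if md >= borderline_from:
        return "borderline"
    return "low"

md = melanocyte_density(n_melanocytes=180, epidermis_length_mm=2.4)
print(f"{md:.1f} melanocytes / 0.5 mm -> {md_category(md)}")  # 37.5 -> high
```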
Muhammad Attique Khan, Usama Shafiq, Ameer Hamza
et al.
Abstract Deep learning has significantly contributed to medical imaging and computer-aided diagnosis (CAD), providing accurate disease classification and diagnosis. However, challenges such as inter- and intra-class similarities, class imbalance, and computational inefficiencies due to numerous hyperparameters persist. This study addresses these challenges by presenting a novel deep-learning framework for classifying and localizing gastrointestinal (GI) diseases from wireless capsule endoscopy (WCE) images. The proposed framework begins with dataset augmentation to enhance training robustness. Two novel architectures, Sparse Convolutional DenseNet201 with Self-Attention (SC-DSAN) and CNN-GRU, are fused at the network level using a depth concatenation layer, avoiding the computational costs of feature-level fusion. Bayesian Optimization (BO) is employed for dynamic hyperparameter tuning, and an Entropy-controlled Marine Predators Algorithm (EMPA) selects optimal features. These features are classified using a Shallow Wide Neural Network (SWNN) and traditional classifiers. Experimental evaluations on the Kvasir-V1 and Kvasir-V2 datasets demonstrate superior performance, achieving accuracies of 99.60% and 95.10%, respectively. Compared to state-of-the-art models, the proposed framework offers improved accuracy, precision, and computational efficiency, addressing key challenges in GI disease diagnosis and demonstrating its potential for accurate and efficient clinical application. Future work will explore its adaptability to additional datasets and optimize its computational complexity for broader deployment.
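The network-level fusion the abstract describes, joining two branches with a depth concatenation layer instead of fusing extracted feature vectors, can be sketched generically in PyTorch as below. The tiny branches stand in for SC-DSAN and CNN-GRU, whose exact architectures the abstract does not specify.

```python
# Generic sketch of network-level fusion via depth (channel) concatenation.
# The two tiny branches stand in for SC-DSAN and CNN-GRU; the abstract does not
# give their exact architectures.
import torch
import torch.nn as nn

class DepthConcatFusion(nn.Module):
    def __init__(self, branch_a: nn.Module, branch_b: nn.Module, n_classes: int):
        super().__init__()
        self.branch_a, self.branch_b = branch_a, branch_b
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))

    def forward(self, x):
        fa = self.branch_a(x)               # (B, 16, H, W)
        fb = self.branch_b(x)               # (B, 16, H, W)
        fused = torch.cat([fa, fb], dim=1)  # depth concatenation -> (B, 32, H, W)
        return self.head(fused)

branch_a = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
branch_b = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
model = DepthConcatFusion(branch_a, branch_b, n_classes=8)  # 8 Kvasir-V1 classes
print(model(torch.randn(2, 3, 224, 224)).shape)             # torch.Size([2, 8])
```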
Computer applications to medicine. Medical informatics
Image colorization is a fundamental task in computer vision that aims to predict the missing color channels from grayscale images. In recent years, fully automatic approaches based on deep learning have become the dominant paradigm. However, these methods often produce visually unnatural results, such as color bleeding or inconsistent colorization in homogeneous regions. On the other hand, user interactive methods, such as point interactive colorization, propagate colors based on user-provided hints and tend to produce more natural and spatially consistent results. Nevertheless, when no hints are provided, the generated images may suffer from low color saturation. In this study, we propose a novel fully automatic colorization framework that combines the strengths of both paradigms: a conventional fully automatic colorization model is used as a hint generator, and a conventional point interactive colorization model is employed as a hint propagator. By treating the interactive model as a propagator within an automatic pipeline, our method ensures that the inherent colorfulness in automatic models is preserved while achieving the spatial consistency characteristic of interactive methods. Importantly, the proposed framework is fully automatic, requires no manual input, and does not necessitate retraining, as it can directly leverage existing pretrained models. We evaluated the proposed method using various fully automatic colorization models and a representative point interactive model. The results demonstrate that our method effectively reduces color inconsistencies in continuous regions and improves visual realism.
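In outline, the proposed pipeline chains two pretrained models: the automatic model colorizes the grayscale input, hint points are sampled from that result, and the interactive model propagates them. The sketch below is schematic; `automatic_model`, `interactive_model`, and the uniform-grid sampling are hypothetical placeholders, since the paper pairs various existing pretrained models.

```python
# Schematic of the hint-generator / hint-propagator pipeline. `automatic_model`
# and `interactive_model` are hypothetical placeholders for pretrained networks;
# uniform-grid hint sampling is one simple choice, not necessarily the paper's.
import numpy as np

def colorize(gray: np.ndarray, automatic_model, interactive_model, grid: int = 16):
    # 1) Hint generation: a fully automatic model proposes colors (colorful but
    #    possibly spatially inconsistent).
    draft = automatic_model(gray)                      # (H, W, 3) color image

    # 2) Sample sparse hints from the draft on a uniform grid.
    h, w = gray.shape
    ys, xs = np.meshgrid(np.arange(grid // 2, h, grid),
                         np.arange(grid // 2, w, grid), indexing="ij")
    hints = [((y, x), draft[y, x]) for y, x in zip(ys.ravel(), xs.ravel())]

    # 3) Hint propagation: a point-interactive model spreads the sampled colors,
    #    enforcing spatial consistency. No retraining of either model is needed.
    return interactive_model(gray, hints)
```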
Introduction Brain tumors are a common disease affecting millions of people worldwide. Considering the severity of brain tumors (BT), it is important to diagnose the disease in its early stages. With advancements in the diagnostic process, Magnetic Resonance Imaging (MRI) has been used extensively in disease detection. However, accurate identification of BT is a complex task, and conventional techniques are not sufficiently robust to localize and extract tumors in MRI images. Therefore, in this study, we used a deep learning model combined with a segmentation algorithm to localize and extract tumors from MR images. Method This paper presents a Deep Learning (DL)-based You Only Look Once (YOLOv7) model in combination with the GrabCut algorithm to extract the foreground of the tumor image and enhance the detection process. YOLOv7 is used to localize the tumor region, and the GrabCut algorithm is used to extract the tumor from the localized region. Results The performance of the YOLOv7 model with and without the GrabCut algorithm is evaluated. The results show that the proposed approach outperforms other techniques, such as hybrid CNN-SVM, YOLOv5, and YOLOv6, in terms of accuracy, precision, recall, specificity, and F1 score. Discussion Our results show that the proposed technique achieves a high Dice score between tumor-extracted images and ground-truth images. The findings show that the inclusion of the GrabCut algorithm improves the performance of the YOLOv7 model compared to the model without it.
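The OpenCV GrabCut call used for the extraction step can be sketched as follows: the detector's bounding box (hard-coded here as a placeholder for a YOLOv7 output) initializes the rectangle, and GrabCut refines the foreground mask within it.

```python
# GrabCut foreground extraction initialized from a detector's bounding box.
# The rectangle below is a hard-coded placeholder for a YOLOv7 tumor detection.
import cv2
import numpy as np

img = cv2.imread("mri_slice.png")
rect = (80, 60, 120, 100)                  # (x, y, w, h) from the detector

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)  # internal GMM state buffers
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked definite/probable foreground form the extracted tumor region.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
tumor = img * fg[:, :, None]
cv2.imwrite("tumor_extracted.png", tumor)
```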
Neoplasms. Tumors. Oncology. Including cancer and carcinogens
Md Mijanur Rahman, Ashik Uzzaman, Sadia Islam Sami
et al.
Abstract This study introduces a novel encoder–decoder framework based on deep neural networks and provides a thorough investigation of automatic image captioning systems. The suggested model uses a “convolutional neural network” as an encoder, skilled at object recognition and spatial information retention, and a “long short‐term memory” decoder for word prediction and sentence construction. The long short‐term memory network functions as a sequence processor, generating a fixed‐length output vector for final predictions, while the VGG‐19 model is utilized as the image feature extractor. For both training and testing, the study uses a variety of photos from open‐access datasets, such as Flickr8k, Flickr30k, and MS COCO. The Python platform is used for implementation, with Keras and TensorFlow as backends. The experimental findings, assessed using the “bilingual evaluation understudy” metric, demonstrate the effectiveness of the suggested methodology in automatically captioning images. By addressing spatial relationships in images and producing logical, contextually relevant captions, the paper advances image captioning technology. The discussion of difficulties encountered during experimentation yields insights for future research directions. By establishing a strong neural network architecture for automatic image captioning, this study creates opportunities for further advancement and improvement in the area.
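A compact Keras rendering of the described encoder–decoder can make the architecture concrete: VGG-19's penultimate features encode the image, an LSTM processes the partial caption, and the two branches are merged to predict the next word. The vocabulary size and caption length below are placeholder values, not the paper's settings.

```python
# Compact Keras sketch of the described encoder-decoder: VGG-19 features + LSTM
# decoder merged for next-word prediction. Vocabulary size and max caption length
# are placeholder values, not the paper's settings.
from tensorflow.keras.layers import Input, Dense, Dropout, Embedding, LSTM, add
from tensorflow.keras.models import Model

vocab_size, max_len = 8000, 34   # placeholders

# Image branch: 4096-dim VGG-19 fc2 features, extracted offline.
img_in = Input(shape=(4096,))
img_feat = Dense(256, activation="relu")(Dropout(0.5)(img_in))

# Text branch: partial caption -> embedding -> LSTM state.
txt_in = Input(shape=(max_len,))
txt_feat = LSTM(256)(Dropout(0.5)(Embedding(vocab_size, 256, mask_zero=True)(txt_in)))

# Merge both branches and predict the next word of the caption.
merged = Dense(256, activation="relu")(add([img_feat, txt_feat]))
out = Dense(vocab_size, activation="softmax")(merged)

model = Model(inputs=[img_in, txt_in], outputs=out)
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.summary()
```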
Leander van Eekelen, Joey Spronck, Monika Looijen-Salamon
et al.
Abstract Programmed death-ligand 1 (PD-L1) expression is currently used in the clinic to assess eligibility for immune-checkpoint inhibitors via the tumor proportion score (TPS), but its efficacy is limited by high interobserver variability. Multiple papers have presented systems for the automatic quantification of TPS, but none report on the task of determining cell-level PD-L1 expression, and they often restrict their evaluation to a single PD-L1 monoclonal antibody or clinical center. In this paper, we report on a deep learning algorithm for detecting PD-L1 negative and positive tumor cells at a cellular level and evaluate it on a cell-level reference standard established by six readers on a multi-centric, multi-assay PD-L1 dataset. This reference standard also provides, for the first time, a benchmark for computer vision algorithms. In addition, in line with other papers, we evaluate our algorithm at the slide level by measuring the agreement between the algorithm and six pathologists on TPS quantification. We find moderately low interobserver agreement at the cell level (mean reader-reader F1 score = 0.68), which our algorithm sits slightly under (mean reader-AI F1 score = 0.55), especially for cases from the clinical center not included in the training set. Despite this, we find good AI-pathologist agreement on TPS quantification relative to the interobserver agreement (mean reader-reader Cohen's kappa = 0.54, 95% CI 0.26–0.81; mean reader-AI kappa = 0.49, 95% CI 0.27–0.72). In conclusion, our deep learning algorithm demonstrates promise in detecting PD-L1 expression at a cellular level and exhibits favorable agreement with pathologists in quantifying the TPS. We publicly release our models for use via the Grand-Challenge platform.
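Given the cell-level detections the algorithm produces, slide-level TPS follows directly from its clinical definition, as in this small sketch (the counts are illustrative):

```python
# Tumor proportion score (TPS) from cell-level PD-L1 calls, per its clinical
# definition. The counts are illustrative.
def tumor_proportion_score(n_pdl1_positive_tumor_cells: int,
                           n_viable_tumor_cells: int) -> float:
    """TPS = PD-L1 positive tumor cells / all viable tumor cells, as a percentage."""
    return 100.0 * n_pdl1_positive_tumor_cells / n_viable_tumor_cells

tps = tumor_proportion_score(412, 1730)
print(f"TPS = {tps:.1f}%")  # e.g., >=50% is commonly treated as high expression
```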
Satellite fog computing (SFC) achieves computation, caching, and other functionalities through collaboration among fog nodes. Satellites can provide real-time, reliable satellite-to-ground fusion services by pre-caching content that users are likely to request. However, due to the high-speed mobility of satellites, the complexity of user-access conditions poses a new challenge in selecting optimal caching locations and improving caching efficiency. Motivated by this, we propose a real-time caching scheme based on a Double Deep Q-Network (Double DQN), with the overarching objective of enhancing the cache hit rate. Simulation results demonstrate that the proposed algorithm improves the data hit rate by approximately 13% compared to methods without reinforcement-learning assistance.
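The distinguishing step of Double DQN, decoupling action selection (online network) from action evaluation (target network), can be written in a few PyTorch lines. The networks and batch tensors below are placeholders, not the paper's caching agent.

```python
# The Double DQN target: the online network selects the next action, the target
# network evaluates it. Networks and batch tensors are placeholders, not the
# paper's caching agent.
import torch
import torch.nn as nn

n_states, n_actions = 16, 4
online = nn.Linear(n_states, n_actions)   # stand-in Q-networks
target = nn.Linear(n_states, n_actions)

s2 = torch.randn(32, n_states)            # next states (batch of 32 transitions)
r = torch.randn(32)                       # rewards (e.g., cache-hit feedback)
done = torch.zeros(32)                    # 1.0 where the episode terminated
gamma = 0.99

with torch.no_grad():
    best_a = online(s2).argmax(dim=1, keepdim=True)   # selection: online net
    q_next = target(s2).gather(1, best_a).squeeze(1)  # evaluation: target net
    y = r + gamma * (1.0 - done) * q_next             # TD target

print(y.shape)  # torch.Size([32])
```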