Giuseppe Cota, Gaetano Scaramozzino, Marco Chiesa
et al.
Background: Dental radiographs are essential for diagnosis and treatment planning in modern dentistry. However, their manual interpretation is time-consuming and subject to variability, highlighting the need for automated tools to improve efficiency and consistency. This study aims to validate ORTHOSEG, a deep learning-based system designed to automate the segmentation of anatomical, pathological, and non-pathological elements in radiographs, including orthopantomograms, bitewings, and periapical images. Methods: ORTHOSEG’s performance was evaluated using a rigorously curated dataset of 150 dental radiographs, including 50 orthopantomograms, 50 bitewings, and 50 periapical images, with manual annotations by expert clinicians serving as the ground truth. The system’s segmentation performance was assessed using standard evaluation metrics, including mean Dice Similarity Coefficient (mDSC) and mean Intersection over Union (mIoU), and inference time was also recorded. Results: The system achieved high accuracy, with mDSC and mIoU values of 0.635 ± 0.233 and 0.576 ± 0.214, respectively. In particular, for orthopantomograms it achieved an mDSC of 0.756 ± 0.174 and an mIoU of 0.684 ± 0.172, surpassing existing benchmarks. Its segmentation capabilities extend to approximately 70 distinct elements, underscoring its comprehensive utility. The system demonstrated efficient computational performance, with processing times of 19.745 ± 3.625 s for orthopantomograms, 8.467 ± 0.903 s for bitewings, and 5.653 ± 0.897 s for periapical radiographs on standard clinical hardware. Conclusions: ORTHOSEG demonstrates efficiency suitable for integration into routine workflows. This study confirms ORTHOSEG’s reliability and potential to improve diagnostic workflows, offering clinicians a valuable tool for faster and more detailed radiograph analysis.
Future research will focus on extending validation across diverse clinical scenarios to ensure broader applicability. However, this study has limitations, including the use of a dataset derived from a European population and the absence of usability and clinical workflow evaluation, which should be addressed in future studies.
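The two headline metrics above can be computed directly from binary segmentation masks. A minimal NumPy sketch (an illustration only, not the ORTHOSEG implementation; `dice_iou` and `mean_scores` are hypothetical helper names):

```python
import numpy as np

def dice_iou(pred: np.ndarray, truth: np.ndarray) -> tuple:
    """Dice Similarity Coefficient and Intersection over Union for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / np.logical_or(pred, truth).sum()
    return float(dice), float(iou)

def mean_scores(pairs):
    """Average DSC/IoU over (prediction, ground-truth) mask pairs, one per element."""
    scores = [dice_iou(p, t) for p, t in pairs]
    mdsc = float(np.mean([s[0] for s in scores]))
    miou = float(np.mean([s[1] for s in scores]))
    return mdsc, miou
```

Averaging these per-element scores over all annotated elements yields the mDSC and mIoU figures of the kind reported above.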
Zahra Mardani, Ali Salehi, Fatemeh Jabbarpor
et al.
Background and aim: Artificial intelligence has garnered significant attention recently, and its application in medicine and dentistry has been proposed. However, few studies have been done in the field of dental implants. Investigating the factors affecting its accuracy is also very important. Therefore, the present study was conducted to investigate the diagnostic accuracy of artificial intelligence in bone density in implant surgery.
Material and methods: The relevant published literature was gathered through a systematic search of four electronic databases: Web of Science, Scopus, MEDLINE/PubMed, and Cochrane. The developed PICO question served as the basis for the search terms. Only articles published in English within the previous five years (between January 2019 and February 2025) were included in the search. The accuracy of AI was used as an effect size in a fixed-effects model with inverse-variance methods and 95% confidence intervals (CI). All data analysis was performed using Stata v18 software (2025 release).
Results: Artificial intelligence-guided implant surgery was 87% accurate (ES 0.87, 95% CI: -0.01, 1.75). According to meta-regression, a higher bone density increased the risk of angular and implant apex deviations.
Conclusions: According to the present meta-analysis, the accuracy of implant patterns designed with artificial intelligence is high; however, higher bone density is among the factors that can lead to implant deviation.
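The fixed-effects, inverse-variance pooling described in the methods reduces to weighting each study's effect size by the reciprocal of its variance. A minimal sketch with illustrative numbers (not the studies actually pooled in this meta-analysis):

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted fixed-effects pooled estimate with a 95% CI.
    effects: per-study effect sizes; variances: their sampling variances."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))          # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

For example, pooling two hypothetical studies with effects 0.8 and 0.9 and equal variances yields a pooled estimate of 0.85 with a symmetric 95% CI around it.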
Abstract The brain is structurally and functionally modular, although recent evidence has raised questions about the extent of both types of modularity. Using a simple, toy artificial neural network setup that allows for precise control, we find that structural modularity does not in general guarantee functional specialization (across multiple measures of specialization). Further, in this setup (1) specialization only emerges when features of the environment are meaningfully separable, (2) specialization preferentially emerges when the network is strongly resource-constrained, and (3) these findings are qualitatively similar across several different variations of network architectures. Finally, we show that functional specialization varies dynamically across time, and these dynamics depend on both the timing and bandwidth of information flow in the network. We conclude that a static notion of specialization is likely too simple a framework for understanding intelligence in situations of real-world complexity, from biology to brain-inspired neuromorphic systems.
Satriagasa Muhammad Chrisna, Suryatmojo Hatma, Kusumandari Ambar
et al.
Accurate land use information is vital for effective watershed monitoring and management. This study explores the use of ChatGPT-4o, a multimodal large language model (LLM), to interpret UAV-derived orthomosaics in the Tamansari Catchment, Central Java, Indonesia. High-resolution imagery from 2018 and 2025 was analyzed through natural language prompts to identify land use types and detect changes over time. Results revealed a significant shift toward intensive agriculture, with agroforestry decreasing from 32.3% to 4.8% and secondary forest cover halving from 19.4% to 9.7%. A hybrid validation strategy was applied, combining internal spatial consistency checks with external visual verification using Google Street View. While the method does not produce pixel-based classification maps, it enables descriptive interpretation without requiring advanced technical skills. The findings demonstrate that ChatGPT-4o can serve as a rapid, accessible, and cost-effective tool for participatory watershed monitoring, especially in data-scarce or low-resource environments. Further integration with ground-truth data is recommended to improve accuracy.
Prottay Kumar Adhikary, Isha Motiyani, Gayatri Oke
et al.
Abstract
Background: The quality and accessibility of menstrual health education (MHE) in low- and middle-income countries, including India, remain inadequate due to persistent challenges (eg, poverty, social stigma, and gender inequality). While community-driven initiatives have sought to raise awareness, artificial intelligence offers a scalable and efficient solution for disseminating accurate information. However, existing general-purpose large language models (LLMs) are often ill-suited for this task, tending to exhibit low accuracy, cultural insensitivity, and overly complex responses. To address these limitations, we developed MenstLLaMA, a specialized LLM tailored to the Indian context and designed to deliver MHE empathetically, supportively, and accessibly.
Objective: We aimed to develop and evaluate MenstLLaMA, a specialized LLM tailored to deliver accurate, culturally sensitive MHE, and assess its effectiveness in comparison to existing general-purpose models.
Methods: We curated MENST, a novel, domain-specific dataset comprising 23,820 question-answer pairs aggregated from medical websites, government portals, and health education resources. This dataset was systematically annotated with metadata capturing age groups, regions, topics, and sociocultural contexts. MenstLLaMA was developed by fine-tuning Meta-LLaMA-3-8B-Instruct, using parameter-efficient fine-tuning with low-rank adaptation to achieve domain alignment while minimizing computational overhead. We benchmarked MenstLLaMA against 9 state-of-the-art general-purpose LLMs, including GPT-4o, Claude-3, Gemini 1.5 Pro, and Mistral. The evaluation followed a multilayered framework: (1) automatic evaluation using standard natural language processing metrics (BLEU [Bilingual Evaluation Understudy], METEOR [Metric for Evaluation of Translation with Explicit Ordering], ROUGE-L [Recall-Oriented Understudy for Gisting Evaluation-Longest Common Subsequence], and BERTScore [Bidirectional Encoder Representations from Transformers Score]); (2) evaluation by clinical experts (N=18), who rated 200 expert-curated queries for accuracy and appropriateness; and (3) medical practitioner interaction through the ISHA (Intelligent System for Menstrual Health Assistance) interactive chatbot, assessing qualitative dimensions (eg, relevance, understandability, preciseness, correctness, and context sensitivity).
Results: MenstLLaMA achieved the highest scores in BLEU (0.059) and BERTScore (0.911), outperforming GPT-4o (BLEU: 0.052; BERTScore: 0.896) and Claude-3 (BERTScore: 0.888). Clinical experts preferred MenstLLaMA’s responses over gold-standard answers in several culturally sensitive cases. In medical practitioners’ evaluations using ISHA, the chat interface powered by MenstLLaMA, the model scored 3.5 across the qualitative dimensions of relevance, understandability, preciseness, correctness, tone, flow, and context sensitivity.
Conclusions: MenstLLaMA demonstrates exceptional accuracy, empathy, and user satisfaction within the domain of MHE, bridging critical gaps left by general-purpose LLMs. Its potential for integration into broader health education platforms positions it as a transformative tool for menstrual well-being. Future research could explore its long-term impact on public perception and menstrual hygiene practices, while expanding demographic representation, enhancing context sensitivity, and integrating multimodal and voice-based interactions to improve accessibility across diverse user groups.
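Among the automatic metrics reported, BLEU is the simplest to reconstruct: a geometric mean of clipped n-gram precisions multiplied by a brevity penalty. A plain-Python sketch (sentence-level, uniform weights; an illustration, not the evaluation code used in the study):

```python
import math
from collections import Counter

def sentence_bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU: clipped n-gram precisions (n = 1..max_n),
    geometric mean, times a brevity penalty for short candidates."""
    precisions = []
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        cand_ngrams = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())  # clipping
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * geo_mean
```

Production evaluations typically add smoothing for short sentences and corpus-level aggregation; this sketch shows only the core computation behind scores like the 0.059 reported above.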
Computer applications to medicine. Medical informatics, Public aspects of medicine
The rapid development of Generative Artificial Intelligence (GAI) in higher education presents abundant opportunities for its high-quality development, but also introduces ethical concerns. This paper analyses the benefits of GAI in higher education in terms of teaching, learning, and evaluation. Additionally, it explores the ethical risks and reasons associated with GAI in higher education covering aspects such as data, algorithms, academia, and teacher-student relationships. The paper concludes by proposing methods and strategies to mitigate these ethical risks.
Nanotechnologists and medical researchers are working hard to develop new and innovative ways to use nanorobots as nanomedicine to improve healthcare outcomes and revolutionize the field of therapeutics. Nanotechnology has the potential to revolutionize healthcare by providing new ways of treating chronic diseases in the field of medicine. A “Gold Nano Thermo Robot” (GNTR) model has been proposed in this research article, which can be considered a nanomedicine that will deliver controlled thermal therapy to targeted malignant tissues without damaging healthy tissues. The proposed nanotherapeutic system, empowered with a nano sensor network, an interbody communication network, and the Internet of nanomedical things, has been used to normalize and control hyperthermal waves in real time to eliminate breast cancer cells using the “SEE and TREAT” technique. To generate hyperthermia, the proposed GNTR is irradiated by laser pulses, triggering a Coulomb explosion that produces a large amount of dispersed hyperthermia waves. To convert the intensity of this dispersed and irregular hyperthermia into a regulated and disciplined format, a Finite Difference Method has been used to develop a “Heat Control System.” A comparative analysis has been provided of the intricate relationship between the required radius of Gold Nano Thermo Robots and the volume depth of the tumor for penetration, with a keen focus on evaluating how different GNTR sizes fit or do not fit the task of effectively treating tumors at various depths. Furthermore, the effectiveness of treatment has multifaceted outcomes shaped by the interplay between two critical factors, the temperature limit and the therapy duration, examined across a comprehensive matrix of thermal therapy durations (25 to 60 minutes) and temperature limits (33 °C to 60 °C). The best-fit, best-response therapy session was verified at a temperature limit of 42 °C for 30 minutes, achieving near-complete tumor ablation with minimal harm to healthy tissues. The complex physical effects on the Gold Nano Robot surfaces due to the Coulomb explosion procedure are also provided in the form of a simulation analysis, explained in nine panels.
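The “Heat Control System” above is built on a Finite Difference Method. As a minimal illustration of that numerical core (a 1D explicit diffusion scheme with made-up parameters, not the authors' model):

```python
import numpy as np

def heat_fdm_1d(u0, alpha, dx, dt, steps):
    """Explicit 1D finite-difference solution of the heat equation u_t = alpha * u_xx.
    Stable only when r = alpha*dt/dx**2 <= 0.5; boundaries held fixed (Dirichlet)."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for this step size"
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        # update interior points from the previous time level simultaneously
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u
```

Starting from a single hot point, each step spreads the temperature toward its neighbors, which is the mechanism a finite-difference heat controller exploits to predict and regulate temperature evolution.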
Orken Mamyrbayev, Keylan Alimhan, Dina Oralbekova
et al.
In this study, we investigated the use of the pre-sowing electrophysical stimulation of seeds, particularly focusing on optimizing technological regimes for enhancing seed quality. The aim of this study was to improve sunflower seed germination utilizing laser optical radiation. The methods explored involved the pre-sowing stimulation of oilseeds and analyzing the key mechanisms affecting germination. Through our experimental research, we sought to identify the most effective laser irradiation parameters, ensuring the maximum seed quality improvement with minimal energy use. Using seeds of the first reproduction, we employed artificial aging to simulate a reduced seed quality and determined optimal irradiation regimes. Standard methods were followed to assess seed quality before and after irradiation, with 6–7 days of further exposure. Seed germination was carried out under controlled light and temperature conditions using the “on paper” method with paper napkins. A full factorial experiment was performed and key parameters for laser irradiation were determined, confirming that the pre-sowing laser pulse treatment significantly improved seed quality. In this research, we developed a biotechnical system for processing seeds and propose a method to adjust irradiation parameters based on the initial seed quality. The system effectively enhanced germination and crop yield, offering a reliable solution for improving sunflower seed productivity through laser treatment.
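The full factorial experiment mentioned above enumerates every combination of factor levels. A generic sketch (the factor names here are made up for illustration, not the study's actual laser parameters):

```python
from itertools import product

def full_factorial(levels):
    """Enumerate every run of a full factorial experiment.
    levels: dict mapping factor name -> list of levels to test."""
    names = list(levels)
    return [dict(zip(names, combo)) for combo in product(*levels.values())]
```

With k factors at n levels each, this produces n**k runs, e.g. two factors at two levels each yields four runs.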
Paulo Monteiro de Carvalho Monson, Vinicius Augusto Dare de Almeida, Gabriel Augusto David
et al.
Computer vision tasks demand a significant amount of data for effective training and inference. However, data is often insufficient for a variety of reasons, resulting in computational models with inadequate performance. Traditional data augmentation techniques address this overfitting problem; however, their application is not always possible or desirable. In this context, this paper presents a different data augmentation technique for classification methods, based on adversarial images generated with the Fast Gradient Sign Method (FGSM) with added noise, to reduce the impact of sample imbalance and enhance classifier performance. To validate the method, a set of images was used for the classification of diseases in coffee plants caused by a lack of soil nutrients. The results showed an improvement in model performance for this classification task, proving the validity of the proposed method, which can be used as an alternative to traditional data augmentation methods.
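FGSM perturbs each input in the direction of the sign of the loss gradient with respect to that input. A self-contained sketch on a simple logistic classifier (illustrative only; the paper applies the idea to image classifiers, and `fgsm_augment` is a hypothetical name):

```python
import numpy as np

def fgsm_augment(x, y, w, b, eps, noise_scale=0.0, rng=None):
    """Fast Gradient Sign Method for a logistic classifier p = sigmoid(w.x + b).
    For binary cross-entropy loss, the gradient w.r.t. the input is (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w
    x_adv = x + eps * np.sign(grad_x)          # step that increases the loss
    if noise_scale > 0.0:                       # optional added noise, per the paper's variant
        rng = rng or np.random.default_rng(0)
        x_adv = x_adv + rng.normal(0.0, noise_scale, size=x_adv.shape)
    return x_adv
```

The adversarial copies are then added to the under-represented classes of the training set, which is how adversarial images can serve as augmentation rather than as attacks.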
Abstract Cortical networks undergo rewiring every day due to learning and memory events. To investigate the trends of population adaptation in the neocortex over time, we recorded cellular activity of large-scale cortical populations in response to neutral environments and conditioned contexts and identified a general intrinsic cortical adaptation mechanism, which we name rectified activity-dependent population plasticity (RAPP). Comparing adjacent days, previously activated neurons reduce activity but retain residual potentiation, and increase population variability in proportion to their activity during previous recall trials. RAPP predicts both the decay of context-induced activity patterns and the emergence of sparse memory traces. Simulation analysis reveals that local inhibitory connections might account for the residual potentiation in RAPP. Intriguingly, introducing the RAPP phenomenon into an artificial neural network shows promising improvement in small-sample-size pattern recognition tasks. Thus, RAPP represents a phenomenon of cortical adaptation, contributing to the emergence of long-lasting memory and higher cognitive functions.
The main objective of the study was to analyze the historical evolution of administrative law in Jordan from 1970 to the present. The research methodology involved the use of historical analysis and hermeneutics method. The historical analysis revealed significant developments in the development of administrative law in Jordan from 1970 to the present. In particular, the period from 1970 to 2000 was broadly characterized by intense reforms aimed at modernizing public administration and legislation to promote social justice and economic growth. With the advent of digital technologies, from 2000 to 2024 there was a significant impact of artificial intelligence on administrative processes, generating new opportunities to optimize public management and improve the quality of services provided to citizens. In the global context, administrative law has also gone through a difficult path of adaptation to new challenges such as globalization and rapid technological change. It is concluded that, through constant adaptation and dialectical innovation, administrative law continues to provide effective and fair governance that meets the needs of modern society.
Khadijeh Moulaei, Mohammad Reza Afrash, Mohammad Parvin
et al.
Abstract The need for intubation in methanol-poisoned patients, if not predicted in time, can lead to irreparable complications and even death. Artificial intelligence (AI) techniques like machine learning (ML) and deep learning (DL) greatly aid in accurately predicting intubation needs for methanol-poisoned patients. So, our study aims to assess Explainable Artificial Intelligence (XAI) for predicting intubation necessity in methanol-poisoned patients, comparing deep learning and machine learning models. This study analyzed a dataset of 897 patient records from Loghman Hakim Hospital in Tehran, Iran, encompassing cases of methanol poisoning, including those requiring intubation (202 cases) and those not requiring it (695 cases). Eight established ML (SVM, XGB, DT, RF) and DL (DNN, FNN, LSTM, CNN) models were used. Techniques such as tenfold cross-validation and hyperparameter tuning were applied to prevent overfitting. The study also focused on interpretability through SHAP and LIME methods. Model performance was evaluated based on accuracy, specificity, sensitivity, F1-score, and ROC curve metrics. Among DL models, LSTM showed superior performance in accuracy (94.0%), sensitivity (99.0%), specificity (94.0%), and F1-score (97.0%). CNN led in ROC with 78.0%. For ML models, RF excelled in accuracy (97.0%) and specificity (100%), followed by XGB with sensitivity (99.37%), F1-score (98.27%), and ROC (96.08%). Overall, RF and XGB outperformed other models, with accuracy (97.0%) and specificity (100%) for RF, and sensitivity (99.37%), F1-score (98.27%), and ROC (96.08%) for XGB. ML models surpassed DL models across all metrics, with accuracies from 93.0% to 97.0% for DL and 93.0% to 99.0% for ML. Sensitivities ranged from 98.0% to 99.37% for DL and 93.0% to 99.0% for ML. DL models achieved specificities from 78.0% to 94.0%, while ML models ranged from 93.0% to 100%. F1-scores for DL were between 93.0% and 97.0%, and for ML between 96.0% and 98.27%. 
DL models scored ROC between 68.0% and 78.0%, while ML models ranged from 84.0% to 96.08%. Key features for predicting intubation necessity include GCS at admission, ICU admission, age, longer folic acid therapy duration, elevated BUN and AST levels, VBG_HCO3 at initial record, and hemodialysis presence. This study showcases XAI's effectiveness in predicting intubation necessity in methanol-poisoned patients. ML models, particularly RF and XGB, outperform DL counterparts, underscoring their potential for clinical decision-making.
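The metrics compared across these models follow directly from the binary confusion matrix. A small reference sketch (generic, not the study's evaluation code):

```python
def clf_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), specificity, and F1 from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn) if tp + fn else 0.0   # true positive rate
    spec = tn / (tn + fp) if tn + fp else 0.0   # true negative rate
    prec = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
    return acc, sens, spec, f1
```

For intubation prediction, sensitivity (catching every patient who will need intubation) and specificity (avoiding unnecessary intubation) trade off, which is why the study reports both alongside accuracy and F1.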
Abstract Recent studies have demonstrated the great value of deep-learning (DL) methods for improving El Niño-Southern Oscillation (ENSO) predictions. However, the black-box nature of DL makes it challenging to physically interpret the mechanisms responsible for successful ENSO predictions. Here, we demonstrate an interpretable method by performing perturbation experiments on predictors and quantifying input-output relationships in predictions using a transformer-based model. ENSO-related thermal precursors serving as initial conditions during multi-month time intervals (TIs) are identified in the equatorial-northern Pacific, acting to precondition input predictors to provide for long-lead ENSO predictability. Results reveal the existence of upper-ocean temperature anomaly pathways and consistent phase propagations of thermal precursors around the tropical Pacific. It is illustrated that three-dimensional thermal fields and their basinwide evolution during long TIs act to enhance long-lead prediction skills of ENSO. These physically explainable results indicate that neural networks can adequately represent predictable precursors in the input predictors for successful ENSO predictions.
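The perturbation experiments described here share a simple core: nudge one input feature, measure the change in the model's output. A finite-difference sketch of that idea (illustrative only, not the transformer model used in the study):

```python
import numpy as np

def perturbation_sensitivity(predict, x, eps=1e-2):
    """Estimate each input feature's influence on a scalar model output by
    perturbing it slightly and measuring the output change (finite differences)."""
    base = predict(x)
    sens = np.zeros(x.size, dtype=float)
    for i in range(x.size):
        xp = x.astype(float).copy()
        xp.flat[i] += eps
        sens[i] = (predict(xp) - base) / eps   # approximate d(output)/d(feature i)
    return sens
```

Applied to gridded ocean-temperature predictors, large sensitivities flag the regions and time intervals acting as precursors, which is the input-output quantification the abstract refers to.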
Objective: This study compares the relationships between five anthropometric indices (a body shape index [ABSI], body roundness index [BRI], waist circumference [WC], body mass index [BMI], and waist-to-height ratio [WHtR]) and hypertension, assessing their predictive capacities. The aim is to determine the specific numerical changes in hypertension incidence, systolic blood pressure (SBP) and diastolic blood pressure (DBP) for each increase in standard deviation of these indices, and to identify the optimal predictive indicators for different populations, including the calculation of cutoff values. Methods: This study used data from the NHANES datasets spanning 2007 to 2018. Logistic regression analysis was used to quantify the associations between these anthropometric indices and hypertension, calculating β coefficients and odds ratios (ORs). Receiver operating characteristic (ROC) analysis was used to evaluate the predictive ability of each index for hypertension. Results: For each increase in standard deviation in WC, BMI, WHtR, ABSI and BRI, the prevalence of hypertension increased by 33% (95% CI: 27%–40%), 32% (95% CI: 26%–38%), 35% (95% CI: 28%–42%), 9% (95% CI: 4%–16%) and 32% (95% CI: 26%–38%), respectively. The SBP correspondingly increased by 2.36 mmHg (95% CI: 2.16–2.56), 2.41 mmHg (95% CI: 2.21–2.60), 2.48 mmHg (95% CI: 2.28–2.68), 0.42 mmHg (95% CI: 0.19–0.66) and 2.46 mmHg (95% CI: 2.26–2.66), respectively. Similarly, DBP increased by 1.83 mmHg (95% CI: 1.68–1.98), 1.72 mmHg (95% CI: 1.58–1.87), 1.72 mmHg (95% CI: 1.57–1.88), 0.44 mmHg (95% CI: 0.27–0.62) and 1.64 mmHg (95% CI: 1.48–1.79). In the youth and middle-aged groups, WC had the best predictive ability, with AUCs of 0.749 and 0.603, respectively. Among the elderly group, the AUCs for all five indices ranged between 0.5 and 0.52. Conclusion: Increases in WC, BMI, WHtR and BRI are significantly associated with higher incidences of hypertension and increases in SBP and DBP, while the impact of ABSI on blood pressure is relatively weak. Stratified analysis indicates significant age-related differences in the predictive value of these indices, with the strongest associations observed in the youth group, followed by the middle-aged group, and the weakest in the elderly. WC demonstrates excellent predictive ability across youth populations.
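The per-standard-deviation effects reported above come from converting regression coefficients: a per-unit logistic β becomes a per-SD odds ratio via exp(β·SD), and an OR of 1.33 reads as a 33% increase in the odds. A sketch with illustrative numbers (not the study's fitted coefficients):

```python
import math

def per_sd_odds_ratio(beta_per_unit, sd):
    """Odds ratio for a one-standard-deviation increase in a predictor,
    given its per-unit logistic regression coefficient."""
    return math.exp(beta_per_unit * sd)

def pct_increase(odds_ratio):
    """Report an odds ratio as a percentage increase in odds (OR 1.33 -> 33%)."""
    return (odds_ratio - 1.0) * 100.0
```

For instance, a hypothetical coefficient of 0.05 per cm of waist circumference with SD = 10 cm gives an OR of exp(0.5) per SD, i.e. roughly a 65% increase in the odds of hypertension.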
Automated machine learning (AutoML), which aims to facilitate the design and optimization of machine-learning models with reduced human effort and expertise, is a research field with significant potential to drive the development of artificial intelligence in science and industry. However, AutoML also poses challenges due to its resource and energy consumption and environmental impact, aspects that have often been overlooked. This paper predominantly centers on the sustainability implications arising from computational processes within the realm of AutoML. Within this study, a proof of concept has been conducted using the widely adopted Scikit-learn library. Energy efficiency metrics have been employed to fine-tune hyperparameters in both Bayesian and random search strategies, with the goal of reducing the environmental footprint. These findings suggest that AutoML can be rendered more sustainable by thoughtfully considering the energy efficiency of computational processes. The results obtained from the experimentation are promising and align with the framework of Green AI, a paradigm aiming to reduce the ecological footprint of the entire AutoML process. The most suitable proposal for the studied problem, guided by the proposed metrics, has been identified, with potential generalizability to other analogous problems.
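Incorporating an energy term into hyperparameter search can be as simple as penalizing each trial's score by a consumption proxy. A sketch of that idea using wall-clock training time as the proxy (illustrative only; the paper works with Scikit-learn and proper energy-efficiency metrics, and `energy_aware_random_search` is a hypothetical name):

```python
import random
import time

def energy_aware_random_search(train_eval, space, n_iter=20, energy_weight=0.5, seed=0):
    """Random hyperparameter search scoring each trial by accuracy minus a
    weighted energy proxy (here, wall-clock training time), in the spirit of Green AI.
    train_eval: callable mapping a params dict to an accuracy in [0, 1].
    space: dict mapping hyperparameter name -> list of candidate values."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_iter):
        params = {k: rng.choice(v) for k, v in space.items()}
        t0 = time.perf_counter()
        accuracy = train_eval(params)
        energy_proxy = time.perf_counter() - t0          # stand-in for measured energy
        score = accuracy - energy_weight * energy_proxy  # accuracy-vs-footprint tradeoff
        if best is None or score > best[0]:
            best = (score, params)
    return best[1]
```

Raising `energy_weight` shifts the search toward cheaper configurations, trading a little accuracy for a smaller footprint, which is the tradeoff the Green AI framing makes explicit.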
Many constraints limit the classification accuracy of face recognition systems in smart office automation applications, and these limitations make masked face recognition an important research area. In this research, a novel deep-learning-based Faster R-CNN is integrated with the Internet of Things (IoT) to overcome security issues in the office. Images of existing employees were gathered in a database and pre-processed to train the neural network. Faster R-CNN employs VGG-16 as the foundation of its architecture to extract features from the pre-processed pictures. Recent developments in the Internet of Things (IoT) and deep learning have made it possible to address the difficulties of face recognition with deep neural networks. Based on the feature classification, when a member of the organization approaches the door, it instantly opens; the door remains locked for an unknown individual. Images of both authorized and unauthorized persons are stored in the cloud and sent to the office manager for monitoring. The proposed Faster R-CNN model attains an accuracy of 99.3%, better than the existing systems, improving overall accuracy by 2.06%, 5.63%, 9.36%, and 3.54% over Deep CNN, SVM, LBPH, and OMTCNN, respectively.
Electric apparatus and materials. Electric circuits. Electric networks
Shadow detection provides worthwhile information for remote sensing applications, e.g., building height estimation. Shadow areas form on the side of tall objects opposite the sunlight radiation; thus, the solar illumination angle is required to find probable shadow areas. In recent years, Very High Resolution (VHR) imagery has provided more detailed data on objects, including shadow areas. In this regard, the motivation of this paper is to propose a reliable feature, Shadow Low Gradient Direction (SLGD), to automatically determine shadow and solar illumination direction in VHR data. The proposed feature is based on an inherent spatial property of fine-resolution shadow areas. Therefore, it can facilitate shadow-based operations, especially when solar illumination information is not available in remote sensing metadata. Shadow intensity is assumed to depend on two factors, the surface material and the sunlight illumination, which is analyzed via directional gradient values in low-gradient-magnitude areas. The feature accounts for the sunlight illumination and ignores material differences. The method is fully implemented on the Google Earth Engine cloud computing platform and is evaluated on VHR data with 0.3 m resolution. Finally, SLGD performance is evaluated in determining shadow direction and in refining shadow maps.
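The SLGD idea, i.e., reading the dominant gradient direction inside low-gradient-magnitude areas, can be sketched as follows (a simplified NumPy illustration, not the Google Earth Engine implementation; thresholds and bin counts are assumptions):

```python
import numpy as np

def dominant_low_gradient_direction(img, mag_percentile=30, bins=36):
    """Estimate the dominant gradient direction (degrees, 0-360) inside
    low-gradient-magnitude areas of a grayscale image."""
    gy, gx = np.gradient(img.astype(float))          # d/drow, d/dcol
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0
    nonzero = mag > 0                                # ignore perfectly flat pixels
    thresh = np.percentile(mag[nonzero], mag_percentile)
    low = nonzero & (mag <= thresh)                  # "low gradient" mask
    hist, edges = np.histogram(ang[low], bins=bins, range=(0.0, 360.0), weights=mag[low])
    k = int(np.argmax(hist))
    return 0.5 * (edges[k] + edges[k + 1])           # bin-center angle
```

On a smooth intensity ramp (such as the soft falloff at a shadow edge), the magnitude-weighted direction histogram peaks along the ramp direction, which is the cue SLGD exploits when solar metadata is missing.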
Sarah A. Graham, Viveka Pitter, Jonathan H. Hori
et al.
Objective The National Diabetes Prevention Program (DPP) reduces diabetes incidence and associated medical costs but is typically staffing-intensive, limiting scalability. We evaluated an alternative delivery method with 3933 members of a program powered by conversational Artificial Intelligence (AI) called Lark DPP that has full recognition from the Centers for Disease Control and Prevention (CDC). Methods We compared weight loss maintenance at 12 months between two groups: 1) CDC qualifiers who completed ≥4 educational lessons over 9 months (n = 191) and 2) non-qualifiers who did not complete the required CDC lessons but provided weigh-ins at 12 months (n = 223). For a secondary aim, we removed the requirement for a 12-month weight and used logistic regression to investigate predictors of weight nadir in 3148 members. Results CDC qualifiers maintained greater weight loss at 12 months than non-qualifiers (M = 5.3%, SE = .8 vs. M = 3.3%, SE = .8; p = .015), with 40% achieving ≥5%. The weight nadir of 3148 members was 4.2% (SE = .1), with 35% achieving ≥5%. Male sex (β = .11; P = .009), weeks with ≥2 weigh-ins (β = .68; P < .0001), and days with an AI-powered coaching exchange (β = .43; P < .0001) were associated with a greater likelihood of achieving ≥5% weight loss. Conclusions An AI-powered DPP facilitated weight loss and maintenance commensurate with outcomes of other digital and in-person programs not powered by AI. Beyond CDC lesson completion, engaging with AI coaching and frequent weighing increased the likelihood of achieving ≥5% weight loss. An AI-powered program is an effective method to deliver the DPP in a scalable, resource-efficient manner to keep pace with the prediabetes epidemic.
Computer applications to medicine. Medical informatics