Advances in proton therapy technology and global clinical applications
Qi Zhang, Wencui Yang, Lina Tan
et al.
Proton therapy, by leveraging its unique physical characteristic of the Bragg peak, enables high-precision dose delivery to the tumor target while effectively protecting surrounding normal tissues, and has become an important representative of advanced radiotherapy. This review aims to systematically summarize key technological breakthroughs in recent years that have driven the progress of proton therapy, including compact superconducting accelerators, pencil beam scanning (PBS), image-guided proton therapy (IGPT), and the transformative ultra-high dose rate FLASH radiotherapy, while highlighting the role of artificial intelligence (AI) in advancing proton therapy toward real-time adaptive precision radiotherapy. The article also explores the global distribution and development status of proton centers, with a specific analysis of China’s notable advancements as an emerging market in center construction, equipment localization, and the treatment of characteristic local tumor types. Moving forward, it is essential to continue promoting technological integration and innovation, strengthen high-quality clinical research, and develop a more accessible, intelligent, and personalized proton therapy system to achieve broader clinical application and patient benefit.
Neoplasms. Tumors. Oncology. Including cancer and carcinogens
Raman Spectroscopy Pre-Trained Encoder: A Self-Supervised Learning Approach for Data-Efficient Domain-Independent Spectroscopy Analysis
Abhiraam Eranti, Yogesh Tewari, Rafael Palacios
et al.
Deep-learning methods have boosted the analytical power of Raman spectroscopy, yet they still require large, task-specific, labeled datasets and often fail to transfer across application domains. This study explores pre-trained encoders as a solution. Pre-trained encoders have significantly impacted Natural Language Processing and Computer Vision with their ability to learn transferable representations that apply across a variety of datasets, greatly reducing the time and data required to create capable models. The following work puts forward a new approach that brings these benefits to Raman spectroscopy. The proposed approach, RSPTE (Raman Spectroscopy Pre-Trained Encoder), is designed to learn generalizable spectral representations without labels. RSPTE employs a novel domain adaptation strategy using an unsupervised Barlow Twins decorrelation objective to learn fundamental spectral patterns from multi-domain Raman spectroscopy datasets containing samples from medicine, biology, and mineralogy. Transferability is demonstrated by evaluating several models created by fine-tuning RSPTE for different application domains: medicine (detection of melanoma and COVID), biology (pathogen identification), and agriculture. As an example, using only 20% of the dataset, models trained with RSPTE achieve accuracies ranging from 50% to 86% (depending on the dataset used), while without RSPTE the range is 9% to 57%. Using the full dataset, accuracies with RSPTE range from 81% to 97%, and without pre-training from 51% to 97%. Compared with current methods and state-of-the-art models in Raman spectroscopy, RSPTE exhibits competitive results, especially when less data is available. These results provide evidence that the proposed RSPTE model can effectively learn and transfer generalizable spectral features across different domains, achieving accurate results with less data in less time (both data collection time and training time).
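Since the abstract does not give the training details, a minimal sketch of the Barlow Twins decorrelation objective that RSPTE builds on may help; the embedding shapes and the λ value below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins redundancy-reduction loss on two embedding batches.

    z_a, z_b: (batch, dim) embeddings of two augmented views of the
    same spectra (e.g. noise-added or baseline-shifted copies).
    """
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    n = z_a.shape[0]
    c = z_a.T @ z_b / n  # cross-correlation matrix, shape (dim, dim)
    on_diag = ((1.0 - np.diag(c)) ** 2).sum()            # pull diagonal toward 1
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # push off-diagonal toward 0
    return on_diag + lam * off_diag
```

Driving the diagonal toward 1 keeps the two views' features invariant to the augmentation, while suppressing the off-diagonal terms decorrelates the embedding dimensions, which is what lets the encoder learn label-free spectral structure.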
Electrical engineering. Electronics. Nuclear engineering
Micro‐/Nanohierarchical Surfaces for Enhanced Pool Boiling in Large‐Area Silicon Multichips
Youngseob Lee, Kiwan Kim, Yunseo Kim
et al.
With the rising demand for data centers, the need for an efficient thermal management approach becomes increasingly critical. This study examines the enhancement in pool boiling heat transfer on a customized multichip module, designed to mimic artificial intelligence chip layouts for high‐performance computing. Experiments are conducted on smooth surfaces and hierarchical structures integrating micropillars and porous copper, specifically copper inverse opal (CuIO) and copper nanowire (NW). The results demonstrate significant enhancements in critical heat flux (CHF) and heat transfer coefficient (HTC) through these hierarchical structures. Notably, the NW‐CuIO‐integrated hierarchical structure exhibits the highest CHF (234 W cm⁻²), achieving a 166% enhancement over smooth silicon. The HTC enhancement is more pronounced for the CuIO‐integrated hierarchical structure; this structure achieves an HTC of 70.3 kW m⁻² K⁻¹, which represents a 166% improvement. The heater layout, engineered surfaces, and their synergistic effects are analyzed through visualization. The observed boiling inversion phenomena further underscore the importance of sequential activation of nucleation sites in improving boiling performance. This study provides valuable insights into the mechanisms governing the enhancement of boiling heat transfer and offers practical guidance for developing efficient thermal management solutions for data centers.
Optimized CNN-BiLSTM framework for reactive power management and voltage profile improvement in renewable energy based power grids
Lijo Jacob Varghese, Suma Sira Jacob, Jaisiva Selvaraj
et al.
Abstract This article describes a method for improving power grid voltage profiles by regulating reactive power more effectively through the integration of hybrid renewable energy systems (HRES) in smart grids. The unpredictable nature of renewable energy sources (RES), such as wind turbines and solar systems, causes an unstable voltage profile throughout the grid, underscoring the problem of voltage fluctuation in power grids. This article proposes DSTATCOM, a reactive power compensation device, to address these voltage fluctuations and supply the grid with the required reactive power (var). DSTATCOM helps preserve voltage stability by consistently lowering the voltage drop, which guarantees an increase in active power flow and thus improves the overall voltage profile throughout the electrical grid. The proposed solution combines convolutional neural networks (CNN) with bidirectional long short-term memory (BiLSTM) networks to control and maximize DSTATCOM performance. These artificial intelligence (AI) methods support dynamic reactive power management, improving the grid's voltage profile and DSTATCOM's performance. The method works well for real-time voltage regulation in smart grid settings, since the CNN is employed for feature extraction and the BiLSTM captures temporal dependencies in the grid's power behavior. The CNN-BiLSTM network's weights are also tuned using an adaptive parrot optimizer (APO). The proposed approach was implemented in the MATLAB/Simulink environment, and its performance was assessed under three scenarios. Simulation results confirm that the method achieved up to 33.4% loss reduction, improved the voltage stability index (VSI) to 1.02 p.u., kept total harmonic distortion (THD) below 1.7%, and cut settling time to 0.075 s. The hybrid PV/wind setup ensured superior voltage stability, while the model attained high prediction accuracy with an R² of 0.9672 and an RMSE of 3.0094.
By controlling the reactive power balance, the proposed system assures grid stability, improves the voltage profile, and reduces power loss.
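To make the CNN-BiLSTM pipeline concrete, here is a toy sketch of the idea: a 1-D convolution extracts local features from a voltage series, and a bidirectional recurrent pass (a plain tanh Elman cell standing in for the LSTM, to keep the sketch short) captures temporal context in both directions. The shapes, weights, and synthetic signal are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernel):
    """Valid 1-D convolution: extracts local features from a voltage series."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def bidirectional_rnn(seq, w_in, w_rec):
    """Minimal bidirectional recurrent pass (tanh Elman cell standing in
    for an LSTM): one hidden trace per direction, concatenated."""
    def run(s):
        h, out = 0.0, []
        for x in s:
            h = np.tanh(w_in * x + w_rec * h)
            out.append(h)
        return np.array(out)
    fwd = run(seq)
    bwd = run(seq[::-1])[::-1]
    return np.stack([fwd, bwd], axis=1)  # (time, 2) features

# Toy pipeline: noisy grid voltage samples -> conv features -> BiRNN states.
voltage = np.sin(np.linspace(0, 4 * np.pi, 32)) + 0.05 * rng.standard_normal(32)
features = conv1d(voltage, kernel=np.array([0.25, 0.5, 0.25]))  # smoothing filter
states = bidirectional_rnn(features, w_in=1.2, w_rec=0.5)
summary = states.mean(axis=0)  # pooled feature a regulation head would consume
```

In the real system these pooled features would feed the controller that sets the DSTATCOM reactive power command; here they simply demonstrate the feature-extraction/temporal-context split between CNN and BiLSTM.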
Validity and reliability of the Chinese translation of the attitude scale towards the use of artificial intelligence technologies in nursing (ASUAITIN)- a cross-sectional study
Rong Hu, Yan-fei Ma, Chun-yan Wang
et al.
Abstract Aim This study aimed to test the psychometric properties of the Attitude Scale Towards the Use of Artificial Intelligence Technologies in Nursing (ASUAITIN) in a Chinese cultural context. Methods A convenience sampling method was employed to recruit 499 clinical nurses from Sichuan Province, China. Following the Brislin model, the ASUAITIN underwent forward translation, synthesis, and back-translation. Cultural adaptation and revisions were conducted through expert consultation and pilot testing, resulting in the finalized ASUAITIN-C. The structural validity of the scale was evaluated using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Reliability was assessed through internal consistency and test-retest reliability. Results The ASUAITIN-C consists of 15 items and two factors, which account for 73.278% of the total variance. The overall Cronbach's α coefficient of the ASUAITIN-C is 0.785, while the Cronbach's α values for the two subdimensions are 0.920 and 0.948, respectively. The test-retest reliability over two months, measured by the intraclass correlation coefficient (ICC), is 0.91. Conclusion The results show that the ASUAITIN-C is a valid and reliable tool for assessing nurses' attitudes toward using artificial intelligence in the nursing field.
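For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's α can be computed directly from a respondents-by-items score matrix; this small sketch uses made-up scores, not the study's data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

When every item moves in lockstep across respondents, α reaches 1; uncorrelated items drive it toward 0, which is why subscale α values such as 0.920 and 0.948 indicate strong internal consistency.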
AI-assisted neurocognitive assessment protocol for older adults with psychiatric disorders
Diego D. Díaz-Guerra, Marena de la C. Hernández-Lugo, Yunier Broche-Pérez
et al.
Introduction: Evaluating neurocognitive functions and diagnosing psychiatric disorders in older adults is challenging due to the complexity of symptoms and individual differences. An innovative approach that combines the accuracy of artificial intelligence (AI) with the depth of neuropsychological assessments is needed. Objectives: This paper presents a novel protocol for AI-assisted neurocognitive assessment aimed at addressing the cognitive, emotional, and functional dimensions of older adults with psychiatric disorders. It also explores potential compensatory mechanisms. Methodology: The proposed protocol incorporates a comprehensive, personalized approach to neurocognitive evaluation. It integrates a series of standardized and validated psychometric tests with individualized interpretation tailored to the patient's specific conditions. The protocol utilizes AI to enhance diagnostic accuracy by analyzing data from these tests and supplementing observations made by researchers. Anticipated results: The AI-assisted protocol offers several advantages, including a thorough and customized evaluation of neurocognitive functions. It employs machine learning algorithms to analyze test results, generating an individualized neurocognitive profile that highlights patterns and trends useful for clinical decision-making. The integration of AI allows for a deeper understanding of the patient's cognitive and emotional state, as well as potential compensatory strategies. Conclusions: By integrating AI with neuropsychological evaluation, this protocol aims to significantly improve the quality of neurocognitive assessments. It provides a more precise and individualized analysis, which has the potential to enhance clinical decision-making and overall patient care for older adults with psychiatric disorders.
An Ensemble Machine Learning Model for Early Prediction of Vancomycin-Induced Acute Kidney Injury in ICU Patients
Faezeh Aghamirzaei, Ahmad Ali Abin, Farzaneh Futuhi
Introduction: Acute Kidney Injury (AKI) is a severe complication of vancomycin treatment due to its nephrotoxic effects. However, research on predicting AKI in this high-risk group remains limited. This study presents a stacking ensemble machine learning model designed to predict the onset of AKI in this patient population.
Methods: Leveraging data from 314 ICU patients, the model incorporates SHapley Additive exPlanations (SHAP) for enhanced interpretability, identifying key predictors such as serum creatinine levels, glucose variability, and patient age. The model achieved an Area Under the Curve (AUC) of 0.94, outperforming existing predictive approaches. By utilizing readily available clinical data and determining an optimal temporal prediction window, this model facilitates proactive clinical decision-making, aiming to reduce the risk of AKI and improve patient outcomes.
Results: The stacking ensemble model achieved 92% accuracy, 93% precision, 92% sensitivity, and 0.94 AUC in 314 ICU patients, pinpointing creatinine, glucose variability, and age as critical AKI predictors.
Conclusion: The findings suggest that integrating advanced machine learning techniques with interpretable artificial intelligence (AI) can provide a scalable and cost-effective solution for early AKI detection in diverse healthcare settings.
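As a hedged illustration of the stacking idea (not the paper's actual models or data), the sketch below stacks two deliberately simple base learners on synthetic patient-like data and trains a logistic-regression meta-learner on their outputs; the markers, weights, and thresholds are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy cohort: two noisy markers (stand-ins for, say, baseline creatinine and
# glucose variability) and a binary outcome correlated with their sum.
X = rng.standard_normal((200, 2))
y = (X.sum(axis=1) + 0.3 * rng.standard_normal(200) > 0).astype(float)

def base_preds(X):
    """Level-0 'base learners': each deliberately simple, each sees one marker."""
    return np.column_stack([
        1 / (1 + np.exp(-3 * X[:, 0])),  # stand-in model on marker 1
        1 / (1 + np.exp(-3 * X[:, 1])),  # stand-in model on marker 2
    ])

def fit_meta(P, y, steps=500, lr=0.5):
    """Level-1 meta-learner: logistic regression on base-model outputs,
    fit with plain gradient descent."""
    w, b = np.zeros(P.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(P @ w + b)))
        g = p - y                      # gradient of the log loss
        w -= lr * P.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

P = base_preds(X)
w, b = fit_meta(P, y)
stacked = 1 / (1 + np.exp(-(P @ w + b)))
acc = ((stacked > 0.5) == (y > 0.5)).mean()
```

The design point is that the meta-learner sees only the base models' probabilities, so it learns how to weight complementary learners, which is the mechanism a clinical stacking ensemble exploits.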
Medical emergencies. Critical care. Intensive care. First aid
Patient information needs for transparent and trustworthy cardiovascular artificial intelligence: A qualitative study.
Austin M Stroud, Sarah A Minteer, Xuan Zhu
et al.
As health systems incorporate artificial intelligence (AI) into various aspects of patient care, there is growing interest in understanding how to ensure transparent and trustworthy implementation. However, little attention has been given to what information patients need about these technologies to promote transparency of their use. We conducted three asynchronous online focus groups with 42 patients across the United States discussing perspectives on their information needs for trust and uptake of AI, focusing on its use in cardiovascular care. Data were analyzed using a rapid content analysis approach. Our results suggest that patients have a set of core information needs, including specific information factors pertaining to the AI tool, oversight, and healthcare experience, that are relevant to calibrating trust as well as perspectives concerning information delivery, disclosure, consent, and physician AI use. Identifying patient information needs is a critical starting point for calibrating trust in healthcare AI systems and designing strategies for information delivery. These findings highlight the importance of patient-centered engagement when developing AI model documentation and communicating and provisioning information about these technologies in clinical encounters.
Computer applications to medicine. Medical informatics
Multi-objective Particle Swarm Optimization Algorithm Guided by Extreme Learning Decision Network
ZHANG Yifan, SONG Wei
When solving multi-objective optimization problems, particle swarm optimization algorithms usually employ preset exemplar selection methods and search strategies, which cannot be adjusted according to the current optimization state. Faced with different optimization problems, inappropriate search strategies cannot effectively guide the population, resulting in low search performance. To address these problems, a multi-objective particle swarm optimization algorithm guided by an extreme learning decision network (ELDN-PSO) is proposed. First, the multi-objective optimization problem is decomposed into several scalar subproblems, and an extreme learning decision network is constructed. The network takes a particle's position as input and selects an appropriate search action for each particle according to the optimization state. The fitness change of a particle on its subproblem serves as the training sample for reinforcement learning, and training speed is improved by an extreme learning machine. During optimization, the network automatically adjusts to the optimization state and selects appropriate search strategies for the particles at different search stages. Second, non-dominated solutions of a multi-objective optimization problem are difficult to compare directly; the leadership of each solution is therefore quantified into a comparable value, so that exemplars can be selected for the particles more clearly. In addition, an external archive stores better particles to maintain solution quality and guide the population. Comparative experiments on the ZDT and DTLZ test functions show that ELDN-PSO can effectively cope with different Pareto front shapes, improving optimization speed as well as the convergence and diversity of the solutions.
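The decomposition step can be illustrated with the Tchebycheff scalarization, a standard choice in decomposition-based multi-objective algorithms; the abstract does not state which scalarization ELDN-PSO uses, so treat this sketch as an assumption.

```python
import numpy as np

def tchebycheff(f, weight, ideal):
    """Tchebycheff scalarization: turns an objective vector f into a single
    value for one subproblem defined by a weight vector and an ideal point."""
    return np.max(weight * np.abs(f - ideal))

# Decompose a 2-objective problem into 5 scalar subproblems via uniform weights.
n_sub = 5
weights = np.array([[i / (n_sub - 1), 1 - i / (n_sub - 1)] for i in range(n_sub)])
ideal = np.zeros(2)  # best value seen so far on each objective (here: origin)

# Two mutually non-dominated particles: scalarization makes them comparable
# on each subproblem, even though neither Pareto-dominates the other.
f_a, f_b = np.array([0.2, 0.8]), np.array([0.8, 0.2])
scores_a = [tchebycheff(f_a, w, ideal) for w in weights]
scores_b = [tchebycheff(f_b, w, ideal) for w in weights]
```

A subproblem weighted toward objective 1 prefers the particle that is strong on objective 1, and vice versa; this per-subproblem comparability is what lets fitness changes serve as scalar training signals for the decision network.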
Electronic computers. Computer science
Tachyon: Enhancing stacked models using Bayesian optimization for intrusion detection using different sampling approaches
T. Anitha Kumari, Sanket Mishra
The integration of sensors in the monitoring of essential bodily measurements, air quality, and energy consumption in buildings demonstrates the importance of the Internet of Things (IoT) in everyday life. However, such deployments suffer security breaches caused by the rudimentary and immature security protocols implemented on IoT devices. An intrusion detection system detects security threats and malicious activities at the system level. This paper introduces Tachyon, a combination of various statistical and tree-based Artificial Intelligence (AI) techniques, such as Extreme Gradient Boosting (XGBoost), Random Forest (RF), Bayesian Additive Regression Trees (BART), Logistic Regression (LR), Multivariate Adaptive Regression Splines (MARS), Decision Tree (DT), and a top-k stack ensemble, to distinguish between normal and malicious traffic in a binary classification setting. The IoTID2020 dataset used in this study consists of 625,783 samples with 83 features. An initial examination of the data reveals its unbalanced nature. To create a balanced dataset, a range of sampling techniques was used, including oversampling, undersampling, Synthetic Minority Oversampling Technique (SMOTE), Random Oversampling Examples (ROSE), Borderline Synthetic Minority Oversampling Technique (b-SMOTE), and Adaptive Synthetic sampling (ADASYN). In addition, principal component analysis (PCA) and partial least squares (PLS) were used to determine the most significant features. The experimental results demonstrate that the stacked ensemble achieved a performance of 99.8%, which is better than the baseline approaches. An ablation study of ensemble models was also conducted to assess the performance of the proposed model in various scenarios.
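Of the sampling techniques listed, SMOTE is the easiest to sketch; this minimal pure-Python version (run here on toy points, not the IoTID2020 data) synthesizes minority-class samples by interpolating between a minority point and one of its k nearest minority neighbors.

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Minimal SMOTE: create n_new synthetic minority samples by linear
    interpolation between a minority point and a random one of its k
    nearest minority neighbors."""
    rng = random.Random(seed)

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        neighbors = sorted((q for q in minority if q is not p),
                           key=lambda q: sq_dist(p, q))[:k]
        q = rng.choice(neighbors)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(p, q)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote(minority, n_new=4)
```

Because each synthetic point is a convex combination of two real minority samples, the new points stay inside the minority region rather than duplicating existing records, which is what distinguishes SMOTE from plain random oversampling.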
Electronic computers. Computer science
Cobdock: an accurate and practical machine learning-based consensus blind docking method
Sadettin Y. Ugurlu, David McDonald, Huangshu Lei
et al.
Abstract Probing the surface of proteins to predict the binding site and binding affinity for a given small molecule is a critical but challenging task in drug discovery. Blind docking addresses this issue by performing docking on binding regions randomly sampled from the entire protein surface. However, compared with local docking, blind docking is less accurate and reliable because the docking search space is too large. Cavity detection-guided blind docking methods improve accuracy by using cavity detection (also known as binding site detection) tools to guide the docking procedure. However, the performance of these methods relies heavily on the quality of the cavity detection tool; this dependence on a single cavity detection tool significantly limits the overall performance of cavity detection-guided methods. To overcome this limitation, we propose Consensus Blind Dock (CoBDock), a novel blind, parallel docking method that uses machine learning algorithms to integrate docking and cavity detection results, improving not only binding site identification but also pose prediction accuracy. Our experiments on several datasets, including PDBBind 2020, ADS, MTi, DUD-E, and CASF-2016, show that CoBDock has better binding site and binding mode performance than other state-of-the-art cavity detection tools and blind docking methods.
Information technology, Chemistry
Public Functions Implementation by Artificial Intelligence: Current Practices and Prospects for Common Measures within Particular Periods across Continents and Regions
Atabek Atabekov
The paper explores practices regarding the implementation of public functions by AI through an analysis of research activities and of the administrative and legal regulation of AI in countries of various regions and continents. The hypothesis is that global trends in the AI phenomenon can be identified across international institutional visions, research activities, and national authorities, and then used to suggest common measures over short-, medium-, and long-term periods that provide public authorities with trajectories for regulating AI's implementation of public functions in countries of different regions. The empirical research draws on administrative and legal documents and on information and analytical materials from diverse countries. The study uses the comparative method and formal logic tools. The main findings model measures over short-, medium-, and long-term periods and single out measures common to diverse countries regarding the implementation of public functions by AI.
Social sciences (General)
Leveraging Generative AI Tools for Enhanced Lesson Planning in Initial Teacher Education at Post Primary
Frank Kehoe
The rapid development of generative AI (artificial intelligence) tools such as ChatGPT and Google Bard has opened new possibilities for enhancing lesson planning in initial teacher education (ITE). These tools can generate tailored educational content, alleviating time constraints while enhancing the quality of teaching. By simply providing specific requirements and objectives, teachers can obtain comprehensive and well-structured lesson plans and subject plans. This paper explores the potential of generative AI tools to revolutionise lesson planning in initial teacher education. It begins by reviewing lesson planning using a generative AI tool, highlighting the challenges and opportunities that exist. A sample lesson plan and a sample scheme of work are then created. While these tools are revolutionising the way teachers work, they will not replace real human teachers: teachers will always need to supplement generative AI content with their own insights and experience, allowing them to make informed pedagogical decisions.
Special aspects of education
Smart_Eye: A Navigation and Obstacle Detection for Visually Impaired People through Smart App
Bhasha Pydala, T. Pavan Kumar, K. Khaja Baseer
Vision is extremely important in our lives, and the loss of sight is a serious issue for anyone. According to World Health Organization (WHO) statistics published in December 2021, more than 283 million people worldwide suffer from sight problems, including 39 million blind people and 228 million people with low vision. Navigation in unfamiliar environments is a significant challenge for the partially sighted and visually impaired, and improving information on object location and content can aid such navigation. Numerous efforts have been made over the decades to develop devices that assist visually impaired people (VIPs) and improve their quality of life by making them more skilled. Many navigation aids already exist, but in practice they are infrequently adopted and implemented: for universal use, many of these gadgets are either too heavy or too expensive. While emphasizing related strengths and limitations, it is necessary to produce a minimally expensive assistive device for people with visual disabilities. The proposed model provides an efficient solution for VIPs to move from place to place by themselves through a smart application combining AI and sensor technology. The smart application captures and classifies images, obstacles are detected through ultrasonic sensors, and the user is informed of obstacles in the path through voice commands. The proposed model is very helpful for VIPs in terms of qualitative and quantitative performance measures, enabling a ranking of the evaluated systems according to their potential influence on visually impaired people's lives.
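As a sketch of the ultrasonic ranging step described above (the warning threshold, the speed-of-sound constant, and the alert wording are illustrative assumptions, not the system's actual parameters):

```python
def echo_distance_cm(echo_time_s, speed_of_sound=34300):
    """Distance to an obstacle from an ultrasonic sensor's echo time.
    The pulse travels out and back, so divide the round trip by two.
    speed_of_sound is in cm/s (~343 m/s in air at 20 degrees C)."""
    return echo_time_s * speed_of_sound / 2

def obstacle_alert(distance_cm, warn_below_cm=100):
    """Map a distance reading to a voice-alert message for the user,
    or None when the path ahead is clear."""
    if distance_cm < warn_below_cm:
        return f"Obstacle ahead at about {distance_cm:.0f} centimeters"
    return None
```

In a system like the one described, the returned string would be passed to a text-to-speech engine on the smart app; returning None for clear paths avoids flooding the user with announcements.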
Engineering (General). Civil engineering (General), Technology (General)
Conceptualisation de l'espace numérique dans l'enseignement-apprentissage des langues [Conceptualizing the digital space in language teaching and learning]
France Lafleur
In this article on the conceptualization of the digital space in the teaching-learning of languages, our contribution to the "Architecture of the processes of production and reception" of language (François & Nespoulous, 2014) consists of identifying the interlinguistic didactic constants of foreign-language teaching-learning and integrating them into a three-dimensional pedagogical model that incorporates the stratified structural components of their deep learning. This experimental model is nonetheless immediately applicable in the teaching-learning-evaluation of languages, and thus presents one of the results of our ongoing action research in distance learning (FAD). In the introduction, we present the many parameters of the place of digital technology in the teaching-learning-assessment of languages. Our methodology is that of analytical and conceptual research on the teaching-learning-evaluation of languages. Our analyses are based on the founding documents of the European Community (EC), in particular the Common European Framework of Reference for Languages (CEFR; Conseil de l'Europe, 2001, 2018, 2021). Our objective is to cover as many as possible of the language components and skills required to update them within the action-oriented approach advocated by the EC. Our discussion focuses on the technological conditions for applying this model, and our conclusion on the prospects, already at our doorstep, of the organic integration of the artificial intelligence of languages into humans.
Special aspects of education, Philology. Linguistics
A Review of Fundamental Optimization Approaches and the Role of AI Enabling Technologies in Physical Layer Security
Mulugeta Kassaw Tefera, Zengwang Jin, Shengbing Zhang
With the proliferation of 5G mobile networks within next-generation wireless communication, the design and optimization of 5G networks are progressing toward improving the physical layer security (PLS) paradigm. This is because traditional methods for the network optimization of PLS fail to adapt new features, technologies, and resource management to diversified demand applications. To improve on these methods, future 5G and beyond-5G (B5G) networks will need to rely on new enabling technologies. Approaches to PLS design and optimization based on artificial intelligence (AI) and machine learning (ML) have therefore been shown to outperform traditional security technologies, allowing future 5G networks to be more intelligent and robust and to significantly improve system design performance over traditional security methods. With the objective of advancing future PLS research, this review presents an elaborate discussion of the design and optimization approaches of wireless PLS techniques. In particular, we focus on both signal processing and information-theoretic security approaches to investigate the optimization techniques and system designs of PLS strategies. The review begins with the fundamental concepts associated with PLS, including a discussion of conventional cryptographic techniques and wiretap channel models. We then discuss the performance metrics and basic optimization schemes typically adopted in PLS design strategies. The research directions for secure system designs and optimization problems are then reviewed in terms of signal processing, resource allocation, and node/antenna selection. Thereafter, the applications of AI and ML technologies in the optimization and design of PLS systems are discussed.
In this context, the ML- and AI-based solutions that pertain to end-to-end physical layer joint optimization, secure resource allocation and signal processing methods are presented. We finally conclude with discussions on future trends and technical challenges that are related to the topics of PLS system design and the benefits of AI technologies.
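A core information-theoretic quantity behind the wiretap-channel models and secrecy metrics discussed in the review is the secrecy capacity of the Gaussian wiretap channel: the legitimate link's capacity minus the eavesdropper's, floored at zero. A one-function sketch:

```python
import math

def secrecy_capacity(snr_main, snr_eve):
    """Secrecy capacity (bits per channel use) of a Gaussian wiretap
    channel: C_s = max(0, log2(1 + SNR_main) - log2(1 + SNR_eve)).
    SNRs are linear (not dB)."""
    return max(0.0, math.log2(1 + snr_main) - math.log2(1 + snr_eve))
```

This is why so many PLS optimization problems (beamforming, power allocation, antenna selection) reduce to widening the SNR gap between the legitimate receiver and the eavesdropper: whenever the eavesdropper's channel is at least as good, the secrecy capacity is zero.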
Sparsity Increases Uncertainty Estimation in Deep Ensemble
Uyanga Dorjsembe, Ju Hong Lee, Bumghi Choi
et al.
Deep neural networks have achieved almost human-level results in various tasks and have become popular across broad artificial intelligence domains. Uncertainty estimation is in growing demand because deep learning behaves as a black-box point estimator. A deep ensemble provides increased accuracy and estimated uncertainty; however, its linearly increasing size makes the deep ensemble unfeasible for memory-intensive tasks. To address this problem, we applied model pruning and quantization to a deep ensemble and analyzed the effect in the context of uncertainty metrics. We empirically showed that the ensemble members' disagreement increases with pruning, which makes models sparser by zeroing irrelevant parameters. Increased disagreement implies increased uncertainty, which helps in making more robust predictions. Accordingly, an energy-efficient compressed deep ensemble is appropriate for memory-intensive and uncertainty-aware tasks.
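A small sketch of the two ingredients the abstract combines: magnitude pruning and an entropy-based disagreement measure over ensemble members. The exact metrics the paper uses are not specified, so the mutual-information form below is an assumption (a common choice for epistemic uncertainty).

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction of weights (sparsification)."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w.ravel()))[k - 1]
    out = w.copy()
    out[np.abs(out) <= thresh] = 0.0
    return out

def ensemble_disagreement(prob_stack):
    """Mutual-information style disagreement over an ensemble:
    entropy of the mean prediction minus the mean per-member entropy.
    prob_stack: (members, examples, classes) predicted probabilities."""
    eps = 1e-12
    mean_p = prob_stack.mean(axis=0)
    h_mean = -(mean_p * np.log(mean_p + eps)).sum(-1)            # total uncertainty
    h_each = -(prob_stack * np.log(prob_stack + eps)).sum(-1).mean(0)  # aleatoric part
    return h_mean - h_each                                       # epistemic part
```

When all members agree, the measure is zero; when pruning drives members to diverge on an input, the measure grows, which is the mechanism behind the abstract's claim that sparsity increases estimated uncertainty.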
Electronic computers. Computer science
The meta-analysis of smart data international researches
Omid Aliour, Shima Moradi, Saeed Ghaffari
Smart data is the raw material for many activities, such as automation, intelligent systems, artificial intelligence, and the fourth industrial revolution. The purpose of this study is to systematically analyze all smart-data-related studies published from 1980 to the end of September 2017; the study of probabilistic patterns is a further purpose of this research. Following the search model of Winer, Amike and Lee (2008), the articles for this study were extracted through a systematic search of the Web of Science database, and 220 articles were selected as the final population. They were examined to identify authors, objectives, populations, countries and universities, funders, years, publication terms, citation status, keywords, subjects, formats, languages, and authorship. The main findings show that Sen Soumya has the highest share of articles (63.3%) in this field, while the United States (33.63%), Princeton University (18.3%), and the National Science Foundation of China (2.72%) have the largest shares among countries, universities, and institutions. The objectives of 72.77% of the articles were smart data applications, and 84.54% of the articles concerned nonhuman societies. Most research in this area (20.9%) was conducted in 2016. The IEEE Conference on Computer Communications Workshops has published the most articles in this field (18.3%). The average number of citations received is 4.4. The keyword "system" (18.3%) is the most common. 39.44% of the published articles relate to computer science, 52.64% were published as conference papers, and 18.88% are written in English. 90.5% of the articles are written by single authors and 94% of them are written by several writers. The results of the current study indicate the variety and extent of the components studied.
Bibliography. Library science. Information resources
Prediction of irinotecan toxicity in metastatic colorectal cancer patients based on machine learning models with pharmacokinetic parameters
Esther Oyaga-Iriarte, Asier Insausti, Onintza Sayar
et al.
Irinotecan (CPT-11) is a drug used against a wide variety of tumors that can cause severe toxicity, possibly leading to the delay or suspension of the treatment cycle, with a consequent impact on survival prognosis. The main goal of this work is to predict the toxicities derived from CPT-11 using artificial intelligence methods. The data for this study comprise 53 cycles of FOLFIRINOX administered to patients with metastatic colorectal cancer. Supported by demographic data, blood markers, and pharmacokinetic parameters resulting from a non-compartmental pharmacokinetic study of CPT-11 and its metabolites (SN-38 and SN-38-G), we use machine learning techniques to predict high degrees of different toxicities (leukopenia, neutropenia, and diarrhea) in new patients. We predict a high degree of leukopenia with an accuracy of 76%, neutropenia with 75%, and diarrhea with 91%. Among other variables, this study shows that the areas under the curve of CPT-11, SN-38, and SN-38-G play a relevant role in predicting the studied toxicities. The presented models make it possible to predict the degree of toxicity for each treatment cycle according to the particularities of each patient. Keywords: Colorectal cancer, Irinotecan, Machine learning, Pharmacokinetics, Toxicity
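The area-under-the-curve features highlighted above come from non-compartmental pharmacokinetic analysis; a minimal sketch of the linear trapezoidal rule over a concentration-time profile (the sampling times and concentrations below are made up, not the study's data):

```python
def auc_trapezoid(times, concentrations):
    """Non-compartmental AUC of a plasma concentration-time curve via the
    linear trapezoidal rule. Units: concentration x time (e.g. mg*h/L)."""
    auc = 0.0
    for (t0, c0), (t1, c1) in zip(zip(times, concentrations),
                                  zip(times[1:], concentrations[1:])):
        auc += (t1 - t0) * (c0 + c1) / 2
    return auc

# Hypothetical profile: hours after infusion vs plasma concentration (mg/L).
times = [0, 0.5, 1, 2, 4, 8]
conc = [0.0, 4.1, 3.2, 2.0, 0.9, 0.2]
exposure = auc_trapezoid(times, conc)
```

Each such AUC summarizes a patient's total drug (or metabolite) exposure in one number, which is why AUCs of CPT-11, SN-38, and SN-38-G are natural inputs for toxicity-prediction models.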
Therapeutics. Pharmacology
Emergentist View on Generative Narrative Cognition: Considering Principles of the Self-Organization of Mental Stories
Taisuke Akimoto
We consider the essence of human intelligence to be the ability to mentally (internally) construct a world in the form of stories through interactions with external environments. Understanding the principles of this mechanism is vital for realizing a human-like and autonomous artificial intelligence, but there are extremely complex problems involved. From this perspective, we propose a conceptual-level theory for the computational modeling of generative narrative cognition. Our basic idea can be described as follows: stories are representational elements forming an agent’s mental world and are also living objects that have the power of self-organization. In this study, we develop this idea by discussing the complexities of the internal structure of a story and the organizational structure of a mental world. In particular, we classify the principles of the self-organization of a mental world into five types of generative actions, i.e., connective, hierarchical, contextual, gathering, and adaptive. An integrative cognition is explained with these generative actions in the form of a distributed multiagent system of stories.
Electronic computers. Computer science