Results for "artificial intelligence"

Showing 20 of ~1,402,303 results · from CrossRef, DOAJ

DOAJ Open Access 2026
Egocentric Hand Activity Video Dataset and Bidirectional Motion-Priors for Hand Action Recognition

Jiyoung Seo, Dong In Lee, Pilhyeon Lee et al.

Recognizing tool-based hand activities from a first-person view is a critical yet challenging task in computer vision, due to the complexity of hand-object interactions and often subtle, ambiguous motion patterns. In real-world manufacturing scenarios, these challenges are exacerbated by bidirectional action pairs whose visual cues are almost identical, with differences revealed only through subtle motion dynamics. However, existing datasets rarely capture these direction-sensitive interactions at scale, particularly in realistic tool-use contexts, limiting the ability of current models to learn fine-grained motion dynamics essential for accurate recognition. We introduce Ego-Bi (Egocentric-Bidirectional dataset), a large-scale, real-world egocentric RGB video dataset comprising 1,223 video sequences and 622,737 frames that cover diverse tool-use activities in unconstrained environments. Ego-Bi provides an extended 38-category hand type taxonomy, detailed object–tool labels, and challenging bidirectional action pairs, offering rich semantic and temporal cues for modeling complex hand–object interactions. In addition, to address the ambiguity in motion dynamics, we propose a Bidirectional Motion Prior (BMP) module that derives rotation and directional cues from predicted 3D hand poses to improve class separability of visually similar actions. Experimental results on Ego-Bi demonstrate that our approach improves bidirectional action recognition accuracy by +8.96% over the baseline, while also yielding consistent gains across general action classes without requiring costly 3D pose annotations. Furthermore, the proposed motion priors generalize effectively to other egocentric benchmarks, underscoring their robustness in handling visually similar, direction-sensitive actions.

Electrical engineering. Electronics. Nuclear engineering
DOAJ Open Access 2026
A multi-branch network for cooperative spectrum sensing via attention-based and CNN feature fusion

Doi Thi Lan, Quan T. Ngo, Luong Vuong Nguyen et al.

In cognitive radio (CR) systems, the accurate detection of spectrum holes is a cornerstone for efficient spectrum utilization. However, the increasing complexity of CR environments, particularly those with multiple primary users (PUs), has made precise spectrum sensing a paramount challenge. To address this challenge, this study introduces the ATC model, a novel deep learning architecture that integrates a parallel combination of attention mechanism-based networks and a Convolutional Neural Network (CNN). This hybrid design enables the model to capture both spatial and temporal features from the distinct statistics of sensing signals, thereby enhancing the accuracy of spectrum state detection. The model employs a Graph Attention Network (GAT) to extract complex topological features from graph-structured data derived from received signal strength, dynamically highlighting the most relevant information. To complement this, a CNN processes the sample covariance matrix of sensing signals, unlocking localized statistical correlations and hierarchical feature representations by treating the matrix as an image. Temporal dynamics, such as PU activity patterns, are modeled using a Transformer encoder, which leverages a self-attention mechanism to learn sequential features effectively. The proposed model is evaluated using both simulated and real-world datasets. For the simulated datasets, the model is assessed and compared with baseline methods under multi-PU scenarios across different channel models. For the real-world dataset, the experimental setup is configured for a single-PU scenario due to practical data collection limitations. In both cases, the ATC model demonstrates improved performance over the benchmarked spectrum sensing methods, exhibiting higher accuracy and robustness within the respective evaluation settings.

Medicine, Science
DOAJ Open Access 2026
AI-enhanced professional learning communities: a new era of personalized teacher education

Mohammad Hossein Arefian

Language teacher education programs can become more reflective, inclusive, collaborative, situated, and inquiry-based. One professional approach incorporating these characteristics is personalized language teacher education (PLTE). Given the importance of AI and professional learning communities (PLCs) for developing personalized teacher education, this study explored how AI-enhanced PLCs could be leveraged to create more responsive, inclusive, and personalized teacher education. Still, a significant gap exists in understanding how AI can be specifically integrated into PLCs to create personalized pathways for ELT pre-service teachers, particularly in under-resourced contexts. For this exploratory case study, 8 Iranian English language teaching (ELT) pre-service teachers were purposively selected from a teacher education university. Data were collected from group discussions, artifacts, and interviews, and thematic analysis revealed that AI-enhanced PLCs fostered personalized, reflective, and collaborative development. By addressing individual teaching needs and providing innovative instructional strategies, AI facilitated a dynamic learning environment. However, effective integration required overcoming challenges such as limited AI literacy and contextual mismatches, highlighting the potential for tailored, impactful education. This study can inform teacher educators, policymakers, administrators, and teachers seeking to integrate AI into their PLCs to develop PLTE.

Education (General)
DOAJ Open Access 2026
GREEN FINANCE IN THE CONTEXT OF CLIMATE CHANGE: A BIBLIOMETRIC ANALYSIS OF THE ACADEMIC LITERATURE (2001–2025)

SPULBAR CRISTI, DUPIR MIHAI CATALIN

This article analyses the evolution, structure, and dynamics of the academic literature on green finance in the context of climate change, using a bibliometric approach applied to publications indexed in the Web of Science – Core Collection for the period 2001–2025. The methodology is based on descriptive and relational bibliometric indicators, including the analysis of scientific production, sources, authors’ impact, co-authorship networks, and keyword co-occurrence, complemented by thematic maps and temporal analyses of emerging themes, conducted using the Bibliometrix package within the R environment. The results highlight an accelerated growth of academic interest after 2016, with a concentration of publications in economics and finance journals such as Energy Economics, Finance Research Letters, and International Review of Financial Analysis, as well as a polycentric structure of international collaborations dominated by East Asia and Europe. The conceptual analysis reveals three major thematic clusters: the performance and impact of green investments, energy transition and sustainable economic growth, and systemic risks and financial stability. The emergence of themes such as financial digitalisation, fintech, and artificial intelligence indicates recent directions of research diversification. The article contributes by providing a systematic mapping of a rapidly maturing field and by identifying epistemic gaps, highlighting the need to expand comparative studies, interdisciplinary approaches, and analyses of green finance in emerging and transition economies.

Commercial geography. Economic geography, Economics as a science
DOAJ Open Access 2026
Formula for the Digital Wellbeing of the Personality

Svetlana V. Chigarkova, Galina U. Soldatova

Background. In the context of the digitalisation of everyday life, digital wellbeing is a concept that has recently emerged. It signifies the need to reflect on the impact of digital transformations on various spheres of human life, and is becoming the most important type of a person’s wellbeing. Objectives. The study is devoted to the analysis of modern approaches to psychological wellbeing in the digital world and digital wellbeing as a socio-psychological phenomenon. Methods. The study involved a theoretical analysis and systematisation of modern scientific approaches to digital wellbeing. The socio-cognitive concept of digital socialisation served as the methodological framework for the study. Results. The key areas of research on the relationship between wellbeing and different aspects of digital technology use are identified. These aspects are: digital access, digital inequality and digital competence; problematic internet use, screen time and gaming; the impact of digital technologies on cognitive development; social media use and digital practices as factors of wellbeing; and the development of artificial intelligence technologies as a new challenge to wellbeing. The existing concepts of digital wellbeing have been analysed and a formula for digital wellbeing has been proposed, comprising three components: firstly, satisfaction with connectedness and management of mixed reality; secondly, self-efficacy and management of the digital extended personality; and thirdly, satisfaction with and management of digital sociality. Conclusions. The development of a formula for digital wellbeing contributes to the understanding of constructive strategies of human adaptation and pre-adaptation in the context of the increasing digitalisation of everyday life. These are necessary both to maintain an optimal level of stability of society and to ensure its development in the near future in response to new socio-technological challenges.

DOAJ Open Access 2025
AI-heat transfer analysis of Casson fluid in a uniformly heated enclosure with a semi-heated baffle

Khalil Ur Rehman, Wasfi Shatanawi, Lok Yian Yian

Heat transfer in Casson fluid with natural convection finds various applications, namely thermal regulation in biological systems, solar collectors, polymer processing, and geothermal engineering, to mention just a few. Owing to such motivation, we offer artificial intelligence-based solution outcomes for heat transfer in Casson fluid flow in a partially heated square enclosure with free convection. A semi-heated triangular baffle is installed at the center of the cavity. The bottom and right walls are heated equally, the left wall of the cavity is kept cold, and the top wall is insulated. The surface of the triangular baffle and the cavity walls carry a no-slip condition. The finite element method (FEM) with hybrid meshing is used to solve the developed flow equations. An AI-based neural network model is used to examine the variation in the Nusselt number for the involved flow parameters. MSE = 2.15008e-6, 5.81476e-5, and 3.51888e-4 for training, validation, and testing, respectively, suggests good model performance on Nusselt number data along the bottom and vertical walls. We observed that the heat transfer coefficient improves as the Rayleigh and Prandtl numbers increase. We believe the present AI-based outcomes will be helpful for predicting natural convection phenomena from a thermal engineering standpoint.
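The MSE values quoted in this abstract are the usual mean squared error criterion; a minimal sketch of its computation (the numbers below are toy values for illustration, not the paper's Nusselt-number data):

```python
def mse(y_true, y_pred):
    """Mean squared error between target and predicted values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy targets and predictions, illustrative only.
err = mse([1.0, 2.0, 3.0], [1.1, 1.9, 3.0])
```

Smaller values indicate a closer fit, which is why the reported training MSE of ~2e-6 suggests a well-fitted network.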

DOAJ Open Access 2025
Psychological impacts of AI-induced job displacement among Indian IT professionals: a Delphi-validated thematic analysis

Vinod Sharma, Saikat Deb, Yogesh Mahajan et al.

Purpose This study investigates the psychological impact of Artificial Intelligence (AI)-driven job displacement among Indian IT professionals. It specifically explores how individuals psychologically experience the loss of roles due to automation, and how these experiences influence their emotional, cognitive, and behavioural well-being. Method A qualitative phenomenological approach was used to capture the lived experiences of 24 IT professionals who faced AI-induced job loss or reassignment. Data were collected via in-depth semi-structured interviews and analysed through thematic analysis. To ensure rigour and theoretical saturation, a three-round Delphi process involving 20 domain experts—spanning clinical psychology, organizational behaviour, and AI policy—was used to validate and refine the emergent themes. Results Six core psychological themes were identified: emotional shock, erosion of professional identity, chronic anxiety and anticipatory rumination, social withdrawal, adaptive and maladaptive coping strategies, and perceived organizational betrayal. These themes reflect a multilayered resource loss, including identity, control, employability, and social belonging. Conclusion AI-driven role redundancy in the Indian IT sector is more than a labour market shift; it is a deep psychological disruption. This study underscores the urgent need for organizations, mental health practitioners, and policymakers to develop anticipatory and compassionate interventions that can buffer the mental health consequences of technological transformation.

Medicine (General)
DOAJ Open Access 2025
Electromagnetic Field Distribution Mapping: A Taxonomy and Comprehensive Review of Computational and Machine Learning Methods

Yiannis Kiouvrekis, Theodor Panagiotakopoulos

Electromagnetic field (EMF) exposure mapping is increasingly important for ensuring compliance with safety regulations, supporting the deployment of next-generation wireless networks, and addressing public health concerns. While numerous surveys have addressed specific aspects of radio propagation or radio environment maps, a comprehensive and unified overview of EMF mapping methodologies has been lacking. This review bridges that gap by systematically analyzing computational, geospatial, and machine learning approaches used for EMF exposure mapping across both wireless communication engineering and public health domains. A novel taxonomy is introduced to clarify overlapping terminology—encompassing radio maps, radio environment maps, and EMF exposure maps—and to classify construction methods, including analytical models, model-based interpolation, and data-driven learning techniques. In addition, the review highlights domain-specific challenges such as indoor versus outdoor mapping, data sparsity, and model generalization, while identifying emerging opportunities in hybrid modeling, big data integration, and explainable AI. By combining perspectives from communication engineering and public health, this work provides a broader and more interdisciplinary synthesis than previous surveys, offering a structured reference and roadmap for advancing robust, scalable, and socially relevant EMF mapping frameworks.

Electronic computers. Computer science
CrossRef Open Access 2024
DeepMPTB: a vaginal microbiome-based deep neural network as artificial intelligence strategy for efficient preterm birth prediction

Oshma Chakoory, Vincent Barra, Emmanuelle Rochette et al.

In recent decades, preterm birth (PTB) has become a significant research focus in the healthcare field, as it is a leading cause of neonatal mortality worldwide. Using five independent study cohorts including 1290 vaginal samples from 561 pregnant women who delivered at term (n = 1029) or prematurely (n = 261), we analysed vaginal metagenomics data for precise microbiome structure characterization. Then, a deep neural network (DNN) was trained to predict term birth (TB) and PTB with an accuracy of 84.10% and an area under the receiver operating characteristic curve (AUROC) of 0.875 ± 0.11. During a benchmarking process, we demonstrated that our DNN model outperformed seven currently used machine learning algorithms. Finally, our results indicate that the overall diversity of the vaginal microbiota, rather than specific species, should be taken into account to predict PTB. This artificial intelligence-based strategy should be highly helpful for clinicians in predicting preterm birth risk, allowing personalized assistance to address various health issues. DeepMPTB is open source and free for academic use. It is licensed under a GNU Affero General Public License 3.0 and is available at https://deepmptb.streamlit.app/. Source code is available at https://github.com/oschakoory/DeepMPTB and can be easily installed using Docker (https://www.docker.com/).
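The AUROC reported in this abstract can be computed without plotting a curve, via the rank-sum (Mann-Whitney) identity: it equals the probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal sketch with toy labels and scores (not the DeepMPTB data):

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney formulation: P(score_pos > score_neg),
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two positives and two negatives; 3 of the 4 positive/negative pairs are ranked correctly.
a = auroc([1, 1, 0, 0], [0.9, 0.3, 0.35, 0.1])
```

An AUROC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which puts the paper's 0.875 in context.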

10 citations
DOAJ Open Access 2024
Is cardiovascular risk profiling from UK Biobank retinal images using explicit deep learning estimates of traditional risk factors equivalent to actual risk measurements? A prospective cohort study design

Kohji Nishida, Ryo Kawasaki, Yiming Qian et al.

Objective Despite extensive exploration of potential biomarkers of cardiovascular diseases (CVDs) derived from retinal images, it remains unclear how retinal images contribute to CVD risk profiling and how the results can inform lifestyle modifications. Therefore, we aimed to determine the performance of a cardiovascular risk prediction model from retinal images via explicitly estimating 10 traditional CVD risk factors, and to compare it with a model based on actual risk measurements. Design A prospective cohort study design. Setting The UK Biobank (UKBB), a prospective cohort study following the health conditions, including CVD outcomes, of adults recruited between 2006 and 2010. Participants A subset of the UKBB containing 52,297 entries with retinal images and 5-year cumulative incidence of major adverse cardiovascular events (MACE) was used. Our dataset is split 3:1:1 into a training set (n=31,403), validation set (n=10,420) and testing set (n=10,474). We developed a deep learning (DL) model to predict 5-year MACE using a two-stage DL neural network. Primary and secondary outcome measures We computed accuracy and area under the receiver operating characteristic curve (AUC), and compared variations of the risk prediction models combining CVD risk factors and retinal images. Results The first-stage DL model demonstrated that the 10 CVD risk factors can be estimated from a given retinal image with an accuracy ranging between 65.2% and 89.8% (overall AUC of 0.738, 95% CI: 0.710 to 0.766). In MACE prediction, our model outperformed the traditional score-based models, with an AUC 8.2% higher than the Systematic COronary Risk Evaluation (SCORE), 3.5% higher than SCORE 2 and 7.1% higher than the Framingham Risk Score (p<0.05 for all three comparisons). Conclusions Our algorithm estimates the 5-year risk of MACE from retinal images, while explicitly presenting which risk factors should be checked and intervened upon. This two-stage approach provides human-interpretable information between stages, which helps clinicians gain insight into the screening process while copiloting with the DL model.

DOAJ Open Access 2024
A comprehensive review of explainable AI for disease diagnosis

Al Amin Biswas

Nowadays, artificial intelligence (AI) is utilized in several domains of the healthcare sector. Despite its effectiveness in healthcare settings, its mass adoption remains limited by the transparency issue, which is considered a significant obstacle. To achieve the trust of end users, it is necessary to explain the AI models' output. Explainable AI (XAI) has therefore emerged as a potential solution by providing transparent explanations of the AI models' output. The primary aim of this review paper is to survey articles on machine learning (ML) or deep learning (DL) based human disease diagnosis in which the model's decision-making process is explained by XAI techniques. To do that, two journal databases (Scopus and the IEEE Xplore Digital Library) were thoroughly searched using a few predetermined relevant keywords. The PRISMA guidelines were followed to determine the papers for the final analysis, and studies that did not meet the requirements were eliminated. Finally, 90 Q1 journal articles covering several XAI techniques were selected for in-depth analysis. The findings are then summarized, and responses to the proposed research questions are outlined. In addition, several challenges related to XAI in human disease diagnosis and future research directions in this sector are presented.

Computer engineering. Computer hardware, Electronic computers. Computer science
DOAJ Open Access 2024
Accidental injustice: Healthcare AI legal responsibility must be prospectively planned prior to its adoption

Kit Fotheringham, Helen Smith

This article contributes to the ongoing debate about legal liability and responsibility for patient harm in scenarios where artificial intelligence (AI) is used in healthcare. We note that due to the structure of negligence liability in England and Wales, it is likely that clinicians would be held solely negligent for patient harms arising from software defects, even though AI algorithms will share the decision-making space with clinicians. Drawing on previous research, we argue that the traditional model of negligence liability for clinical malpractice cannot be relied upon to offer justice for clinicians and patients. There is a pressing need for law reform to consider the use of risk pooling, alongside detailed professional guidance for the use of AI in healthcare spaces.

DOAJ Open Access 2024
ID-Det: Insulator Burst Defect Detection from UAV Inspection Imagery of Power Transmission Facilities

Shangzhe Sun, Chi Chen, Bisheng Yang et al.

The global rise in electricity demand necessitates extensive transmission infrastructure, where insulators play a critical role in ensuring the safe operation of power transmission systems. However, insulators are susceptible to burst defects, which can compromise system safety. To address this issue, we propose an insulator defect detection framework, ID-Det, which comprises two main components, i.e., the Insulator Segmentation Network (ISNet) and the Insulator Burst Detector (IBD). (1) ISNet incorporates a novel Insulator Clipping Module (ICM), enhancing insulator segmentation performance. (2) IBD leverages corner extraction methods and the periodic distribution characteristics of corners, facilitating the extraction of key corners on the insulator mask and accurate localization of burst defects. Additionally, we construct an Insulator Defect Dataset (ID Dataset) consisting of 1614 insulator images. Experiments on this dataset demonstrate that ID-Det achieves an accuracy of 97.38%, a precision of 97.38%, and a recall rate of 94.56%, outperforming general defect detection methods with a 4.33% increase in accuracy, a 5.26% increase in precision, and a 2.364% increase in recall. ISNet also shows a 27.2% improvement in Average Precision (AP) compared to the baseline. These results indicate that ID-Det has significant potential for practical application in power inspection.
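For reference, the accuracy, precision, and recall figures quoted in this abstract follow the standard confusion-matrix definitions; a minimal sketch (the counts below are invented for illustration, not the ID Dataset results):

```python
def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard detection metrics from confusion-matrix counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }

# Hypothetical counts for a defect detector.
m = detection_metrics(tp=90, fp=5, fn=5, tn=100)
```

Precision penalizes false alarms while recall penalizes missed defects, which is why both are reported alongside accuracy for inspection tasks.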

Motor vehicles. Aeronautics. Astronautics
DOAJ Open Access 2024
Enhancing Online Security: A Novel Machine Learning Framework for Robust Detection of Known and Unknown Malicious URLs

Shiyun Li, Omar Dib

The rapid expansion of the internet has led to a corresponding surge in malicious online activities, posing significant threats to users and organizations. Cybercriminals exploit malicious uniform resource locators (URLs) to disseminate harmful content, execute phishing schemes, and orchestrate various cyber attacks. As these threats evolve, detecting malicious URLs (MURLs) has become crucial for safeguarding internet users and ensuring a secure online environment. In response to this urgent need, we propose a novel machine learning-driven framework designed to identify known and unknown MURLs effectively. Our approach leverages a comprehensive dataset encompassing various labels—including benign, phishing, defacement, and malware—to engineer a robust set of features validated through extensive statistical analyses. The resulting malicious URL detection system (MUDS) combines supervised machine learning techniques, tree-based algorithms, and advanced data preprocessing, achieving a high detection accuracy of 96.83% for known MURLs. For unknown MURLs, the proposed framework utilizes CL_K-means, a modified k-means clustering algorithm, alongside two additional biased classifiers, achieving 92.54% accuracy on simulated zero-day datasets. With an average processing time of under 14 milliseconds per instance, MUDS is optimized for real-time integration into network endpoint systems. These outcomes highlight the efficacy and efficiency of the proposed MUDS in fortifying online security by identifying and mitigating MURLs, thereby reinforcing the digital landscape against cyber threats.
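As a rough illustration of the kind of lexical URL features such detection systems engineer (the specific features below are common in the literature but are an illustrative choice of ours, not the MUDS feature set):

```python
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Extract simple lexical features often used in malicious-URL detection."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "host_length": len(host),
        "num_digits": sum(c.isdigit() for c in url),
        "num_special": sum(c in "-_?=&%@" for c in url),
        "num_subdomains": max(host.count(".") - 1, 0),
        "uses_https": parsed.scheme == "https",
    }

# A hypothetical phishing-style URL for illustration.
feats = url_features("http://login-secure.example.com/verify?acct=123")
```

A feature vector like this would then feed the supervised classifiers for known URLs, or a clustering stage for unknown ones, as the abstract describes.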

DOAJ Open Access 2024
Association between myosteatosis and impaired glucose metabolism: A deep learning whole‐body magnetic resonance imaging population phenotyping approach

Matthias Jung, Hanna Rieder, Marco Reisert et al.

Background There is increasing evidence that myosteatosis, which is currently not assessed in clinical routine, plays an important role in risk estimation in individuals with impaired glucose metabolism, as it is associated with the progression of insulin resistance. With advances in artificial intelligence, automated and accurate algorithms have become feasible to fill this gap. Methods In this retrospective study, we developed and tested a fully automated deep learning model using data from two prospective cohort studies (German National Cohort [NAKO] and Cooperative Health Research in the Region of Augsburg [KORA]) to quantify myosteatosis on whole‐body T1‐weighted Dixon magnetic resonance imaging as (1) intramuscular adipose tissue (IMAT; the current standard) and (2) quantitative skeletal muscle (SM) fat fraction (SMFF). Subsequently, we investigated the two measures for their discrimination of and association with impaired glucose metabolism beyond baseline demographics (age, sex and body mass index [BMI]) and cardiometabolic risk factors (lipid panel, systolic blood pressure, smoking status and alcohol consumption) in asymptomatic individuals from the KORA study. Impaired glucose metabolism was defined as impaired fasting glucose or impaired glucose tolerance (140–200 mg/dL) or prevalent diabetes mellitus. Results Model performance was high, with Dice coefficients of ≥0.81 for IMAT and ≥0.91 for SM in the internal (NAKO) and external (KORA) testing sets. In the target population (380 KORA participants: mean age of 53.6 ± 9.2 years, BMI of 28.2 ± 4.9 kg/m2, 57.4% male), individuals with impaired glucose metabolism (n = 146; 38.4%) were older, more likely to be men, and showed a higher cardiometabolic risk profile, higher IMAT (4.5 ± 2.2% vs. 3.9 ± 1.7%) and higher SMFF (22.0 ± 4.7% vs. 18.9 ± 3.9%) compared to normoglycaemic controls (all P ≤ 0.005).
SMFF showed better discrimination for impaired glucose metabolism than IMAT (area under the receiver operating characteristic curve [AUC] 0.693 vs. 0.582, 95% confidence interval [CI] [0.06–0.16]; P < 0.001) but was not significantly different from BMI (AUC 0.733 vs. 0.693, 95% CI [−0.09 to 0.01]; P = 0.15). In univariable logistic regression, IMAT (odds ratio [OR] = 1.18, 95% CI [1.06–1.32]; P = 0.004) and SMFF (OR = 1.19, 95% CI [1.13–1.26]; P < 0.001) were associated with a higher risk of impaired glucose metabolism. This signal remained robust after multivariable adjustment for baseline demographics and cardiometabolic risk factors for SMFF (OR = 1.10, 95% CI [1.01–1.19]; P = 0.028) but not for IMAT (OR = 1.14, 95% CI [0.97–1.33]; P = 0.11). Conclusions Quantitative SMFF, but not IMAT, is an independent predictor of impaired glucose metabolism, and discrimination is not significantly different from BMI, making it a promising alternative for the currently established approach. Automated methods such as the proposed model may provide a feasible option for opportunistic screening of myosteatosis and, thus, a low‐cost personalized risk assessment solution.
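A note on reading the odds ratios above: in logistic regression, the odds ratio per one-unit increase of a predictor is the exponential of that predictor's coefficient, so the reported SMFF OR of 1.19 corresponds to a coefficient of roughly ln(1.19) ≈ 0.174. A minimal sketch of the conversion:

```python
import math

def odds_ratio(beta: float) -> float:
    """Odds ratio per one-unit increase of a predictor with logistic coefficient beta."""
    return math.exp(beta)

# A coefficient of about 0.174 corresponds to an odds ratio of about 1.19.
print(round(odds_ratio(0.174), 2))
```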

Diseases of the musculoskeletal system, Human anatomy
DOAJ Open Access 2023
AI the creator? Analysing prose and poetry created by Artificial Intelligence

Miriam Kobierski

Artificial intelligence is already developed enough to perform mechanical and computerised tasks, but its ability to convey emotions and recreate human consciousness is still being studied. This article primarily deals with AI-generated literature, focusing mainly on short text extracts and poetry. The selected research method is qualitative: a linguistic analysis of individual texts was carried out, as well as a comparison of the texts themselves. An experiment was conducted in which a group of participants decided whether a presented text was written by a human or by artificial intelligence. In addition, I examined the positions in the debate on whether AI poetry can be considered authentic.

English language, English literature
DOAJ Open Access 2023
How can gender be identified from heart rate data? Evaluation using ALLSTAR heart rate variability big data analysis

Itaru Kaneko, Junichiro Hayano, Emi Yuda

Objective A small electrocardiograph or Holter electrocardiograph can record an electrocardiogram for 24 h or more. We examined whether gender could be identified from such an electrocardiogram and, if possible, how accurately. Results Ten-dimensional statistics were extracted from the heart rate data of more than 420,000 people, and gender identification was performed with several major classification methods: Lasso, linear regression, SVM, random forest, logistic regression, k-means, and Elastic Net were compared, for Age < 50 and Age ≥ 50. The best accuracy was 0.681927, achieved by Random Forest for Age < 50. There was no consistent difference between Age < 50 and Age ≥ 50. Although the discrimination results based on these statistics are statistically significant, they are not accurate enough to determine the gender of an individual.
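The conclusion above, that statistically significant group differences need not yield accurate individual-level prediction, can be illustrated with a toy threshold classifier on synthetic data (the statistic, group means, and spreads below are invented for illustration, not taken from ALLSTAR):

```python
import random

random.seed(0)

# Synthetic one-dimensional "heart-rate statistic" for two groups whose
# means differ but whose distributions overlap heavily.
group_a = [random.gauss(60.0, 8.0) for _ in range(500)]
group_b = [random.gauss(66.0, 8.0) for _ in range(500)]

def accuracy(threshold: float) -> float:
    """Label a sample as group B when the statistic is >= threshold; return accuracy."""
    correct = sum(x < threshold for x in group_a) + sum(x >= threshold for x in group_b)
    return correct / (len(group_a) + len(group_b))

# Sweep thresholds between 50 and 80 and keep the best split.
best = max(accuracy(t / 10) for t in range(500, 800))
```

Even the best threshold lands well short of reliable individual classification, mirroring the ~0.68 accuracy reported in the abstract despite a real group-level difference.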

Medicine, Biology (General)
DOAJ Open Access 2022
REVIEW OF THEORETICAL APPROACHES TO THE USE OF ARTIFICIAL INTELLIGENCE FOR PLANNING PROBLEMS IN ECONOMICS

Gocha Ugulava

Artificial intelligence methods and technologies are increasingly included in human's everyday life. Managing actors in the context of their activities, from the planning stage to the decision-making stage, are faced with the need to operate with big data, non-linear, exponentially growing, critically overloaded data scenarios. In these conditions, the need to introduce artificial intelligence technologies is due to the exhaustion of the intellectual and analytical capabilities of a person. The article discusses a variety of methods and approaches of artificial intelligence, examines the content of key algorithms, models and theories, their strengths and weaknesses in such important areas of the economy as planning and decision-making. The focus is on their classification. Due to the dependence of the planning process on environmental factors, both classical and non-classical planning environments are discussed. If the environment is fully observable, deterministic and static (external changes are ignored) and discrete in terms of time and action, then we are dealing with a classical planning environment. In the case of a partially observable or stochastic environment, we get a non-classical planning environment. The simplest and most intuitive approach to the planning process algorithms is a Total Order Planning. A scheduling algorithm with parallel execution of actions or without specifying the sequence of their execution is a Partial Order Planning algorithm. Recent research into the development of efficient algorithms has sparked interest in one of the earliest planning approaches – Prepositional Logic Planning. With the Critical Path Method, a schedule of activities is drawn up as part of a plan with zero critical travel time margin for each activity, taking into account the calculation of the time margin for each activity and sequence of activities. 
A promising planning method for complex problems is hierarchical decomposition based on Hierarchical Task Networks. The influence of time and resource constraints on planning procedures is highlighted separately. Approaches and methods used in a non-classical planning environment include conformant planning, conditional planning, continuous planning, and multi-agent planning. Special attention is paid to constructing planning models under uncertainty based on probability-theoretic (stochastic) approaches. Bayesian networks are used to represent uncertainty. The Relational Probability Model imposes constraints on the representation, thereby guaranteeing a fully defined probability distribution. The main tasks of probabilistic inference in temporal models are filtering, prediction, smoothing, and finding the most likely explanation. By combining these algorithms with additional enhancements, three large families of temporal models are obtained: Hidden Markov Models, the Kalman Filter, and Dynamic Bayesian Networks. Decision theory allows an agent to determine the sequence of actions to perform. A simpler formalism for decision-making problems is the decision network. Expert systems containing utility information create additional opportunities. Sequential decision problems in an uncertain environment, such as Markov Decision Processes, are defined via transition models. When several agents interact simultaneously, game theory is used to describe their rational behavior. As we can see, planning has recently become one of the most interesting and relevant directions in artificial intelligence research. There is still a long way to go: a clear vision is needed of how to choose appropriate methods for a given type of task, perhaps by creating entirely new methods and approaches.
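The Markov Decision Processes mentioned above, defined via transition models, are typically solved by iterating Bellman backups (value iteration). The following is a minimal sketch; the two-state model, action names, rewards, and discount factor are hypothetical, chosen only to illustrate the technique.

```python
# Value iteration for a tiny Markov Decision Process (MDP).
# transition[s][a] is a list of (probability, next_state, reward) outcomes.
transition = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "go":   [(1.0, "s0", 0.0)]},
}
gamma = 0.9  # discount factor

# Repeatedly apply the Bellman optimality backup until (approximately) converged.
V = {s: 0.0 for s in transition}
for _ in range(200):
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in transition[s].values())
         for s in transition}

# Extract the greedy policy with respect to the converged value function.
policy = {s: max(transition[s],
                 key=lambda a: sum(p * (r + gamma * V[s2])
                                   for p, s2, r in transition[s][a]))
          for s in transition}
```

In this toy model the optimal policy moves to "s1" and stays there to collect the recurring reward; with gamma = 0.9, V("s1") converges to 2 / (1 - 0.9) = 20.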

Economics as a science
DOAJ Open Access 2022
A Comparison of Several AI Techniques for Authorship Attribution on Romanian Texts

Sanda-Maria Avram, Mihai Oltean

Determining the author of a text is a difficult task. Here, we compare multiple Artificial Intelligence techniques for classifying literary texts written by multiple authors, taking into account a limited set of parts of speech (prepositions, adverbs, and conjunctions). We also introduce a new dataset of texts written in the Romanian language, on which we have run the algorithms. The compared methods are artificial neural networks, multi-expression programming, k-nearest neighbour, support vector machines, and C5.0 decision trees. Numerical experiments show, first of all, that the problem is difficult; nevertheless, some algorithms achieve acceptable error rates on the test set.

CrossRef Open Access 2021
Artificial Intelligence Assisted Innovation

Gideon Samid

Artificial Intelligence Assisted Innovation (AIAI) is a technology designed to improve innovation productivity by helping human innovators with the support tasks that kindle the creative spark, and by sorting innovative propositions according to their merit. Innovation activity is mushrooming, and innovation history is therefore an ever-growing accumulation of data. AIAI identifies a universal innovation map that is processed like the tape in a Turing machine (here, an Innovation Turing machine), marking an innovation pathway. By mapping innovation history onto these maps, the growing record of past innovation can guide current innovation as to merit, expected cost, estimated duration, and so on. Using Monte Carlo methods and Discriminant Analysis, an Artificial Innovation Assistant conducts a dialog with the human innovator, with the net effect of accelerated innovation. Users of AIAI are expected to exhibit a commanding lead over innovators guided only by their own creativity.

Page 2 of 70116