Thomas Bäck
Results for "artificial intelligence"
Showing 20 of ~3,571,400 results · from DOAJ, Semantic Scholar, arXiv, CrossRef
S. F. Calloni, A. Diena, G. M. Agazzi et al.
Objectives: To assess the reliability of semi-quantitative and AI-based quantitative brain volume evaluation (Quantib® ND) in predicting clinical diagnosis in patients with suspected neurodegenerative diseases undergoing initial 1.5 T MRI, and to analyze the frequency of lobar microbleeds (MBs) at diagnosis. Methods: Two neuroradiologists (2 vs. 10 years' experience), blinded to diagnosis, independently evaluated brain atrophy on 3D-T1 images of 133 subjects using the Scheltens (MTA), Koedam, and Kipps scales. Automated volumetric analysis was performed using Quantib® ND. SWI images were assessed by one neuroradiologist to classify MBs as cortical, juxtacortical, subcortical, or deep. Inter-observer agreement was measured using intraclass correlation coefficients (ICC); correlation with Quantib® ND was analyzed using Spearman's coefficient; and Cohen's kappa assessed agreement with clinical diagnosis. Results: Inter-observer agreement was good for the MTA scale (ICC 0.86 right, 0.82 left) and the Kipps scale (ICC 0.76), and moderate for the Koedam scale (ICC 0.66). The frontal and posterior temporal Kipps subregions showed good concordance (ICC 0.77, 0.79), while the anterior temporal subregion showed poor agreement (ICC 0.59). Diagnostic accuracy was moderate across observers and Quantib® ND: observer 1 showed 77% sensitivity and 51% specificity; observer 2, 79% sensitivity and 62% specificity; Quantib® ND, 56% sensitivity and 74% specificity. Patients with neurodegenerative disease exhibited significantly more lobar MBs than non-dementia patients (χ², p = 0.04). Conclusions: Semi-quantitative visual scales proved effective and sensitive for detecting brain atrophy, showing good concordance with automated volumetric data. While AI-based quantification demonstrated higher specificity, visual assessment remained more sensitive. Lobar MBs were more frequent in neurodegenerative cases.
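For readers who want to reproduce this kind of agreement analysis, the sketch below computes ICC, Spearman correlation, and Cohen's kappa on hypothetical two-rater data. The arrays, thresholds, and column names are illustrative assumptions, not the study's data; pingouin, SciPy, and scikit-learn are assumed available.

```python
# Sketch of the agreement statistics named above, on hypothetical ratings.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n = 133  # subjects, as in the study

# Hypothetical semi-quantitative atrophy scores (0-4) from two raters.
rater1 = rng.integers(0, 5, n)
rater2 = np.clip(rater1 + rng.integers(-1, 2, n), 0, 4)

# Inter-observer agreement: intraclass correlation via pingouin.
long = pd.DataFrame({
    "subject": np.tile(np.arange(n), 2),
    "rater": ["r1"] * n + ["r2"] * n,
    "score": np.concatenate([rater1, rater2]),
})
icc = pg.intraclass_corr(data=long, targets="subject",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC"]])

# Correlation of visual scores with automated volumes (hypothetical).
volumes = 100 - 5 * rater1 + rng.normal(0, 3, n)
rho, p = spearmanr(rater1, volumes)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")

# Agreement of dichotomized atrophy calls with clinical diagnosis.
diagnosis = rng.integers(0, 2, n)
call = (rater1 >= 2).astype(int)
print("Cohen's kappa:", cohen_kappa_score(diagnosis, call))
```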
Jungwook Koh, Young Ho Kim, Namgi Kim et al.
This study focused on developing a machine learning (ML) model to forecast the success of camouflage orthodontic treatment in individuals with skeletal Class III malocclusion and to identify significant predictors to aid treatment planning. A total of 100 adult patients who had skeletal Class III malocclusion and were treated with camouflage orthodontics were analyzed retrospectively. Treatment success was defined by an overjet exceeding 2 mm, a proper canine relationship, and an appropriate molar relationship (as applicable). Four machine learning algorithms (Random Forest, CART, Neural Network, and XGBoost) were trained and evaluated using fivefold cross-validation. Cephalometric variables were analyzed before and after treatment, and model performance was evaluated. Across all metrics, XGBoost exhibited the best predictive performance, suggesting better generalization. A decision tree model showed that the sagittal position of the lower incisors (L1_x) and palatal length (Palatal L) were the most influential predictors. An L1_x of less than 76 mm and a Palatal L of 41 mm or greater were strongly associated with successful treatment. ML algorithms, particularly XGBoost, can forecast the effectiveness of camouflage treatment for skeletal Class III malocclusion. Key predictors can guide treatment planning and support artificial intelligence-assisted orthodontic decisions.
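A minimal sketch of the comparison pipeline described here, under stated assumptions: the data are synthetic, and the reported cut-offs (L1_x < 76 mm, Palatal L ≥ 41 mm) are used only to generate illustrative labels. It shows four classifiers scored with fivefold cross-validation, as in the study.

```python
# Hypothetical sketch: four classifiers compared by fivefold CV.
# Feature values and labels are synthetic, not the study's dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier          # CART
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "L1_x": rng.normal(74, 4, 100),       # sagittal lower-incisor position (mm)
    "Palatal_L": rng.normal(42, 3, 100),  # palatal length (mm)
})
# Encode the reported decision rule as the label, for illustration only.
y = ((X["L1_x"] < 76) & (X["Palatal_L"] >= 41)).astype(int)

models = {
    "RandomForest": RandomForestClassifier(random_state=0),
    "CART": DecisionTreeClassifier(random_state=0),
    "NeuralNetwork": MLPClassifier(max_iter=2000, random_state=0),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```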
Juan David Velásquez-Henao
Retail analytics has become a transformative force, leveraging data-driven insights to optimize operations, personalize customer experiences, forecast demand, and enhance supply chain efficiency. This study provides a comprehensive bibliometric analysis of 563 documents indexed in Scopus, profiling the evolution of retail analytics over the past ten years. Key findings include 131 emerging topics clustered into 13 core trends. The analysis highlights the growing application of artificial intelligence, machine learning, and big data to drive decision-making, improve profitability, and enhance competitiveness in the retail industry. This paper addresses critical questions of "what," "where," "when," and "who" in retail analytics research, identifying areas of innovation and future growth, especially in predictive analytics, customer insights, and business operations optimization.
Simin Nazari, Amira Abdelrasoul
Membrane technologies play a vital role in sustainable development due to their efficiency in separation, purification, and chemical processing applications. However, the discovery and optimization of new membrane materials remain largely reliant on trial-and-error experimentation, limiting the pace of innovation. Artificial intelligence (AI) and machine learning (ML) are increasingly being applied to overcome these limitations by enabling data-driven insights, predictive modeling, and rapid material design. These computational approaches have shown significant promise in accelerating membrane fabrication, improving process simulation, detecting and mitigating fouling, and enhancing membrane characterization. This review provides a comprehensive overview of the recent advancements in the integration of AI and ML within membrane and material science. Fundamental AI and ML concepts relevant to membrane science are discussed, together with their applications in membrane fabrication, performance prediction, process modeling, fouling control, and membrane design. Challenges related to data quality, model interpretability, and the integration of domain-specific knowledge are also highlighted, along with potential future research directions. Compared with conventional empirical approaches, the advantages of AI and ML in handling complex, multivariate datasets and accelerating innovation are demonstrated. Overall, this review underscores the transformative potential of AI and ML in developing next-generation membranes with improved efficiency, selectivity, and sustainability across various industrial applications. Although several reviews have explored ML applications in membrane processes, comprehensive integration across material design, fabrication, fouling control, optimization, and process modeling remains limited; this review addresses that gap.
Ruide Li, Wenjun Yan, Chaoqun Xia
Failures in solar photovoltaic (PV) modules generate heat, leading to various hotspots observable in infrared images. Automated hotspot detection technology enables rapid fault identification in PV systems, while PV array detection, leveraging geometric cues from infrared images, facilitates the precise localization of defects. This study tackles the complexities of detecting PV array regions and diverse hotspot defects in infrared imaging, particularly under complex backgrounds, varied rotation angles, and small defect scales. The proposed model encodes infrared images to extract semantic features, which are then processed through a PV array detection branch and a hotspot detection branch. The array branch employs a diffusion-based anchor-free mechanism with rotated bounding box regression, enabling the robust detection of arrays with diverse rotational angles and irregular layouts. The defect branch incorporates a novel inside-awareness loss function designed to enhance the detection of small-scale objects. By explicitly modeling the dependency distribution between arrays and defects, this loss function effectively reduces false positives in hotspot detection. Experimental validation on a comprehensive PV dataset demonstrates the superiority of the proposed method, achieving a mean average precision (mAP) of 71.64% for hotspot detection and 97.73% for PV array detection.
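The array-defect dependency idea can be illustrated with a much-simplified, axis-aligned sketch; the paper itself uses rotated boxes and a dedicated inside-awareness loss, so the function below is an approximation of the concept, not the authors' implementation: penalize predicted hotspot centers that fall outside every predicted array region.

```python
# Simplified, axis-aligned illustration of the array/defect dependency:
# hotspot centers outside all PV array boxes are treated as likely
# false positives. Not the paper's rotated-box loss.
import torch

def inside_penalty(defect_centers: torch.Tensor,
                   array_boxes: torch.Tensor) -> torch.Tensor:
    """defect_centers: (D, 2) xy; array_boxes: (A, 4) as x1, y1, x2, y2."""
    x, y = defect_centers[:, 0:1], defect_centers[:, 1:2]    # (D, 1)
    x1, y1, x2, y2 = array_boxes.T                            # each (A,)
    inside = (x >= x1) & (x <= x2) & (y >= y1) & (y <= y2)    # (D, A)
    outside_any = ~inside.any(dim=1)                          # (D,)
    # Penalty: fraction of defects lying outside every array region.
    return outside_any.float().mean()

centers = torch.tensor([[10.0, 10.0], [55.0, 5.0]])
arrays = torch.tensor([[0.0, 0.0, 30.0, 30.0]])
print(inside_penalty(centers, arrays))  # tensor(0.5000): one defect outside
```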
Тарас Чатченко, Андрій Гриценко
This article explores the impact of digitalization on the labor market within the context of a transforming economy. The purpose of this research is to investigate how digital tools and systems redefine workforce organization, skill relevance, and economic productivity. The results reveal a significant shift from manual and routine labor toward knowledge-intensive, high-skilled positions. Technologies such as automation, artificial intelligence, cloud computing, and platform work are considered as enablers of new forms of employment, such as platform-based work and remote collaboration. Flexibility is analysed as a fundamental feature of contemporary labor relations, influencing job types, work schedules, remuneration systems, mobility, and organizational design. Practical insights are offered for policymakers, employers, and educators in developing digital competencies and inclusive, future-ready labor systems.
Paweł Nowik
This article examines the emerging role of artificial intelligence (AI) auditing as a mechanism for promoting algorithmic accountability within the European Union’s labour law framework. Focusing on two key legislative instruments—the Artificial Intelligence Act (AI Act) and the Platform Work Directive (PWD)—the study presents a comparative analysis of their respective audit models. While the AI Act introduces a general, risk‑based approach to AI governance centred on ex ante conformity assessments, the PWD establishes a sector‑specific, rights‑based framework that emphasises transparency, human oversight, and worker participation in ex post evaluations of algorithmic management systems. Drawing on legal analysis and interdisciplinary literature, the article explores how each instrument operationalises AI auditing, with particular attention to procedural safeguards, institutional design, and enforcement mechanisms. It argues that, although the AI Act offers a more formalised audit structure, its reliance on internal assessments raises concerns regarding independence and effectiveness. Conversely, while the PWD lacks a mandatory external audit requirement, it compensates through participatory governance tools, including data protection impact assessments, transparency obligations, and individual redress rights. The article concludes that these complementary regulatory models collectively represent a significant normative development in embedding algorithmic accountability within EU labour law. However, their effectiveness will depend upon robust implementation, institutional capacity, and the evolution of audit practices that are not only technically rigorous but also legally enforceable and socially legitimate.
L. Castro, J. Timmis
Yung-Hsiang Hu
Ethical decision-making is challenging for most students. Values clarification exercises (VCEs) can help reduce decisional conflicts and feelings of regret. Scholars have suggested designing values deliberation exercises based on moral dilemma scenarios to help students to identify their values system. However, such exercises are challenging to complete for most teachers and students. Therefore, the development of artificial intelligence (AI)-supported decision aids is warranted. Studies have revealed that using a one-on-one interactive chatbot is a feasible learning strategy for improving the dialectic skills of students. Thus, this study proposed a human–machine learning framework that helps students to perform values clarification in the context of moral dilemmas. To assess the effectiveness of the framework, the present study incorporated the chatbot Chat Generative Pre-trained Transformer into the business ethics course of a university to develop a generative-AI-chatbot-assisted VCE (GAIC-VCE) system for university students. In total, 70 university students were recruited and divided into an experimental group and a control group. The experimental group completed GAIC-VCEs, whereas the control group completed conventional VCEs. The results revealed that the GAIC-VCE system effectively improved the experimental-group students’ ethical self-efficacy and ethical decision-making confidence and reduced their decisional conflicts.
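A hypothetical sketch of a single values-clarification chatbot turn is shown below, using the OpenAI chat API. The system prompt, model name, and dilemma text are assumptions made for illustration; the study's GAIC-VCE system is not described at code level.

```python
# Hypothetical one-on-one values-clarification chatbot turn.
# Prompt, model name, and dilemma are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM = (
    "You are a Socratic tutor for a business-ethics course. Given a "
    "moral dilemma, ask one probing question at a time that helps the "
    "student surface and rank the values behind their position. Do not "
    "give answers or verdicts."
)

def vce_turn(history: list[dict], student_message: str) -> str:
    """Append the student's message, get one clarifying reply back."""
    history.append({"role": "user", "content": student_message})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": SYSTEM}] + history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(vce_turn(history, "Our firm could win the bid by overstating "
                        "our safety record. Everyone does it."))
```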
Changdong YU, Xinyang LIU, Cong CHEN et al.
Against the background of future maritime combat, a multi-agent deep reinforcement learning scheme was proposed to complete the cooperative round-up task in the swarm game confrontation of unmanned surface vehicles (USVs). First, based on different combat modes and application scenarios, a multi-agent deep deterministic policy gradient algorithm with distributed execution was selected, and its principle was introduced. Second, specific combat scenario platforms were simulated, and the multi-agent network models, reward function mechanisms, and training strategies were designed. The experimental results show that the proposed method can effectively solve the cooperative round-up decision-making problem that USVs face against enemy vessels, and that it is efficient across different combat scenarios. This work provides theoretical insights and a reference for future research on intelligent decision-making of USVs in complex combat scenarios.
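The centralized-training/distributed-execution idea behind multi-agent deep deterministic policy gradients can be sketched at the shape level as follows. Network sizes, the reward, and the TD target are placeholders (a full implementation would use target networks and the round-up environment), so this is an illustration, not the authors' system.

```python
# Shape-level MADDPG sketch: each agent's critic sees all agents'
# observations and actions; its actor acts on local observations only.
import torch
import torch.nn as nn

N_AGENTS, OBS, ACT = 3, 8, 2

actor = nn.Sequential(nn.Linear(OBS, 64), nn.ReLU(),
                      nn.Linear(64, ACT), nn.Tanh())
critic = nn.Sequential(nn.Linear(N_AGENTS * (OBS + ACT), 64), nn.ReLU(),
                       nn.Linear(64, 1))

obs = torch.randn(32, N_AGENTS, OBS)          # batch of joint observations
acts = torch.stack([actor(obs[:, i]) for i in range(N_AGENTS)], dim=1)

# Centralized critic input: concatenated joint observations and actions.
q = critic(torch.cat([obs.flatten(1), acts.flatten(1)], dim=1))

# Placeholder TD target: rewards and next-state Q-values would come from
# the environment and target networks in a full implementation.
rewards = torch.randn(32, 1)
next_q = torch.randn(32, 1)
target = rewards + 0.95 * next_q
loss = nn.functional.mse_loss(q, target)
loss.backward()
print("critic loss:", float(loss))
```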
Laura E. Suárez, Agoston Mihalik, Filip Milisav et al.
The connection patterns of neural circuits form a complex network. How signaling in these circuits manifests as complex cognition and adaptive behaviour remains the central question in neuroscience. Concomitant advances in connectomics and artificial intelligence open fundamentally new opportunities to understand how connection patterns shape computational capacity in biological brain networks. Reservoir computing is a versatile paradigm that uses high-dimensional, nonlinear dynamical systems to perform computations and approximate cognitive functions. Here we present conn2res: an open-source Python toolbox for implementing biological neural networks as artificial neural networks. conn2res is modular, allowing arbitrary network architecture and dynamics to be imposed. The toolbox allows researchers to input connectomes reconstructed using multiple techniques, from tract tracing to noninvasive diffusion imaging, and to impose multiple dynamical systems, from spiking neurons to memristive dynamics. The versatility of the conn2res toolbox allows us to ask new questions at the confluence of neuroscience and artificial intelligence. By reconceptualizing function as computation, conn2res sets the stage for a more mechanistic understanding of structure-function relationships in brain networks.
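The recipe that conn2res packages can be sketched with a generic echo-state network in plain NumPy: a fixed recurrent weight matrix (here sparse and random, where a reconstructed connectome would normally go) drives nonlinear dynamics, and only a linear readout is trained. This illustrates reservoir computing itself, not the conn2res API.

```python
# Generic echo-state sketch of connectome-based reservoir computing.
# The "connectome" here is a sparse random matrix, for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 1000

W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)  # reservoir weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))            # echo-state scaling
w_in = rng.normal(0, 1, N)

u = rng.uniform(-1, 1, T)                  # input signal
target = np.roll(u, 3)                     # task: recall input 3 steps back

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):                         # nonlinear reservoir dynamics
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Train only a ridge-regression readout on the collected states.
lam = 1e-3
S, y = states[100:], target[100:]          # discard warm-up
w_out = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ y)
pred = S @ w_out
print("memory-task correlation:", np.corrcoef(pred, y)[0, 1])
```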
Hila Fruchtman Brot, Victoria L. Mango
Ultrasound (US) is a widely accessible and extensively used tool for breast imaging. It is commonly used as an additional screening tool, especially for women with dense breast tissue. Advances in artificial intelligence (AI) have led to the development of various AI systems that assist radiologists in identifying and diagnosing breast lesions using US. This article provides an overview of the background and supporting evidence for the use of AI in handheld breast US. It discusses the impact of AI on clinical workflow, covering breast cancer detection, diagnosis, prediction of molecular subtypes, evaluation of axillary lymph node status, and response to neoadjuvant chemotherapy. Additionally, the article highlights the potential significance of AI in breast US for low- and middle-income countries.
Vo-Nguyen Tuyet-Doan, Young-Woo Youn, Hyun-Soo Choi et al.
Recently, deep neural networks have shown remarkable success in fault diagnosis in power systems using partial discharges (PDs), thereby enhancing grid asset safety and reliability. However, the prevailing approaches often adopt centralized large-scale datasets for training, without taking into account the impact of noise environments for Intelligent Electronic Devices (IEDs). Noise environments for PD measurements in gas-insulated switchgear (GIS) introduce variations in feature distributions and class representations, challenging the generalization ability of the trained models in new and diverse conditions. In this study, we propose a Shared Knowledge-based Contrastive Federated Learning (SK-CFL) for PD diagnosis in different noise environments for IEDs. The proposed SK-CFL combines federated learning principles with contrastive learning, empowering IEDs to collaboratively learn and share knowledge as regards PD and noise patterns. The proposed framework can learn representations between the same patterns across different IEDs while ensuring data privacy. Experimental results for PD diagnosis in GIS show that the proposed SK-CFL achieves a performance improvement in fault diagnosis, particularly in new and unseen environments. Specifically, for unknown noise in untrained IED 6, the proposed SK-CFL reaches a recall of 92.86%, compared with 64.29% for conventional FL and 35.71% for the baseline method. These results suggest that the proposed SK-CFL promises more adaptable and resilient data-driven approaches that protect data privacy while operating effectively in challenging real-world environments.
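Two of the named ingredients, FedAvg-style weight aggregation and a supervised contrastive loss that pulls representations of the same pattern together, can be sketched generically as below. This is an illustration of the building blocks under simple assumptions, not the authors' SK-CFL.

```python
# Generic building blocks: FedAvg aggregation across clients (IEDs)
# and a supervised contrastive loss over embeddings.
import torch
import torch.nn.functional as F

def fedavg(client_states: list[dict]) -> dict:
    """Average corresponding parameters across client state dicts."""
    return {k: torch.stack([s[k] for s in client_states]).mean(0)
            for k in client_states[0]}

def contrastive_loss(z: torch.Tensor, labels: torch.Tensor,
                     tau: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss: same-class embeddings attract.
    Assumes each class in the batch appears at least twice."""
    z = F.normalize(z, dim=1)
    eye = torch.eye(len(z), dtype=torch.bool)
    sim = (z @ z.T / tau).masked_fill(eye, float("-inf"))  # drop self-pairs
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_p = sim - sim.logsumexp(dim=1, keepdim=True)
    return -log_p[pos].mean()

z = torch.randn(16, 32, requires_grad=True)     # embeddings
labels = torch.randint(0, 4, (16,))             # PD/noise pattern class
print("contrastive loss:", float(contrastive_loss(z, labels)))

state_a = {"w": torch.ones(2, 2)}
state_b = {"w": torch.zeros(2, 2)}
print(fedavg([state_a, state_b])["w"])          # averaged weights: all 0.5
```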
Willem van der Maden, Derek Lomas, Paul Hekkert
As artificial intelligence (AI) continues advancing, ensuring positive societal impacts becomes critical, especially as AI systems become increasingly ubiquitous in various aspects of life. However, developing "AI for good" poses substantial challenges around aligning systems with complex human values. Presently, we lack mature methods for addressing these challenges. This article presents and evaluates the Positive AI design method aimed at addressing this gap. The method provides a human-centered process to translate wellbeing aspirations into concrete practices. First, we explain the method's four key steps: contextualizing, operationalizing, optimizing, and implementing wellbeing, supported by continuous measurement for feedback cycles. We then present a multiple case study where novice designers applied the method, revealing strengths and weaknesses related to efficacy and usability. Next, an expert evaluation study assessed the quality of the resulting concepts, rating them moderately high for feasibility, desirability, and plausibility of achieving intended wellbeing benefits. Together, these studies provide preliminary validation of the method's ability to improve AI design, while surfacing areas needing refinement, such as developing support for complex steps. Proposed adaptations such as examples and evaluation heuristics could address weaknesses. Further research should examine sustained application over multiple projects. This human-centered approach shows promise for realizing the vision of 'AI for Wellbeing' that does not just avoid harm, but actively benefits humanity.
Madeleine I. G. Daepp, Scott Counts
The digital divide refers to disparities in access to and use of digital tooling across social and economic groups. This divide can reinforce marginalization both at the individual level and at the level of places, because persistent economic advantages accrue to places where new technologies are adopted early. To what extent are emerging generative artificial intelligence (AI) tools subject to these social and spatial divides? We leverage a large-scale search query database to characterize U.S. residents' knowledge of a novel generative AI tool, ChatGPT, during its first six months of release. We identify hotspots of higher-than-expected search volumes for ChatGPT in coastal metropolitan areas, while coldspots are evident in the American South, Appalachia, and the Midwest. Nationwide, counties with the highest rates of search have proportionally more educated and more economically advantaged populations, as well as proportionally more technology and finance-sector jobs in comparison with other counties or with the national average. Observed associations with race/ethnicity and urbanicity are attenuated in fully adjusted hierarchical models, but education emerges as the strongest positive predictor of generative AI awareness. In the absence of intervention, early differences in uptake show a potential to reinforce existing spatial and socioeconomic divides.
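The "fully adjusted hierarchical model" here is essentially a mixed-effects regression with counties nested in states. A minimal sketch on synthetic data might look like the following with statsmodels; all column names and coefficients are hypothetical, not the study's variables.

```python
# Hypothetical mixed-effects model: county-level search volume regressed
# on county covariates, with a state-level random intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "state": rng.integers(0, 50, n),            # grouping: counties in states
    "pct_bachelors": rng.uniform(10, 60, n),    # education covariate
    "pct_tech_jobs": rng.uniform(0, 15, n),
    "urban": rng.integers(0, 2, n),
})
df["search_volume"] = (0.05 * df["pct_bachelors"]
                       + 0.03 * df["pct_tech_jobs"]
                       + rng.normal(0, 1, n))

model = smf.mixedlm("search_volume ~ pct_bachelors + pct_tech_jobs + urban",
                    data=df, groups=df["state"])
print(model.fit().summary())   # education enters as a fixed effect
```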
Anja Meunier, Michal Robert Žák, Lucas Munz et al.
We introduce Brain-Artificial Intelligence Interfaces (BAIs) as a new class of Brain-Computer Interfaces (BCIs). Unlike conventional BCIs, which rely on intact cognitive capabilities, BAIs leverage the power of artificial intelligence to replace parts of the neuro-cognitive processing pipeline. BAIs allow users to accomplish complex tasks by providing high-level intentions, while a pre-trained AI agent determines low-level details. This approach enlarges the target audience of BCIs to individuals with cognitive impairments, a population often excluded from the benefits of conventional BCIs. We present the general concept of BAIs and illustrate the potential of this new approach with a Conversational BAI based on EEG. In particular, we show in an experiment with simulated phone conversations that the Conversational BAI enables complex communication without the need to generate language. Our work thus demonstrates, for the first time, the ability of a speech neuroprosthesis to enable fluent communication in realistic scenarios with non-invasive technologies.
Abhishek Kaushik, Kayla Rush
Music is a potent form of expression that can communicate, accentuate or even create the emotions of an individual or a collective. Both historically and in contemporary experiences, musical expression was and is commonly instrumentalized for social, political and/or economic purposes. Generative artificial intelligence provides a wealth of both opportunities and challenges with regard to music and its role in society. This paper discusses a proposed project integrating artificial intelligence and popular music, with the ultimate goal of creating a powerful tool for implementing music for social transformation, education, healthcare, and emotional well-being. Given that it is being presented at the outset of a collaboration between a computer scientist/data analyst and an ethnomusicologist/social anthropologist, it is mainly conceptual and somewhat speculative in nature.
Irina V. Levchenko, Albina R. Sadykova, Lyudmila I. Kartashova et al.
Problem statement. Currently, various global and national institutions promote mainstreaming artificial intelligence (AI) technology into training programs for school students. The effectiveness of introducing artificial intelligence into school curricula depends on four factors: 1) defining methodological foundations for creating educational content; 2) selecting and structuring appropriate learning content; 3) adapting the content to the needs of different age groups; 4) integrating the content into school programs. The current study provides theoretical foundations for generating learning content for AI lessons aimed at secondary school students and determines possible ways of integrating that content into school programs. Methodology. The empirical part of the study involved 225 secondary school students aged 11-14 (forms 5 to 9) as well as 125 teachers from comprehensive schools located in Moscow and the Moscow region. Analysis, synthesis, testing, and sampling-average methods were used. Results. The authors conducted a pilot test of the developed educational materials, measured students' AI-related skills and knowledge, and processed the obtained data using the method of selective averages. The theoretical research identified leading practices in teaching artificial intelligence at the basic school level, mechanisms for developing AI-related learning outcomes for basic school students, and the possibility of forming AI training content on the basis of various approaches. The goals and results of teaching the basics of artificial intelligence within the framework of basic school were determined, and the content of training was formulated. Conclusion. The research is characterized by scientific and practical novelty, as it helps determine methodological grounds for teaching AI to secondary school students and proposes a detailed unit plan for an AI training course in secondary school.
Muhammad Hassan Jamal, Naila Naz, Muazzam A. Khan Khattak et al.
The increasing dependence on data analytics and artificial intelligence (AI) methodologies across various domains has prompted the emergence of apprehensions over data security and integrity. There exists a consensus among scholars and experts that the identification and mitigation of Multi-step attacks pose significant challenges due to the intricate nature of the diverse approaches utilized. This study aims to address the issue of imbalanced datasets within the domain of Multi-step attack detection. To achieve this objective, the research explores three distinct re-sampling strategies, namely over-sampling, under-sampling, and hybrid re-sampling techniques. The study offers a comprehensive assessment of several re-sampling techniques utilized in the detection of Multi-step attacks on deep learning (DL) models. The efficacy of the solution is evaluated using a Multi-step cyber attack dataset that emulates attacks across six attack classes. Furthermore, the performance of several re-sampling approaches with numerous traditional machine learning (ML) and DL models is compared, based on performance metrics such as accuracy, precision, recall, F1-score, and G-mean. In contrast to prior studies, the research focuses specifically on Multi-step attack detection. The results indicate that the combination of Convolutional Neural Networks (CNN) with Deep Belief Networks (DBN), Long Short-Term Memory (LSTM), and Recurrent Neural Networks (RNN) provides optimal results as compared to standalone ML/DL models. Moreover, the results also depict that SMOTEENN, a hybrid re-sampling technique, demonstrates superior effectiveness in enhancing detection performance across various models and evaluation metrics. The findings indicate the significance of appropriate re-sampling techniques to improve the efficacy of Multi-step attack detection on DL models.
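The hybrid re-sampling step singled out here is available in imbalanced-learn. A minimal sketch on synthetic imbalanced data, evaluated with G-mean as in the study, might look like this; the actual six-class multi-step attack dataset is not reproduced here.

```python
# SMOTEENN sketch: SMOTE over-sampling followed by Edited Nearest
# Neighbours cleaning, on synthetic six-class imbalanced data.
from imblearn.combine import SMOTEENN
from imblearn.metrics import geometric_mean_score
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_classes=6, n_informative=8,
                           weights=[0.5, 0.2, 0.12, 0.1, 0.05, 0.03],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Re-balance only the training split, then fit any downstream model.
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
print("G-mean:", geometric_mean_score(y_te, clf.predict(X_te)))
```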
Page 41 of 178,570