E. Shortliffe, M. Sepúlveda
Results for "artificial intelligence"
Showing 20 of ~3,562,329 results · from CrossRef, DOAJ, Semantic Scholar
V. Venkatasubramanian
K. Siau, Weiyu Wang
Marian Mazzone, A. Elgammal
Our essay discusses an AI process developed for making art (AICAN), and the issues AI creativity raises for understanding art and artists in the 21st century. Backed by our training in computer science (Elgammal) and art history (Mazzone), we argue for the consideration of AICAN’s works as art, relate AICAN works to the contemporary art context, and urge a reconsideration of how we might define human and machine creativity. Our work in developing AI processes for art making, style analysis, and detecting large-scale style patterns in art history has led us to carefully consider the history and dynamics of human art-making and to examine how those patterns can be modeled and taught to the machine. We advocate for a connection between machine creativity and art broadly defined as parallel to but not in conflict with human artists and their emotional and social intentions of art making. Rather, we urge a partnership between human and machine creativity when called for, seeing in this collaboration a means to maximize both partners’ creative strengths.
Meredith Broussard, N. Diakopoulos, Andrea L. Guzman et al.
At its heart, journalism is about telling stories about the human condition. How can we, as scholars and practitioners, do better at centering humans in our sociotechnical discourse about AI?
Bayram B., Leventi N., Vodenicharova A. et al.
Artificial intelligence (AI) is reshaping healthcare by enhancing diagnostic precision, treatment personalization, and overall patient care. By leveraging technologies such as machine learning, deep learning, natural language processing, and computer vision, AI enables faster and more accurate decision-making, supports drug discovery and development, and facilitates remote patient monitoring. Beyond improving clinical outcomes, AI also contributes to holistic well-being by addressing physical, mental, social, occupational, and environmental health. Wearable AI devices promote proactive health management, virtual assistants improve mental health accessibility, and predictive analytics enable early intervention for disease prevention. However, the integration of AI in healthcare presents challenges, including data privacy concerns, algorithmic bias, and the need for transparency and trust. Ensuring the responsible and equitable deployment of AI requires robust ethical guidelines, interdisciplinary collaboration, and policies that safeguard patient rights while maximizing the technology’s benefits. By exploring both the transformative potential and inherent challenges of AI, this paper aims to highlight the critical role of AI in shaping the future of healthcare and human well-being.
Stanić Miloš, Galić Borislav
Society is undergoing rapid transformation, posing significant challenges to legal systems worldwide. A central aspect of this transformation is the development of artificial intelligence (AI). At the same time, the right to a healthy environment, guaranteed by constitutions around the world, is a fundamental human right that concerns all citizens, because everyone affects the state of the environment. After introducing the concept of artificial intelligence, the authors first examine the current normative landscape in this area, at the level of both international public law and domestic legal orders. They then present the importance of environmental protection, the legal framework for its protection, and the norms governing the use of artificial intelligence in environmental protection, and close with an appropriate conclusion.
Nan Qiu, Benjamin Becker
The integration of artificial intelligence (AI) into mental health and psychiatry is transforming the diagnosis and treatment of mental disorders, including major depressive disorder (MDD). While the initial and promising applications span diagnostic screening and therapeutic chatbots, these first-wave technologies do not directly address brain changes or treat MDD. Noninvasive Brain Stimulation (NIBS) holds tremendous promise to address the biological heterogeneity of MDD, but is currently hindered by highly variable outcomes. Therefore, we posit that the synergistic integration of AI with NIBS represents the most promising path to address these difficulties. Importantly, the frontier for AI in depression treatment lies in a paradigm shift: from empirical trial-and-error to data-driven, personalized precision interventions. We argue for a paradigm shift away from AI roles in mental health (e.g., chatbots, diagnostics) toward its deep integration as the core engine for personalized, circuit-based neuromodulation. We highlight the key opportunities this fusion creates: identifying patient-specific neural targets through predictive modeling, developing adaptive closed-loop therapies, and leveraging brain digital twins for in silico simulation and protocol optimization. While significant challenges in data standardization, model interpretability, and clinical validation remain, the fusion of AI and NIBS heralds an era of psychiatry that is predictive, personalized, and precise.
Prathyush P. Poduval, Hamza Errahmouni Barkam, Xiangjian Liu et al.
Hyperdimensional Computing (HDC) is a neurally inspired computing paradigm that leverages lightweight, high-dimensional operations to emulate key brain functions. Recent advances in HDC have primarily targeted two domains: learning, where the goal is to extract and generalize patterns for tasks such as classification, and cognitive computation, which requires accurate information retrieval for human-like reasoning. Although state-of-the-art HDC methods achieve strong performance in both areas, they lack a principled understanding of the fundamentally different requirements imposed by learning vs. cognition. In particular, existing works provide limited guidance on designing encoding methods that generate optimal hyperdimensional representations for these distinct tasks. In this study, we propose the first universal hyperdimensional encoding method that dynamically adapts to the needs of both learning and cognitive computation. Our approach is based on neural-symbolic techniques that assign random complex hypervectors to atomic bases (e.g., alphabet definitions) and then apply algebraic operations in the high-dimensional hyperspace to control the correlation structure among encoded data points. Through theoretical analysis, we show that learning tasks benefit from correlated representations to maximize memorization and generalization capacity, whereas cognitive tasks require orthogonal, highly separable representations to enable accurate decoding and reasoning. We further derive a separation metric that quantifies this trade-off and validate it empirically across image classification and decoding tasks. Our results demonstrate that tuning the encoder to increase correlation improves classification accuracy from 65% to 95%, while maximizing separation enhances decoding accuracy from 85% to 100%. These findings provide the first systematic framework for designing hyperdimensional encoders that unify learning and cognition under a single, theoretically grounded representation model.
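As a concrete illustration of the kind of encoding this abstract describes, here is a minimal, hypothetical sketch (not the authors' implementation): atomic symbols are assigned random unit-modulus complex hypervectors, binding is element-wise multiplication, bundling is addition, and a bound symbol is recovered by unbinding with the conjugate and comparing similarities. The dimensionality, alphabet, and word length are arbitrary choices for the example.

```python
import numpy as np

D = 4096                                   # hypervector dimensionality (arbitrary)
rng = np.random.default_rng(0)

def random_phasor(d=D):
    """Random complex hypervector with unit-modulus entries (a random atomic basis)."""
    return np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, d))

# Atomic bases: one random hypervector per letter and per position slot.
alphabet = {c: random_phasor() for c in "abcdefghijklmnopqrstuvwxyz"}
positions = [random_phasor() for _ in range(8)]

def encode(word):
    """Bind each letter to its position (element-wise product), then bundle (sum)."""
    return sum(alphabet[c] * positions[i] for i, c in enumerate(word))

def cosine(x, y):
    """Real part of the normalized inner product; near zero for unrelated hypervectors."""
    return float(np.real(np.vdot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y)))

h = encode("cat")
# Decode the letter in position 0 by unbinding with the conjugate position vector.
probe = h * np.conj(positions[0])
print(max(alphabet, key=lambda c: cosine(probe, alphabet[c])))   # -> 'c'
```

Sharing or perturbing the atomic hypervectors is one simple way to raise correlation among encodings for learning tasks, while fully independent bases keep representations near-orthogonal for decoding, mirroring the trade-off the abstract formalizes.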
Thi Thanh Thao Tran
The use of different generative AIs such as OpenAI’s ChatGPT, Microsoft Copilot, or Google’s Gemini has been implemented and studied in various aspects of language education. However, how the combination of teacher-generated feedback and AI-generated feedback influences student revision practices in EFL academic writing remains largely unexplored. To fill this gap, this preliminary study investigates the impact of two forms of feedback, teacher-generated and AI-generated, as well as the order in which they were delivered, that is, teacher-generated feedback before AI-generated feedback (TGF-AIGF) or AI-generated feedback before teacher-generated feedback (AIGF-TGF), on EFL students’ writing revision practices in a 15-week course with fourteen Vietnamese undergraduates. Using Gemini as an AI-generated feedback tool, the study analyzed student revisions in four essays, focusing on local (grammar and vocabulary) and global (content and organization) aspects. Findings revealed that AI-generated feedback consistently resulted in higher revision frequencies than teacher-generated feedback alone, as it provided specific, actionable, and comprehensive suggestions. The integration of teacher- and AI-generated feedback yielded the highest revision frequencies, demonstrating complementary strengths: AI-generated feedback addressed surface-level issues, while teacher-generated feedback focused on higher-order concerns. Although no statistically significant differences were found between the two feedback orders, the AIGF-TGF order prompted slightly more student revisions, allowing AI-generated feedback to scaffold surface-level revisions before teacher-generated feedback addressed global issues. These results highlight the potential of combining AI- and teacher-generated feedback to enhance writing revisions and provide pedagogical insights for integrating AI tools into academic writing courses.
Pablo Catota-Ocapana, Cesar Minaya-Andino, Paul Astudillo et al.
In recent years, agriculture has evolved significantly with the integration of technology, enabling the development of new cultivation techniques that respond to the growing demand for food and the need to conserve natural resources. In this context, we conducted a comprehensive review of intelligent control models for managing nutrients in hydroponic systems, analyzing studies from the last five years. Articles were selected following the PRISMA guidelines and guided by research questions, focusing on control techniques based on fuzzy logic, artificial intelligence, and artificial vision. These models are essential for automatically adjusting nutrient concentrations, adapting to the needs of the plants at each stage of their growth. The review highlights essential advances but also identifies significant challenges, such as the need for precise sensors, the management of large volumes of data, and the adaptation of the models to different crops and conditions. Despite these challenges, the benefits include more efficient use of nutrients, reduced water consumption, and increased crop yields. Continuous research in this field is essential to improve the sustainability and productivity of hydroponic systems, offering new opportunities for agriculture in the future. The findings of this review provide a solid basis for evaluating the effectiveness of the control models and their application in real agricultural scenarios.
Claire Anderson, Lydia Niemi, Naoko Arakawa et al.
Objectives: This qualitative study explored public and prescriber awareness of pharmaceutical pollution in the water environment and eco-directed sustainable prescribing (EDSP) as a mitigation strategy to reduce the environmental impact of prescribing in Scotland. Design: Focus groups explored prescriber and public perceptions of the topic. Common questions were asked through semistructured facilitation. Focus groups were digitally recorded and transcribed verbatim using an artificial intelligence system, then anonymised and thematically analysed using NVivo software. Data were iteratively analysed using the one sheet of paper technique. Setting: Public focus groups were held in-person (Inverness, Scotland, April 2023), and prescriber focus groups were held virtually (MS Teams, August 2023). Participants: Nine public representatives and 17 NHS Scotland prescribers participated in one of four focus groups. Purposive and opportunistic sampling approaches were used to recruit participants through social media and other channels (i.e., community groups, professional emails, general practitioner and hospital flyers). Prescriber representatives registered interest through an online survey to gather information about their professional background. Responses were reviewed to ensure representation of a mixture of medical backgrounds, experience, sectors and health boards. Results: There is growing awareness among the public and healthcare professionals of pharmaceutical pollution in the environment, but further education is required on the drivers, potential effects and possible interventions. Suggestions for more sustainable healthcare included public health awareness campaigns, better provision for pharmacy take-back schemes, clear medicine/packaging labelling, regular medicines reviews and more considered patient-centred care. From the prescriber perspective, EDSP resonated well with current sustainability initiatives (e.g., Realistic Medicine, switching to dry-powder inhalers), but barriers to EDSP included lack of knowledge, confidence, time and resources to implement changes. Although the public representatives were generally open to the concept of EDSP, this decision required weighing pros/cons considering personal health choices, information accessibility and transparency, and trust in and time with prescribers. Conclusions: This study identified new insights from prescribers and the public related to the concept of, and barriers to, EDSP in Scotland, as well as perspectives regarding knowledge support tools and information communication. Cross-sector and transdisciplinary collaborative approaches are needed to address the challenges identified here. Nonetheless, EDSP merits further exploration in developing more sustainable, appropriate and effective healthcare which contributes to improved public and planetary health.
Huibo Yang, Mengxuan Hu, Amoreena Most et al.
Background: Large language models (LLMs) have demonstrated impressive performance on medical licensing and diagnosis-related exams. However, comparative evaluations to optimize LLM performance and ability in the domain of comprehensive medication management (CMM) are lacking. The purpose of this evaluation was to test various LLM performance-optimization strategies and LLM performance on critical care pharmacotherapy questions used in the assessment of Doctor of Pharmacy students. Methods: In a comparative analysis using 219 multiple-choice pharmacotherapy questions, five LLMs (GPT-3.5, GPT-4, Claude 2, Llama2-7b and Llama2-13b) were evaluated. Each LLM was queried five times to evaluate the primary outcome of accuracy (i.e., correctness). Secondary outcomes included variance, the impact on performance of prompt engineering techniques (e.g., chain-of-thought, CoT) and of training a customized GPT, and comparison to third-year Doctor of Pharmacy students on knowledge recall vs. knowledge application questions. Accuracy and variance under different model settings were compared using Student’s t-test. Results: ChatGPT-4 exhibited the highest accuracy (71.6%), while Llama2-13b had the lowest variance (0.070). All LLMs performed more accurately on knowledge recall than on knowledge application questions (e.g., ChatGPT-4: 87% vs. 67%). When applied to ChatGPT-4, few-shot CoT across five runs improved accuracy (77.4% vs. 71.5%) with no effect on variance. Self-consistency and the custom-trained GPT demonstrated accuracy similar to ChatGPT-4 with few-shot CoT. Overall pharmacy student accuracy was 81%, compared to an optimal overall LLM accuracy of 73%. Comparing question types, six of the LLM configurations demonstrated equivalent or higher accuracy than pharmacy students on knowledge recall questions (e.g., self-consistency vs. students: 93% vs. 84%), but pharmacy students achieved higher accuracy than all LLMs on knowledge application questions (e.g., self-consistency vs. students: 68% vs. 80%). Conclusion: ChatGPT-4 was the most accurate LLM on critical care pharmacy questions, and few-shot CoT improved accuracy the most. Average student accuracy was similar to LLMs overall, and higher on knowledge application questions. These findings support the need for future assessment of customized training for the type of output needed. Reliance on LLMs is only supported for recall-based questions.
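For readers who want to reproduce this style of evaluation, a hypothetical harness is sketched below; the `query_model` callable, prompt wording, and answer format are assumptions rather than the study's materials. It builds a few-shot chain-of-thought prompt, queries a model several times per question set, and reports mean accuracy and variance across runs.

```python
import re
import statistics

FEW_SHOT_COT = """Q: <worked example question with options (A)-(E)>
A: Let's think step by step. <worked reasoning> Therefore, the answer is (B).

Q: {question}
A: Let's think step by step."""

def extract_choice(reply):
    """Pull the last lettered option, e.g. '(C)', out of the model's free-text reasoning."""
    matches = re.findall(r"\(([A-E])\)", reply)
    return matches[-1] if matches else None

def evaluate(questions, query_model, runs=5):
    """questions: list of {'stem': str, 'answer': 'A'..'E'}; query_model: prompt -> reply text."""
    accuracies = []
    for _ in range(runs):
        correct = sum(
            extract_choice(query_model(FEW_SHOT_COT.format(question=q["stem"]))) == q["answer"]
            for q in questions
        )
        accuracies.append(correct / len(questions))
    return statistics.mean(accuracies), statistics.variance(accuracies)
```

Any provider's chat API can be wrapped as `query_model`; the harness itself is model-agnostic.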
Nicki Lentz-Nielsen, Lars Maaløe, Pascal Madeleine et al.
Background: Chronic obstructive pulmonary disease (COPD) is projected to be the third-leading cause of death by 2030. Traditional spirometry for the monitoring of the forced expiratory volume in one second (FEV1) can provoke discomfort and anxiety. This study aimed to validate AI models using daily audio recordings as an alternative for FEV1 estimation in home settings. Methods: Twenty-three participants with moderate to severe COPD recorded daily audio readings of standardized texts and measured their FEV1 using spirometry over nine months. Participants also recorded biomarkers (heart rate, temperature, oxygen saturation) via tablet application. Various machine learning models were trained using acoustic features extracted from 2053 recordings, with K-nearest neighbor, random forest, XGBoost, and linear models evaluated using 10-fold cross-validation. Results: The K-nearest neighbors model achieved a root mean square error of 174 mL/s on the validation data. The limit of agreement (LoA) ranged from −333.21 to 347.26 mL/s. Despite an error range of −1252 to 1435 mL/s, most predictions fell within the LoA, indicating good performance in estimating the FEV1. Conclusions: The predictive model showed promising results, with a narrower LoA compared to traditional unsupervised spirometry methods. The AI models effectively used audio to predict the FEV1, suggesting a viable non-invasive approach for COPD monitoring that could enhance patient comfort and accessibility in home settings.
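As a rough illustration of the modeling setup, the sketch below fits a K-nearest-neighbors regressor with 10-fold cross-validation and computes RMSE and Bland-Altman limits of agreement; the feature matrix and target are synthetic stand-ins, not the study's acoustic features or spirometry data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(2053, 40))                               # stand-in acoustic features
y = 1500 + 400 * X[:, 0] + rng.normal(scale=150, size=2053)   # stand-in FEV1 values

# Out-of-fold predictions from 10-fold cross-validation.
pred = cross_val_predict(KNeighborsRegressor(n_neighbors=5), X, y,
                         cv=KFold(n_splits=10, shuffle=True, random_state=0))

rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
diff = pred - y
loa = (diff.mean() - 1.96 * diff.std(), diff.mean() + 1.96 * diff.std())
print(f"RMSE = {rmse:.0f}, limits of agreement = [{loa[0]:.0f}, {loa[1]:.0f}]")
```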
Mohammed Assiri, Mahmoud M. Selim
Sign language (SL) is the language of speech- and hearing-impaired individuals. The hand gesture is the primary mode employed in SL by speech- and hearing-challenged people to communicate among themselves and with ordinary persons. At present, hand gesture detection plays a vital part and is commonly employed in numerous applications worldwide. Hand gesture detection systems can aid communication between machines and humans by assisting these groups of people. Machine learning (ML) is a subdivision of artificial intelligence (AI) that concentrates on the development of methods that learn from data. The main challenge in hand gesture detection is that machines do not directly understand human language; a standard medium is required to facilitate communication between humans and machines. Hand gesture recognition (GR) serves as this medium, enabling commands for computer interaction that specifically benefit hearing-impaired and elderly individuals. This study proposes a Gesture Recognition for Hearing Impaired People Using an Ensemble of Deep Learning Models with Improving Beluga Whale Optimization (GRHIP-EDLIBWO) model. The main intention of the GRHIP-EDLIBWO framework for GR is to serve as a valuable tool for developing accessible communication systems for hearing-impaired individuals. To accomplish this, the GRHIP-EDLIBWO method first performs image preprocessing using a Sobel filter (SF) to enhance edge detection and extract critical gesture features. For feature extraction, a squeeze-and-excitation capsule network (SE-CapsNet) effectively captures spatial hierarchies and complex relationships within gesture patterns. In addition, an ensemble of classifiers comprising a bidirectional gated recurrent unit (BiGRU), a variational autoencoder (VAE), and a bidirectional long short-term memory (BiLSTM) network is employed. Finally, the improved beluga whale optimization (IBWO) method is used for hyperparameter tuning of the three ensemble models. To assess the classification performance of the GRHIP-EDLIBWO approach, extensive simulations are conducted on an Indian SL (ISL) dataset. The performance validation showed a superior accuracy of 98.72% over existing models.
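Two transferable pieces of this pipeline, Sobel edge preprocessing and combining an ensemble of classifiers, can be sketched as below. The probability vectors are stand-ins for classifier outputs, and simple soft voting is used for illustration only; the abstract does not specify the exact combination rule, and the SE-CapsNet/BiGRU/VAE/BiLSTM models are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def sobel_preprocess(image):
    """Sobel gradient magnitude, emphasizing edges before feature extraction."""
    gx = ndimage.sobel(image, axis=0, mode="reflect")
    gy = ndimage.sobel(image, axis=1, mode="reflect")
    return np.hypot(gx, gy)

def soft_vote(prob_vectors):
    """Average class-probability vectors from several classifiers and pick the argmax."""
    return int(np.argmax(np.mean(prob_vectors, axis=0)))

# Toy usage: preprocess a dummy image, then three stand-in classifiers vote on a 4-class gesture.
edges = sobel_preprocess(np.random.default_rng(0).random((64, 64)))
probs = [np.array([0.1, 0.6, 0.2, 0.1]),
         np.array([0.2, 0.5, 0.2, 0.1]),
         np.array([0.3, 0.3, 0.3, 0.1])]
print(soft_vote(probs))   # -> 1
```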
R. Kapoor, Stephen Walters, L. Al-Aswad
Artificial intelligence (AI) is a branch of computer science that deals with the development of algorithms that seek to simulate human intelligence. We provide an overview of the basic principles in AI that are essential to the understanding of AI and its application in health care. We also present a descriptive analysis of the current state of AI in various fields of medicine, especially ophthalmology. Finally, we review the potential limitations and challenges that come along with the development and implementation of this new technology that will likely play a major role in clinical medicine in the near future.
Ross Gruetzemacher, Jess Whittlestone
Recently, the concept of transformative AI (TAI) has begun to receive attention in the AI policy space. TAI is often framed as an alternative formulation to notions of strong AI (e.g. artificial general intelligence or superintelligence) and reflects increasing consensus that advanced AI which does not fit these definitions may nonetheless have extreme and long-lasting impacts on society. However, the term TAI is poorly defined and often used ambiguously. Some use the notion of TAI to describe levels of societal transformation associated with previous 'general purpose technologies' (GPTs) such as electricity or the internal combustion engine. Others use the term to refer to more drastic levels of transformation comparable to the agricultural or industrial revolutions. The notion has also been used much more loosely, with some implying that current AI systems are already having a transformative impact on society. This paper unpacks and analyses the notion of TAI, proposing a distinction between narrowly transformative AI (NTAI), TAI and radically transformative AI (RTAI), roughly corresponding to associated levels of societal change. We describe some relevant dimensions associated with each and discuss what kinds of advances in capabilities they might require. We further consider the relationship between TAI and RTAI and whether we should necessarily expect a period of TAI to precede the emergence of RTAI. This analysis is important as it can help guide discussions among AI policy researchers about how to allocate resources towards mitigating the most extreme impacts of AI, and it can bring attention to negative TAI scenarios that are currently neglected.
F. Kunz, A. Stellzig-Eisenhauer, F. Zeman et al.
Hanna Pihlman, Jere Linden, Kaarlo Paakinaho et al.
Improved bone-graft substitutes and their expanded use in orthopedic and spinal surgery lead to shorter surgical times, fewer complications, and less pain among patients in both human and veterinary medicine. This study compared an elastic porous β-tricalcium phosphate/poly(L-lactide-co-ε-caprolactone) (β-TCP/PLCL) copolymer scaffold (composite scaffold) and a commercially available β-TCP/PLCL bone-graft substitute (chronOS Strip) in a rabbit calvarial defect. A bilateral, 12-mm circular defect was created in the parietal bones of 12 rabbits. Both graft materials were soaked in bone marrow aspirate before implantation, and the usability of the material was recorded during surgery. After follow-up times of 24 (n = 5) and 48 (n = 7) weeks, artificial intelligence (AI)-assisted micro-CT imaging was used to evaluate bone formation and β-TCP distribution. Bone formation, implant material decomposition, and tissue reactions were further investigated through histopathology and AI-assisted histomorphometric analyses. Both materials supported tissue ingrowth and vascularization and a modest 10%–16% new bone formation through the implant. In both materials, degradation advanced during the follow-up period, but implant material was still visible 48 weeks after implantation. A typical long-term foreign-body reaction, with histiocytes, giant cells, and lymphocytes, was seen in both materials and was more pronounced in the composite scaffold. The benefit of the new composite scaffold was its superior usability during surgery.
Cheng-Tang Pan, Rahul Kumar, Zhi-Hong Wen et al.
The challenges of respiratory infections persist as a global health crisis, placing substantial stress on healthcare infrastructures and necessitating ongoing investigation into efficacious treatment modalities. The persistent challenge of respiratory infections, including COVID-19, underscores the critical need for enhanced diagnostic methodologies to support early treatment interventions. This study introduces an innovative two-stage data analytics framework that leverages deep learning algorithms through a strategic combinatorial fusion technique, aimed at refining the accuracy of early-stage diagnosis of such infections. Utilizing a comprehensive dataset compiled from publicly available lung X-ray images, the research employs advanced pre-trained deep learning models to navigate the complexities of disease classification, addressing inherent data imbalances through methodical validation processes. The core contribution of this work lies in its novel application of combinatorial fusion, integrating select models to significantly elevate diagnostic precision. This approach not only showcases the adaptability and strength of deep learning in navigating the intricacies of medical imaging but also marks a significant step forward in the utilization of artificial intelligence to improve outcomes in healthcare diagnostics. The study’s findings illuminate the path toward leveraging technological advancements in enhancing diagnostic accuracies, ultimately contributing to the timely and effective treatment of respiratory diseases.
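The abstract does not give the exact fusion rule, but one common form of combinatorial fusion combines model outputs on both the raw-score and the rank scale. The hypothetical sketch below averages rank-normalized scores from two stand-in classifiers for a single disease class; the models, data, and specific fusion scheme of the study are not reproduced here.

```python
import numpy as np

def rank_scores(scores):
    """Map raw scores to rank-based scores in [0, 1], higher meaning stronger evidence."""
    ranks = np.argsort(np.argsort(scores))
    return ranks / (len(scores) - 1)

def fuse(score_lists, use_rank=True):
    """Average combination across models, on either the rank or the raw-score scale."""
    arr = np.array([rank_scores(s) if use_rank else np.asarray(s, float) for s in score_lists])
    return arr.mean(axis=0)

# Toy usage: two models scoring five chest X-ray images for one class.
model_a = np.array([0.90, 0.20, 0.70, 0.40, 0.10])
model_b = np.array([0.80, 0.30, 0.90, 0.20, 0.20])
print(fuse([model_a, model_b]))          # rank combination
print(fuse([model_a, model_b], False))   # score combination
```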
Page 26 of 178,117