Yaping Zang, Fengjiao Zhang, Chong‐an Di et al.
Results for "artificial intelligence"
Showing 20 of ~3,566,569 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
M. Raza, A. Khosravi
R. Michalski, Tom M Mitchell, Jack Mostow et al.
Stuart J. Russell, Dan Dewey, Max Tegmark
Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to investigate how to maximize these benefits while avoiding potential pitfalls. This article gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.
John McCarthy, M. Minsky, N. Rochester et al.
Stuart Russell, Peter Norvig
Eugene Charniak, D. McDermott
A. Bundy
Yunhe Pan
Abstract With the popularization of the Internet, permeation of sensor networks, emergence of big data, increase in size of the information community, and interlinking and fusion of data and information throughout human society, physical space, and cyberspace, the information environment related to the current development of artificial intelligence (AI) has profoundly changed. AI faces important adjustments, and scientific foundations are confronted with new breakthroughs, as AI enters a new stage: AI 2.0. This paper briefly reviews the 60-year developmental history of AI, analyzes the external environment promoting the formation of AI 2.0 along with changes in goals, and describes both the beginning of the technology and the core idea behind AI 2.0 development. Furthermore, based on combined social demands and the information environment that exists in relation to Chinese development, suggestions on the development of AI 2.0 are given.
Michael J. Timms
L. D. Raedt, K. Kersting, Sriraam Natarajan et al.
Kai-Ze Liau, Heru Agus Santoso
Recommender systems have existed for decades, shaping how people consume digital content, receive information, and engage in day-to-day activities, among others. Undoubtedly, recommender systems also play a crucial role in e-commerce, with industry players such as Amazon, Alibaba, and eBay using them within their ecosystems to deliver suitable, value-driven insights. However, recommender systems face persistent concerns such as data sparsity and cold-start problems. As a result, research is ongoing to solve these issues and provide high-quality recommendations to consumers. This review aims to identify prevailing gaps surrounding these issues by analysing existing research on generative Artificial Intelligence (AI) recommender systems within an e-commerce context. It explores the underlying frameworks of common generative AI techniques such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Transformers, and diffusion models. Most researchers note that VAEs and Transformers hold great potential within e-commerce due to their ease of training and the quality of their generations. This review intends to improve recommender systems so as to enhance the quality of life of digital users, providing better recommendations in e-commerce and maximizing stakeholder value. It also outlines potential future work for researchers to advance existing knowledge in this sector.
Wenxi Guo, Haiquan Chen, Shuyun Peng
With the rapid development of artificial intelligence, the Internet of Things, and related technologies, human-smart object relationships have become increasingly diversified. As smart objects become deeply embedded in human society, emerging ethical issues have arisen—particularly human-smart object attachment—whose characteristics and influencing pathways remain unclear. This study focuses on the context of wearable smart devices and adopts a two-stage mixed-methods approach: First, based on assemblage theory and existing literature, we construct a three-phase theoretical framework encompassing human-smart object assemblage formation, experience, and attachment. Subsequently, using grounded theory, we conduct in-depth interviews with users of wearable smart objects and employ a three-tier coding process to clarify the conceptualization, typology, and formation pathways of human-smart object attachment. The findings reveal that human-smart object attachment is essentially a psychological bond formed through “self-extension” and “self-expansion,” facilitated by human-smart object capability synergy. It encompasses cognitive, affective, and conative dimensions and is influenced by three key factors: the user, the smart object, and the interaction process. Furthermore, the study explores the impact of human-smart object attachment on user attitudes and behaviors. As a unique phenomenon, the complexity of human-smart object attachment calls for HCI scholars to adopt multidisciplinary perspectives to investigate its mechanisms and effects. Such insights can assist enterprises and communities in developing technology products that better align with user needs.
Guangsen He, Hyun Soo Choi
Abstract This research presents a stacked ensemble approach that employs artificial intelligence (AI) techniques to predict the outcomes of NBA games. Several machine learning algorithms were utilized, including Naïve Bayes, AdaBoost, Multilayer Perceptron (MLP), K-Nearest Neighbors (KNN), XGBoost, Decision Tree, and Logistic Regression. The best-performing models were selected to serve as the base learners in the ensemble architecture. To improve the model’s interpretability and transparency, SHAP was used to clarify its decision-making process. The model was trained and evaluated using publicly available NBA datasets from the 2021–2022, 2022–2023, and 2023–2024 seasons. Experimental results indicate that the proposed ensemble approach is effective at predicting game outcomes. Furthermore, the SHAP analysis provides valuable insights into the underlying predictive mechanisms, offering actionable information for coaches and analysts.
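The stacking arrangement the abstract describes can be sketched with scikit-learn. This is an illustrative sketch, not the paper's exact setup: the base learners are reduced to three scikit-learn estimators, the meta-learner is logistic regression, and the synthetic data stands in for engineered NBA features.

```python
# Hedged sketch of a stacked ensemble for binary game-outcome prediction.
# The data and estimator choices below are placeholders, not the paper's.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for engineered features (e.g., rolling team statistics).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners produce out-of-fold predictions that a logistic-regression
# meta-learner combines -- the standard stacking arrangement.
stack = StackingClassifier(
    estimators=[
        ("nb", GaussianNB()),
        ("knn", KNeighborsClassifier()),
        ("tree", DecisionTreeClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

A SHAP analysis like the one in the abstract would be layered on top of the fitted ensemble (e.g., with the `shap` package's model-agnostic explainers); it is omitted here to keep the sketch dependency-light.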
Luana COSĂCESCU
The demands placed on controlling when adopting cutting-edge technology are high, given its underlying principles, prospective character, and flexibility, as well as the desire for transparency, ethics, and responsibility. Through controllers (expert accountants), in their roles as collaborators, advisers, and relationship managers for top management, smart technologies can truly be put to good use as business intelligence tools and trusted allies (digital assistants, AI copilots, generative AI chatbots, interactive dashboards with AI components). Of course, there will be obstacles and a certain distrust of the “black boxes” surrounding their creation, operation, and possible reactions. Hence the intensified search for something safer, with fewer unknowns regarding purpose, risk levels, and possible discrimination. This is how we arrived at XAI — explainable artificial intelligence — and at HITL — human-in-the-loop models into which human judgment is integrated. Both approaches have their limits (especially regarding the balance between accuracy and explainability), but it is certain that these tools will further increase users’ trust, openness, and understanding of algorithms, models, and artificial intelligence in general. Essentially, both suggest the same thing: if employees are directly involved and helped to understand something of the reasoning and behavior of machines (whether machine learning models, neural networks, or deep learning), there will be an interactive collaboration between specialists and machines that is particularly beneficial to each productive or functional segment, and to the organization as a whole.
Sumaya Mustafa, Mariwan Hama Saeed
Abstract Artificial intelligence (AI) models have demonstrated significant success in classifying various types of text. However, the complex nature of these models often complicates the interpretability of their classifications. To address these challenges and enhance explainability, this study proposes a novel approach to text classification leveraging natural language processing (NLP) techniques and explainable AI (XAI) methods. Text preprocessing steps were essential for improving the quality of text analysis; this was achieved by eliminating elements that contribute minimal semantic value. To achieve robust performance and mitigate the risk of overfitting, repeated stratified K-fold cross-validation was utilized. Furthermore, the synthetic minority oversampling technique (SMOTE) was employed to address dataset imbalance. In the classification phase, nine machine learning models and hybrid/multi-model approaches were employed. To validate the explainability of the classifications, the local interpretable model-agnostic explanations (LIME) framework was utilized. The study used two datasets containing texts from domains such as sports, medicine, entertainment, politics, technology, and business. Empirical evaluations demonstrated the effectiveness of the proposed approach: the hybrid model achieved exceptional performance across key metrics, including accuracy, precision, recall, and F1-score, reaching up to 99% accuracy. This work can be applied to various text analysis applications.
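The evaluation loop the abstract outlines — vectorize text, then score a classifier under repeated stratified K-fold cross-validation — can be sketched with scikit-learn alone. The tiny two-class corpus below is a placeholder, and the SMOTE and LIME steps from the abstract (provided by the imbalanced-learn and lime packages) are deliberately omitted to keep the sketch dependency-light.

```python
# Minimal sketch of repeated stratified K-fold evaluation of a text
# classifier. Corpus and model choice are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "the team won the final match", "striker scores twice in derby",
    "coach praises defensive effort", "league title race tightens",
    "new drug trial shows promise", "doctors report vaccine results",
    "hospital expands cardiac unit", "study links diet to health",
] * 5  # repeat the block so every fold holds both classes
labels = ([0] * 4 + [1] * 4) * 5  # 0 = sports, 1 = medicine

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
scores = cross_val_score(clf, texts, labels, cv=cv)
print(f"mean accuracy over {len(scores)} folds: {scores.mean():.2f}")
```

In a real pipeline, SMOTE would be applied inside each training fold (never to the test fold), which is why imbalanced-learn ships its own pipeline class for exactly this purpose.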
Daniel Wang, BA, Bonnie Sklar, MD, James Tian, MD et al.
Objective: We developed a novel slit-lamp photography (SLP) generative adversarial network (GAN) model using limited data to supplement and improve the performance of an artificial intelligence (AI)–based microbial keratitis (MK) screening model. Design: Cross-sectional study. Subjects: Slit-lamp photographs of 67 healthy and 36 MK eyes were prospectively and retrospectively collected at a tertiary care ophthalmology clinic at a large academic institution. Methods: We trained the GAN model StyleGAN2-ADA on healthy and MK SLPs to generate synthetic images. To assess synthetic image quality, we performed a visual Turing test. Three cornea fellows tested their ability to identify 20 images each of (1) real healthy, (2) real diseased, (3) synthetic healthy, and (4) synthetic diseased. We also used Kernel Inception Distance (KID) to quantitatively measure realism and variation of synthetic images. Using the same dataset used to train the GAN model, we trained 2 DenseNet121 AI models to grade SLP images as healthy or MK with (1) only real images and (2) real supplemented with GAN-generated images. Main Outcome Measures: Classification performance of MK screening models trained with only real images compared to a model trained with both limited real and supplemented synthetic GAN images. Results: For the visual Turing test, the fellows on average rated synthetic images as good quality (83.3% ± 12.0% of images), and synthetic and real images were found to depict pertinent anatomy and pathology for accurate classification (96.3% ± 2.19% of images). These experts could distinguish between real and synthetic images (accuracy: 92.5% ± 9.01%). Analysis of KID score for synthetic images indicated realism and variation. 
The MK screening model trained on both limited real and supplemented synthetic data (area under the receiver operating characteristic curve: 0.93, bootstrapped 95% CI: 0.77–1.0) outperformed the model trained with only real data (area under the receiver operating characteristic curve: 0.76, 95% CI: 0.50–1.0), an improvement of 0.17 (95% CI: 0–0.4; 2-tailed t test P = 0.076). Conclusions: Artificial intelligence–based MK classification may be improved by supplementing limited real training data with synthetic data generated by GANs. Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
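The augmentation principle behind this study — generate synthetic samples from scarce real data and add them to the training set — can be illustrated on a toy scale. This is emphatically not the paper's pipeline: StyleGAN2-ADA and DenseNet121 are replaced here by a crude per-class Gaussian "generator" and logistic regression, and all data are synthetic stand-ins.

```python
# Toy illustration of training-set augmentation with generated samples.
# Everything here (data, generator, classifier) is a stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Scarce "real" training data: two classes in 5 dimensions.
X_real = np.vstack([rng.normal(0, 1, (15, 5)), rng.normal(1.5, 1, (15, 5))])
y_real = np.array([0] * 15 + [1] * 15)

# Crude generator: sample from a Gaussian fitted to each class.
def synthesize(X, n):
    return rng.normal(X.mean(axis=0), X.std(axis=0), (n, X.shape[1]))

X_syn = np.vstack([synthesize(X_real[y_real == c], 100) for c in (0, 1)])
y_syn = np.array([0] * 100 + [1] * 100)

# Large held-out test set drawn from the true distributions.
X_test = np.vstack([rng.normal(0, 1, (500, 5)), rng.normal(1.5, 1, (500, 5))])
y_test = np.array([0] * 500 + [1] * 500)

# Compare a classifier trained on real data alone vs. real + synthetic.
auc_real = roc_auc_score(y_test, LogisticRegression().fit(X_real, y_real)
                         .predict_proba(X_test)[:, 1])
auc_aug = roc_auc_score(
    y_test,
    LogisticRegression().fit(np.vstack([X_real, X_syn]),
                             np.concatenate([y_real, y_syn]))
    .predict_proba(X_test)[:, 1])
print(f"real-only AUC: {auc_real:.2f}  augmented AUC: {auc_aug:.2f}")
```

Whether augmentation helps depends on how faithfully the generator captures the real class distributions — the motivation for the study's visual Turing test and KID analysis of synthetic-image quality.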
Heran Guan, Rossilah Binti Jamil
With the development of artificial intelligence technology, university lecturers are experiencing a series of reactions that may occur after changing traditional teaching methods. This study uses 26 Chinese university lecturers as a sample, based on the grounded theory of qualitative research, and uses NVivo 14 and fsQCA 3.0 to explore the differentiated antecedent configuration pathways of job burnout and job insecurity among university lecturers of different genders under the influence of AI. The study found that male university lecturers with weaker adaptability to AI tend to develop stronger negative awareness of AI, making them more prone to job burnout. In contrast, female university lecturers with stronger adaptability to AI are more likely to develop positive awareness of AI, yet they also experience higher levels of job burnout. Male lecturers who are highly adaptable to AI but have negative awareness of it are more likely to experience job insecurity. Nevertheless, the effect of female lecturers' adaptability and awareness of AI on their job insecurity seems minimal. Based on the different configuration pathways formed by the antecedent conditional variables, this study explains the combinations of factors that significantly affect lecturers’ job insecurity and job burnout, to help universities pay attention to and adopt effective strategies to alleviate the negative impact of these factors, reasonably allocate limited resources, and assist university lecturers of different genders to understand and manage job insecurity and burnout more rationally.
Alejandro Lopez-Montes, Fereshteh Yousefirizi, Yizhou Chen et al.
KEY WORDS: Artificial Intelligence (AI), Theranostics, Dosimetry, Radiopharmaceutical Therapy (RPT), Patient-friendly dosimetry
KEY POINTS:
- The rapid evolution of radiopharmaceutical therapy (RPT) highlights the growing need for personalized and patient-centered dosimetry.
- Artificial Intelligence (AI) offers solutions to the key limitations in current dosimetry calculations.
- The main advances in AI for simplified dosimetry toward patient-friendly RPT are reviewed.
- Future directions for the role of AI in RPT dosimetry are discussed.
Henrik Nolte, Miriam Rateike, Michèle Finck
The EU Artificial Intelligence Act (AIA) establishes different legal principles for different types of AI systems. While prior work has sought to clarify some of these principles, little attention has been paid to robustness and cybersecurity. This paper aims to fill this gap. We identify legal challenges and shortcomings in provisions related to robustness and cybersecurity for high-risk AI systems (Art. 15 AIA) and general-purpose AI models (Art. 55 AIA). We show that robustness and cybersecurity demand resilience against performance disruptions. Furthermore, we assess potential challenges in implementing these provisions in light of recent advancements in the machine learning (ML) literature. Our analysis informs efforts to develop harmonized standards, guidelines by the European Commission, as well as benchmarks and measurement methodologies under Art. 15(2) AIA. With this, we seek to bridge the gap between legal terminology and ML research, fostering a better alignment between research and implementation efforts.
Page 30 of 178329