The false hope of current approaches to explainable artificial intelligence in health care.
M. Ghassemi, Luke Oakden-Rayner, Andrew Beam
The black-box nature of current artificial intelligence (AI) has caused some to question whether AI must be explainable to be used in high-stakes scenarios such as medicine. It has been argued that explainable AI will engender trust with the health-care workforce, provide transparency into the AI decision making process, and potentially mitigate various kinds of bias. In this Viewpoint, we argue that this argument represents a false hope for explainable AI and that current explainability methods are unlikely to achieve these goals for patient-level decision support. We provide an overview of current explainability techniques and highlight how various failure cases can cause problems for decision making for individual patients. In the absence of suitable explainability methods, we advocate for rigorous internal and external validation of AI models as a more direct means of achieving the goals often associated with explainability, and we caution against having explainability be a requirement for clinically deployed models.
Explainability for artificial intelligence in healthcare: a multidisciplinary perspective
J. Amann, A. Blasimme, E. Vayena
et al.
Background Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice. Methods Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the “Principles of Biomedical Ethics” by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI. Results Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. When looking at the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI.
We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health. Conclusions To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.
1430 citations
en
Psychology, Medicine
Photonics for artificial intelligence and neuromorphic computing
B. Shastri, A. Tait, T. F. D. Lima
et al.
Research in photonic computing has flourished due to the proliferation of optoelectronic components on photonic integration platforms. Photonic integrated circuits have enabled ultrafast artificial neural networks, providing a framework for a new class of information processing machines. Algorithms running on such hardware have the potential to address the growing demand for machine learning and artificial intelligence in areas such as medical diagnosis, telecommunications, and high-performance and scientific computing. In parallel, the development of neuromorphic electronics has highlighted challenges in that domain, particularly related to processor latency. Neuromorphic photonics offers sub-nanosecond latencies, providing a complementary opportunity to extend the domain of artificial intelligence. Here, we review recent advances in integrated photonic neuromorphic systems, discuss current and future challenges, and outline the advances in science and technology needed to meet those challenges. Photonics offers an attractive platform for implementing neuromorphic computing due to its low latency, multiplexing capabilities and integrated on-chip technology.
1524 citations
en
Computer Science, Physics
Edge Intelligence: Paving the Last Mile of Artificial Intelligence With Edge Computing
Zhi Zhou, Xu Chen, En Li
et al.
With the breakthroughs in deep learning, recent years have witnessed a boom of artificial intelligence (AI) applications and services, spanning from personal assistants to recommendation systems to video/audio surveillance. More recently, with the proliferation of mobile computing and the Internet of Things (IoT), billions of mobile and IoT devices are connected to the Internet, generating zillions of bytes of data at the network edge. Driven by this trend, there is an urgent need to push the AI frontiers to the network edge so as to fully unleash the potential of edge big data. To meet this demand, edge computing, an emerging paradigm that pushes computing tasks and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new interdiscipline, edge AI or edge intelligence (EI), is beginning to receive a tremendous amount of interest. However, research on EI is still in its infancy, and a dedicated venue for exchanging the recent advances of EI is highly desired by both the computer systems and AI communities. To this end, we conduct a comprehensive survey of the recent research efforts on EI. Specifically, we first review the background and motivation for AI running at the network edge. We then provide an overview of the overarching architectures, frameworks, and emerging key technologies for deep learning model training and inference at the network edge. Finally, we discuss future research opportunities on EI. We believe that this survey will elicit escalating attention, stimulate fruitful discussions, and inspire further research ideas on EI.
1745 citations
en
Computer Science
DARPA's Explainable Artificial Intelligence (XAI) Program
D. Gunning, D. Aha
Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychological requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychological theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first year of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems’ explanations improve user understanding, user trust, and user task performance.
1574 citations
en
Computer Science
A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence
M. Haenlein, A. Kaplan
This introduction to this special issue discusses artificial intelligence (AI), commonly defined as “a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.” It summarizes seven articles published in this special issue that present a wide variety of perspectives on AI, authored by several of the world’s leading experts and specialists in AI. It concludes by offering a comprehensive outlook on the future of AI, drawing on micro-, meso-, and macro-perspectives.
1750 citations
en
Computer Science
How artificial intelligence will change the future of marketing
T. Davenport, Abhijit Guha, Dhruv Grewal
et al.
In the future, artificial intelligence (AI) is likely to substantially change both marketing strategies and customer behaviors. Building from not only extant research but also extensive interactions with practice, the authors propose a multidimensional framework for understanding the impact of AI involving intelligence levels, task types, and whether AI is embedded in a robot. Prior research typically addresses a subset of these dimensions; this paper integrates all three into a single framework. Next, the authors propose a research agenda that addresses not only how marketing strategies and customer behaviors will change in the future, but also highlights important policy questions relating to privacy, bias and ethics. Finally, the authors suggest AI will be more effective if it augments (rather than replaces) human managers.
1661 citations
en
Computer Science
Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility?
Nithesh Naik, B. Hameed, Dasharathraj K. Shetty
et al.
The legal and ethical issues that Artificial Intelligence (AI) poses to society include privacy and surveillance, bias or discrimination, and, perhaps the deepest philosophical challenge, the role of human judgment. Concerns have also arisen that newer digital technologies may become a new source of inaccuracy and data breaches. In healthcare, mistakes in procedure or protocol can have devastating consequences for the patient who is the victim of the error. It is crucial to remember that patients come into contact with physicians at moments in their lives when they are most vulnerable. Currently, there are no well-defined regulations in place to address the legal and ethical issues that may arise from the use of artificial intelligence in healthcare settings. This review attempts to address these pertinent issues, highlighting the need for algorithmic transparency, privacy, protection of all the beneficiaries involved, and cybersecurity of associated vulnerabilities.
A Proposal For The Dartmouth Summer Research Project On Artificial Intelligence
J. McCarthy, Dartmouth College, M. Minsky
et al.
Trustworthy Artificial Intelligence: A Review
Davinder Kaur, Suleyman Uslu, Kaley J. Rittichier
et al.
Artificial intelligence (AI) and algorithmic decision making are having a profound impact on our daily lives. These systems are vastly used in different high-stakes applications like healthcare, business, government, education, and justice, moving us toward a more algorithmic society. However, despite so many advantages of these systems, they sometimes directly or indirectly cause harm to the users and society. Therefore, it has become essential to make these systems safe, reliable, and trustworthy. Several requirements, such as fairness, explainability, accountability, reliability, and acceptance, have been proposed in this direction to make these systems trustworthy. This survey analyzes all of these different requirements through the lens of the literature. It provides an overview of different approaches that can help mitigate AI risks and increase trust and acceptance of the systems by utilizing the users and society. It also discusses existing strategies for validating and verifying these systems and the current standardization efforts for trustworthy AI. Finally, we present a holistic view of the recent advancements in trustworthy AI to help the interested researchers grasp the crucial facets of the topic efficiently and offer possible future research directions.
564 citations
en
Computer Science
A Legal Study on the UNESCO’s ‘the Recommendation on the Ethics of Artificial Intelligence’
S. Hong
Artificial Intelligence for the Metaverse: A Survey
Thien Huynh-The, Viet Quoc Pham, Xuan-Qui Pham
et al.
Along with the massive growth of the Internet from the 1990s until now, various innovative technologies have been created to bring users breathtaking experiences with more virtual interactions in cyberspace. Many virtual environments with thousands of services and applications, from social networks to virtual gaming worlds, have been developed with immersive experience and digital transformation, but most are incoherent rather than integrated into a single platform. In this context, the metaverse, a term formed by combining meta and universe, has been introduced as a shared virtual world that is fueled by many emerging technologies, such as fifth-generation networks and beyond, virtual reality, and artificial intelligence (AI). Among such technologies, AI has shown great importance in processing big data to enhance immersive experience and enable human-like intelligence in virtual agents. In this survey, we explore the role of AI in the foundation and development of the metaverse. We first deliver a preliminary overview of AI, including machine learning algorithms and deep learning architectures, and its role in the metaverse. We then convey a comprehensive investigation of AI-based methods concerning six technical aspects with potential for the metaverse: natural language processing, machine vision, blockchain, networking, digital twin, and neural interface. Subsequently, several AI-aided applications, such as healthcare, manufacturing, smart cities, and gaming, are studied for deployment in virtual worlds. Finally, we conclude with the key contributions of this survey and open some future research directions in AI for the metaverse.
508 citations
en
Computer Science
Artificial Intelligence and Jobs: Evidence from Online Vacancies
D. Acemoglu, David H. Autor, J. Hazell
et al.
We study the impact of artificial intelligence (AI) on labor markets using establishment-level data on the near universe of online vacancies in the United States from 2010 onward. There is rapid growth in AI-related vacancies over 2010–18 that is driven by establishments whose workers engage in tasks compatible with AI’s current capabilities. As these AI-exposed establishments adopt AI, they simultaneously reduce hiring in non-AI positions and change the skill requirements of remaining postings. While visible at the establishment level, the aggregate impacts of AI-labor substitution on employment and wage growth in more exposed occupations and industries are currently too small to be detectable.
Principles of Artificial Intelligence
N. Nilsson
4022 citations
en
Computer Science
Artificial Intelligence in Education: AIEd for Personalised Learning Pathways
Olga Tapalova, N. Zhiyenbayeva, D. Gura
Artificial intelligence is a driving force of change focusing on the needs and demands of the student. The research explores Artificial Intelligence in Education (AIEd) for building personalised learning systems for students. The research investigates and proposes a framework for AIEd: social networking sites and chatbots, expert systems for education, intelligent mentors and agents, machine learning, personalised educational systems and virtual educational environments. These technologies help educators develop and introduce personalised approaches to mastering new knowledge and developing professional competencies. The research presents a case study of AIEd implementation in education. The scholars conducted the experiment in educational establishments using artificial intelligence in the curriculum, surveying 184 second-year students of the Institute of Pedagogy and Psychology at the Abay Kazakh National Pedagogical University and the Kuban State Technological University to collect the data, and considered the collective group discussions regarding the application of artificial intelligence in education to improve the effectiveness of learning. The research identified key advantages of creating personalised learning pathways, such as access to training in 24/7 mode, training in virtual contexts, adaptation of educational content to the personal needs of students, real-time and regular feedback, improvements in the educational process and mental stimulation. The proposed education paradigm reflects the increasing role of artificial intelligence in socio-economic life, the social and ethical concerns artificial intelligence may pose to humanity and its role in the digitalisation of education. The current article may be used as a theoretical framework by many educational institutions planning to exploit the capabilities of artificial intelligence in their adaptation to personalised learning.
Artificial intelligence and the changing sources of competitive advantage
Sebastian Krakowski, J. Luger, Sebastian Raisch
Research Summary: We apply a resource-based view to investigate how the adoption of Artificial Intelligence (AI) affects competitive capabilities and performance. Following prior work on using chess as a controlled setting for studying competitive interactions, we compare the same players’ capabilities and performance across conventional, centaur, and engine chess tournaments. Our analysis shows that AI adoption triggers interrelated substitution and complementation dynamics, which make humans’ traditional competitive capabilities obsolete, while creating new sources of persistent heterogeneity when humans interact with chess engines. These novel human-machine capabilities are unrelated, or even negatively related, to traditional capabilities. We contribute an integrated view of substitution and complementation, which identifies AI as the driver of these dynamics and explains how they jointly shift the sources of competitive advantage. Managerial Summary: AI-based technologies increasingly substitute and complement humans in managerial tasks such as decision making. We investigate how such change affects the sources of competitive advantage. AI-based engines’ adoption in chess allows us to investigate competitive capabilities and performance in human, AI, and hybrid settings. We find that neither humans nor AI
Ethical principles for artificial intelligence in K-12 education
C. Adams, Patti Pente, G. Lemermeyer
et al.
Advances in Artificial Intelligence in Education (AIED) are providing teachers with a wealth of new tools and smart services to facilitate student learning. Meanwhile, growing public concern over the potentially harmful societal effects of AI has prompted the publication of a flurry of AI ethics guidelines and policy documents authored by national and international government agencies, academic consortia and industrial stakeholders. AI ethics policy guidance specific to children and K-12 education has lagged behind; this scene is swiftly changing. In this paper, we examine the ethical principles currently informing AI ethics policy development for children and K-12 education. To accomplish this, we located four recent and globally relevant Artificial Intelligence in K-12 Education (AIEdK-12) ethics guideline statements; we then performed a content analysis of these documents using eleven AI ethics principles identified by Jobin et al. (2019). We found that these AIEdK-12 ethics guidelines employed many of the core principles already employed in non-AIEdK-12 documents — Transparency; Justice and Fairness; Non-maleficence; Responsibility; Privacy; Beneficence; Freedom & Autonomy — and were sometimes adapted for children. We further identified four new ethical principles being employed that are unique to K-12 education, specifically: Pedagogical Appropriateness; Children’s Rights; AI Literacy; and Teacher Well-being. Our analysis also calls for a decolonized “humanized posthuman” ethic able to address the intensifying human-AI collaborative environment in classrooms, and able to
161 citations
en
Computer Science
Artificial intelligence and increasing misinformation
S. Monteith, T. Glenn, J. Geddes
et al.
Summary With the recent advances in artificial intelligence (AI), patients are increasingly exposed to misleading medical information. Generative AI models, including large language models such as ChatGPT, create and modify text, images, audio and video information based on training data. Commercial use of generative AI is expanding rapidly and the public will routinely receive messages created by generative AI. However, generative AI models may be unreliable, routinely make errors and widely spread misinformation. Misinformation created by generative AI about mental illness may include factual errors, nonsense, fabricated sources and dangerous advice. Psychiatrists need to recognise that patients may receive misinformation online, including about medicine and psychiatry.
Artificial intelligence in dentistry—A review
H. Ding, Jiamin Wu, Wuyuan Zhao
et al.
Artificial Intelligence (AI) is the ability of machines to perform tasks that normally require human intelligence. AI is not a new term; the concept can be dated back to 1950. However, it did not become a practical tool until two decades ago. Owing to the rapid development of the three cornerstones of current AI technology — big data (coming through digital devices), computational power, and AI algorithms — over the past two decades, AI applications have started to provide convenience in people's lives. In dentistry, AI has been adopted in all dental disciplines, i.e., operative dentistry, periodontics, orthodontics, oral and maxillofacial surgery, and prosthodontics. The majority of AI applications in dentistry involve diagnosis based on radiographic or optical images, while other tasks are less amenable than image-based tasks, mainly due to the constraints of data availability, data uniformity, and computational power for handling 3D data. Evidence-based dentistry (EBD) is regarded as the gold standard for the decision-making of dental professionals, while AI machine learning (ML) models learn from human expertise. ML can be seen as another valuable tool to assist dental professionals in multiple stages of clinical cases. This review narrates the history and classification of AI, summarises AI applications in dentistry, discusses the relationship between EBD and ML, and aims to help dental professionals understand AI as a tool to better assist their routine work with improved efficiency.
Artificial Intelligence Enabled Project Management: A Systematic Literature Review
Ianire Taboada, Abouzar Daneshpajouh, N. Toledo
et al.
In the Industry 5.0 era, companies are leveraging the potential of cutting-edge technologies such as artificial intelligence for more efficient and green human-centric production. In a similar vein, project management would benefit from artificial intelligence in order to achieve project goals by improving project performance and, consequently, reaching greater sustainable success. In this context, this paper examines the role of artificial intelligence in emerging project management through a systematic literature review; the applications of AI techniques in the project management performance domains are presented. The results show that the number of influential publications on artificial intelligence-enabled project management has increased significantly over the last decade. The findings indicate that artificial intelligence, predominantly machine learning, can be considerably useful in the management of construction and IT projects; it is notably encouraging for enhancing the planning, measurement, and uncertainty performance domains by providing promising forecasting and decision-making capabilities.