Sebastian Raisch, Sebastian Krakowski
Taking three recent business books on artificial intelligence (AI) as a starting point, we explore the automation and augmentation concepts in the management domain. Whereas automation implies that...
Ricardo Vinuesa, Hossein Azizpour, Iolanda Leite et al.
The emergence of artificial intelligence (AI) and its progressively wider impact on many sectors requires an assessment of its effect on the achievement of the Sustainable Development Goals. Using a consensus-based expert elicitation process, we find that AI can enable the accomplishment of 134 targets across all the goals, but it may also inhibit 59 targets. However, current research foci overlook important aspects. The fast development of AI needs to be supported by the necessary regulatory insight and oversight for AI-based technologies to enable sustainable development. Failure to do so could result in gaps in transparency, safety, and ethical standards. Artificial intelligence (AI) is becoming more and more common in people’s lives. Here, the authors use an expert elicitation method to understand how AI may affect the achievement of the Sustainable Development Goals.
Nikhil Sharma, Regina Wang
Xuesong Zhai, Xiaoyan Chu, C. Chai et al.
This study provided a content analysis of studies aiming to disclose how artificial intelligence (AI) has been applied to the education sector and to explore the potential research trends and challenges of AI in education. A total of 100 papers, including 63 empirical papers (74 studies) and 37 analytic papers, were selected from the education and educational research category of the Social Sciences Citation Index database from 2010 to 2020. The content analysis showed that the research questions could be classified into a development layer (classification, matching, recommendation, and deep learning), an application layer (feedback, reasoning, and adaptive learning), and an integration layer (affection computing, role-playing, immersive learning, and gamification). Moreover, four research trends, including Internet of Things, swarm intelligence, deep learning, and neuroscience, as well as an assessment of AI in education, were suggested for further investigation. However, we also noted that AI may pose challenges in education with regard to the inappropriate use of AI techniques, the changing roles of teachers and students, and social and ethical issues. The results provide an overview of how AI is used in the education domain, which helps to strengthen the theoretical foundation of AI in education and offers a promising channel for educators and AI engineers to carry out further collaborative research.
Mohsen Soori, B. Arezoo, Roza Dastres
Selin Akgun, Christine Greenhow
Artificial intelligence (AI) is a field of study that combines the applications of machine learning, algorithm productions, and natural language processing. Applications of AI transform the tools of education. AI has a variety of educational applications, such as personalized learning platforms to promote students’ learning, automated assessment systems to aid teachers, and facial recognition systems to generate insights about learners’ behaviors. Despite the potential benefits of AI to support students’ learning experiences and teachers’ practices, the ethical and societal drawbacks of these systems are rarely fully considered in K-12 educational contexts. The ethical challenges of AI in education must be identified and introduced to teachers and students. To address these issues, this paper (1) briefly defines AI through the concepts of machine learning and algorithms; (2) introduces applications of AI in educational settings and benefits of AI systems to support students’ learning processes; (3) describes ethical challenges and dilemmas of using AI in education; and (4) addresses the teaching and understanding of AI by providing recommended instructional resources from two providers—i.e., the Massachusetts Institute of Technology’s (MIT) Media Lab and Code.org. The article aims to help practitioners reap the benefits and navigate ethical challenges of integrating AI in K-12 classrooms, while also introducing instructional resources that teachers can use to advance K-12 students’ understanding of AI and ethics.
Kate Crawford
ATLAS OF AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford. New Haven, CT: Yale University Press, 2021. 336 pages. Hardcover; $28.00. ISBN: 9780300209570. *Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence is Kate Crawford's analysis of the state of the AI industry. A central idea of her book is the importance of redefining Artificial Intelligence (AI). She states, "I've argued that there is much at stake in how we define AI, what its boundaries are, and who determines them: it shapes what can be seen and contested" (p. 217). *My own definition of AI goes something like this: I imagine a future where I'm sitting in a cafe drinking coffee with my friends, but in this future, one of my friends is a robot, who like me is trying to make a living in this world. A future where humans and robots live in harmony. Crawford views this definition as mythological: "These mythologies are particularly strong in the field of artificial intelligence, where the belief that human intelligence can be formalized and reproduced by machines has been axiomatic since the mid-twentieth century" (p. 5). I do not know if my definition of artificial intelligence can come true, but I am enjoying the process of building, experimenting, and dreaming. *In her book, she asks me to consider that I may be unknowingly participating, as she states, in "a material product of colonialism, with its patterns of extraction, conflict, and environmental destruction" (p. 38). The book's subtitle illuminates the purpose of the book: specifically, the power, politics, and planetary costs of usurping artificial intelligence. Of course, this is not exactly Crawford's subtitle, and this is where I both agree and disagree with her. The book's subtitle is actually Power, Politics, and the Planetary Costs of Artificial Intelligence. In my opinion, AI is more the canary in the coal mine. We can use the canary to detect poisonous gases, but we cannot blame the canary for the gas; blaming the canary risks missing the point. Is AI itself to be feared? Should we no longer teach or learn AI? Or is this more about how we discern responsible use and direction for AI technology? *There is another author who speaks to similar issues. In Weapons of Math Destruction, Cathy O'Neil states it this way: "If we had been clear-headed, we all would have taken a step back at this point to figure out how math had been misused ... But instead ... new mathematical techniques were hotter than ever ... A computer program could speed through thousands of resumes or loan applications in a second or two and sort them into neat lists, with the most promising candidates on top" (p. 13). *Both Crawford and O'Neil point to human flaws that often lead to well-intentioned software developers creating code that results in unfair and discriminatory decisions. AI models encode unintended human biases and may not evaluate candidates as fairly as we would expect, yet there is a widespread notion that we can trust the algorithm. For example, the last time you registered an account on a website, did you click the checkbox confirming that "yes, I read the disclaimer" even though you did not? When we click "yes," we are accepting this disclaimer and placing trust in the software. Business owners place trust in software when they use it to make predictions. Engineers place trust in their algorithms when they write software without rigorous testing protocols. I am just as guilty.
*Crawford suggests that AI is often used in ways that are harmful. In the Atlas of AI, we are given a tour of how technology is damaging our world: strip mining, labor injustice, the misuse of personal data, and issues of state and power, to name a few of the concerns Crawford raises. The reality is that AI is built upon existing infrastructure. For example, Facebook, Instagram, YouTube, Amazon, and TikTok had been collecting our information for profit even before AI became important to them. The data centers, CPU houses, and worldwide network infrastructure were already in place to meet consumer demand and geopolitical needs. But it is true that AI brings new technologies to the table, such as automated face recognition, decision tools that compare prospective job applicants against diverse databases, and employee monitoring tools that can make automatic recommendations. Governments, militaries, and intelligence agencies have taken notice. As invasion of privacy and social justice concerns emerge, Crawford calls us to consider these issues carefully. *Reading Crawford's words pricked my conscience, convicting me to reconsider my erroneous ways. For big tech to exist, to supply what we demand, it needs resources. She walks us through the many resources the technology industry needs to provide what we want, and AI is the "new kid on the block." This book is not about AI, per se; it is instead about the side effects of poor business/research practices, opportunist behavior, power politics, and how these behaviors not only exploit our planet but also unjustly affect marginalized people. The AI industry is simply a new example of this reality: data mining, low wages to lower costs, foreign workers with fewer rights, strip mining, relying on coal and oil for electricity (although some tech companies have made strides to improve sustainability). This sounds more like a parable about the sins of the tech industry than a critique of the dangers of AI. *Could the machine learning community, like the inventors of dynamite who simply wanted to help railroads excavate tunnels, be unintentionally causing harm? Should we, as a community, be on the lookout for these potential harms? Do we have a moral responsibility? Maybe the technology sector needs to look more inwardly to ensure that process efficiency and cost savings are not elevated as most important. *I did not agree with everything that Crawford classified as AI, but I do agree that as a community we are responsible for our actions. If there are injustices, then this should be important to us. In particular, as people of faith, we should heed the call of Micah 6:8 to act justly in this world, and this includes how we use AI. *Reviewed by Joseph Vybihal, Professor of Computer Science, McGill University, Montreal, PQ H3A 0G4.
Andy Nguyen, H. Ngo, Yvonne Hong et al.
The advancement of artificial intelligence in education (AIED) has the potential to transform the educational landscape and influence the role of all involved stakeholders. In recent years, the applications of AIED have been gradually adopted to progress our understanding of students’ learning and enhance learning performance and experience. However, the adoption of AIED has led to increasing ethical risks and concerns regarding several aspects such as personal data and learner autonomy. Despite the recent announcement of guidelines for ethical and trustworthy AIED, the debate revolves around the key principles underpinning ethical AIED. This paper aims to explore whether there is a global consensus on ethical AIED by mapping and analyzing international organizations’ current policies and guidelines. In this paper, we first introduce the opportunities offered by AI in education and potential ethical issues. Then, thematic analysis was conducted to conceptualize and establish a set of ethical principles by examining and synthesizing relevant ethical policies and guidelines for AIED. We discuss each principle and associated implications for relevant educational stakeholders, including students, teachers, technology developers, policymakers, and institutional decision-makers. The proposed set of ethical principles is expected to serve as a framework to inform and guide educational stakeholders in the development and deployment of ethical and trustworthy AIED as well as catalyze future development of related impact studies in the field.
C. Haug, J. Drazen
Markus Langer, Daniel Oster, Timo Speith et al.
Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these stakeholders' desiderata) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata. We provide a model that explicitly spells out the main concepts and relations necessary to consider and investigate when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders' desiderata. This model can serve researchers from the variety of different disciplines involved in XAI as a common ground. It emphasizes where there is interdisciplinary potential in the evaluation and the development of explainability approaches.
Feridun Kaya, F. Aydın, A. Schepman et al.
Abstract The present study adapted the General Attitudes toward Artificial Intelligence Scale (GAAIS) to Turkish and investigated the impact of personality traits, artificial intelligence anxiety, and demographics on attitudes toward artificial intelligence. The sample consisted of 259 female (74%) and 91 male (26%) individuals aged between 18 and 51 (Mean = 24.23). Measures taken were demographics, the Ten-Item Personality Inventory, the Artificial Intelligence Anxiety Scale, and the General Attitudes toward Artificial Intelligence Scale. The Turkish GAAIS had good validity and reliability. Hierarchical Multiple Linear Regression Analyses showed that positive attitudes toward artificial intelligence were significantly predicted by the level of computer use (β = 0.139, p = 0.013), level of knowledge about artificial intelligence (β = 0.119, p = 0.029), and AI learning anxiety (β = −0.172, p = 0.004). Negative attitudes toward artificial intelligence were significantly predicted by agreeableness (β = 0.120, p = 0.019), AI configuration anxiety (β = −0.379, p < 0.001), and AI learning anxiety (β = −0.211, p < 0.001). Personality traits, AI anxiety, and demographics play important roles in attitudes toward AI. Results are discussed in light of the previous research and theoretical explanations.
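For readers unfamiliar with the hierarchical regression approach reported above, the sketch below shows the general pattern of entering predictor blocks stepwise and comparing R-squared across steps. The data are simulated and the variable names are illustrative assumptions; nothing here reproduces the study's actual dataset or results.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 350
df = pd.DataFrame({
    "computer_use": rng.normal(size=n),
    "ai_knowledge": rng.normal(size=n),
    "ai_learning_anxiety": rng.normal(size=n),
})
# Simulated outcome loosely echoing the reported direction of effects.
df["positive_attitude"] = (0.2 * df["computer_use"] + 0.15 * df["ai_knowledge"]
                           - 0.2 * df["ai_learning_anxiety"] + rng.normal(size=n))

# Step 1: enter the usage/knowledge block only.
m1 = sm.OLS(df["positive_attitude"],
            sm.add_constant(df[["computer_use", "ai_knowledge"]])).fit()
# Step 2: add the anxiety block; the change in R-squared is its incremental contribution.
m2 = sm.OLS(df["positive_attitude"],
            sm.add_constant(df[["computer_use", "ai_knowledge", "ai_learning_anxiety"]])).fit()

print(f"R2 step 1: {m1.rsquared:.3f}  R2 step 2: {m2.rsquared:.3f}  delta: {m2.rsquared - m1.rsquared:.3f}")
print(m2.params.round(3))  # unstandardized coefficients for the full model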
Yaganteeswarudu Akkem, S. K. Biswas, Aruna Varanasi
Zhihan Lv
The advent of the metaverse presents a paradigm shift in how we interact with digital environments. Generative AI techniques offer immense potential for enriching these virtual worlds by autonomously creating diverse and immersive content. This research paper proposes a comprehensive methodology for leveraging generative AI in the metaverse. We explore various techniques such as generative adversarial networks (GANs), variational autoencoders (VAEs), and reinforcement learning (RL) to generate virtual environments, characters, objects, textures, and narratives. Our methodology encompasses data collection, preprocessing, model training, evaluation, and integration into metaverse platforms. We also discuss ethical considerations and potential challenges associated with deploying generative AI in the metaverse.
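As a concrete illustration of one of the techniques listed in this abstract, the following is a minimal GAN training loop in PyTorch. The toy two-dimensional data, network sizes, and hyperparameters are illustrative assumptions and are not taken from the paper; a production metaverse pipeline would operate on far richer content representations.

import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "real" data: noisy points on a ring, standing in for any content distribution.
def sample_real(n):
    angles = torch.rand(n) * 2 * math.pi
    return torch.stack([angles.cos(), angles.sin()], dim=1) + 0.05 * torch.randn(n, 2)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # generator: noise -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator: sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real, noise = sample_real(64), torch.randn(64, 8)
    fake = G(noise)

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator label generated samples as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(5, 8)).detach())  # five generated samples after training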
M. Simion, Christoph Kelp
This paper develops an account of trustworthy AI. Its central idea is that whether AIs are trustworthy is a matter of whether they live up to their function-based obligations. We argue that this account serves to advance the literature in a couple of important ways. First, it serves to provide a rationale for why a range of properties that are widely assumed in the scientific literature, as well as in policy, to be required of trustworthy AI, such as safety, justice, and explainability, are properties (often) instantiated by trustworthy AI. Second, we connect the discussion on trustworthy AI in policy, industry, and the sciences with the philosophical discussion of trustworthiness. We argue that extant accounts of trustworthiness in the philosophy literature cannot make proper sense of trustworthy AI and that our account compares favourably with its competitors on this front.
Lin Chen, Zhonghao Chen, Yubing Zhang et al.
Climate change is a major threat already causing system damage to urban and natural systems, and inducing global economic losses of over $500 billion. These issues may be partly solved by artificial intelligence because artificial intelligence integrates internet resources to make prompt suggestions based on accurate climate change predictions. Here we review recent research and applications of artificial intelligence in mitigating the adverse effects of climate change, with a focus on energy efficiency, carbon sequestration and storage, weather and renewable energy forecasting, grid management, building design, transportation, precision agriculture, industrial processes, reducing deforestation, and resilient cities. We found that enhancing energy efficiency can significantly contribute to reducing the impact of climate change. Smart manufacturing can reduce energy consumption, waste, and carbon emissions by 30–50% and, in particular, can reduce energy consumption in buildings by 30–50%. About 70% of the global natural gas industry utilizes artificial intelligence technologies to enhance the accuracy and reliability of weather forecasts. Combining smart grids with artificial intelligence can optimize the efficiency of power systems, thereby reducing electricity bills by 10–20%. Intelligent transportation systems can reduce carbon dioxide emissions by approximately 60%. Moreover, the management of natural resources and the design of resilient cities through the application of artificial intelligence can further promote sustainability.
T. Bradshaw, Zachary Huemann, Junjie Hu et al.
Artificial intelligence (AI) is being increasingly used to automate and improve technologies within the field of medical imaging. A critical step in the development of an AI algorithm is estimating its prediction error through cross-validation (CV). The use of CV can help prevent overoptimism in AI algorithms and can mitigate certain biases associated with hyperparameter tuning and algorithm selection. This article introduces the principles of CV and provides a practical guide on the use of CV for AI algorithm development in medical imaging. Different CV techniques are described, as well as their advantages and disadvantages under different scenarios. Common pitfalls in prediction error estimation and guidance on how to avoid them are also discussed. Keywords: Education, Research Design, Technical Aspects, Statistics, Supervised Learning, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA, 2023.
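The cross-validation workflow the article describes can be summarized in a short sketch. The example below uses scikit-learn with a stand-in tabular dataset; the dataset, model, and metric are assumptions chosen for illustration rather than anything prescribed by the article, which targets medical imaging data.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stand-in tabular dataset; in the article's setting this would be imaging-derived features.
X, y = load_breast_cancer(return_X_y=True)

# Stratified folds keep class proportions comparable across splits.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)

# Each fold is held out once; the mean across folds estimates prediction performance
# on unseen data and guards against the overoptimism of evaluating on training data.
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print("AUC per fold:", np.round(scores, 3))
print(f"Estimated generalization AUC: {scores.mean():.3f} +/- {scores.std():.3f}")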
Jun Zhu, Hang Su, Bo Zhang
Changhao Xu, Samuel A. Solomon, Wei Gao
C. Adams, Patti Pente, G. Lemermeyer et al.
Advances in Artificial Intelligence in Education (AIED) are providing teachers with a wealth of new tools and smart services to facilitate student learning. Meanwhile, growing public concern over the potentially harmful societal effects of AI has prompted the publication of a flurry of AI ethics guidelines and policy documents authored by national and international government agencies, academic consortia, and industrial stakeholders. AI ethics policy guidance specific to children and K-12 education has lagged behind; this scene is swiftly changing. In this paper, we examine the ethical principles currently informing AI ethics policy development for children and K-12 education. To accomplish this, we located four recent and globally relevant Artificial Intelligence in K-12 Education (AIEdK-12) ethics guideline statements; we then performed a content analysis of these documents using eleven AI ethics principles identified by Jobin et al. (2019). We found that these AIEdK-12 ethics guidelines employed many of the core principles already found in non-AIEdK-12 documents (Transparency; Justice and Fairness; Non-maleficence; Responsibility; Privacy; Beneficence; Freedom & Autonomy), sometimes adapted for children. We further identified four new ethical principles unique to K-12 education, specifically: Pedagogical Appropriateness; Children's Rights; AI Literacy; and Teacher Well-being. Our analysis also calls for a decolonized “humanized posthuman” ethic able to address the intensifying human-AI collaborative environment in classrooms, and able to
Gilles E. Gignac, Eva T. Szodorai
Achieving a widely accepted definition of human intelligence has been challenging, a situation mirrored by the diverse definitions of artificial intelligence in computer science. By critically examining published definitions, highlighting both consistencies and inconsistencies, this paper proposes a refined nomenclature that harmonizes conceptualizations across the two disciplines. Abstract and operational definitions for human and artificial intelligence are proposed that emphasize maximal capacity for completing novel goals successfully through respective perceptual-cognitive and computational processes. Additionally, support for considering intelligence, both human and artificial, as consistent with a multidimensional model of capabilities is provided. The implications of current practices in artificial intelligence training and testing are also described, as they can be expected to lead to artificial achievement or expertise rather than artificial intelligence. Paralleling psychometrics, ‘AI metrics’ is suggested as a needed computer science discipline that acknowledges the importance of test reliability and validity, as well as standardized measurement procedures in artificial system evaluations. Drawing parallels with human general intelligence, artificial general intelligence (AGI) is described as a reflection of the shared variance in artificial system performances. We conclude that current evidence more greatly supports the observation of artificial achievement and expertise over artificial intelligence. However, interdisciplinary collaborations, based on common understandings of the nature of intelligence, as well as sound measurement practices, could facilitate scientific innovations that help bridge the gap between artificial and human-like intelligence.
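The "shared variance" framing of AGI mentioned above parallels the psychometric general factor. A minimal numerical sketch, using simulated performance scores rather than any data from the paper, shows how the first factor of a system-by-task score matrix captures that shared variance.

import numpy as np

rng = np.random.default_rng(1)
n_systems, n_tasks = 50, 8

# Simulated scores: each system has a latent general ability that every task partly reflects.
general_ability = rng.normal(size=(n_systems, 1))
loadings = rng.uniform(0.4, 0.8, size=(1, n_tasks))
scores = general_ability @ loadings + 0.6 * rng.normal(size=(n_systems, n_tasks))

# The leading eigenvalue of the task correlation matrix measures the shared variance,
# i.e. how much of performance across tasks a single general factor accounts for.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)            # eigenvalues in ascending order
print(f"Share of variance captured by the first (general) factor: {eigvals[-1] / eigvals.sum():.2f}")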