M. Komorowski, L. Celi, O. Badawi et al.
Results for "artificial intelligence"
Showing 20 of ~3,561,963 results · from CrossRef, DOAJ, Semantic Scholar
Marium Malik, M. Tariq, Maira Kamran et al.
D. Hassabis, D. Kumaran, C. Summerfield et al.
Kirtan Jha, Aalap Doshi, Pooja Patel et al.
Abstract Agriculture automation is a major concern and an emerging subject for every country. The world population is growing rapidly, and with it the demand for food rises briskly. Traditional farming methods cannot keep pace with this demand, so farmers harm the soil through intensified use of harmful pesticides. This degrades agricultural practice and ultimately leaves the land barren and infertile. This paper discusses automation techniques such as IoT, wireless communication, machine learning, artificial intelligence, and deep learning. Several problems afflict the agricultural field, including crop diseases, poor storage management, pesticide control, weed management, and inadequate irrigation and water management, and all of these can be addressed by the techniques mentioned above. Today there is an urgent need to tackle issues such as harmful pesticide use, controlled irrigation, pollution control, and environmental effects on agricultural practice. Automating farming practices has been shown to increase the yield from the soil and to strengthen soil fertility. This paper surveys the work of many researchers to give a brief overview of the current implementation of automation in agriculture. The paper also discusses a proposed system that can be deployed in a botanical farm for flower and leaf identification and watering using IoT.
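The IoT-based watering component this survey proposes can be illustrated with a minimal decision sketch. The moisture threshold, the scaling rule, and the sensor reading below are illustrative assumptions, not values taken from the paper:

```python
# Minimal sketch of IoT watering logic: read a soil-moisture value and
# decide whether, and roughly how long, to irrigate. Thresholds and the
# duration rule are invented for illustration.

MOISTURE_THRESHOLD = 30.0  # percent volumetric water content (assumed)

def should_water(moisture_percent: float,
                 threshold: float = MOISTURE_THRESHOLD) -> bool:
    """Return True when the soil is drier than the threshold."""
    return moisture_percent < threshold

def watering_duration(moisture_percent: float,
                      threshold: float = MOISTURE_THRESHOLD,
                      seconds_per_point: float = 2.0) -> float:
    """Scale pump run time with the moisture deficit (illustrative rule)."""
    deficit = max(0.0, threshold - moisture_percent)
    return deficit * seconds_per_point

# Example: a reading of 22% is below the 30% threshold, so the pump runs
# for (30 - 22) * 2 = 16 seconds under these assumed constants.
reading = 22.0
if should_water(reading):
    print(f"water for {watering_duration(reading):.0f}s")
```

A real deployment would replace the hard-coded reading with a sensor poll and publish the decision over a protocol such as MQTT; the threshold logic itself stays this simple.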
R. Challen, J. Denny, M. Pitt et al.
In medicine, artificial intelligence (AI) research is becoming increasingly focused on applying machine learning (ML) techniques to complex problems, and so allowing computers to make predictions from large amounts of patient data, by learning their own associations.1 Estimates of the impact of AI on the wider economy globally vary wildly, with a recent report suggesting a 14% effect on global gross domestic product by 2030, half of which would come from productivity improvements.2 These predictions create political appetite for the rapid development of the AI industry,3 and healthcare is a priority area where this technology has yet to be exploited.2 3 The digital health revolution described by Duggal et al 4 is already in full swing with the potential to ‘disrupt’ healthcare. Health AI research has demonstrated some impressive results,5–10 but its clinical value has not yet been realised, hindered partly by the lack of a clear understanding of how to quantify benefit or ensure patient safety, and by increasing concerns about the ethical and medico-legal impact.11 This analysis is written with the dual aim of helping clinical safety professionals to critically appraise current medical AI research from a quality and safety perspective, and of supporting research and development in AI by highlighting some of the clinical safety questions that must be considered if medical application of these exciting technologies is to be successful. Clinical decision support systems (DSS) are in widespread use in medicine and have had the most impact in providing guidance on the safe prescription of medicines,12 guideline adherence, simple risk screening13 or prognostic scoring.14 These systems use predefined rules, which have predictable behaviour and are usually shown to reduce clinical error,12 although sometimes inadvertently introduce safety issues themselves.15 16 Rules-based systems have also been developed to address diagnostic uncertainty17–19 …
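The predictable, predefined-rule behaviour this analysis attributes to conventional decision support can be sketched as a simple rule table. The drug names, thresholds, and patient record here are invented for illustration only, not clinical guidance:

```python
# Sketch of a rules-based prescribing check of the kind the analysis
# contrasts with ML systems: behaviour is fully determined by predefined
# rules, so it is predictable and auditable. All drug names and limits
# are hypothetical.

RULES = [
    # (predicate over the patient record, warning text)
    (lambda p: p["egfr"] < 30 and "drug_x" in p["prescriptions"],
     "drug_x contraindicated in severe renal impairment"),
    (lambda p: p["age"] >= 65 and p["dose_mg"] > 50,
     "dose exceeds recommended maximum for patients over 65"),
]

def check_prescription(patient: dict) -> list[str]:
    """Return every warning whose rule fires for this patient record."""
    return [warning for rule, warning in RULES if rule(patient)]

patient = {"age": 70, "egfr": 25, "prescriptions": {"drug_x"}, "dose_mg": 60}
print(check_prescription(patient))  # both rules fire for this record
```

Unlike a learned model, every alert such a system raises can be traced to one explicit rule, which is exactly the predictability property the text highlights.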
Shuiguang Deng, Hailiang Zhao, Jianwei Yin et al.
Along with the rapid developments in communication technologies and the surge in the use of mobile devices, a brand-new computation paradigm, edge computing, is surging in popularity. Meanwhile, artificial intelligence (AI) applications are thriving with the breakthroughs in deep learning and the many improvements in hardware architectures. Billions of data bytes, generated at the network edge, put massive demands on data processing and structural optimization. Thus, there exists a strong demand to integrate edge computing and AI, which gives birth to edge intelligence. In this article, we divide edge intelligence into AI for edge (intelligence-enabled edge computing) and AI on edge (carrying out AI model training and inference on edge devices). The former focuses on providing more optimal solutions to key problems in edge computing with the help of popular and effective AI technologies, while the latter studies how to carry out the entire process of building AI models, i.e., model training and inference, on the edge. This article provides insights into this new interdisciplinary field from a broader perspective. It discusses the core concepts and the research roadmap, which should provide the necessary background for potential future research initiatives in edge intelligence.
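One of the "key problems in edge computing" the article refers to is computation offloading: deciding whether a task should run on the device or on an edge server. A toy latency comparison illustrates the trade-off; all the cycle counts, bandwidths, and clock rates below are invented figures:

```python
# Toy computation-offloading decision: execute locally, or pay the
# transmission cost to run on a faster edge server? All numbers are
# assumptions chosen for illustration.

def local_latency(cycles: float, cpu_hz: float) -> float:
    """Time to execute the task on the device itself."""
    return cycles / cpu_hz

def offload_latency(data_bits: float, bandwidth_bps: float,
                    cycles: float, edge_cpu_hz: float) -> float:
    """Transmission time plus execution time on the edge server."""
    return data_bits / bandwidth_bps + cycles / edge_cpu_hz

def should_offload(cycles=2e9, data_bits=8e6, device_hz=1e9,
                   edge_hz=10e9, bandwidth_bps=20e6) -> bool:
    return offload_latency(data_bits, bandwidth_bps, cycles, edge_hz) \
        < local_latency(cycles, device_hz)

# local: 2e9 / 1e9 = 2.0 s; offload: 8e6 / 20e6 + 2e9 / 10e9 = 0.6 s
print(should_offload())  # → True: offloading wins under these numbers
```

The "AI for edge" direction the article describes replaces such hand-written comparisons with learned policies that handle fluctuating bandwidth and server load.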
Xin Yang, Yifei Wang, Ryan Byrne et al.
Artificial intelligence (AI), and, in particular, deep learning as a subcategory of AI, provides opportunities for the discovery and development of innovative drugs. Various machine learning approaches have recently (re)emerged, some of which may be considered instances of domain-specific AI which have been successfully employed for drug discovery and design. This review provides a comprehensive portrayal of these machine learning techniques and of their applications in medicinal chemistry. After introducing the basic principles, alongside some application notes, of the various machine learning algorithms, the current state-of-the art of AI-assisted pharmaceutical discovery is discussed, including applications in structure- and ligand-based virtual screening, de novo drug design, physicochemical and pharmacokinetic property prediction, drug repurposing, and related aspects. Finally, several challenges and limitations of the current methods are summarized, with a view to potential future directions for AI-assisted drug discovery and design.
S. Graham, C. Depp, Ellen E. Lee et al.
Kit-Kay Mak, M. Pichika
Artificial intelligence (AI) uses personified knowledge and learns from the solutions it produces to address not only specific but also complex problems. Remarkable improvements in computational power, coupled with advances in AI technology, could be utilised to revolutionise the drug development process. At present, the pharmaceutical industry is struggling to sustain its drug development programmes because of increased R&D costs and reduced efficiency. In this review, we discuss the major causes of attrition in new drug approvals, the ways AI could improve the efficiency of the drug development process, and the collaborations between pharmaceutical industry giants and AI-powered drug-discovery firms.
P. Schneider, W. Walters, A. Plowright et al.
Onur Asan, A. E. Bayrak, Avishek Choudhury
Artificial intelligence (AI) can transform health care practices with its increasing ability to translate the uncertainty and complexity in data into actionable—though imperfect—clinical decisions or suggestions. In the evolving relationship between humans and AI, trust is the one mechanism that shapes clinicians’ use and adoption of AI. Trust is a psychological mechanism to deal with the uncertainty between what is known and unknown. Several research studies have highlighted the need for improving AI-based systems and enhancing their capabilities to help clinicians. However, assessing the magnitude and impact of human trust on AI technology demands substantial attention. Will a clinician trust an AI-based system? What are the factors that influence human trust in AI? Can trust in AI be optimized to improve decision-making processes? In this paper, we focus on clinicians as the primary users of AI systems in health care and present factors shaping trust between clinicians and AI. We highlight critical challenges related to trust that should be considered during the development of any AI system for clinical use.
Y. Shrestha, Shiko M. Ben-Menahem, G. von Krogh
How does organizational decision-making change with the advent of artificial intelligence (AI)-based decision-making algorithms? This article identifies the idiosyncrasies of human and AI-based decision making along five key contingency factors: specificity of the decision search space, interpretability of the decision-making process and outcome, size of the alternative set, decision-making speed, and replicability. Based on a comparison of human and AI-based decision making along these dimensions, the article builds a novel framework outlining how both modes of decision making may be combined to optimally benefit the quality of organizational decision making. The framework presents three structural categories in which decisions of organizational members can be combined with AI-based decisions: full human to AI delegation; hybrid—human-to-AI and AI-to-human—sequential decision making; and aggregated human–AI decision making.
M. Frank, David Autor, James Bessen et al.
Rapid advances in artificial intelligence (AI) and automation technologies have the potential to significantly disrupt labor markets. While AI and automation can augment the productivity of some workers, they can replace the work done by others and will likely transform almost all occupations at least to some degree. Rising automation is happening in a period of growing economic inequality, raising fears of mass technological unemployment and a renewed call for policy efforts to address the consequences of technological change. In this paper we discuss the barriers that inhibit scientists from measuring the effects of AI and automation on the future of work. These barriers include the lack of high-quality data about the nature of work (e.g., the dynamic requirements of occupations), lack of empirically informed models of key microlevel processes (e.g., skill substitution and human–machine complementarity), and insufficient understanding of how cognitive technologies interact with broader economic dynamics and institutional mechanisms (e.g., urban migration and international trade policy). Overcoming these barriers requires improvements in the longitudinal and spatial resolution of data, as well as refinements to data on workplace skills. These improvements will enable multidisciplinary research to quantitatively monitor and predict the complex evolution of work in tandem with technological progress. Finally, given the fundamental uncertainty in predicting technological change, we recommend developing a decision framework that focuses on resilience to unexpected scenarios in addition to general equilibrium behavior.
E. Bonabeau, M. Dorigo, G. Theraulaz
M. Negnevitsky
S. Harrer, Pratik Shah, B. Antony et al.
Clinical trials consume the latter half of the 10-to-15-year, 1.5-2.0 billion USD development cycle for bringing a single new drug to market. A failed trial therefore sinks not only the investment in the trial itself but also the preclinical development costs, putting the loss per failed clinical trial at 800 million to 1.4 billion USD. Suboptimal patient cohort selection and recruitment techniques, paired with the inability to monitor patients effectively during trials, are two of the main causes of high trial failure rates: only one in 10 compounds entering a clinical trial reaches the market. We explain how recent advances in artificial intelligence (AI) can be used to reshape key steps of clinical trial design towards increasing trial success rates.
Qian Zhang, Jie Lu, Yaochu Jin
Recommender systems provide personalized service support to users by learning their previous behaviors and predicting their current preferences for particular products. Artificial intelligence (AI), particularly computational intelligence and machine learning methods and algorithms, has been naturally applied in the development of recommender systems to improve prediction accuracy and solve data sparsity and cold start problems. This position paper systematically discusses the basic methodologies and prevailing techniques in recommender systems and how AI can effectively improve the technological development and application of recommender systems. The paper not only reviews cutting-edge theoretical and practical contributions, but also identifies current research issues and indicates new research directions. It carefully surveys various issues related to recommender systems that use AI, and also reviews the improvements made to these systems through the use of such AI approaches as fuzzy techniques, transfer learning, genetic algorithms, evolutionary algorithms, neural networks and deep learning, and active learning. The observations in this paper will directly support researchers and professionals to better understand current developments and new directions in the field of recommender systems using AI.
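The baseline the paper's AI techniques (fuzzy methods, transfer learning, deep learning, active learning, ...) build upon is classic collaborative filtering: predict a user's preference from the ratings of similar users. A minimal user-based sketch, on invented toy ratings:

```python
import math

# Minimal user-based collaborative filtering: predict an unseen rating
# as a similarity-weighted average of other users' ratings. The ratings
# dictionary is toy data invented for illustration.

ratings = {
    "alice": {"film_a": 5, "film_b": 3, "film_c": 4},
    "bob":   {"film_a": 4, "film_b": 3, "film_c": 5, "film_d": 4},
    "carol": {"film_a": 1, "film_b": 5, "film_d": 2},
}

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity over the items both users have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(user: str, item: str) -> float:
    """Similarity-weighted average of other users' ratings for item."""
    num = den = 0.0
    for other, their_ratings in ratings.items():
        if other == user or item not in their_ratings:
            continue
        sim = cosine(ratings[user], ratings[other])
        num += sim * their_ratings[item]
        den += abs(sim)
    return num / den if den else 0.0

# Alice has not rated film_d; estimate it from bob (very similar to her)
# and carol (less similar), so the prediction lands nearer bob's 4.
print(round(predict("alice", "film_d"), 2))
```

The sparsity and cold-start problems the paper highlights are visible even here: `predict` returns 0.0 whenever no similar user has rated the item, which is exactly where the surveyed AI approaches step in.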
M. Busuioc
Abstract Artificial intelligence (AI) algorithms govern in subtle yet fundamental ways the way we live and are transforming our societies. The promise of efficient, low‐cost, or “neutral” solutions harnessing the potential of big data has led public bodies to adopt algorithmic systems in the provision of public services. As AI algorithms have permeated high‐stakes aspects of our public existence—from hiring and education decisions to the governmental use of enforcement powers (policing) or liberty‐restricting decisions (bail and sentencing)—this necessarily raises important accountability questions: What accountability challenges do AI algorithmic systems bring with them, and how can we safeguard accountability in algorithmic decision‐making? Drawing on a decidedly public administration perspective, and given the current challenges that have thus far become manifest in the field, we critically reflect on and map out in a conceptually guided manner the implications of these systems, and the limitations they pose, for public accountability.
Wenjuan Sun, P. Bocchini, Brian D. Davison