Anthony A. Bibus
Results for "Ethics"
Showing 20 of ~997738 results · from DOAJ, Semantic Scholar, CrossRef
T. Hueber
M. Fascia
“Business” has two meanings. A “business” is an entity that offers a good or service for sale, typically with the goal of making a profit. Wal-Mart and Toyota are businesses. “Business” can also mean the activity of exchange. An individual does business with Toyota when she exchanges some of her money for one of its cars. So “business ethics” includes the study of the ethics of the entities that offer (and often produce) goods and services for sale, as well as the ethics of exchange and activities connected with exchange (e.g., advertising). Philosophers have long been interested in these subjects. Aristotle worried about the effects of commerce on character, while Aquinas wrote on profit and prices. Smith and Marx thought deeply about the organization of the process of production. Business ethics in its current incarnation traces its roots to the 1970s and 1980s, when a group of moral philosophers applied ethical theories to business activity. A number of business ethics journals were created around this time, and business ethics became a familiar course in philosophy departments. Common topics of inquiry were and continue to be the purpose of the firm, corporate governance, corporate moral agency, rights and duties at work, fairness in pay and pricing, the limits of markets, marketing ethics, supply chain ethics, and corporate political activity. Not long after philosophers reinvigorated the field, social scientists entered it (and in fact had been working on related issues the whole time). They have increasingly pulled the field, and its academic courses, into business schools. This article concentrates on the philosophical or normative side of business ethics, but it also says something about the descriptive or social scientific side when they overlap.
J. Morley, C. Machado, C. Burr et al.
This article presents a mapping review of the literature concerning the ethics of artificial intelligence (AI) in health care. The goal of this review is to summarise current debates and identify open questions for future research. Five literature databases were searched to support the following research question: how can the primary ethical risks presented by AI-health be categorised, and what issues must policymakers, regulators and developers consider in order to be 'ethically mindful'? A series of screening stages were carried out (for example, removing articles that focused on digital health in general, such as data sharing, data access, data privacy, surveillance/nudging, consent, ownership of health data, and evidence of efficacy), yielding a total of 156 papers that were included in the review. We find that ethical issues can be (a) epistemic, related to misguided, inconclusive or inscrutable evidence; (b) normative, related to unfair outcomes and transformative effects; or (c) related to traceability. We further find that these ethical issues arise at six levels of abstraction: individual, interpersonal, group, institutional, sectoral, and societal. Finally, we outline a number of considerations for policymakers and regulators, mapping these to the existing literature, and categorising each as epistemic, normative or traceability-related and at the relevant level of abstraction. Our goal is to inform policymakers, regulators and developers of what they must consider if they are to enable health and care systems to capitalise on the dual advantage of ethical AI: maximising the opportunities to cut costs, improve care, and improve the efficiency of health and care systems, whilst proactively avoiding the potential harms. We argue that if action is not swiftly taken in this regard, a new 'AI winter' could occur due to chilling effects related to a loss of public trust in the benefits of AI for health care.
E. Durkheim
Preface by H.N. Kubali Preface to the Second Edition by Bryan S. Turner Introduction by Georges Davy 1. Professional Ethics, 2. Professional Ethics (continued), 3. Professional Ethics (End), 4. Civic Morals - Definition of the State, 5. Civic Morals (continued) - Relation of the State and the Individual, 6. Civic Morals (continued) - The State and the Individual - Patriotism, 7. Civic Morals (continued) - Form of the State - Democracy, 8. Civic Morals (continued) - Form of the State - Democracy, 9. Form of the State - Democracy, 10. Duties in General, Independent of any Social Grouping - Homicide, 11. The Rule Prohibiting Attacks on Property, 12. The Right of Property, 13. The Right of Property (continued), 14. The Right of Property (continued), 15. The Right of Contract, 16. Morals of Contractual Relations (continued), 17. The Right of Contract (end), 18. Morals of Contractual Relations (end), Index
J. Morley, L. Floridi, Libby Kinsey et al.
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741–742, 1960. https://doi.org/10.1126/science.132.3429.741; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles, the 'what' of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability), rather than on practices, the 'how.' Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, presents the initial findings, and provides a summary of future research needs.
M. Rothbard
In recent years, libertarian impulses have increasingly influenced national and economic debates, from welfare reform to efforts to curtail affirmative action. Murray N. Rothbard's classic The Ethics of Liberty stands as one of the most rigorous and philosophically sophisticated expositions of the libertarian political position. What distinguishes Rothbard's book is the manner in which it roots the case for freedom in the concept of natural rights and applies it to a host of practical problems. An economist by profession, Rothbard here proves himself equally at home with philosophy. And while his conclusions are radical (that a social order that strictly adheres to the rights of private property must exclude the institutionalized violence inherent in the state), his applications of libertarian principles prove surprisingly practical for a host of social dilemmas, solutions to which have eluded alternative traditions. The Ethics of Liberty authoritatively established the anarcho-capitalist economic system as the most viable and the only principled option for a social order based on freedom. This edition is newly indexed and includes a new introduction that takes special note of the Robert Nozick-Rothbard controversies.
Paulina Ochoa Espejo
M. Guillemin, L. Gillam
S. Hunt, Scott J. Vitell
B. Bass, Paul Steidlmeier
D. Madison
Acknowledgments 1. Introduction to Critical Ethnography: Theory and Method Positionality and Shades of Ethnography Dialogue and the Other The Method and Theory Nexus Summary Warm-Ups Suggested Readings 2. Methods: "Do I Really Need a Method?" A Method ... or Deep Hanging-Out "Who Am I?" Starting Where You Are "Who Else Has Written About My Topic?" Being a Part of an Interpretive Community The Power of Purpose: Bracketing Your Subject Preparing for the Field: The Research Design and Lay Summary Interviewing and Field Techniques Formulating Questions Extra Tips for Formulating Questions Attributes of the Interviewer and Building Rapport Coding and Logging Data Warm-Ups Suggested Readings 3. Three Stories: Case Studies in Critical Ethnography Case One: Local Activism in West Africa Case Two: Secrets of Sexuality and Personal Narrative Case Three: Community Theatre Conflicts and Organization Warm-Ups Suggested Readings 4. Ethics Defining Ethics Critical Ethnography and the Ethics of Reason, the Greater Good, and the Other Maria Lugones: Contemporary Ethics, Ethnography, and Loving Perception Warm-Ups Suggested Readings 5. Methods and Ethics Codes of Ethics for Fieldwork Extending the Codes Warm-Ups Suggested Readings 6. Methods and Application: Three Case Studies in Ethical Dilemmas Case One: Local Activism in West Africa Case Two: Secrets of Sexuality and Personal Narrative Case Three: Community Theatre Conflicts and Organization Warm-Ups Suggested Readings 7. Performance Ethnography Foundational Concepts in Performance and Social Theory The Performance Interventions of Dwight Conquergood Staging Ethnography and the Performance of Possibilities Warm-Ups Suggested Readings 8. It's Time to Write: Writing as Performance Getting Started: In Search of the Muse The Anxiety of Writing: Wild Mind and Monkey Mind Writing as Performance and Performance as Writing Warm-Ups Suggested Readings 9. 
The Case Studies Case One: Staging Cultural Performance Case Two: Oral History and Performance Case Three: The Fieldwork of Social Drama and Communitas Warm-Ups Suggested Readings References Index About the Author
L. Hosmer
Abeba Birhane
Summary It has become trivial to point out that algorithmic systems increasingly pervade the social sphere. Improved efficiency, the hallmark of these systems, drives their mass integration into day-to-day life. However, as a robust body of research in the area of algorithmic injustice shows, algorithmic systems, especially when used to sort and predict social outcomes, are not only inadequate but also perpetuate harm. In particular, a persistent and recurrent trend within the literature indicates that society's most vulnerable are disproportionately impacted. When algorithmic injustice and harm are brought to the fore, most of the solutions on offer (1) revolve around technical solutions and (2) do not center disproportionately impacted communities. This paper proposes a fundamental shift, from rational to relational, in thinking about personhood, data, justice, and everything in between, and places ethics as something that goes above and beyond technical solutions. Outlining the idea of ethics built on the foundations of relationality, this paper calls for a rethinking of justice and ethics as a set of broad, contingent, and fluid concepts and down-to-earth practices that are best viewed as a habit and not a mere methodology for data science. As such, this paper mainly offers critical examinations and reflection and not "solutions."
M. Ryan
One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission's High-Level Expert Group on AI (HLEG) has adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in human relationships, so proposing that AI should be trusted is a very serious claim. This paper will show that AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust, because it does not possess emotive states and cannot be held responsible for its actions, which are requirements of the affective and normative accounts of trust, respectively. While AI meets all of the requirements of the rational account of trust, it will be shown that this is not actually a type of trust at all but is instead a form of reliance. Ultimately, even complex machines such as AI should not be viewed as trustworthy, as this undermines the value of interpersonal trust, anthropomorphises AI, and diverts responsibility from those developing and using it.
David Leslie
A remarkable time of human promise has been ushered in by the convergence of the ever-expanding availability of big data, the soaring speed and stretch of cloud computing platforms, and the advancement of increasingly sophisticated machine learning algorithms. Innovations in AI are already leaving a mark on government by improving the provision of essential social goods and services from healthcare, education, and transportation to food supply, energy, and environmental management. These bounties are likely just the start. The prospect that progress in AI will help government to confront some of its most urgent challenges is exciting, but legitimate worries abound. As with any new and rapidly evolving technology, a steep learning curve means that mistakes and miscalculations will be made and that both unanticipated and harmful impacts will occur. This guide, written for department and delivery leads in the UK public sector and adopted by the British Government in its publication, 'Using AI in the Public Sector,' identifies the potential harms caused by AI systems and proposes concrete, operationalisable measures to counteract them. It stresses that public sector organisations can anticipate and prevent these potential harms by stewarding a culture of responsible innovation and by putting in place governance processes that support the design and implementation of ethical, fair, and safe AI systems. It also highlights the need for algorithmically supported outcomes to be interpretable by their users and made understandable to decision subjects in clear, non-technical, and accessible ways. Finally, it builds out a vision of human-centred and context-sensitive implementation that gives a central role to communication, evidence-based reasoning, situational awareness, and moral justifiability.
J. Borenstein, A. Howard
Artificial Intelligence (AI) is reshaping the world in profound ways; some of its impacts are certainly beneficial, but widespread and lasting harms can result from the technology as well. The integration of AI into various aspects of human life is underway, and the complex ethical concerns emerging from the design, deployment, and use of the technology serve as a reminder that it is time to revisit what future developers and designers, along with professionals, are learning when it comes to AI. It is of paramount importance to train future members of the AI community, and other stakeholders as well, to reflect on the ways in which AI might impact people's lives and to embrace their responsibilities to enhance its benefits while mitigating its potential harms. This could occur in part through the fuller and more systematic inclusion of AI ethics into the curriculum. In this paper, we briefly describe different approaches to AI ethics and offer a set of recommendations related to AI ethics pedagogy.
M. Sturt, Margaret Hobling
Daniel S. Schiff
Adam P Wagner, Nicholas J Simmonds, Susan C Charman et al.
Introduction Yoga is an emerging exercise choice for people with cystic fibrosis (CF), but evidence of its effect in this population is scarce, with a recent systematic review advocating for further research. Yoga Outcomes Get Assessed in CF (YOGA-CF) is a real-world multicentre randomised controlled trial (RCT) investigating a bespoke CF-specific online 12-week yoga intervention, versus usual care, to determine effectiveness for adults with CF. Methods and analysis A multicentre RCT of adults with CF across the UK. Participants are randomised to usual care or a 12-week online bespoke yoga programme with an expectation of two classes completed weekly. Assessments of lung function, 1 min sit-to-stand, the Cystic Fibrosis Questionnaire-Revised (CFQ-R) and other trial questionnaires are completed preintervention and postintervention (0 and 12 weeks) and after 12 weeks of follow-up (week 24). The primary outcome is the difference in respiratory-related quality of life measured using the CFQ-R before and after yoga/control. Sample size was calculated based on detecting a minimally clinically important difference of 4 for the CFQ-R respiratory domain, with power of 80% and 5% significance level (total target, n=314). Ethics and dissemination Ethics approval was gained from the South Yorkshire and Humber Research Ethics Committee (REC) (reference: 23/YH/0270, project ID 303898). Dissemination will involve direct participant feedback and a lay webinar, scientific conference presentation and publication in a peer-reviewed journal. Trial registration number NCT06120465.
Page 2 of 49887