Malik D. McCluskey
Results for "Ethics"
Showing 20 of ~624,411 results · from arXiv, DOAJ, Semantic Scholar
Aristotle, F. H. Peters
J. Cocks
M. Carrigan, A. Attalla
Virginia Dignum
Stephen Darwall
Jacob Metcalf, E. Moss, D. Boyd
F. D. de Bakker, A. Rasche, S. Ponte
Although the literature on multi-stakeholder initiatives (MSIs) for sustainability has grown in recent years, it is scattered across several academic fields, making it hard to ascertain how individual disciplines, such as business ethics, can further contribute to the debate. Based on an extensive review of the literature on certification and principle-based MSIs for sustainability (n = 293 articles), we show that the scholarly debate rests on three broad themes (the "3Is"): the input into creating and governing MSIs; the institutionalization of MSIs; and the impact that relevant initiatives create. While our discussion reveals the theoretical underpinnings of the 3Is, it also shows that a number of research challenges related to business ethics remain unaddressed. We unpack these challenges and suggest how scholars can utilize theoretical insights in business ethics to push the boundaries of the field. Finally, we also discuss what business ethics research can gain from theory development in the MSI field.
E. Durkheim
It is widely recognized that professionals such as doctors, nurses, engineers, and teachers have duties that go far beyond those of ordinary citizens, but there is much disagreement as to why they have such duties. In Professional Ethics: A Trust-Based Approach, Terrence Kelly argues that such duties come from the unique trust that professionals must invite, develop, and honor from those they serve. Without trust, professional practice would be significantly impoverished, both ethically and instrumentally, and the autonomy enjoyed by many professions would evaporate. Professionals therefore have good reasons to be "effectively trustworthy": that is, to develop the virtues necessary to be responsive to the vulnerability of those they serve, and to effectively communicate that responsiveness to others. Being effectively trustworthy requires a commitment by professionals as individual practitioners and as members of ethical communities committed to building a culture of trust. Such communities can, and should, design virtue-based professional education that promotes trustworthy character formation, and articulate an ethical vision of the trustworthy professional that has real credibility in the practical conditions of professional life. Because of the importance of trust, professional communities also have good reasons to develop conduct standards, such as those regarding conflicts of interest, that promote professional trustworthiness in both fact and appearance.
Emre Kazim, Adriano Soares Koshiyama
Artificial intelligence (AI) ethics is a field that has emerged in response to growing concern about the impact of AI. It can be read as a nascent field and as a subset of the wider field of digital ethics, which addresses concerns raised by the development and deployment of new digital technologies such as AI, big data analytics, and blockchain technologies. The principal aim of this article is to provide a high-level conceptual discussion of the field by introducing basic concepts and sketching approaches and central themes in AI ethics. The first part introduces concepts by noting what is being referred to by "AI" and "ethics"; the second part explores some predecessors of AI ethics, namely engineering ethics, philosophy of technology, and science and technology studies; the third part discusses three current approaches to AI ethics, namely principles, processes, and ethical consciousness; and the fourth part discusses central themes in translating ethics into engineering practice. We conclude by summarizing and noting the inherently interdisciplinary future directions and debates in AI ethics.
Kristi S. Lekies
The debate over whether university education should be "free" seems misconstrued: even in a system without tuition fees, someone has to foot the bill. This paper argues that, from the viewpoint of justice, a strong case can be made in higher education for adopting the beneficiary-pays principle.
P. Brey
Carlos Gómez-Vírseda, Yves De Maeseneer, C. Gastmans
Respect for autonomy is a key concept in contemporary bioethics and end-of-life ethics in particular. Despite this status, an individualistic interpretation of autonomy is being challenged from the perspective of different theoretical traditions. Many authors claim that the principle of respect for autonomy needs to be reconceptualised starting from a relational viewpoint. Along these lines, the notion of relational autonomy is attracting increasing attention in medical ethics. Yet, others argue that relational autonomy needs further clarification in order to be adequately operationalised for medical practice. To this end, we examined the meaning, foundations, and uses of relational autonomy in the specific literature of end-of-life care ethics. Using PRESS and PRISMA procedures, we conducted a systematic review of argument-based ethics publications in 8 major databases of biomedical, philosophy, and theology literature that focused on relational autonomy in end-of-life care. Full articles were screened. All included articles were critically appraised, and a synthesis was produced. Fifty publications met our inclusion criteria. Twenty-eight articles were published in the last 5 years; publications originated from 18 different countries. Results are organized according to: (a) an individualistic interpretation of autonomy; (b) critiques of this individualistic interpretation of autonomy; (c) relational autonomy as theoretically conceptualised; (d) relational autonomy as applied to clinical practice and moral judgment in end-of-life situations. Three main conclusions were reached. First, literature on relational autonomy tends to be more a 'reaction against' an individualistic interpretation of autonomy than a positive concept in its own right. Dichotomous thinking can be overcome by a deeper development of the philosophical foundations of autonomy.
Second, relational autonomy is a rich and complex concept, formulated in complementary ways from different philosophical sources. New dialogue among traditionally divergent standpoints will help clarify its meaning. Third, our analysis stresses the need for dialogical developments in decision-making in end-of-life situations. Integration of these three elements will likely lead to a clearer conceptualisation of relational autonomy in end-of-life care ethics. This should in turn lead to better decision-making in real-life situations.
Eline de Jong
The ethics of emerging technologies faces an anticipation dilemma: engaging too early risks overly speculative concerns, while engaging too late may forfeit the chance to shape a technology's trajectory. Despite various methods to address this challenge, no framework exists to assess their suitability across different stages of technological development. This paper proposes such a framework. I conceptualise two main ethical approaches: outcomes-oriented ethics, which assesses the potential consequences of a technology's materialisation, and meaning-oriented ethics, which examines how (social) meaning is attributed to a technology. I argue that the strengths and limitations of outcomes- and meaning-oriented ethics depend on the uncertainties surrounding a technology, which shift as it matures. To capture this evolution, I introduce the concept of ethics readiness: the readiness of a technology to undergo detailed ethical scrutiny. Building on the widely known Technology Readiness Levels (TRLs), I propose Ethics Readiness Levels (ERLs) to illustrate how the suitability of ethical approaches evolves with a technology's development. At lower ERLs, where uncertainties are most pronounced, meaning-oriented ethics proves more effective, while at higher ERLs, as impacts become clearer, outcomes-oriented ethics gains relevance. By linking Ethics Readiness to Technology Readiness, this framework underscores that the appropriateness of ethical approaches evolves alongside technological maturity, ensuring scrutiny remains grounded and relevant. Finally, I demonstrate the practical value of this framework by applying it to quantum technologies, showing how Ethics Readiness can guide effective ethical engagement.
Emanuele Ratti
There is an overwhelming abundance of works in AI ethics. This growth is chaotic because of its suddenness, its volume, and its multidisciplinary nature, which makes it difficult to keep track of debates and to systematically characterize the goals, research questions, methods, and expertise required of AI ethicists. In this article, I show that the relation between AI and ethics can be characterized in at least three ways, which correspond to three well-represented kinds of AI ethics: ethics and AI; ethics in AI; and ethics of AI. I elucidate the features of these three kinds of AI ethics, characterize their research questions, and identify the kind of expertise that each needs. I also show how certain criticisms of AI ethics are misplaced, being made from the point of view of one kind of AI ethics against another kind with different goals. All in all, this work sheds light on the nature of AI ethics and sets the groundwork for more informed discussions about the scope, methods, and training of AI ethicists.
Georgy Ishmaev
This chapter explores three key questions in blockchain ethics. First, it situates blockchain ethics within the broader field of technology ethics, outlining its goals and guiding principles. Second, it examines the unique ethical challenges of blockchain applications, including permissionless systems, incentive mechanisms, and privacy concerns. Key obstacles, such as conceptual modeling and information asymmetries, are identified as critical issues. Finally, the chapter argues that blockchain ethics should be approached as an engineering discipline, emphasizing the analysis and design of trade-offs in complex systems.
Paula Helm, Selin Gerlek
Mainstream AI ethics, with its reliance on top-down, principle-driven frameworks, fails to account for the situated realities of diverse communities affected by AI (Artificial Intelligence). Critics have argued that AI ethics frequently serves corporate interests through practices of 'ethics washing', operating more as a tool for public relations than as a means of preventing harm or advancing the common good. As a result, growing scepticism among critical scholars has cast the field as complicit in sustaining harmful systems rather than challenging or transforming them. In response, this paper adopts a Science and Technology Studies (STS) perspective to critically interrogate the field of AI ethics. It hence applies the same analytic tools STS has long directed at disciplines such as biology, medicine, and statistics to ethics. This perspective reveals a core tension between vertical (top-down, principle-based) and horizontal (risk-mitigating, implementation-oriented) approaches to ethics. By tracing how these models have shaped the discourse, we show how both fall short in addressing the complexities of AI as a socio-technical assemblage, embedded in practice and entangled with power. To move beyond these limitations, we propose a threefold reorientation of AI ethics. First, we call for a shift in foundations: from top-down abstraction to empirical grounding. Second, we advocate for pluralisation: moving beyond Western-centric frameworks toward a multiplicity of onto-epistemic perspectives. Finally, we outline strategies for reconfiguring AI ethics as a transformative force, moving from narrow paradigms of risk mitigation toward co-creating technologies of hope.
Steph Grohmann
In biomedical science, review by a Research Ethics Committee (REC) is an indispensable way of protecting human subjects from harm. However, in social science and the humanities, mandatory ethics compliance has long been met with scepticism as biomedical models of ethics can map poorly onto methodologies involving complex socio-political and cultural considerations. As a result, tailored ethics training and support as well as access to RECs with the necessary expertise is lacking in some areas, including parts of Europe and low- and middle-income countries. This paper suggests that Generative AI can meaningfully contribute to closing these gaps, illustrating this claim by presenting EthicAlly, a proof-of-concept prototype for an AI-powered ethics support system for social science and humanities researchers. Drawing on constitutional AI technology and a collaborative prompt development methodology, EthicAlly provides structured ethics assessment that incorporates both universal ethics principles and contextual and interpretive considerations relevant to most social science research. In supporting researchers in ethical research design and preparation for REC submission, this kind of system can also contribute to easing the burden on institutional RECs, without attempting to automate or replace human ethical oversight.
Erlend Eriksen, Ronak Rajani, Sahrai Saeed et al.
Objectives: The primary objective was to identify predictors of new permanent pacemaker implantation in patients with aortic stenosis (AS) undergoing transcatheter aortic valve implantation (TAVI). The secondary objectives were to investigate temporal changes in permanent pacemaker implantation following TAVI and its impact on long-term prognosis.
Design: Prospective observational cohort study of patients with AS undergoing TAVI.
Setting: Single-centre study conducted at a tertiary hospital in Western Norway between 2012 and 2019.
Participants: Among 600 consecutive patients with severe AS treated with TAVI, 52 patients with a permanent pacemaker prior to TAVI were excluded; the remaining 548 patients were included in the present study.
Baseline measures: Evaluation of baseline risk factors, 12-lead ECG, and echocardiography.
Primary outcome measures: The need for new pacemaker implantation ≤30 days following TAVI, and all-cause death.
Results: The mean age was 80.6±6.7 years, and 50% were male. Among the 548 eligible patients, 173 (31.6%) underwent pacemaker implantation ≤30 days following TAVI, evenly distributed between females and males (29.6% vs 33.6%, p=0.317), with higher implant rates in the low-volume phase (2012–2015) than in the high-volume phase (2016–2019) (45.8% vs 23.9%, p<0.001). On multivariable analysis, an abnormal electrocardiogram (OR 1.73; 95% CI 1.14 to 2.63, p=0.010), right bundle branch block (OR 2.23; 95% CI 1.09 to 4.59, p=0.028), and atrial fibrillation (OR 1.89; 95% CI 1.24 to 2.88, p=0.003) at baseline were strong predictors of pacemaker implantation. The type of bioprosthesis, but not its size, was associated with permanent pacemaker implantation (mechanically expandable valves OR 3.48, 95% CI 2.16 to 5.59; balloon-expandable valves OR 0.07, 95% CI 0.02 to 0.29, both p<0.001), irrespective of age and sex. During a median follow-up of 60.4 months (range 3–131 months), permanent pacemaker implantation following TAVI was not associated with all-cause mortality (HR 0.89; 95% CI 0.69 to 1.16, p=0.403).
Conclusions: The rates of permanent pacemaker implantation following TAVI decreased substantially from the early low-volume phase to the late high-volume phase. An abnormal baseline ECG, right bundle branch block, atrial fibrillation, and bioprosthesis selection remained important predictors of permanent pacemaker implantation. Permanent pacemaker implantation following TAVI had no impact on short- or long-term survival.
Ethics and dissemination: The Regional Committees for Medical and Health Research Ethics (approval number: REK vest 33814/2019) and the Institutional Data Protection Services approved the study protocol. Study findings were disseminated through peer-reviewed publication and presentation at national and international scientific meetings and conferences.
Trial registration number: NCT04417829.
Natalie Garrett, Nathan Beard, Casey Fiesler
Even as public pressure mounts for technology companies to consider the societal impacts of their products, industries and governments in the AI race are demanding technical talent. To meet this demand, universities clamor to add technical artificial intelligence (AI) and machine learning (ML) courses to computing curricula, but how are societal and ethical considerations part of this landscape? We explore two pathways for ethics content in AI education: (1) standalone AI ethics courses, and (2) ethics integrated into technical AI courses. For both pathways, we ask: what is being taught? As we train computer scientists who will build and deploy AI tools, how are we training them to consider the consequences of their work? In this exploratory work, we qualitatively analyzed 31 standalone AI ethics classes from 22 U.S. universities and 20 technical AI/ML courses from 12 U.S. universities to understand which ethics-related topics instructors include in courses. We identify and categorize topics in AI ethics education, share notable practices, and note omissions. Our analysis will help AI educators identify which topics should be taught and create scaffolding for developing future AI ethics education.
Page 5 of 31,221