A. Jonsen, M. Siegler, W. Winslade
Results for "Ethics"
Showing 20 of ~999,174 results · from arXiv, DOAJ, Semantic Scholar, CrossRef
D. Randall, Maria F. Fernandes
Jane Bennett
R. Phillips
S. Valentine, Gary M. Fleischman
M. Slote
G. Stricker
Marco Autili, Gianluca Filippone, Mashal Afzal Memon et al.
Self-adaptive systems increasingly operate in close interaction with humans, often sharing the same physical or virtual environments and making decisions with ethical implications at runtime. Current approaches typically encode ethics as fixed, rule-based constraints or as a single ethical theory chosen and embedded at design time. This overlooks a fundamental property of human-system interaction settings: ethical preferences vary across individuals and groups, evolve with context, and may conflict, while still needing to remain within a legally and regulatorily defined hard-ethics envelope (e.g., safety and compliance constraints). This paper advocates a shift from static ethical rules to runtime ethical reasoning for self-adaptive systems, where ethical preferences are treated as runtime requirements that must be elicited, represented, and continuously revised as stakeholders and situations change. We argue that satisfying such requirements demands explicit ethics-based negotiation to manage ethical trade-offs among multiple humans who interact with, are represented by, or are affected by a system. We identify key challenges: ethical uncertainty, conflicts among ethical values (including human, societal, and environmental drivers), and multi-dimensional, multi-party, multi-driver negotiation. We then outline research directions and questions toward ethically self-adaptive systems.
P. Alderson, V. Morrow
Contents:
Introduction (Defining Some Terms · Research Ethics · The Purpose of This Book: Starting from Uncertainty and the Question Format · Researchers as Insiders or Outsiders · The Contents of This Book)
PART ONE: THE PLANNING STAGES
Planning the Research: Purpose and Methods (Two Basic Questions · Questions about Purpose and Methods · Is the Research Worth Doing? · Do Theories Matter? · Do Viewpoints Matter? · Do Methods Matter? · Three Phases in Growing Awareness of Research Ethics · Three Ethics Frameworks for Assessing Research · Uncertainty - The Basis of Ethical Research · Summary of Questions)
Assessing Harms and Benefits (Harms · Benefits · Risk, Cost, Harm and Benefit Assessments · Confusion in Risk-Benefit Assessments · Risk of Distress or Humiliation · Summary of Questions)
Respect for Rights: Privacy and Confidentiality (Legal Rights to Confidentiality · Opt-in or Opt-out Access · Practical Respect · Privacy Rights · Data Protection Act 1998 · Confidentiality or Acknowledgement? · Intimacy between Strangers: Research Interviews · Ethics and the Internet · Respecting Local Values · Privacy and Encouraging Freely Given Responses in Face-to-Face Contact · Does Traditional Ethics Cover Modern Research Experiences and Relationships? · Summary of Questions)
Designing Research: Selection and Participation (Framing the Topics and Extent of the Research · Combining Respect, Inclusion and Protection · Does Traditional Ethics Cover Social Exclusion? · Images and Symbols · Beyond Inclusion to Participation: Children and Young People as Researchers · UN-Related Work With Young People · Respecting Young Researchers' Own Qualities · Summary of Questions)
Money Matters: Contracts, Funding Research and Paying Participants (Planning, Budgeting and Research Agendas · Ethics and Funding Sources · Carbon Costs · Ethics and Contracts · Freedom to Publish · Paying Young Researchers and Participants · Payments in Context · Summary of Questions)
Reviewing Aims and Methods: Ethics Guidance and Committees (Review and Revision of Research Aims and Methods · Does Social Research Need Research Ethics Committees? · Recent Experiences with Research Ethics Committees · International Standards · A National Social Research Ethics Forum? · Summary of Questions)
PART TWO: THE DATA COLLECTING STAGE
Information (Spoken and Written Information · Research Information Leaflets · Leaflet Layout · Examples of Research Information Leaflets · Leaflets in Other Languages · Information in Semi-Literate Societies · Relevant Research? · Two-Way Information Exchanged Throughout the Research Study · Summary of Questions)
Consent (Consent and Rights · The Meaning of Consent · Consent to Open-Ended Research · Assent · Consent and the Law · Consent by and for Children and Young People · Double Standards · Complications in Parental Consent · Defining and Assessing Competence to Consent · Levels of Involvement in Decision Making · Respecting Consent and Refusal · Consent to Longitudinal Research · Consent and Secondary Data Analysis · International Standards of Consent · Research and International Contexts · Why Respect Children's Consent? · General Questions about Children's Consent · Summary of Questions)
PART THREE: THE WRITING, REPORTING AND FOLLOW-UP STAGES
Disseminating and Implementing the Findings (Involving Children in Data Analysis · Dissemination: Getting to the Heart of Debate and Change · Dissemination and Implementation: Children, Young People and Adults Working Together for Change · Problems with Dissemination · Creative Ways Round the Problems · Dissemination and the News Media · Critical Readers and Viewers · Underlying Attitudes to Children and The 3 Ps · Summary of Questions)
The Impact on Children (What Collective Impact Can Research Have on Children and Young People? · Reviewing the Impact of Research on Children · Positive Images · Summary of Questions)
Conclusion (Ways Forward for Individuals and Teams · Questions that Cannot be Solved by Individuals Alone · The Need for Social Research Ethics Authorities · Summary of National Policy · Is the Research Worth Doing? · And Finally)
References and Index
Weina Jin, Elise Li Zheng, Ghassan Hamarneh
The operationalization of ethics in the technical practices of artificial intelligence (AI) is facing significant challenges. To address the problem of ineffective implementation of AI ethics, we present our diagnosis, analysis, and interventional recommendations from a unique perspective of the real-world implementation of AI ethics through explainable AI (XAI) techniques. We first describe the phenomenon (i.e., the "symptoms") of ineffective implementation of AI ethics in explainable AI using four empirical cases. From the "symptoms", we diagnose the root cause (i.e., the "disease") being the dysfunction and imbalance of power structures in the sociotechnical system of AI. The power structures are dominated by unjust and unchecked power that does not represent the benefits and interests of the public and the most impacted communities, and cannot be countervailed by ethical power. Based on the understanding of power mechanisms, we propose three interventional recommendations to tackle the root cause, including: 1) Making power explicable and checked, 2) Reframing the narratives and assumptions of AI and AI ethics to check unjust power and reflect the values and benefits of the public, and 3) Uniting the efforts of ethical and scientific conduct of AI to encode ethical values as technical standards, norms, and methods, including conducting critical examinations and limitation analyses of AI technical practices. We hope that our diagnosis and interventional recommendations can be a useful input to the AI community and civil society's ongoing discussion and implementation of ethics in AI for ethical and responsible AI practice.
Andreas Happe, Jürgen Cito
Large Language Models (LLMs) have rapidly evolved over the past few years and are currently being evaluated for their efficacy within the domain of offensive cyber-security. While initial forays showcase the potential of LLMs to enhance security research, they also raise critical ethical concerns regarding the dual-use of offensive security tooling. This paper analyzes a set of papers that leverage LLMs for offensive security, focusing on how ethical considerations are expressed and justified in their work. The goal is to assess the culture of AI in offensive security research regarding ethics communication, highlighting trends, best practices, and gaps in current discourse. We provide insights into how the academic community navigates the fine line between innovation and ethical responsibility. In particular, our results show that 13 of 15 reviewed prototypes (86.6%) mentioned ethical considerations and are thus aware of the potential dual-use of their research. The main motivations given for the research were broadening access to penetration testing and preparing defenders for AI-guided attackers.
Jenny Thain, Jennifer Arnold, Amit X Garg et al.
Objective: Patients receiving haemodialysis are at very high risk of fragility fracture, yet there are no proven treatments for fracture prevention. We will advance a pilot study on the feasibility of a large, pragmatic, randomised controlled trial (RCT) of denosumab for fragility fracture prevention in haemodialysis.
Trial design: PRevEnting FracturEs in REnal Disease-1 is a pragmatic, open-label, pilot study of an RCT of a denosumab care pathway embedded in routine-care haemodialysis centres.
Methods: We will recruit at least 60 participants at high risk of fracture from at least 6 haemodialysis centres in Ontario, Canada. They must be aged 40 years or older, have access to provincial drug coverage, have appropriate baseline calcium and parathyroid hormone levels, and be deemed suitable for denosumab by their kidney care provider. Participants will be randomised 1:1 to denosumab (with supports to mitigate hypocalcaemia) versus usual care, using block randomisation by a central statistician (computer-generated sequence). Primary outcomes include recruitment feasibility and adherence. Secondary outcomes include safety (hypocalcaemia) and participant satisfaction with our protocol and processes. Study investigators and data analysts will be blind to treatment allocation. We will present results descriptively. The trial was approved by Clinical Trials Ontario and local research ethics boards across study sites.
Results: Primary and secondary outcomes will be published on trial completion.
Conclusions: This pilot will inform the feasibility of conducting a large-scale, efficiently run, pragmatic RCT to test whether a denosumab care pathway safely reduces the risk of fragility fracture in patients receiving haemodialysis. Results have the potential to transform fracture care in real-world patients with kidney and metabolic bone disease.
Trial registration number: NCT05096195.
Ramon Odebunmi
The Russo-Ukraine and Palestine-Israel conflicts are among the most devastating geopolitical conflicts of the 21st century. From a human rights perspective, the alleged violations of international law in the ongoing wars in Ukraine and Palestine are not only abhorrent but also raise significant concerns about the legitimacy of Russia's aggression against Ukraine and Israel's aggression against Palestine. The laws of war are a means to achieve legitimacy by showing respect for the rule of law and abiding by universal ethical and moral principles. This paper therefore argues that Russia's and Israel's use of force against Ukraine and Palestine respectively violates international humanitarian law, or the laws of war, on the basis of Article 2(4) and Article 51 of the UN Charter. The paper seeks to determine the legitimacy or illegitimacy of this military aggression in the context of international humanitarian law, adopting a qualitative method of inquiry and Just War theory to interrogate both the resort to force and the conduct of the wars. It concludes that Russia's and Israel's use of military force negates universal principles of morality and ethics, as their actions seriously contravene Article 2(4) and Article 51 of the UN Charter: Israel's use of force in Gaza is disproportionate, while Russia's conduct of the war violates the prohibition of the use of force under international humanitarian law.
Sara Sablone, Pietro Refolo, Rossana Cecchi
R. Braidotti
J. Mandal, A. Halder, S. Parija
Sebastian Lehuede
Research and activism have increasingly denounced the problematic environmental record of the infrastructure and value chain underpinning Artificial Intelligence (AI). Water-intensive data centres, polluting mineral extraction and e-waste dumping are incontrovertibly part of AI's footprint. In this article, I turn to areas affected by AI-fuelled environmental harm and identify an ethics of resistance emerging from local activists, which I term 'elemental ethics'. Elemental ethics interrogates the AI value chain's problematic relationship with the elements that make up the world, critiques the undermining of local and ancestral approaches to nature and reveals the vital and quotidian harms engendered by so-called intelligent systems. While this ethics is emerging from grassroots and Indigenous groups, it echoes recent calls from environmental philosophy to reconnect with the environment via the elements. In empirical terms, this article looks at groups in Chile resisting a Google data centre project in Santiago and lithium extraction (used for rechargeable batteries) in Lickan Antay Indigenous territory, Atacama Desert. As I show, elemental ethics can complement top-down, utilitarian and quantitative approaches to AI ethics and sustainable AI as well as interrogate whose lived experience and well-being counts in debates on AI extinction.
Olumide Adisa, Enio Alterman Blay, Yasaman Asgari et al.
Complexity science, despite its broad scope and potential impact, has not kept pace with fields like artificial intelligence, biotechnology and social sciences in addressing ethical concerns. The field lacks a comprehensive ethical framework, leaving us, as a community, vulnerable to ethical challenges and dilemmas. Other areas have gone through similar experiences and created, with discussions and working groups, their guides, policies and recommendations. Therefore, here we highlight the critical absence of formal guidelines, dedicated ethical committees, and widespread discussions on ethics within the complexity science community. Drawing on insights from the disciplines mentioned earlier, we propose a roadmap to enhance ethical awareness and action. Our recommendations include (i) initiating supportive mechanisms to develop ethical guidelines specific to complex systems research, (ii) creating open-access resources, and (iii) fostering inclusive dialogues to ensure that complexity science can responsibly tackle societal challenges and achieve a more inclusive environment. By initiating this dialogue, we aim to encourage a necessary shift in how ethics is integrated into complexity research, positioning the field to address contemporary challenges more effectively.
Conrad Sanderson, Emma Schleiger, David Douglas et al.
While the operationalisation of high-level AI ethics principles into practical AI/ML systems has made progress, there is still a theory-practice gap in managing tensions between the underlying AI ethics aspects. We cover five approaches for addressing the tensions via trade-offs, ranging from rudimentary to complex. The approaches differ in the types of considered context, scope, methods for measuring contexts, and degree of justification. None of the approaches is likely to be appropriate for all organisations, systems, or applications. To address this, we propose a framework which consists of: (i) proactive identification of tensions, (ii) prioritisation and weighting of ethics aspects, (iii) justification and documentation of trade-off decisions. The proposed framework aims to facilitate the implementation of well-rounded AI/ML systems that are appropriate for potential regulatory requirements.
Page 12 of 49,959