Results for "Ethics"

Showing 20 of ~998631 results · from arXiv, DOAJ, Semantic Scholar, CrossRef

S2 Open Access 2019
On the ethics of algorithmic decision-making in healthcare

Thomas Grote, Philipp Berens

In recent years, a plethora of high-profile scientific publications has reported machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying the relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical diagnosis, it comes at the expense of opacity when trying to assess the reliability of a given diagnosis. Drawing on literature in social epistemology and moral responsibility, we argue that the uncertainty in question potentially undermines the epistemic authority of clinicians. Furthermore, we elucidate potential pitfalls of involving machine learning in healthcare with respect to paternalism, moral responsibility and fairness. Finally, we discuss how the deployment of machine learning algorithms might shift the evidentiary norms of medical diagnosis. In this regard, we hope to lay the grounds for further ethical reflection on the opportunities and pitfalls of machine learning for enhancing decision-making in healthcare.

376 citations en Medicine, Psychology
S2 Open Access 2019
The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions

Jess Whittlestone, Rune Nyrup, A. Alexandrova et al.

The last few years have seen a proliferation of principles for AI ethics. There is substantial overlap between different sets of principles, with widespread agreement that AI should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely held values such as fairness, privacy, and autonomy. While articulating and agreeing on principles is important, it is only a starting point. Drawing on comparisons with the field of bioethics, we highlight some of the limitations of principles: in particular, they are often too broad and high-level to guide ethics in practice. We suggest that an important next step for the field of AI ethics is to focus on exploring the tensions that inevitably arise as we try to implement principles in practice. By explicitly recognising these tensions we can begin to make decisions about how they should be resolved in specific cases, and develop frameworks and guidelines for AI ethics that are rigorous and practically relevant. We discuss some different specific ways that tensions arise in AI ethics, and what processes might be needed to resolve them.

316 citations en Computer Science
S2 Open Access 2019
Ethics and entrepreneurship: A bibliometric study and literature review

Christine Vallaster, S. Kraus, José M. Merigó Lindahl et al.

The entrepreneurship literature pays increasing attention to the ethical aspects of the field. However, only a fragmented understanding exists of how context influences the ethical judgment of entrepreneurs. We argue that individual socio-cultural background and organizational and societal context shape entrepreneurial ethical judgment. In our article, we contribute to contemporary literature by carving out the intersections between ethics and entrepreneurship. We do this by employing a two-step research approach: 1) We use bibliometric techniques to analyze 719 contributions in Business and Economics research and present a comprehensive contextual picture of ethics in entrepreneurship research by analyzing the 30 most relevant foundation articles. 2) A subsequent content analysis of the 50 most relevant academic contributions was carried out with an enlarged database to augment these findings, detailing ethics and entrepreneurship research on the individual, organizational and societal levels of analysis. By comparing the two analyses, this paper concludes by outlining possible avenues for future research.

300 citations en Sociology
S2 Open Access 2020
Cloud Ethics

Louise Amoore

This ambitious work is a rich and complex response to the ascent of machine learning algorithms. More specifically, it is an effort to examine the philosophically sophisticated nature of this rising power. It hardly needs to be said that the work is timely, as computational algorithms occupy increasingly central positions not only in the economies of production and distribution but also in the economies of information, a function that extends to the management of social life in general and to the exercise of state security, surveillance, and policing powers in particular. Amoore’s exposition begins, in fact, with a quietly harrowing account of the partnership between Geofeedia, a ‘location-based analytics platform,’ and the Baltimore Police Department during the period following the death of Freddie Gray, a young black man who suffered fatal injuries while in police custody in 2015. In the civil unrest that followed, ‘terabytes of images, video, audio, text, and biometric and geospatial data from the protests of the people of Baltimore were rendered as inputs to the deep learning algorithms’ (p. 3). Many protesters were arrested or detained without charges based solely on the presumed authority of the predictive algorithm, which had ‘learned how to recognize what a protest is, what a gathering of people in the city might mean’ (pp. 3–4). The book is organized in three main sections, preceded by an introduction in which the author sets out her approach to the ethicopolitics of algorithms and distinguishes it from more familiar public calls for vigilance: calls to divest algorithms of their racial biases, for example, or to make them more transparent so that those responsible for their creation can be held accountable. While Amoore recognizes the palpable threats against persons and against rights identified in these calls for action, she seeks to identify effects working at a more fundamental and esoteric level. 
‘In short,’ she writes, ‘what matters is not primarily the identification and regulation of algorithmic wrongs, but more significantly how algorithms are implicated in new regimes of verification, new forms of identifying a wrong or of truth telling in the world’ (pp. 5–6).

228 citations en Computer Science
S2 Open Access 2019
Business ethics, corporate social responsibility, and brand attitudes: An exploratory study

O. C. Ferrell, D. Harrison, L. Ferrell et al.

It is important to understand the relative importance of business ethics and social responsibility in determining brand attitudes. However, prior research has failed to differentiate between attitudes toward business ethics and CSR. This research reviews customer-brand research related to business ethics and social responsibility and conducts a study to evaluate customer attitudes. Four scenarios offer variations in company behaviors related to positive and negative conduct of corporate social responsibility and business ethics. Study findings from a panel of 351 respondents provide new insights related to a customer's expectations and perceptions of company CSR and business ethics behavior. We conclude that although CSR attitudes remain important, customers value business ethics as a critical behavior in their perceptions of brand attitudes.

258 citations en Psychology
S2 Open Access 2019
From ethics washing to ethics bashing: a view on tech ethics from within moral philosophy

Elettra Bietti

The word 'ethics' is under siege in technology policy circles. Weaponized in support of deregulation, self-regulation or hands-off governance, "ethics" is increasingly identified with technology companies' self-regulatory efforts and with shallow appearances of ethical behavior. So-called "ethics washing" by tech companies is on the rise, prompting criticism and scrutiny from scholars and the tech community at large. In parallel to the growth of ethics washing, its condemnation has led to a tendency to engage in "ethics bashing." This consists in the trivialization of ethics and moral philosophy now understood as discrete tools or pre-formed social structures such as ethics boards, self-governance schemes or stakeholder groups. The misunderstandings underlying ethics bashing are at least threefold: (a) philosophy and "ethics" are seen as a communications strategy and as a form of instrumentalized cover-up or façade for unethical behavior, (b) philosophy is understood in opposition and as alternative to political representation and social organizing and (c) the role and importance of moral philosophy is downplayed and portrayed as mere "ivory tower" intellectualization of complex problems that need to be dealt with in practice. This paper argues that the rhetoric of ethics and morality should not be reductively instrumentalized, either by the industry in the form of "ethics washing," or by scholars and policy-makers in the form of "ethics bashing." Grappling with the role of philosophy and ethics requires moving beyond both tendencies and seeing ethics as a mode of inquiry that facilitates the evaluation of competing tech policy strategies. In other words, we must resist narrow reductivism of moral philosophy as instrumentalized performance and renew our faith in its intrinsic moral value as a mode of knowledge-seeking and inquiry. 
Far from mandating a self-regulatory scheme or a given governance structure, moral philosophy in fact facilitates the questioning and reconsideration of any given practice, situating it within a complex web of legal, political and economic institutions. Moral philosophy indeed can shed new light on human practices by adding needed perspective, explaining the relationship between technology and other worthy goals, situating technology within the human, the social, the political. It has become urgent to start considering technology ethics also from within and not only from outside of ethics.

248 citations en Computer Science, Political Science
arXiv Open Access 2026
Building the ethical AI framework of the future: from philosophy to practice

Jasper Kyle Catapang

Artificial intelligence pipelines -- spanning data collection, model training, deployment, and post-deployment monitoring -- concentrate ethical risks that intensify with multimodal and agentic systems. Existing governance instruments, including the EU AI Act, the IEEE 7000 series, and the NIST AI Risk Management Framework, provide high-level guidance but often lack enforceable, end-to-end operational controls. This paper presents an ethics-by-design control architecture that embeds consequentialist, deontological, and virtue-ethical reasoning into stage-specific enforcement mechanisms across the AI lifecycle. The framework implements a triple-gate structure at each lifecycle stage: Metric gates (quantitative performance and safety thresholds), Governance gates (legal, rights, and procedural compliance), and Eco gates (carbon and water budgets and sustainability constraints). It specifies measurable trigger conditions, escalation paths, audit artefacts, and mappings to EU AI Act obligations and NIST RMF functions, enabling integration with existing MLOps and CI/CD pipelines. Illustrative examples from large language model pipelines demonstrate how gate-based controls can surface and constrain technical, social, and environmental risks prior to release and during runtime. The framework is accompanied by a preregistered evaluation protocol that defines ex ante success criteria and assessment procedures, enabling falsifiable evaluation of gate effectiveness. By translating normative commitments into enforceable and testable controls, the framework provides a practical basis for operational AI governance across organizational contexts, jurisdictions, and deployment scales.

en cs.CY, cs.AI
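The triple-gate structure described in the abstract above (metric, governance, and eco gates that a lifecycle stage must clear before release) can be sketched in a few lines. This is a minimal Python illustration; the gate names follow the abstract, but all thresholds, function signatures, and field names here are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class GateResult:
    name: str
    passed: bool
    reason: str = ""

def metric_gate(accuracy: float, threshold: float = 0.90) -> GateResult:
    # Quantitative performance/safety threshold (values are invented).
    return GateResult("metric", accuracy >= threshold,
                      f"accuracy={accuracy:.2f} vs threshold={threshold}")

def governance_gate(rights_review_done: bool) -> GateResult:
    # Legal/rights/procedural compliance check (simplified to one flag).
    return GateResult("governance", rights_review_done, "rights review")

def eco_gate(co2_kg: float, budget_kg: float = 100.0) -> GateResult:
    # Carbon-budget sustainability constraint (budget is invented).
    return GateResult("eco", co2_kg <= budget_kg,
                      f"co2={co2_kg}kg vs budget={budget_kg}kg")

def stage_release(results) -> bool:
    # A stage proceeds only when every gate passes; any failure
    # would trigger the framework's escalation path instead.
    return all(r.passed for r in results)

gates = [metric_gate(0.93), governance_gate(True), eco_gate(80.0)]
print(stage_release(gates))  # True: all three gates pass
```

In a real pipeline such checks would be wired into CI/CD stages, with the `reason` fields collected as audit artefacts.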
arXiv Open Access 2026
Shaping the Digital Future of ErUM Research: Sustainability & Ethics

Luca Di Bella, Jan Bürger, Markus Demleitner et al.

This workshop report from "Shaping the Digital Future of ErUM Research: Sustainability & Ethics" (Aachen, 2025) reviews progress on sustainability measures in data-intensive ErUM-Data research since the 2023 call-to-action on resource-aware research. It evaluates short-, medium-, and long-term actions around monitoring and reducing CO2 emissions, improving data and software FAIRness, optimizing workflows and computing infrastructures, and aligning operations with low-carbon energy availability, including concepts such as "breathing" computing centers, long-term data storage strategies, and software efficiency certification. The report stresses the need for systematic teaching, training, mentoring, and new support formats to establish sustainable coding and computing practices, particularly among students and early-career researchers, and highlights the importance of dedicated steering and funding instruments to embed sustainability in project planning. Ethical discussions focus on the transformative use of AI in ErUM-Data, addressing autonomy, bias, transparency, explainability, attribution of responsibility, and the risk of deskilling, while reaffirming that accountability for scientific outcomes remains with human researchers. Finally, the report emphasizes that sustainable transformation requires not only technical measures but also targeted awareness-building, communication strategies, incentives, and community-driven initiatives to move from awareness to action and to integrate sustainability and ethics into everyday scientific practice.

en physics.comp-ph, astro-ph.IM
S2 Open Access 2020
AI ethics should not remain toothless! A call to bring back the teeth of ethics

Anaïs Rességuier, Rowena Rodrigues

Ethics has powerful teeth, but these are barely being used in the ethics of AI today – it is no wonder the ethics of AI is then blamed for having no teeth. This article argues that ‘ethics’ in the current AI ethics field is largely ineffective, trapped in an ‘ethical principles’ approach and as such particularly prone to manipulation, especially by industry actors. Using ethics as a substitute for law risks its abuse and misuse. This significantly limits what ethics can achieve and is a great loss to the AI field and its impacts on individuals and society. This article discusses these risks and then highlights the teeth of ethics and the essential value they can – and should – bring to AI ethics now.

189 citations en Philosophy, Computer Science
S2 Open Access 2021
Ethics of AI: A Systematic Literature Review of Principles and Challenges

A. Khan, Sher Badshah, Peng Liang et al.

Ethics in AI has become a global topic of interest for both policymakers and academic researchers. In the last few years, various research organizations, lawyers, think tanks, and regulatory bodies have become involved in developing AI ethics guidelines and principles. However, there is still debate about the implications of these principles. We conducted a systematic literature review (SLR) to investigate the agreement on the significance of AI principles and identify the challenging factors that could negatively impact the adoption of AI ethics principles. The results reveal that the global convergence set consists of 22 ethical principles and 15 challenges. Transparency, privacy, accountability and fairness are identified as the most common AI ethics principles. Similarly, lack of ethical knowledge and vague principles are reported as significant challenges for considering ethics in AI. The findings of this study are preliminary inputs for proposing a maturity model that assesses the ethical capabilities of AI systems and provides best practices for further improvements.

145 citations en Computer Science
arXiv Open Access 2025
Ethical Classification of Non-Coding Contributions in Open-Source Projects via Large Language Models

Sergio Cobos, Javier Luis Cánovas Izquierdo

The development of Open-Source Software (OSS) is not only a technical challenge, but also a social one due to the diverse mixture of contributors. To this end, social-coding platforms, such as GitHub, provide the infrastructure needed to host and develop the code, but also the support for enabling the community's collaboration, which is driven by non-coding contributions, such as issues (i.e., change proposals or bug reports) or comments to existing contributions. As with any other social endeavor, this development process faces ethical challenges, which may put at risk the project's sustainability. To foster a productive and positive environment, OSS projects are increasingly deploying codes of conduct, which define rules to ensure a respectful and inclusive participatory environment, with the Contributor Covenant being the main model to follow. However, monitoring and enforcing these codes of conduct is a challenging task, due to the limitations of current approaches. In this paper, we propose an approach to classify the ethical quality of non-coding contributions in OSS projects by relying on Large Language Models (LLMs), a promising technology for text classification tasks. We defined a set of ethical metrics based on the Contributor Covenant and developed a classification approach to assess ethical behavior in OSS non-coding contributions, using prompt engineering to guide the model's output.

en cs.SE
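The prompt-engineering step the abstract above describes can be sketched as a prompt builder. This is a hypothetical Python illustration only: the metric names and output format below are invented for the example, not the paper's actual metrics, and no LLM API is called.

```python
# Invented, Contributor-Covenant-inspired metric names for illustration.
METRICS = ["respectful language", "inclusive tone", "constructive feedback"]

def build_prompt(contribution: str, metrics=METRICS) -> str:
    # Build a classification prompt asking the model to rate one
    # non-coding contribution (issue or comment) against each metric.
    bullet_list = "\n".join(f"- {m}" for m in metrics)
    return (
        "Classify the following open-source contribution text against "
        "each metric, answering PASS or FAIL per metric.\n"
        f"Metrics:\n{bullet_list}\n"
        'Contribution:\n"""' + contribution + '"""\n'
        "Answer as 'metric: PASS|FAIL' lines only."
    )

prompt = build_prompt("This patch is useless and so are you.")
print(prompt)
```

The constrained answer format ("metric: PASS|FAIL") is one common way to make LLM output machine-parseable for downstream aggregation.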
arXiv Open Access 2025
Development of Application-Specific Large Language Models to Facilitate Research Ethics Review

Sebastian Porsdam Mann, Joel Seah Jiehao, Stephen R. Latham et al.

Institutional review boards (IRBs) play a crucial role in ensuring the ethical conduct of human subjects research, but face challenges including inconsistency, delays, and inefficiencies. We propose the development and implementation of application-specific large language models (LLMs) to facilitate IRB review processes. These IRB-specific LLMs would be fine-tuned on IRB-specific literature and institutional datasets, and equipped with retrieval capabilities to access up-to-date, context-relevant information. We outline potential applications, including pre-review screening, preliminary analysis, consistency checking, and decision support. While addressing concerns about accuracy, context sensitivity, and human oversight, we acknowledge remaining challenges such as over-reliance on AI and the need for transparency. By enhancing the efficiency and quality of ethical review while maintaining human judgment in critical decisions, IRB-specific LLMs offer a promising tool to improve research oversight. We call for pilot studies to evaluate the feasibility and impact of this approach.

en cs.CL, cs.CY
arXiv Open Access 2025
Beyond Algorethics: Addressing the Ethical and Anthropological Challenges of AI Recommender Systems

Octavian M. Machidon

This paper examines the ethical and anthropological challenges posed by AI-driven recommender systems (RSs), which increasingly shape digital environments and social interactions. By curating personalized content, RSs do not merely reflect user preferences but actively construct experiences across social media, entertainment platforms, and e-commerce. Their influence raises concerns over privacy, autonomy, and mental well-being, while existing approaches such as "algorethics" - the effort to embed ethical principles into algorithmic design - remain insufficient. RSs inherently reduce human complexity to quantifiable profiles, exploit user vulnerabilities, and prioritize engagement over well-being. The paper advances a three-dimensional framework for human-centered RSs, integrating policies and regulation, interdisciplinary research, and education. These strategies are mutually reinforcing: research provides evidence for policy, policy enables safeguards and standards, and education equips users to engage critically. By connecting ethical reflection with governance and digital literacy, the paper argues that RSs can be reoriented to enhance autonomy and dignity rather than undermine them.

en cs.CY, cs.AI
arXiv Open Access 2025
Integration of AI in STEM Education, Addressing Ethical Challenges in K-12 Settings

Shaouna Shoaib Lodhi, Shoaib Lodhi

The rapid integration of Artificial Intelligence (AI) into K-12 STEM education presents transformative opportunities alongside significant ethical challenges. While AI-powered tools such as Intelligent Tutoring Systems (ITS), automated assessments, and predictive analytics enhance personalized learning and operational efficiency, they also risk perpetuating algorithmic bias, eroding student privacy, and exacerbating educational inequities. This paper examines the dual-edged impact of AI in STEM classrooms, analyzing its benefits (e.g., adaptive learning, real-time feedback) and drawbacks (e.g., surveillance risks, pedagogical limitations) through an ethical lens. We identify critical gaps in current AI education research, particularly the lack of subject-specific frameworks for responsible integration, and propose a three-phased implementation roadmap paired with a tiered professional development model for educators. Our framework emphasizes equity-centered design, combining technical AI literacy with ethical reasoning to foster critical engagement among students. Key recommendations include mandatory bias audits, low-resource adaptation strategies, and policy alignment to ensure AI serves as a tool for inclusive, human-centered STEM education. By bridging theory and practice, this work advances a research-backed approach to AI integration that prioritizes pedagogical integrity, equity, and student agency in an increasingly algorithmic world.
Keywords: Artificial Intelligence, STEM education, algorithmic bias, ethical AI, K-12 pedagogy, equity in education

en cs.CY
arXiv Open Access 2025
Toward Ethical AI Through Bayesian Uncertainty in Neural Question Answering

Riccardo Di Sipio

We explore Bayesian reasoning as a means to quantify uncertainty in neural networks for question answering. Starting with a multilayer perceptron on the Iris dataset, we show how posterior inference conveys confidence in predictions. We then extend this to language models, applying Bayesian inference first to a frozen head and finally to LoRA-adapted transformers, evaluated on the CommonsenseQA benchmark. Rather than aiming for state-of-the-art accuracy, we compare Laplace approximations against maximum a posteriori (MAP) estimates to highlight uncertainty calibration and selective prediction. This allows models to abstain when confidence is low. An "I don't know" response not only improves interpretability but also illustrates how Bayesian methods can contribute to more responsible and ethical deployment of neural question-answering systems.
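The selective-prediction idea in the abstract above (abstain with "I don't know" when confidence is low) can be illustrated generically. This sketch thresholds the top-class softmax probability; it shows the abstention mechanism only, not the paper's Laplace approximation, and the threshold value is an invented example.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def answer(logits, choices, tau=0.6):
    # Selective prediction: return the top choice only when its
    # probability clears the confidence threshold tau; else abstain.
    probs = softmax(logits)
    p_max = max(probs)
    if p_max < tau:
        return "I don't know"
    return choices[probs.index(p_max)]

choices = ["A", "B", "C"]
print(answer([3.0, 0.1, 0.1], choices))  # confident: "A"
print(answer([0.2, 0.1, 0.1], choices))  # near-uniform: "I don't know"
```

A Bayesian treatment would replace the raw softmax with probabilities averaged over a posterior (e.g. a Laplace approximation), typically yielding better-calibrated confidence for the same thresholding rule.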

Page 4 of 49932