Daryl Koehn
Results for "Ethics"
Showing 20 of ~456,759 results · from CrossRef, arXiv, DOAJ
Melissa Wilfley, Mengting Ai, Madelyn Rose Sanfilippo
Introduction. AI ethics is framed distinctly across actors and stakeholder groups. We report results from a case study of OpenAI analysing ethical AI discourse. Method. The research addressed: How has OpenAI's public discourse leveraged 'ethics', 'safety', 'alignment' and adjacent concepts over time, and what does this discourse signal about framing in practice? A structured corpus, differentiating between communication for a general audience and communication with an academic audience, was assembled from public documentation. Analysis. Qualitative content analysis of ethical themes combined inductively derived and deductively applied codes. Quantitative analysis leveraged computational content analysis methods via NLP to model topics and quantify changes in rhetoric over time. Visualizations report aggregate results. For reproducibility, we have released our code at https://github.com/famous-blue-raincoat/AI_Ethics_Discourse. Results. Safety and risk discourse dominates OpenAI's public communication and documentation, without drawing on academic and advocacy ethics frameworks or vocabularies. Conclusions. Implications for governance are presented, along with discussion of ethics-washing practices in industry.
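The quantitative strand described above (topic modeling plus change-over-time measures) can be sketched generically. The corpus, period split, and model settings below are invented for illustration and are not the authors' released pipeline:

```python
# Minimal sketch: fit LDA topics on a small corpus, then compare mean
# topic prevalence across two (hypothetical) time periods.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs_early = [
    "our mission is safe artificial general intelligence",
    "alignment research reduces long term risk",
]
docs_late = [
    "safety systems and policy for deployed models",
    "risk mitigation and red teaming before release",
]
corpus = docs_early + docs_late

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)  # per-document topic distributions (rows sum to 1)

# Mean topic weight per period approximates a change-over-time signal.
early_mean = theta[:2].mean(axis=0)
late_mean = theta[2:].mean(axis=0)
print(early_mean, late_mean)
```

On a real corpus the same comparison would be run per year or per document type, with the number of topics chosen by held-out likelihood or coherence.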
Ruta Serpytyte
The fields of HCI and participatory design have been turning to care ethics as a suitable ethos with which to approach the current polycrisis. Similar calls for relationality can be witnessed in public administration research and practice, although its current logic is built on the privatisation and marketisation of services, managerialism, and customer focus, all of which are difficult to combine with care ethics. In this paper I use a collaging technique to reflect visually on new ways for public services to adopt and (care-fully) scale participatory design approaches, and on how feminist care ethics fits into the design of public services, where neoliberalism has a strong presence.
Bayram B., Leventi N., Vodenicharova A. et al.
Artificial intelligence (AI) is reshaping healthcare by enhancing diagnostic precision, treatment personalization, and overall patient care. By leveraging technologies such as machine learning, deep learning, natural language processing, and computer vision, AI enables faster and more accurate decision-making, supports drug discovery and development, and facilitates remote patient monitoring. Beyond improving clinical outcomes, AI also contributes to holistic well-being by addressing physical, mental, social, occupational, and environmental health. Wearable AI devices promote proactive health management, virtual assistants improve mental health accessibility, and predictive analytics enable early intervention for disease prevention. However, the integration of AI in healthcare presents challenges, including data privacy concerns, algorithmic bias, and the need for transparency and trust. Ensuring the responsible and equitable deployment of AI requires robust ethical guidelines, interdisciplinary collaboration, and policies that safeguard patient rights while maximizing the technology’s benefits. By exploring both the transformative potential and inherent challenges of AI, this paper aims to highlight the critical role of AI in shaping the future of healthcare and human well-being.
Xóchitl De San Jorge Cárdenas, Monserrat Armenta Reséndiz
In recent years, ethical questions have been raised about drug use and drug-care policies, making an analytical reflection on some of the identified conflicts necessary. The objective of this article is to explore the bioethical conflicts identified in the field of drug demand reduction. The various ethical conflicts were grouped into categories (drug use and autonomy; legalization versus prohibition; social responsibility; prevention; treatment; and harm reduction) and analyzed in light of several ethical theories. This preliminary review highlights the urgent need to address the dilemmas present at every level of this issue from a multidisciplinary perspective, ranging from raising public awareness, training health personnel, and generating public policies and care programs to approving legal initiatives that contribute to the health care of people who use psychoactive substances.
Julian F. Müller
Jacob Onyango, Gift-Noelle Wango, Nicky Okeyo et al.
Despite their vulnerability, adolescents are often excluded from health research due to ethical concerns about research with minors, especially in low-income regions like Sub-Saharan Africa. We enrolled adolescent girls aged 15–17 years and caregivers of girls of the same age. Using a 25-question Comprehension Score Sheet, we applied a quantitative approach to compare the comprehension of informed consent between 33 adolescent girls and 41 caregivers. The assessments were audio-recorded and reviewed for quality control. The results showed that adolescent girls were significantly better than caregivers at comprehending informed consent information overall, and specifically on study procedures, voluntariness, and study purpose. This suggests that adolescents can understand informed consent information at the same level as, or better than, the caregivers who are entrusted with providing permission for adolescents to participate in research.
Alan R. Vincelette
Protection of the environment and its life forms has become a significant concern among philosophers and theologians alike in recent years. There is disagreement, however, over the best way to formulate the grounds of this concern. Some philosophers and theologians favor an instrumental or anthropocentric approach, claiming that adequate preservation of wildlife is warranted solely on the basis of benefits provided to humans, whether couched in terms of the satisfaction of material, medicinal, recreational, or psychological needs. Others claim that wild nature should be preserved for its own sake, due to its life forms possessing intrinsic value. How best to articulate and defend the intrinsic value of wildlife, however, has been much disputed. This paper first compares the adequacy of anthropocentric and non-anthropocentric approaches to environmental ethics. It concludes that a non-anthropocentric theory of the intrinsic value of living creatures is best suited to motivate care for and action on behalf of the environment, and, in addition, most accurately reflects the basis of human concern for the environment. This paper next goes on to examine the philosophical underpinnings required for a theory of the intrinsic value of nature. It argues that an objective account of the intrinsic value of nature, founded on some form of <i>non-naturalist ethics</i> or <i>minimal theism</i>, seems necessary to account for the intrinsic value of nature (in contrast with a purely subjective or naturalist approach). In particular, a sacramental view of nature wherein creation issues from a creator who is goodness itself seems ideal for grounding the intrinsic value of wildlife, along with motivating humans to contribute energy and resources to their conservation and even to sacrifice some of their interests in order to do so. This being the case, rather than being a hindrance to environmental ethics, religion, if properly formulated, can be a most helpful ally.
O. N. Arunkumar, D. Divya, Chandan
Unlocking the power of sustainable growth, Environmental, Social, and Governance (ESG) principles are redefining the future of responsible investment and corporate excellence. ESG regulations ensure that organizations maintain sustainable development and improve non-monetary metrics, such as stakeholder engagement, customer satisfaction, market acceptability, societal ethics, and values. Higher ESG scores demonstrate commitment to responsible business practices and indicate higher market value for companies, which holds for all sectors, including IT. However, existing literature reveals that IT sector companies pay less attention to planning their operations to make them more sustainable. Therefore, IT firms must identify methods and practices to maintain high ESG scores to achieve sustainable growth. The current study leads readers into a new area of ESG with the help of an advanced method: Data Envelopment Analysis (DEA). DEA is used to identify the decision units' relative efficiency scores and to identify peers and followers based on ESG scores. The study reveals that among the selected IT firms, using the output-oriented strategy, 56.25% experience increasing returns to scale, 18.75% experience decreasing returns to scale, and the remaining 25.00% report constant returns to scale. This indicates that most IT industry firms can generate greater output change in proportion to the input change.
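An output-oriented, constant-returns-to-scale DEA model of the kind the study builds on can be written as a small linear program per decision-making unit (DMU). The three-DMU, single-input/single-output data below are invented for illustration, not the study's firm data, and a full returns-to-scale classification would additionally require a variable-returns (BCC) model:

```python
# Sketch of an output-oriented CCR DEA model solved with scipy's linprog.
import numpy as np
from scipy.optimize import linprog

X = np.array([[1.0], [2.0], [3.0]])  # inputs, one row per DMU
Y = np.array([[2.0], [3.0], [3.0]])  # outputs, one row per DMU

def output_efficiency(o):
    """Maximize phi such that a nonnegative combination of peers uses no
    more input than DMU o while producing at least phi times its output."""
    n = len(X)
    c = np.r_[-1.0, np.zeros(n)]                  # minimize -phi
    A_in = np.c_[np.zeros((X.shape[1], 1)), X.T]  # sum_j lam_j * x_j <= x_o
    b_in = X[o]
    A_out = np.c_[Y[o].reshape(-1, 1), -Y.T]      # phi*y_o - sum_j lam_j*y_j <= 0
    b_out = np.zeros(Y.shape[1])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]  # phi >= 1; phi == 1 means the DMU is efficient

print([round(output_efficiency(o), 3) for o in range(3)])  # [1.0, 1.333, 2.0]
```

Here DMU 0 lies on the frontier (phi = 1), while DMUs 1 and 2 would need to scale output up by their phi factors to become efficient.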
Jina Li, Tianyu Liang, Yi Hou et al.
Abstract Background Although the Screen for Child Anxiety Related Emotional Disorders (SCARED) is a widely used tool for assessing anxiety, its 41-item format makes it a time-intensive method for identifying children and adolescents at high risk of anxiety. This study aims to develop an optimized version of the SCARED for Chinese children and adolescents using a novel machine learning approach, Fast and Accurate Interpretable Risk Scores (FasterRisk), to improve the efficiency of prediction and intervention. Method The full version of the SCARED scale and sociodemographic questions were administered to 8,315 children and adolescents aged 4–9 years in Henan Province, China. The FasterRisk model was utilized to select the optimal items for constructing the Chinese version of the SCARED, and receiver operating characteristic (ROC) curves were employed to determine the optimal cutoff scores. Results The results showed that a 5-item Chinese version of the SCARED accurately reproduced full SCARED scores. After evaluating risk scoring models containing 1 to 8 items, the 5-item model showed the best performance in AUC (0.96) and other performance indicators, with high prediction accuracy (R² = 0.82). With an equal number of items, the AUC of the newly developed 5-item Chinese version of the SCARED (0.96) surpassed that of the existing SCARED-5 (0.92), with the optimal cutoff score determined to be 14. Conclusion The 5-item Chinese version of the SCARED is a reliable self-report tool that aids users with limited time and resources in assessing anxiety among children and adolescents in China. Trial registration: This study was approved by the Ethics Committee for Social Development and Public Policy at Beijing Normal University (SSDPP-HSC20230014).
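Determining an optimal cutoff from an ROC curve, as described in the abstract, is commonly done with Youden's J statistic (sensitivity + specificity − 1). The synthetic scores below are illustrative and unrelated to the SCARED data:

```python
# Sketch: pick a screening-score cutoff by maximizing Youden's J on the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
scores_neg = rng.normal(8, 3, 200)   # hypothetical scores: not-at-risk group
scores_pos = rng.normal(16, 3, 200)  # hypothetical scores: at-risk group
y = np.r_[np.zeros(200), np.ones(200)]
s = np.r_[scores_neg, scores_pos]

fpr, tpr, thresholds = roc_curve(y, s)
auc = roc_auc_score(y, s)
youden = tpr - fpr
cutoff = thresholds[np.argmax(youden)]  # threshold maximizing sens + spec - 1
print(round(auc, 2), round(cutoff, 1))
```

With well-separated groups like these, the chosen cutoff lands between the two group means and the AUC is close to 1.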
Alexander Martin Mussgnug
Recent research illustrates how AI can be developed and deployed in a manner detached from the concrete social context of application. By abstracting from the contexts of AI application, practitioners also disengage from the distinct normative structures that govern them. Building upon Helen Nissenbaum's framework of contextual integrity, I illustrate how disregard for contextual norms can threaten the integrity of a context with often decisive ethical implications. I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. Echoing a persistent undercurrent in technology ethics of understanding emerging technologies as uncharted moral territory, certain approaches to AI ethics can promote a notion of AI as a novel and distinct realm for ethical deliberation, norm setting, and virtue cultivation. This narrative of AI as new ethical ground, however, can come at the expense of practitioners, policymakers and ethicists engaging with already established norms and virtues that were gradually cultivated to promote successful and responsible practice within concrete social contexts. In response, I question the current narrow prioritization in AI ethics of moral innovation over moral preservation. Engaging also with emerging foundation models, I advocate for a moderately conservative approach to the ethics of AI that prioritizes the responsible and considered integration of AI within established social contexts and their respective normative structures.
Ruizhe Zhang, Haitao Li, Yueyue Wu et al.
In recent years, the utilization of large language models for natural language dialogue has gained momentum, leading to their widespread adoption across various domains. However, their universal competence in addressing challenges specific to specialized fields such as law remains a subject of scrutiny. The incorporation of legal ethics into these models has been overlooked by researchers. We assert that rigorous ethics evaluation is essential to ensure the effective integration of large language models in legal domains, emphasizing the need to assess both domain-specific proficiency and domain-specific ethics. To address this, we propose a novel evaluation methodology that utilizes authentic legal cases to evaluate the fundamental language abilities, specialized legal knowledge, and legal robustness of large language models (LLMs). The findings from our comprehensive evaluation contribute significantly to the academic discourse surrounding the suitability and performance of large language models in legal domains.
Muhammed Ugur, Raghavendra Pradyumna Pothukuchi, Abhishek Bhattacharjee
Brain-computer interfaces (BCIs) connect biological neurons in the brain with external systems like prosthetics and computers. They are increasingly incorporating processing capabilities to analyze and stimulate neural activity, and consequently, pose unique design challenges related to ethics, law, and policy. For the first time, this paper articulates how ethical, legal, and policy considerations can shape BCI architecture design, and how the decisions that architects make constrain or expand the ethical, legal, and policy frameworks that can be applied to them.
Edward Y. Chang
This paper explores the integration of human-like emotions and ethical considerations into Large Language Models (LLMs). We first model eight fundamental human emotions, presented as opposing pairs, and employ collaborative LLMs to reinterpret and express these emotions across a spectrum of intensity. Our focus extends to embedding a latent ethical dimension within LLMs, guided by a novel self-supervised learning algorithm with human feedback (SSHF). This approach enables LLMs to perform self-evaluations and adjustments concerning ethical guidelines, enhancing their capability to generate content that is not only emotionally resonant but also ethically aligned. The methodologies and case studies presented herein illustrate the potential of LLMs to transcend mere text and image generation, venturing into the realms of empathetic interaction and principled decision-making, thereby setting a new precedent in the development of emotionally aware and ethically conscious AI systems.
Rochelle E. Tractenberg
Artificial Intelligence (AI) is a field that utilizes computing and often, data and statistics, intensively together to solve problems or make predictions. AI has been evolving with literally unbelievable speed over the past few years, and this has led to an increase in social, cultural, industrial, scientific, and governmental concerns about the ethical development and use of AI systems worldwide. The ASA has issued a statement on ethical statistical practice and AI (ASA, 2024), which echoes similar statements from other groups. Here we discuss the support for ethical statistical practice and ethical AI that has been established in long-standing human rights law and ethical practice standards for computing and statistics. There are multiple sources of support for ethical statistical practice and ethical AI deriving from these source documents, which are critical for strengthening the operationalization of the "Statement on Ethical AI for Statistics Practitioners". These resources are explicated for interested readers to utilize to guide their development and use of AI in, and through, their statistical practice.
Sri Marfuati, Hikmah Fitriani, Mustika Weni et al.
Background: With 10 million cases around the world, pulmonary tuberculosis (TB) is classified as a highly contagious disease and mostly affects low- and middle-income countries. With the second-highest number of incident cases in West Java, Indonesia, Cirebon is a challenging city for efforts to reduce the number of TB cases in the country. Aims: This study aims to identify patients' knowledge and treatment phases, and how these two factors encourage patients to comply with their medication. Methods: This cross-sectional observational study was conducted among 91 new pulmonary tuberculosis patients at the Cirebon City Community Lung Health Centre, selected using random sampling. Data on respondent characteristics, patients' knowledge levels, treatment phases, and medication adherence were collected using a questionnaire and medical records. To assess the relationship between these variables, the collected data were analyzed using the Spearman correlation test. Ethical clearance was obtained from the Health Research Ethics Commission, and informed consent was gathered from all participants. Results: This study reveals the most up-to-date characteristics of the tuberculosis patients at the Cirebon City Community Lung Health Center: aged 15-64 years old, with treatment durations ranging from 1 to 6 months. The majority have insufficient knowledge about tuberculosis (45.1%), and 75.8% of patients adhered to their prescribed medication regimen, regardless of their knowledge level. The data indicate a significant positive correlation between knowledge level and medication adherence (p = 0.015), with 95% of patients with good knowledge adhering to treatment compared to only 34% with poor knowledge. Furthermore, there is a significant relationship between adherence and treatment duration (p = 0.002), as 85% of patients who adhered to treatment did so for more than two months.
Conclusion: The study shows that patients with better knowledge of tuberculosis are more likely to adhere to their medication, which is also associated with longer treatment durations. Given the high incidence of TB in the region, these findings suggest the need for targeted educational programs to enhance patients' understanding of TB, thereby improving adherence to treatment protocols. Received: 20 May 2024, Reviewed: 09 June 2024, Revised: 26 August 2024, Accepted: 30 August 2024.
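A Spearman rank-correlation test of the kind reported above can be run in a few lines. The data below are synthetic, chosen only to illustrate the shape of the analysis, not the study's records:

```python
# Sketch: Spearman rank correlation between an ordinal knowledge level
# and a binary adherence outcome (ties handled via average ranks).
from scipy.stats import spearmanr

# knowledge: 0 = poor, 1 = sufficient, 2 = good; adherence: 0 = no, 1 = yes
knowledge = [2, 2, 2, 1, 1, 1, 0, 0, 0, 0, 2, 1]
adherence = [1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0]

rho, p = spearmanr(knowledge, adherence)
print(round(rho, 3), round(p, 3))
```

A positive rho with a small p-value, as in the study's reported p = 0.015, indicates that higher knowledge ranks tend to accompany adherence.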
Yuhao Kang, Qianheng Zhang, Robert Roth
The rapid advancement of artificial intelligence (AI), such as the emergence of large language models including ChatGPT and DALLE 2, has brought opportunities for improving productivity but has also raised ethical concerns. This paper investigates the ethics of using artificial intelligence (AI) in cartography, with a particular focus on the generation of maps using DALLE 2. To accomplish this, we first create an open-sourced dataset that includes synthetic (AI-generated) and real-world (human-designed) maps at multiple scales with a variety of settings. We subsequently examine four potential ethical concerns that may arise from the characteristics of DALLE 2 generated maps, namely inaccuracies, misleading information, unanticipated features, and reproducibility. We then develop a deep learning-based ethical examination system that identifies such AI-generated maps. Our research emphasizes the importance of ethical considerations in the development and use of AI techniques in cartography, contributing to the growing body of work on trustworthy maps. We aim to raise public awareness of the potential risks associated with AI-generated maps and support the development of ethical guidelines for their future use.
Jakob Stenseke
Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative ethics through the lens of computational complexity. First, we introduce computational complexity for the uninitiated reader and discuss how the complexity of ethical problems can be framed within Marr's three levels of analysis. We then study a range of ethical problems based on consequentialism, deontology, and virtue ethics, with the aim of elucidating the complexity associated with the problems themselves (e.g., due to combinatorics, uncertainty, strategic dynamics), the computational methods employed (e.g., probability, logic, learning), and the available resources (e.g., time, knowledge, learning). The results indicate that most problems the normative frameworks pose lead to tractability issues in every category analyzed. Our investigation also provides several insights about the computational nature of normative ethics, including the differences between rule- and outcome-based moral strategies, and the implementation-variance with regard to moral resources. We then discuss the consequences complexity results have for the prospect of moral machines in virtue of the trade-off between optimality and efficiency. Finally, we elucidate how computational complexity can be used to inform both philosophical and cognitive-psychological research on human morality by advancing the Moral Tractability Thesis (MTT).
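The combinatorial source of intractability the paper analyzes can be made concrete: exhaustively evaluating action sequences under an outcome-based (consequentialist) criterion grows exponentially in the planning horizon. A toy sketch, with an invented action set and utility function:

```python
# Sketch: brute-force consequentialist choice over action sequences.
# With b actions per step and horizon h, exhaustive scoring costs b**h
# utility evaluations, which is intractable for realistic horizons.
from itertools import product

def best_sequence(actions, horizon, utility):
    """Enumerate all b**h sequences and return the one maximizing utility."""
    return max(product(actions, repeat=horizon), key=utility)

def utility(seq):
    # Toy utility: reward 'help' actions, penalize 'lie' actions twice as much.
    return seq.count("help") - 2 * seq.count("lie")

actions = ["help", "lie", "wait"]
print(best_sequence(actions, 3, utility))  # ('help', 'help', 'help')
print(len(actions) ** 10)  # 59049 sequences already at horizon 10
```

Rule-based (deontological) strategies can avoid this enumeration by filtering actions locally, one of the trade-offs between rule- and outcome-based strategies the paper examines.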
Kerianne L. Hobbs, Bernard Li
Designing a safe, trusted, and ethical AI may be practically impossible; however, designing AI with safe, trusted, and ethical use in mind is possible and necessary in safety- and mission-critical domains like aerospace. The terms safe, trusted, and ethical use of AI are often used interchangeably; however, a system can be safely used but not trusted or ethical, have a trusted use that is not safe or ethical, and have an ethical use that is not safe or trusted. This manuscript serves as a primer to illuminate the nuanced differences between these concepts, with a specific focus on applications of human-AI teaming in aerospace system control, where humans may be in, on, or out of the loop of decision-making.
Tsvetelina Hristova, Liam Magee, Emma Kearney
Data sharing partnerships are increasingly an imperative for research institutions and, at the same time, a challenge for established models of data governance and ethical research oversight. We analyse four cases of data partnership involving academic institutions and examine the role afforded to the research partner in negotiating the relationship between risk, value, trust and ethics. Within this terrain, far from being a restraint on financialisation, the instrumentation of ethics forms part of the wider mobilisation of infrastructure for the realisation of profit in the big data economy. Under what we term 'combinatorial data governance', academic structures for the management of research ethics are instrumentalised as organisational functions that serve to mitigate reputational damage and societal distrust. In the alternative model of 'experimental data governance', researchers propose frameworks and instruments for the rethinking of data ethics and the risks associated with it, a model that is promising but limited in its practical application.