Results for "Ethics"

Showing 20 of ~999536 results · from DOAJ, arXiv, CrossRef, Semantic Scholar

arXiv Open Access 2026
Student views in AI Ethics and Social Impact

Tudor-Dan Mihoc, Manuela-Andreea Petrescu, Emilia-Loredana Pop

An investigation, from a gender perspective, of how students view the ethical implications and societal effects of artificial intelligence is conducted, examining concepts that could strongly influence how artificial intelligence may be taught in the future. For this, we conducted a survey on a cohort of 230 second-year computer science students to reveal their opinions. The results revealed that AI, from the students' perspective, will significantly impact daily life, particularly in areas such as medicine, education, and media. Men are more aware of potential changes in computer science, autonomous driving, image and video processing, and chatbot usage, while women more often mention the impact on social media. Both men and women perceive potential threats in the same manner, with men more aware of war, AI-controlled drones, terrain recognition, and information warfare. Women seem to have a stronger tendency towards ethical considerations and helping others.

en cs.CY, cs.AI
arXiv Open Access 2026
AI in Education Beyond Learning Outcomes: Cognition, Agency, Emotion, and Ethics

Lucile Favero, Juan Antonio Pérez-Ortiz, Tanja Käser et al.

Artificial intelligence (AI) is rapidly being integrated into educational contexts, promising personalized support and increased efficiency. However, growing evidence suggests that the uncritical adoption of AI may produce unintended harms that extend beyond individual learning outcomes to affect broader societal goals. This paper examines the societal implications of AI in education through an integrative framework with four interrelated dimensions: cognition, agency, emotional well-being, and ethics. Drawing on research from education, cognitive science, psychology, and ethics, we synthesize existing evidence to show how AI-driven cognitive offloading, diminished learner agency, emotional disengagement, and surveillance-oriented practices can mutually reinforce one another. We argue that these dynamics risk undermining critical thinking, intellectual autonomy, emotional resilience, and trust, capacities that are foundational both for effective learning and for democratic participation and informed civic engagement. Moreover, AI's impact is contingent on design and governance: pedagogically aligned, ethically grounded, and human-centered AI systems can scaffold effortful reasoning, support learner agency, and preserve meaningful social interaction. By integrating fragmented strands of prior research into a unified framework, this paper advances the discourse on responsible AI in education and offers actionable implications for educators, designers, and institutions. Ultimately, the paper contends that the central challenge is not whether AI should be used in education, but how it can be designed and governed to support learning while safeguarding the social and civic purposes of education.

en cs.HC
arXiv Open Access 2026
FATe of Bots: Ethical Considerations of Social Bot Detection

Lynnette Hui Xian Ng, Ethan Pan, Michael Miller Yoder et al.

A growing suite of research illustrates the negative impact of social media bots in amplifying harmful information with widespread social implications. Social bot detection algorithms have been developed to help identify these bot agents efficiently. While such algorithms can help mitigate the harmful effects of social media bots, they operate within complex socio-technical systems that include users and organizations. As such, ethical considerations are key while developing and deploying these bot detection algorithms, especially at scales as massive as social media ecosystems. In this article, we examine the ethical implications for social bot detection systems through three pillars: training datasets, algorithm development, and the use of bot agents. We do so by surveying the training datasets of existing bot detection algorithms, evaluating existing bot detection datasets, and drawing on discussions of user experiences of people being detected as bots. This examination is grounded in the FATe framework, which examines Fairness, Accountability, and Transparency in consideration of tech ethics. We then elaborate on the challenges that researchers face in addressing ethical issues with bot detection and provide recommendations for research directions. We aim for this preliminary discussion to inspire more responsible and equitable approaches towards improving the social media bot detection landscape.

en cs.CY
DOAJ Open Access 2025
Examining the role of staff and team communication in reducing seclusion, restraint and forced tranquilisation in acute inpatient mental health settings: protocol for the Communication and Restraint Reduction (CaRR) study

Janet E Anderson, Rose McCabe, Mary Lavelle et al.

Introduction Over 100 000 service users are admitted to acute mental health wards annually, many involuntarily. Wards are under incredible pressure due to high bed occupancy rates and staff shortages. In a recent survey, over 80% of mental health nurses reported experiencing aggression and violence within their role. National and international policy dictates that mental health ward staff manage incidents of aggression and violence using communication, known as de-escalation. However, de-escalation practice is variable, and there is little empirical evidence to underpin training. As such, there is still a reliance on more restrictive practices, including seclusion and physical restraint. Aim The aim of this study is to identify the communication and organisational factors that characterise effective management of service users’ behaviour and distress in acute adult inpatient mental health wards, reducing the reliance on more restrictive practices (eg, seclusion and restraint). Methods and analysis This observational study will be conducted on mental health wards in England. It will comprise three work packages (WPs): (1) a microanalysis of communication during de-escalation incidents from Body Worn Camera footage on wards (n=64), to identify staff communication practices that lead to effective management of service users’ distress; (2) ethnographic observations of ward routine practice, alongside interviews and questionnaires with staff and service users, to examine how challenging behaviour is anticipated, planned for and responded to on wards, and staff experiences and perceptions of this process; and (3) triangulation of the findings from WPs 1 and 2 to examine the relationship between approaches to aggression management and staff communication, exploring the similarities and differences within and between wards. Ethics and dissemination Ethical approval for sites in England has been granted by the Wales Research Ethics Committee 3, REF 22/WA/0066.
Findings will be disseminated through peer-reviewed journals, scientific conferences and service user and clinical networks.

DOAJ Open Access 2025
Knowledge and Recommendations of Stakeholders Regarding Ethical Oversight of Data Science Health Research: Protocol for a Qualitative Study

Clement Adebamowo, Adeola Akintola, Oluchi C Maduka et al.

Background Data science health research (DSHR) uses novel computational methods and high-performance computing to analyze big data from conventional and nonconventional health and related sources to generate novel insights and communications. DSHR creates assets but generates ethical, legal, and social challenges. Key gaps in current ethical oversight of DSHR include blurred boundaries between research and nonresearch data use, inadequate protection of data donors, power imbalances that risk extractive research practices, algorithmic biases, and regulatory inadequacies. Nigeria, a typical low- and middle-income country with rapidly expanding DSHR, exemplifies this environment and these concerns. Objective This study will elicit answers from Nigerian DSHR stakeholders and contribute to understanding the ethical, legal, and social implications (ELSI) of DSHR and developing novel ethical oversight frameworks. Methods Between October 2024 and January 2025, we conducted Key Informant Interviews with 65 of 87 invited stakeholders. The Key Informant Interview guide comprised 11 construct-based question domains addressing awareness of policies and laws, ethical oversight processes, ELSI considerations in policy development, experiences addressing DSHR challenges, organizational and procedural frameworks, ideal oversight components, stakeholder roles, research impact on ethics and policy, regulatory influences on research practices, equity-enhancing policies, and balanced regulations. The interviews lasted 60-90 minutes and were transcribed. We analyzed the transcripts using a hybrid deductive-inductive approach. A priori codes derived from research objectives provided the analytical framework while allowing for the identification of emergent concepts. The iterative 3-level coding process involved initial code generation, evaluation, and refinement, with codes grouped into thematic families and semantic networks representing hierarchical concept relationships.
Query tools and Boolean operators were used to interrogate the codes to extract findings. Results Of 87 invited individuals, 22 (25%) were unable to participate. The 65 participants (age: mean 47.9, SD 7.9 years; 50/65, 77% male) included data science health researchers (25/65, 39%), biomedical researchers (17/65, 26%), Health Research Ethics Committee members (12/65, 19%), and policymakers (11/65, 17%). Most held doctoral degrees (38/65, 57%), were affiliated with academic institutions (45/65, 69%) and government organizations (26/65, 40%), and had received general research ethics training (50/65, 77%). However, only 12% (8/65) had received predominantly short-duration ethics-specific DSHR training, while 92% (60/65) acknowledged the need for specialized DSHR ethics education. As of January 2025, the interview transcripts have been generated and checked, with qualitative analysis scheduled for completion by March 2025 and the primary manuscripts by the end of 2025. Conclusions This study will generate stakeholder-informed recommendations for ethical oversight of DSHR that address issues relating to broad consent, ELSI, data ownership, benefit-sharing, and donor protection in resource-limited settings. Our findings will inform global DSHR and research ethics communities on the development of contextually appropriate oversight mechanisms that promote equitable partnerships, co-ownership, and tiered data governance. International Registered Report Identifier (IRRID) DERR1-10.2196/78557

Medicine, Computer applications to medicine. Medical informatics
DOAJ Open Access 2025
A Cross-sectional study on awareness and factors influencing use of Home Blood Pressure Monitoring

K.V. Phani Madhavi, I.V. Sreevaishnavi, V.V. Durga Prasad et al.

Background Home blood pressure monitoring (HBPM), whether guided by patients or clinicians, is increasingly acknowledged as an effective method for improving blood pressure control in hypertension. Compared to clinic measurements, HBPM offers more reproducible readings and better predicts cardiovascular mortality. Additional benefits include convenience, the ability to take repeated readings over time, reduced white coat effect, and enhanced patient involvement in managing their condition. Objective To assess the awareness gap regarding home blood pressure monitoring interventions among hypertensive individuals and the factors influencing it. Methods A cross-sectional study was conducted over a period of 1 year among 235 adult patients (>18 years of age) attending the field practice area of a tertiary care center who had been diagnosed with hypertension and were on regular treatment for at least three months. A pre-designed, pre-tested, pre-validated questionnaire was used. Data were collected on various parameters, including gender, education, socioeconomic status, cost of the blood pressure (BP) apparatus, awareness regarding home blood pressure monitoring (HBPM), and proficiency or lack of training in performing HBPM. The collected data were entered in Excel and analyzed using Microsoft Office 365. Categorical variables were assessed for statistical significance using the chi-square test. Permission was obtained from the Institutional Ethics Committee. Results Among the 235 participants, 18.3% had been diagnosed with hypertension for less than six months, 16.2% for six months to one year, and the majority, 65.5%, for more than one year. In terms of healthcare preference, 30% opted for government facilities, while 70% chose private ones. Regarding home blood pressure monitoring (HBPM), 107 participants were aware of it, but only 59 reported actually using it. The most common reason for not using HBPM, cited by 38.3%, was a lack of knowledge on how to operate the device.
Additionally, 28.1% felt it was unnecessary, and 8.5% identified the cost of the equipment as a barrier. Conclusion Home blood pressure monitoring (HBPM) is an effective tool for managing hypertension, offering more reliable readings than clinic measurements and better predicting cardiovascular risk. It also provides convenience, minimizes the white coat effect, allows repeated measurements, and promotes patient engagement.

arXiv Open Access 2025
SME-TEAM: Leveraging Trust and Ethics for Secure and Responsible Use of AI and LLMs in SMEs

Iqbal H. Sarker, Helge Janicke, Ahmad Mohsin et al.

Artificial Intelligence (AI) and Large Language Models (LLMs) are revolutionizing today's business practices; however, their adoption within small and medium-sized enterprises (SMEs) raises serious trust, ethical, and technical issues. In this perspective paper, we introduce a structured, multi-phased framework, "SME-TEAM" for the secure and responsible use of these technologies in SMEs. Based on a conceptual structure of four key pillars, i.e., Data, Algorithms, Human Oversight, and Model Architecture, SME-TEAM bridges theoretical ethical principles with operational practice, enhancing AI capabilities across a wide range of applications in SMEs. Ultimately, this paper provides a structured roadmap for the adoption of these emerging technologies, positioning trust and ethics as a driving force for resilience, competitiveness, and sustainable innovation within the area of business analytics and SMEs.

en cs.LG, cs.AI
arXiv Open Access 2025
A Participatory Strategy for AI Ethics in Education and Rehabilitation grounded in the Capability Approach

Valeria Cesaroni, Eleonora Pasqua, Piercosma Bisconti et al.

AI-based technologies have significant potential to enhance inclusive education and clinical-rehabilitative contexts for children with Special Educational Needs and Disabilities. AI can enhance learning experiences, empower students, and support both teachers and rehabilitators. However, their usage presents challenges that require a systemic-ecological vision, ethical considerations, and participatory research. Therefore, research and technological development must be rooted in a strong ethical-theoretical framework. The Capability Approach - a theoretical model of disability, human vulnerability, and inclusion - offers a more relevant perspective on functionality, effectiveness, and technological adequacy in inclusive learning environments. In this paper, we propose a participatory research strategy with different stakeholders through a case study on the ARTIS Project, which develops an AI-enriched interface to support children with text comprehension difficulties. Our research strategy integrates ethical, educational, clinical, and technological expertise in designing and implementing AI-based technologies for children's learning environments through focus groups and collaborative design sessions. We believe that this holistic approach to AI adoption in education can help bridge the gap between technological innovation and ethical responsibility.

en cs.CY, cs.CL
arXiv Open Access 2025
Where's the Line? A Classroom Activity on Ethical and Constructive Use of Generative AI in Physics

Zosia Krusberg

Generative AI tools like ChatGPT are rapidly reshaping how students and instructors engage with course material -- and how they think about academic integrity. This paper presents a classroom activity designed to help physics students critically examine the ethical and educational implications of using AI in coursework. Through a structured sequence of scenario analysis, boundary-setting, and reflective discussion, with optional individual policy writing, students develop the metacognitive, ethical, and collaborative capacities needed to navigate emerging technologies thoughtfully and responsibly. Grounded in research on social constructivist learning, metacognition, and ethics education, the activity positions students as co-creators of an engaged and reflective learning environment.

en physics.ed-ph
arXiv Open Access 2025
Privacy Ethics Alignment in AI: A Stakeholder-Centric Framework for Ethical AI

Ankur Barthwal, Molly Campbell, Ajay Kumar Shrestha

The increasing integration of artificial intelligence (AI) in digital ecosystems has reshaped privacy dynamics, particularly for young digital citizens navigating data-driven environments. This study explores evolving privacy concerns across three key stakeholder groups-young digital citizens, parents/educators, and AI professionals-and assesses differences in data ownership, trust, transparency, parental mediation, education, and risk-benefit perceptions. Employing a grounded theory methodology, this research synthesizes insights from key participants through structured surveys, qualitative interviews, and focus groups to identify distinct privacy expectations. Young digital citizens emphasized autonomy and digital agency, while parents and educators prioritized oversight and AI literacy. AI professionals focused on balancing ethical design with system performance. The analysis revealed significant gaps in transparency and digital literacy, underscoring the need for inclusive, stakeholder-driven privacy frameworks. Drawing on comparative thematic analysis, this study introduces the Privacy-Ethics Alignment in AI (PEA-AI) model, which conceptualizes privacy decision-making as a dynamic negotiation among stakeholders. By aligning empirical findings with governance implications, this research provides a scalable foundation for adaptive, youth-centered AI privacy governance.

en cs.CY, cs.AI
arXiv Open Access 2025
AI Safety, Alignment, and Ethics (AI SAE)

Dylan Waldner

This paper grounds ethics in evolutionary biology, viewing moral norms as adaptive mechanisms that render cooperation fitness-viable under selection pressure. Current alignment approaches add ethics post hoc, treating it as an external constraint rather than embedding it as an evolutionary strategy for cooperation. The central question is whether normative architectures can be embedded directly into AI systems to sustain human-AI cooperation (symbiosis) as capabilities scale. To address this, I propose a governance-embedding-representation pipeline linking moral representation learning to system-level design and institutional governance, treating alignment as a multi-level problem spanning cognition, optimization, and oversight. I formalize moral norm representation through the moral problem space, a learnable subspace in neural representations where cooperative norms can be encoded and causally manipulated. Using sparse autoencoders, activation steering, and causal interventions, I outline a research program for engineering moral representations and embedding them into the full semantic space, treating competing theories of morality as empirical hypotheses about representation geometry rather than philosophical positions. Governance principles leverage these learned moral representations to regulate how cooperative behaviors evolve within the AI ecosystem. Through replicator dynamics and multi-agent game theory, I model how internal representational features can shape population-level incentives by motivating the design of sanctions and subsidies structured to yield decentralized normative institutions.

en cs.CY
arXiv Open Access 2025
Practising responsibility: Ethics in NLP as a hands-on course

Malvina Nissim, Viviana Patti, Beatrice Savoldi

As Natural Language Processing (NLP) systems become more pervasive, integrating ethical considerations into NLP education has become essential. However, this presents inherent challenges in curriculum development: the field's rapid evolution from both academia and industry, and the need to foster critical thinking beyond traditional technical training. We introduce our course on Ethical Aspects in NLP and our pedagogical approach, grounded in active learning through interactive sessions, hands-on activities, and "learning by teaching" methods. Over four years, the course has been refined and adapted across different institutions, educational levels, and interdisciplinary backgrounds; it has also yielded many reusable products, both in the form of teaching materials and in the form of actual educational products aimed at diverse audiences, made by the students themselves. By sharing our approach and experience, we hope to provide inspiration for educators seeking to incorporate social impact considerations into their curricula.

en cs.CL, cs.AI
DOAJ Open Access 2024
Identifying Key Concepts of the Language of Desire and the Language of Ethics in Dialogic Literary Gatherings

Garazi López de Aguileta, Víctor Climent-Sanjuán, Adriana Aubert et al.

Given the high prevalence of gender violence among adolescents and youth, research has underscored the importance of preventing it from an early age. The literature has clarified that the prevention of gender violence requires the union of the language of desire and the language of ethics to promote egalitarian relationships as desirable. Nevertheless, there is a need for a more in-depth and extensive analysis of the key concepts that emerge in Dialogic Literary Gatherings (DLG), implemented in diverse contexts, to better understand their potential as a space for the prevention of gender violence. To contribute to filling this gap, this study explores the key concepts of desire and ethics that adolescents in DLG implemented in 5 Learning Communities have in common. To that end, 26 observations in 9 different DLG groups with students aged 10-15 and 45 interviews with students and teachers were conducted. Results show one key concept of desire and ethics in these DLG: many students reject violence and peer pressure. Implications of these findings for the prevention and overcoming of gender violence are discussed.

Theory and practice of education
DOAJ Open Access 2024
Decision-making in case of an unintended pregnancy: an overview of what is known about this complex process

Eline W. Dalmijn, Merel A. Visse, Inge van Nistelrooij

Introduction: Unintended pregnancies are a worldwide health issue, faced each year by one in 16 people, and experienced in various ways. In this study we focus on unintended pregnancies that are, at some point, experienced as unwanted because they present the pregnant person with a decision to continue or terminate the pregnancy. The aim of this study is to learn more about the decision-making process, as there is a lack of insights into how people with an unintended pregnancy reach a decision. This is caused by 1) assumptions of rationality in reproductive autonomy and decision-making, 2) the focus on pregnancy outcomes, e.g. decision-certainty and reasons and, 3) the focus on abortion in existing research, excluding 40% of people with an unintended pregnancy who continue the pregnancy. Method: We conducted a narrative literature review to examine what is known about the decision-making process and aim to provide a deeper understanding of how persons with unintended pregnancy come to a decision. Results: Our analysis demonstrates that the decision-making process regarding unintended pregnancy consists of navigating entangled layers, rather than weighing separable elements or factors. The layers that are navigated are both internal and external to the person, in which a ‘sense of knowing’ is essential in the decision-making process. Conclusion: The layers involved and complexity of the decision-making regarding unintended pregnancy show that a rational decision-making frame is inadequate and a more holistic frame is needed to capture this dynamic and personal experience.

Gynecology and obstetrics
DOAJ Open Access 2024
Ukraine’s Integration into the EU Digital Single Market

Lola Yuliya Yu., Mykhailenko Daria H., Bolotna Oksana V. et al.

The article is aimed at studying the model of Ukraine’s integration into the Digital Single Market, analyzing the achievements and challenges of digitalization of Business-State-Community. The article examines the process of Ukraine’s integration into the Digital Single Market of the European Union, which is a strategically important stage for strengthening the position of the national economy in the context of global digital transformation. This process opens up new prospects for Ukraine, in particular, access to modern technologies, the development of electronic services and increased competitiveness in the global market. At the same time, integration into the EU digital space requires solving complex tasks, including infrastructure renewal, introduction of innovations, and adaptation of national legislation to European standards. The main benefits of this process are analyzed, such as improving access to digital markets, facilitating bilateral trade and stimulating the development of the IT sector. Particular attention is paid to the role of e-commerce as a key driver of economic growth. The article considers the opportunities provided by e-commerce to Ukrainian enterprises to enter the EU markets, as well as the positive impact of this segment on consumers due to the increase in the range of services and goods. Among the important aspects of integration, the issues of cybersecurity, which are becoming more and more relevant in the face of modern challenges, are considered. Ukraine, which is already facing persistent cyberattacks, needs to increase the level of protection of critical infrastructure, State databases and personal information of citizens. Furthermore, integration into the EU digital market includes the introduction of digital identity, which is a prerequisite for ensuring secure access to digital services. 
The authors underline the importance of harmonization of legislation for compliance with European standards in such key areas as personal data protection, e-commerce, digital taxation and regulation of the telecommunications market. The relevant changes are aimed at creating a favorable environment for businesses and citizens, stimulating investment and improving interaction with partners in the EU. Despite the noticeable progress in digitalization, Ukraine faces a number of challenges that hinder full integration into the Digital Single Market. In particular, these are cyber threats related to the ongoing military aggression, as well as digital ethics issues that require the development of clear rules and standards for the responsible use of technology. The problem of the digital divide between different regions of the country, which affects the availability of digital services for citizens and businesses, is considered separately. An important aspect is the support from the European Union, which includes financial, technical and expert assistance in implementing reforms and rebuilding infrastructure destroyed by the war. Without this support, it will be difficult for Ukraine to achieve rapid integration into the EU’s Digital Single Space. The article also emphasizes that success in this process depends on the coordination of actions of the government, business and international partners. Ukraine’s integration into the EU Digital Single Market is not only a strategic task, but also an important step towards ensuring economic stability, technological development and integration into the European community on the principles of transparency, innovation and security.

Finance, Economics as a science
arXiv Open Access 2024
Can We Trust AI Agents? A Case Study of an LLM-Based Multi-Agent System for Ethical AI

José Antonio Siqueira de Cerqueira, Mamia Agbese, Rebekah Rousi et al.

AI-based systems, including Large Language Models (LLM), impact millions by supporting diverse tasks but face issues like misinformation, bias, and misuse. AI ethics is crucial as new technologies and concerns emerge, but objective, practical guidance remains debated. This study examines the use of LLMs for AI ethics in practice, assessing how LLM trustworthiness-enhancing techniques affect software development in this context. Using the Design Science Research (DSR) method, we identify techniques for LLM trustworthiness: multi-agents, distinct roles, structured communication, and multiple rounds of debate. We design a multi-agent prototype LLM-MAS, where agents engage in structured discussions on real-world AI ethics issues from the AI Incident Database. We evaluate the prototype across three case scenarios using thematic analysis, hierarchical clustering, comparative (baseline) studies, and running source code. The system generates approximately 2,000 lines of code per case, compared to only 80 lines in baseline trials. Discussions reveal terms like bias detection, transparency, accountability, user consent, GDPR compliance, fairness evaluation, and EU AI Act compliance, showing the prototype's ability to generate extensive source code and documentation addressing often overlooked AI ethics issues. However, practical challenges in source code integration and dependency management may limit its use by practitioners.

en cs.CY, cs.AI
DOAJ Open Access 2023
Protocol for the Tallaght University Hospital Institute for Memory and Cognition-Biobank for Research in Ageing and Neurodegeneration

Eimear Connolly, Shane Lyons, Cliona O’Farrelly et al.

Introduction Alzheimer’s disease and other dementias affect >50 million individuals globally and are characterised by broad clinical and biological heterogeneity. Cohort and biobank studies have played a critical role in advancing the understanding of disease pathophysiology and in identifying novel diagnostic and treatment approaches. However, further discovery and validation cohorts are required to clarify the real-world utility of new biomarkers, facilitate research into the development of novel therapies and advance our understanding of the clinical heterogeneity and pathobiology of neurodegenerative diseases. Methods and analysis The Tallaght University Hospital Institute for Memory and Cognition Biobank for Research in Ageing and Neurodegeneration (TIMC-BRAiN) will recruit 1000 individuals over 5 years. Participants, who are undergoing diagnostic workup in the TIMC Memory Assessment and Support Service (TIMC-MASS), will opt to donate clinical data and biological samples to a biobank. All participants will complete a detailed clinical, neuropsychological and dementia severity assessment (including Addenbrooke’s Cognitive Assessment, Repeatable Battery for Assessment of Neuropsychological Status, Clinical Dementia Rating Scale). Participants undergoing venepuncture/lumbar puncture as part of the clinical workup will be offered the opportunity to donate additional blood (serum/plasma/whole blood) and cerebrospinal fluid samples for longitudinal storage in the TIMC-BRAiN biobank. Participants are followed at 18-month intervals for repeat clinical and cognitive assessments. Anonymised clinical data and biological samples will be stored securely in a central repository and used to facilitate future studies concerned with advancing the diagnosis and treatment of neurodegenerative diseases. Ethics and dissemination Ethical approval has been granted by the St. James’s Hospital/Tallaght University Hospital Joint Research Ethics Committee (Project ID: 2159), which operates in compliance with the European Communities (Clinical Trials on Medicinal Products for Human Use) Regulations 2004 and ICH Good Clinical Practice Guidelines. Findings using TIMC-BRAiN will be published in a timely and open-access fashion.

arXiv Open Access 2023
Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects

Conrad Sanderson, David Douglas, Qinghua Lu

Many sets of ethics principles for responsible AI have been proposed to allay concerns about misuse and abuse of AI/ML systems. The underlying aspects of such sets of principles include privacy, accuracy, fairness, robustness, explainability, and transparency. However, there are potential tensions between these aspects that pose difficulties for AI/ML developers seeking to follow these principles. For example, increasing the accuracy of an AI/ML system may reduce its explainability. As part of the ongoing effort to operationalise the principles into practice, in this work we compile and discuss a catalogue of 10 notable tensions, trade-offs and other interactions between the underlying aspects. We primarily focus on two-sided interactions, drawing on support spread across a diverse literature. This catalogue can be helpful in raising awareness of the possible interactions between aspects of ethics principles, as well as facilitating well-supported judgements by the designers and developers of AI/ML systems.

en cs.CY, cs.AI
arXiv Open Access 2023
Applying Standards to Advance Upstream & Downstream Ethics in Large Language Models

Jose Berengueres, Marybeth Sandell

This paper explores how AI-owners can develop safeguards for AI-generated content by drawing from established codes of conduct and ethical standards in other content-creation industries. It delves into the current state of ethical awareness on Large Language Models (LLMs). By dissecting the mechanism of content generation by LLMs, four key areas (upstream/downstream and at user prompt/answer), where safeguards could be effectively applied, are identified. A comparative analysis of these four areas follows and includes an evaluation of the existing ethical safeguards in terms of cost, effectiveness, and alignment with established industry practices. The paper's key argument is that existing IT-related ethical codes, while adequate for traditional IT engineering, are inadequate for the challenges posed by LLM-based content generation. Drawing from established practices within journalism, we propose potential standards for businesses involved in distributing and selling LLM-generated content. Finally, potential conflicts of interest between dataset curation at upstream and ethical benchmarking downstream are highlighted to underscore the need for a broader evaluation beyond mere output. This study prompts a nuanced conversation around ethical implications in this rapidly evolving field of content generation.

en cs.CY, cs.AI

Page 24 of 49977