J. Horvat
Results for "Ethics"
Showing 20 of ~998,462 results · from DOAJ, arXiv, CrossRef, Semantic Scholar
G. Healy
Erich Groos
A. Frank
Casey Fiesler, Nicholas Proferes
Social computing systems such as Twitter present new research sites that have provided billions of data points to researchers. However, the availability of public social media data has also presented ethical challenges. As the research community works to create ethical norms, we should be considering users’ concerns as well. With this in mind, we report on an exploratory survey of Twitter users’ perceptions of the use of tweets in research. Within our survey sample, few users were previously aware that their public tweets could be used by researchers, and the majority felt that researchers should not be able to use tweets without consent. However, we find that these attitudes are highly contextual, depending on factors such as how the research is conducted or disseminated, who is conducting it, and what the study is about. The findings of this study point to potential best practices for researchers conducting observation and analysis of public data.
Sanjay Kumar Singh, Jin Chen, M. Giudice et al.
In an era of increased stakeholder pressure for sustainable environmental management practices in the workplace, organizations should adopt and implement environmental ethics for seamless synergy amongst the needs of the business, society, and the planet. Our study used resource-based view (RBV) and dynamic capabilities (DC) theoretical lenses to examine hypotheses derived from the extant literature on the linkages amongst environmental ethics, environmental training, environmental performance, and competitive advantage. Using a survey questionnaire, we employed structural equation modeling (SEM) on 364 valid responses from managers to examine the hypotheses. The findings of our study will stir up researchers' curiosity to unravel the human side of environmental management and steer future research in significant directions. Results suggest that environmental ethics influences environmental training, environmental performance, and competitive advantage. We also found that environmental training for employees mediates the influence of environmental ethics on a firm's environmental performance and competitive advantage. The findings of the study imply that an organization's approach towards environmental ethical practices in the workplace should be proactive rather than reactive, with the intention of creating and sustaining synergy amongst the triad of profits, society, and the environment. Environmental training should not be a one-off event but a continuous process to outpace competitors and improve environmental performance in the organization.
Andreas Tsamados, Nikita Aggarwal, Josh Cowls et al.
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. Big Data Soc 3(2), 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
L. Floridi, M. Taddeo
CUSP aims to utilize Big Data to help study and understand urban environments. As a part of this effort, we are planning to build an inclusive data warehouse at CUSP. Our vision for this data warehouse is to hold large quantities of data from multiple sources, including personal (most likely anonymized) data about individuals. But obtaining, housing, and protecting these data come with many challenges and questions. We hope to answer some of these questions in this working session and converge on a set of principles that will guide our data practices moving forward. Personal data is a new asset class touching all aspects of society. It is potentially as valuable a resource in the 21st century as heavily traded physical goods like oil have been in the past hundred years. However, throughout history, economic value creation has been linked to the ability to move and trade physical goods. Similarly, "data needs to move to create value. Data sitting alone on a server is like money hidden under a mattress. It is safe and secure, but largely stagnant and underutilized." But personal data lacks the trading rules and policy frameworks that exist for widely traded physical assets. As a result, there is little trust among the key stakeholders (individuals, governments, and the private sector), which could undermine its long-term potential. In response to surveys, individuals generally say that they want enhanced control over their personal data, increased transparency on how it is used, and some kind of fair value in return. However, their actions are often quite different. While many say they care deeply about privacy, they share information quite widely online. They often sign up for services not knowing how their data will be protected or whether it will be shared. They rarely read the privacy policies of the organizations providing these services, which are usually written in hard-to-comprehend legal language.
Companies, on the other hand, view the data they have captured or created about individuals as theirs. Data is an asset on which they have invested significant resources. They want to leverage the data to create business value, better understand the behavior of their customers and help themselves become more productive. They struggle with how to best protect all the data they now have access to, as well as trying to figure out the different regulations pertaining to its use. Governments are trying to leverage all this data …
K. Siau, Weiyu Wang
Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth, social development, as well as human well-being and safety improvement. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with AI. Even though the concept of “machine ethics” was proposed around 2006, AI ethics is still in the infancy stage. AI ethics is the field related to the study of ethical issues in AI. To address AI ethics, one needs to consider the ethics of AI and how to build ethical AI. Ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is an AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., Ethics of AI). With the appropriate ethics of AI, one can then build AI that exhibits ethical behavior (i.e., Ethical AI). This paper will discuss AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve or at least attenuate these ethical and moral issues with AI? What are some of the necessary features and characteristics of an ethical AI? How can one adhere to the ethics of AI to build ethical AI?
V. C. Müller
Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used by humans. This includes issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9). Finally, the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). - For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies and finally, what policy consequences may be drawn.
M. McCormick
The broad question asked under the heading “Ethics of Belief” is: What ought one believe? An ethics of belief attempts to uncover the norms that guide belief formation and maintenance. The dominant view among contemporary philosophers is that evidential norms do; I should always follow my evidence and only believe when the evidence is sufficient to support my belief. This view is called “evidentialism,” although, as we shall see, this term gets applied to a number of views that can be distinguished from one another. Evidentialists often cite David Hume (1999: 110) as their historic exemplar who said “a wise man … proportions his beliefs to the evidence” and thus argued against the reasonableness of believing in miracles (see Hume, David; Wisdom). Those who argue that there can be good practical reasons for believing, independent of one's evidence, can turn for inspiration to Blaise Pascal (1966: 124), who argued that the best reason to form a belief in God was a practical one, namely the possibility of avoiding eternal suffering (see Reasons; Reasons for Action, Morality and; Faith). Keywords: ethics; James, William; philosophy; Williams, Bernard; duty and obligation; knowledge; rationality; responsibility
C. Burr, M. Taddeo, L. Floridi
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
J. Morley, Anat Elhalal, F. Garcia et al.
As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the ‘what’ and the ‘how’ of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can be potentially overcome by providing theoretical grounding of a concept that has been termed ‘Ethics as a Service.’
Casey Fiesler, Natalie Garrett, Nathan Beard
As issues of technology ethics become more pervasive in the media and public discussions, there is increasing interest in what role ethics should play in computing education. Not only are there more standalone ethics classes being offered at universities, but calls for greater integration of ethics across computer science curriculum mean that a growing number of CS instructors may be including ethics as part of their courses. To both describe current trends in computing ethics coursework and to provide guidance for further ethics inclusion in computing, we present an in-depth qualitative analysis of 115 syllabi from university technology ethics courses. Our analysis contributes a snapshot of the content and goals of tech ethics classes, and recommendations for how these might be integrated across a computing curriculum.
I. Nourbakhsh
Henrik Andersson, A. Svensson, Catharina Frank et al.
Ethical problems in everyday healthcare work emerge for many reasons and constitute threats to ethical values. If these threats are not managed appropriately, there is a risk that the patient may be inflicted with moral harm or injury, while healthcare professionals are at risk of feeling moral distress. Therefore, it is essential to support the learning and development of ethical competencies among healthcare professionals and students. The aim of this study was to explore the available literature regarding ethics education that promotes ethical competence learning for healthcare professionals and students undergoing training in healthcare professions. In this integrative systematic review, literature was searched within the PubMed, CINAHL, and PsycInfo databases using the search terms ‘health personnel’, ‘students’, ‘ethics’, ‘moral’, ‘simulation’, and ‘teaching’. In total, 40 articles were selected for review. These articles included professionals from various healthcare professions and students who trained in these professions as subjects. The articles described participation in various forms of ethics education. Data were extracted and synthesised using thematic analysis. The review identified the need for support to make ethical competence learning possible, which in the long run was considered to promote the ability to manage ethical problems. Ethical competence learning was found to be helpful to healthcare professionals and students in drawing attention to ethical problems that they were not previously aware of. Dealing with ethical problems is primarily about reasoning about what is right and in the patient’s best interests, along with making decisions about what needs to be done in a specific situation. The review identified different designs and course content for ethics education to support ethical competence learning. 
The findings could be used to develop healthcare professionals’ and students’ readiness and capabilities to recognise as well as to respond appropriately to ethically problematic work situations.
S. McLennan, A. Fiske, Daniel W. Tigard et al.
The emergence of ethical concerns surrounding artificial intelligence (AI) has led to an explosion of high-level ethical principles being published by a wide range of public and private organizations. However, there is a need to consider how AI developers can be practically assisted to anticipate, identify and address ethical issues regarding AI technologies. This is particularly important in the development of AI intended for healthcare settings, where applications will often interact directly with patients in various states of vulnerability. In this paper, we propose that an ‘embedded ethics’ approach, in which ethicists and developers together address ethical issues via an iterative and continuous process from the outset of development, could be an effective means of integrating robust ethical considerations into the practical development of medical AI.
Jacqui Ayling, Adriane P. Chapman
Bias, unfairness and lack of transparency and accountability in Artificial Intelligence (AI) systems, and the potential for the misuse of predictive models for decision-making have raised concerns about the ethical impact and unintended consequences of new technologies for society across every sector where data-driven innovation is taking place. This paper reviews the landscape of suggested ethical frameworks with a focus on those which go beyond high-level statements of principles and offer practical tools for application of these principles in the production and deployment of systems. This work provides an assessment of these practical frameworks with the lens of known best practices for impact assessment and audit of technology. We review other historical uses of risk assessments and audits and create a typology that allows us to compare current AI ethics tools to Best Practices found in previous methodologies from technology, environment, privacy, finance and engineering. We analyse current AI ethics tools and their support for diverse stakeholders and components of the AI development and deployment lifecycle as well as the types of tools used to facilitate use. From this, we identify gaps in current AI ethics tools in auditing and risk assessment that should be considered going forward.
Jakob Mökander, L. Floridi
A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing should take the form of a continuous and constructive process, approach ethical alignment from a system perspective, and be aligned with public policies and incentives for ethically desirable behaviour. Third, we identify and discuss the constraints associated with ethics-based auditing. Only by understanding and accounting for these constraints can ethics-based auditing facilitate ethical alignment of AI, while enabling society to reap the full economic and social benefits of automation.
Margaret McConnell, Alya Alsager, Plyce Fuchu et al.
Abstract Background Preterm birth is a leading cause of childhood mortality and developmental disabilities, with persistent socioeconomic disparities in incidence and outcomes. Maternal presence during prolonged neonatal intensive care unit (NICU) hospitalization is critical for preterm infant health, enabling mothers to provide breast milk, directly breastfeed, and engage in skin-to-skin care—all of which promote infant physiological stability and neurodevelopment. Low-income mothers face significant barriers to visiting the NICU and participating in caregiving due to financial burdens and the psychological impact of financial stress. This randomized controlled trial aims to evaluate the effectiveness of financial transfers in promoting maternal caregiving behaviors that directly impact preterm infant health outcomes during NICU hospitalization. Methods We will conduct a two-arm, single-blinded randomized controlled trial with 420 Medicaid-eligible mothers of infants born between 24 weeks 0 days to 34 weeks 1 day gestation across four Level 3 NICUs in Georgia and Massachusetts. Mothers in the intervention arm will receive standard of care enhanced with weekly financial transfers and will be informed that these funds are intended to help them spend more time with their infants in the NICU. All participants will be provided with a hospital-grade breast pump and educational materials on the benefits of breast milk and skin-to-skin care. Participants will complete surveys during their infant’s hospitalization and following discharge, capturing outcomes related to maternal mental and physical health, caregiving behaviors, cognitive function, financial and socioeconomic factors, infant health and growth, and perceptions of NICU care quality. Primary outcomes are the provision of breast milk and engagement in skin-to-skin care. 
Secondary outcomes include infant growth and health outcomes, NICU visitation, financial and socioeconomic hardship, maternal physical and mental health measures, cognitive function, and perception of NICU care quality. Discussion This study will provide evidence of the impact of financial transfers on maternal caregiving behaviors in the NICU, addressing critical gaps in our understanding of how financial stress affects low-income mothers. Findings may inform health policy, particularly regarding Medicaid coverage of non-medical services, and contribute to understanding how to address disparities in preterm infant care. Trial registration The trial was prospectively registered with the American Economic Association Trial Registry, the primary registry for academic economists conducting policy trials, on 16 April 2024 (AEARCTR-0013256). It was also registered on ClinicalTrials.gov (NCT06362798) on 10 April 2024.
Page 3 of 49,924