Ethics review is a foundational mechanism of modern research governance, yet contemporary systems face increasing strain as ethical risks arise as structural consequences of large-scale, interdisciplinary scientific practice. The demand for consistent and defensible decisions under heterogeneous risk profiles exposes limitations in institutional review capacity rather than in the legitimacy of ethics oversight. Recent advances in large language models (LLMs) offer new opportunities to support ethics review, but their direct application remains limited by insufficient ethical reasoning capability, weak integration with regulatory structures, and strict privacy constraints on authentic review materials. In this work, we introduce Mirror, an agentic framework for AI-assisted ethical review that integrates ethical reasoning, structured rule interpretation, and multi-agent deliberation within a unified architecture. At its core is EthicsLLM, a foundation model fine-tuned on EthicsQA, a specialized dataset of 41K question-chain-of-thought-answer triples distilled from authoritative ethics and regulatory corpora. EthicsLLM provides detailed normative and regulatory understanding, enabling Mirror to operate in two complementary modes. Mirror-ER (Expedited Review) automates expedited review through an executable rule base that supports efficient and transparent compliance checks for minimal-risk studies. Mirror-CR (Committee Review) simulates full-board deliberation through coordinated interactions among expert agents, an ethics secretary agent, and a principal investigator agent, producing structured, committee-level assessments across ten ethical dimensions. Empirical evaluations demonstrate that Mirror significantly improves the quality, consistency, and professionalism of ethics assessments compared with strong generalist LLMs.
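As a hedged illustration of what an executable rule base for expedited-review screening can look like, the sketch below is a toy example in the spirit of Mirror-ER; the rule names, protocol fields, and thresholds are my own assumptions, not the paper's actual rules or schema.

```python
# Toy executable rule base for expedited-review compliance checks.
# All rule names and Protocol fields are illustrative assumptions,
# not Mirror's actual implementation.
from dataclasses import dataclass

@dataclass
class Protocol:
    risk_level: str        # "minimal" or "greater-than-minimal"
    informed_consent: bool
    involves_minors: bool

RULES = [
    ("R1: only minimal-risk studies qualify", lambda p: p.risk_level == "minimal"),
    ("R2: informed consent must be documented", lambda p: p.informed_consent),
    ("R3: protocols involving minors require full-board review", lambda p: not p.involves_minors),
]

def expedited_check(protocol):
    """Return the names of violated rules; an empty list means the
    protocol passes the expedited-review screen."""
    return [name for name, rule in RULES if not rule(protocol)]
```

Encoding each rule as a named predicate keeps the check transparent: the output lists exactly which rules failed, rather than returning an opaque pass/fail verdict.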
Svetlana Sitnicka, Muslum Mursalov, Hamdulla Mammadov, et al.
The growing deployment of artificial intelligence (AI) in banking raises critical questions about whether national-level readiness, defined by ethics, institutions, infrastructure, and governance (EIIG), translates into measurable financial performance gains. This article provides empirical evidence on the link between government AI readiness and banking sector profitability. Using an unbalanced panel dataset of 136 countries covering 2020–2024, the study integrates Return on Assets (ROA) from the IMF with the Government AI Readiness Index from Oxford Insights, which embeds EIIG dimensions of ethical frameworks, institutional quality, infrastructural robustness, and governance capacity. Data preprocessing involved applying Yeo–Johnson transformations to address non-normal distributions, and panel econometric models were estimated using both fixed and random effects, with the Hausman test guiding model selection. The results indicate that stronger national EIIG readiness has a significant positive impact on bank profitability. The fixed effects model indicates that a one-unit increase in the transformed AI readiness index is associated with a 0.524 increase in transformed ROA (p < 0.001). In contrast, the random effects specification produced a negative coefficient (β = –0.126, p < 0.01). The Hausman test (χ² = 42.98, p < 0.001) confirmed fixed effects as the consistent estimator. Robust covariance estimators (clustered by country, clustered by year, and Driscoll–Kraay) further confirmed the stability of the coefficients, which remained consistently positive and significant. Country-specific fixed effects highlight structural heterogeneity: advanced economies such as Germany (α = –4.14) and the United Kingdom (α = –4.15) exhibit structurally lower profitability, while emerging economies, including Malawi (α = +0.72), Ghana (α = –0.59), and Mozambique (α = –0.24) align more closely with or exceed global averages. 
These findings underscore that ethical safeguards, institutional capacities, digital infrastructures, and governance mechanisms are not peripheral but central in enabling AI readiness to deliver sustainable financial performance.
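The estimation strategy described above (a Yeo–Johnson transformation, then a within/fixed-effects estimator contrasted with a specification that ignores country effects) can be sketched on synthetic data. This is an illustrative reconstruction under assumed data-generating parameters, not the study's code, data, or coefficients.

```python
import numpy as np
from scipy.stats import yeojohnson

rng = np.random.default_rng(0)
n_countries, n_years = 30, 5
country = np.repeat(np.arange(n_countries), n_years)

alpha = rng.normal(0.0, 1.0, n_countries)                 # country fixed effects
x = rng.normal(0.0, 1.0, country.size) + alpha[country]   # "readiness", correlated with alpha
y = 0.5 * x + alpha[country] + rng.normal(0.0, 0.1, country.size)  # "ROA"

# Yeo-Johnson handles zero and negative values, unlike Box-Cox
y_t, fitted_lambda = yeojohnson(y)

def demean(v, groups):
    """Subtract group means: the 'within' transformation behind fixed effects."""
    means = np.bincount(groups, weights=v) / np.bincount(groups)
    return v - means[groups]

xd, yd = demean(x, country), demean(y, country)
beta_fe = (xd @ yd) / (xd @ xd)        # within (fixed-effects) slope, near 0.5

xc, yc = x - x.mean(), y - y.mean()
beta_pooled = (xc @ yc) / (xc @ xc)    # pooled OLS, biased upward here
```

Because the regressor is correlated with the country effects, the pooled slope is inflated while the within estimator recovers the true coefficient; this divergence is the same pattern a Hausman test would flag when choosing fixed over random effects.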
Alayt Issak, Uttkarsh Narayan, Ramya Srinivasan, et al.
Ethical theories and Generative AI (GenAI) models are dynamic concepts subject to continuous evolution. This paper investigates the visualization of ethics through a subset of GenAI models. We expand on the emerging field of Visual Ethics, using art as a form of critical inquiry and the metaphor of a kaleidoscope to invoke moral imagination. Through formative interviews with 10 ethics experts, we first establish a foundation of ethical theories. Our analysis reveals five families of ethical theories, which we then transform into images using the text-to-image (T2I) GenAI model. The resulting imagery, curated as Kaleidoscope Gallery and evaluated by the same experts, revealed eight themes that highlight how morality, society, and learned associations are central to ethical theories. We discuss implications for critically examining T2I models and present cautions and considerations. This work contributes to examining ethical theories as foundational knowledge that interrogates GenAI models as socio-technical systems.
Ensuring ethical behavior in Artificial Intelligence (AI) systems amidst their increasing ubiquity and influence is a major concern worldwide. Formal methods offer a promising approach for specifying and verifying the ethical behavior of AI systems. Contributing to this goal, this paper proposes a formalization based on deontic logic to define and evaluate the ethical behavior of AI systems, focusing on system-level specifications. It introduces axioms and theorems to capture ethical requirements related to fairness and explainability. The formalization incorporates temporal operators to reason about the ethical behavior of AI systems over time. The authors evaluate the effectiveness of this formalization by assessing the ethics of the real-world COMPAS and loan prediction AI systems. Various ethical properties of the COMPAS and loan prediction systems are encoded as deontic logical formulas, allowing an automated theorem prover to verify whether these systems satisfy the defined properties. The formal verification reveals that both systems fail to fulfill certain key ethical properties related to fairness and non-discrimination, demonstrating the effectiveness of the proposed formalization in identifying potential ethical issues in real-world AI applications.
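As a hedged illustration of what such a system-level specification can look like (my own notation, not necessarily the paper's exact formulas), a fairness obligation combining a deontic obligation operator O with the temporal "always" operator G might be written as:

```latex
% O = deontic obligation, G = temporal "always";
% similar(s, s') = similarly situated applicants, who must always be
% treated alike by the approval decision.
\mathbf{G}\,\mathbf{O}\Big( \forall s, s'.\;
  \mathrm{similar}(s, s') \rightarrow
  \big(\mathrm{approve}(s) \leftrightarrow \mathrm{approve}(s')\big) \Big)
```

A theorem prover can then attempt to derive this formula from an encoding of the system's decision behavior; failure to derive it is the kind of fairness violation the paper reports for COMPAS and the loan predictor.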
The shift towards pluralism in global data ethics acknowledges the importance of including perspectives from the Global Majority to develop responsible data science practices that mitigate systemic harms in the current data science ecosystem. Sub-Saharan African (SSA) practitioners, in particular, are disseminating progressive data ethics principles and best practices for identifying and navigating anti-blackness and data colonialism. To center SSA voices in the global data ethics discourse, we present a framework for African data ethics informed by the thematic analysis of an interdisciplinary corpus of 50 documents. Our framework features six major principles: 1) Challenge Power Asymmetries, 2) Assert Data Self-Determination, 3) Invest in Local Data Institutions & Infrastructures, 4) Utilize Communalist Practices, 5) Center Communities on the Margins, and 6) Uphold Common Good. We compare our framework to seven particularist data ethics frameworks to find similar conceptual coverage but diverging interpretations of shared values. Finally, we discuss how African data ethics demonstrates the operational value of data ethics frameworks. Our framework highlights Sub-Saharan Africa as a pivotal site of responsible data science by promoting the practice of communalism, self-determination, and cultural preservation.
We introduce NAEL (Non-Anthropocentric Ethical Logic), a novel ethical framework for artificial agents grounded in active inference and symbolic reasoning. Departing from conventional, human-centred approaches to AI ethics, NAEL formalizes ethical behaviour as an emergent property of intelligent systems minimizing global expected free energy in dynamic, multi-agent environments. We propose a neuro-symbolic architecture to allow agents to evaluate the ethical consequences of their actions in uncertain settings. The proposed system addresses the limitations of existing ethical models by allowing agents to develop context-sensitive, adaptive, and relational ethical behaviour without presupposing anthropomorphic moral intuitions. A case study involving ethical resource distribution illustrates NAEL's dynamic balancing of self-preservation, epistemic learning, and collective welfare.
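For context, the quantity active-inference agents minimize is expected free energy. In the standard single-agent form (NAEL's global, multi-agent variant would generalize this; the formula below is the textbook definition, not NAEL's specific objective):

```latex
% Expected free energy of a policy \pi, summed over future time steps \tau:
G(\pi) = \sum_{\tau} \mathbb{E}_{q(o_\tau, s_\tau \mid \pi)}
  \big[ \ln q(s_\tau \mid \pi) - \ln p(o_\tau, s_\tau) \big]
```

The two terms balance epistemic value (reducing uncertainty about hidden states) against pragmatic value (realizing preferred outcomes), which is the lever NAEL uses to trade off self-preservation, learning, and collective welfare.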
Large language models (LLMs) demonstrate significant potential in advancing medical applications, yet their capabilities in addressing medical ethics challenges remain underexplored. This paper introduces MedEthicEval, a novel benchmark designed to systematically evaluate LLMs in the domain of medical ethics. Our framework encompasses two key components: knowledge, assessing the models' grasp of medical ethics principles, and application, focusing on their ability to apply these principles across diverse scenarios. To support this benchmark, we consulted with medical ethics researchers and developed three datasets addressing distinct ethical challenges: blatant violations of medical ethics, priority dilemmas with clear inclinations, and equilibrium dilemmas without obvious resolutions. MedEthicEval serves as a critical tool for understanding LLMs' ethical reasoning in healthcare, paving the way for their responsible and effective use in medical contexts.
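The three dataset types suggest a natural record layout for benchmark items. The sketch below is a hypothetical schema of my own devising, since the abstract does not specify field names; the key structural point is that equilibrium dilemmas carry no gold answer.

```python
# Hypothetical record layout for a MedEthicEval-style item; field names
# are illustrative assumptions, not the benchmark's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EthicsItem:
    category: str             # "violation", "priority_dilemma", or "equilibrium_dilemma"
    scenario: str             # clinical vignette presented to the model
    question: str             # the ethics question about the scenario
    reference: Optional[str]  # gold answer; None for equilibrium dilemmas

def has_gold_answer(item: EthicsItem) -> bool:
    # Equilibrium dilemmas lack an obvious resolution, so they would be
    # scored on reasoning quality rather than exact-answer match.
    return item.reference is not None
```

Separating items this way lets an evaluation harness route violation and priority-dilemma items to answer-matching metrics while sending equilibrium dilemmas to a rubric- or judge-based scorer.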
Background: Given the popularity of digital marketing in business today, every hospital can start utilizing it by establishing a social media account. Instagram is a social media platform that focuses on photographs. The AISAS (Attention, Interest, Search, Action, and Share) model can be used to assess the efficacy of marketing communications. Nevertheless, legal and ethical concerns remain. Therefore, a question emerges: how can health advertisements be effective while following ethical guidelines?
Aims: The purpose of this research is to analyze the content of healthcare advertisements on Instagram.
Methods: This study uses the quantitative descriptive content analysis method. The samples are Instagram advertisements for health services, identified using the hashtags #dokterjogja, #klinikjogja, #klinikyogyakarta, and #rumahsakitjogja. Following a convenience sampling approach, the researcher scrolled through Instagram and stopped at random points. The data were assessed by two coders using a checklist to ensure objectivity. The checklist contains three indicators, namely the AIA (Attention, Interest, Action) indicators, together with further indicators obtained from the Regulation of the Minister of Health of the Republic of Indonesia (PERMENKES) No. 1787 of 2010.
Results: The highest score on the AIA (Attention, Interest, Action) indicator is 12. A total of 34 advertisements (coder 1) and 84 advertisements (coder 2) were found to violate the Regulation of the Minister of Health (PERMENKES) No. 1787/2010.
Conclusion: This study finds that effective advertising is almost certain to violate the regulation. An educational health information advertisement that introduces the services provided is a good way to promote healthcare providers while remaining ethical.
Keywords: advertising, AISAS, ethics, health, social media
Computer applications to medicine. Medical informatics, Public aspects of medicine
Rahima Akther, Md. Mosharraf Hossain, Md. Kamrozzaman, et al.
Integrating new teaching technologies in education has transformed the traditional classroom environment, offering innovative methods for teaching and learning. With the rapid advancement of digital tools and platforms, educational institutions are increasingly adopting these technologies to enhance instructional practices, engage students, and improve learning outcomes. This research therefore investigated higher secondary teachers’ behavioral intention to adopt new teaching technologies in the context of professional standards and educational leadership. The study adopted a descriptive research design and a quantitative research approach. A standardized questionnaire and a web-based purposive sampling method were used to collect 350 responses from higher secondary teachers in Bangladesh. The data were assessed, and the hypotheses were tested, using partial least squares structural equation modeling (PLS-SEM). The PLS-SEM analysis identified a notable link among perceived ease of use, enjoyment, usefulness, and behavioral intention in the context of professional standards and educational leadership. This study is highly significant for educational institutions and policymakers aiming to enhance teacher job satisfaction and teaching quality. Educational technology should feature an interface that is easy to use, engages the user’s interest, and fulfills its intended function efficiently. Furthermore, it is imperative to tailor training programs to aid teachers’ assimilation of the technology. School administrators must enact professional development efforts, encourage the effective use of technology, establish support networks, and provide adaptable solutions to meet the unique needs of instructors. These measures can potentially increase the rate of acceptance of, and satisfaction with, technology in educational institutions.
The Special Issue defines ‘housing disruptors’ as the emerging ideologies, practices, and logics capable of changing the housing system. In this paper, I argue that digital engagement technologies were a housing disruptor, combining an ethic of care with technological scale to reimagine planning democracy and, ultimately, the delivery of equitable housing. First, I outline the care ethics of digital engagement and connect it to a lineage of planning theory that values deliberative participation and agonistic urban politics. I then interrogate the meaning of scale in digital engagement and how it contributes to urban democracy and justice issues. However, the structural limitations that practitioners faced put into question how far a scale logic from the technological and business world, which sought to streamline and grow, could be applied to a planning system (which is complex) and to solutions to the housing crisis (even more complex). My concluding remarks suggest that digital engagement was symbolic of changing value principles in planning, reflecting a commitment to a fairer and more equitable planning system; but how successful it was (or has been) in providing an alternative planning structure remains uncertain.
The 21st century has arrived, and with it the announcement of a “next wave”: Artificial Intelligence (AI). It is urgent that we broaden the debate on the limits of science, with an emphasis on creating bioethical mechanisms to govern the technological practices now approaching. After all, what are the possible impacts of AI on future generations? How can we prevent socially underserved people from being made even more vulnerable by the use of AI? This study, a critical literature review, aims to reflect on the potential of AI to build a world with greater justice and social peace. It concludes that bioethical studies of AI remain insufficient in light of principles such as human dignity, human rights, and fundamental freedoms. Meanwhile, contemporary life forges liquid subjectivities which, much like increasingly “humanized” machines, exist in function of virtual realities, to the detriment of the concrete demands and problems of collectivities.
Medical philosophy. Medical ethics, Business ethics
Mathematics has become inescapable in modern, digitized societies: there is hardly any area of life left that isn't affected by it, and we as mathematicians play a central role in this. Our actions affect what others, in particular our students, decide to do with mathematics, and how mathematics affects the world, for better or worse. In return, the study of ethics in mathematics (EiM) has become increasingly important, even though it is still unknown to many. This exposition tries to change that, by motivating ethics in mathematics as an interesting, tractable, non-trivial, well-defined and good research area for mathematicians to consider.
Industry actors in the United States have gained extensive influence in conversations about the regulation of general-purpose artificial intelligence (AI) systems. Although industry participation is an important part of the policy process, it can also cause regulatory capture, whereby industry co-opts regulatory regimes to prioritize private over public welfare. Capture of AI policy by AI developers and deployers could hinder such regulatory goals as ensuring the safety, fairness, beneficence, transparency, or innovation of general-purpose AI systems. In this paper, we first introduce different models of regulatory capture from the social science literature. We then present results from interviews with 17 AI policy experts on what policy outcomes could compose regulatory capture in US AI policy, which AI industry actors are influencing the policy process, and whether and how AI industry actors attempt to achieve outcomes of regulatory capture. Experts were primarily concerned with capture leading to a lack of AI regulation, weak regulation, or regulation that over-emphasizes certain policy goals over others. Experts most commonly identified agenda-setting (15 of 17 interviews), advocacy (13), academic capture (10), information management (9), cultural capture through status (7), and media capture (7) as channels for industry influence. To mitigate these particular forms of industry influence, we recommend systemic changes in developing technical expertise in government and civil society, independent funding streams for the AI ecosystem, increased transparency and ethics requirements, greater civil society access to policy, and various procedural safeguards.
In the past few years, calls for integrating ethics modules in engineering curricula have multiplied. Despite this positive trend, a number of issues with these embedded programs remain. First, learning goals are underspecified. A second limitation is the conflation of different dimensions under the same banner, in particular confusion between ethics curricula geared towards the ethics of individual conduct and curricula geared towards ethics at the societal level. In this article, we propose a tripartite framework to overcome these difficulties. Our framework analytically decomposes an ethics module into three dimensions. First, there is the ethical dimension, which pertains to the learning goals. Second, there is the moral dimension, which addresses the moral relevance of engineers’ conduct. Finally, there is the political dimension, which scales up issues of moral relevance to the civic level. All in all, our framework has two advantages. First, it provides analytic clarity, i.e., it enables course instructors to locate ethical dilemmas in either the moral or the political realm and to draw on the tools and resources of moral and political philosophy. Second, it depicts a comprehensive ethical training, enabling students both to reason about moral issues in the abstract and to socially contextualize potential solutions.
Nsovo Nyeleti Mayimele, Patrick Hulisani Demana, Mothobi Godfrey Keele
The manufacturing sector of the pharmaceutical industry has faced criticism for disparities in access to pharmaceuticals, especially within the context of past incidents and the COVID-19 pandemic. Balancing profitability with the public responsibility to produce affordable, safe and effective medicines is challenging. The World Health Organisation (WHO) recognises the significant role pharmacists play in discovering, manufacturing and dispensing medicines. Pharmacists are responsible for safeguarding pharmaceuticals at all levels of care and wherever medicines are used. The research aimed to assess the involvement of pharmacists in the strategic leadership of Multinational Pharmaceutical Companies (MPCs) operating in South Africa. The study assessed the presence of pharmacists, recognised as custodians of medicines, in the strategic leadership of pharmaceutical companies operating in South Africa but headquartered globally. A desktop review was done to assess the company profiles, including revenue, size, number of employees and professional backgrounds of the persons in strategic leadership, at both board and executive levels. The pharmaceutical companies were headquartered in eleven countries across Asia (3), Africa (1), North America (1), and Europe (6). On average, these companies operated in 86.6 countries (SD ±46.2). The strategic leadership roles within MPCs comprised individuals with backgrounds in commerce, sciences, and engineering. Predominantly, professionals with backgrounds in commerce held significant representation in both board membership and executive leadership within these companies. Notably, only 3.2% (33 out of 1023) of leaders possessed a pharmacy qualification, with a mere 27% (9 out of 33) being female. This was the least represented professional background among the strategic leaders, and representation was further skewed by gender.
The pharmacists more likely to hold strategic positions were predominantly male, had additional qualifications, and were situated in specific countries like India and South Africa.
Pharmaceutical companies are essential in producing medicines to address global healthcare needs, functioning as healthcare service providers. Strategic leaders in these companies guide the manufacturing sites' strategic goals of the companies. The study's outcomes revealed a restricted presence of pharmacists in leadership roles despite their typical responsibility for manufacturing sites. These pharmacists were often found to have limited authority and were excluded from pivotal decision-making processes, resulting in significant implications for patient welfare.
Marialena Bevilacqua, Nicholas Berente, Heather Domin, et al.
We propose a Holistic Return on Ethics (HROE) framework for understanding the return on organizational investments in artificial intelligence (AI) ethics efforts. This framework is useful for organizations that wish to quantify the return for their investment decisions. The framework identifies the direct economic returns of such investments, the indirect paths to return through intangibles associated with organizational reputation, and real options associated with capabilities. The holistic framework ultimately provides organizations with the competency to employ and justify AI ethics investments.
Justification. This article discusses the processes and factors of influence that determine the direction of improving the entrepreneurial culture in Russia.
The purpose of the article. To study the composition of factors influencing the process of improving entrepreneurial culture.
Materials and methods. The article provides a comparative analysis of external and internal factors influencing the components of the institution of entrepreneurial culture. The article uses methods for studying, generalizing and comparative analysis of the information received.
The results of the study. The author considered and analyzed the culture of doing business and the objective and subjective factors influencing its improvement.
Conclusions. The success of a business is determined by the image of the organization, the trust of consumers, and the degree of influence within the business community. The traditional image of a leader as an authoritarian manager is not viable in the modern world. The interaction between employer and employee affects the culture of entrepreneurship within the organization, and the processes of globalization exert a worldwide influence. Today, a large number of parameters determine the success of an entrepreneur: not only financial criteria, but also organizational, technical, and social ones, which together create a culture of entrepreneurship. Business entities, when interacting with partners from different countries, develop a new, unified business culture.
As we grant artificial intelligence increasing power and independence in contexts like healthcare, policing, and driving, AI faces moral dilemmas but lacks the tools to solve them. Warnings from regulators, philosophers, and computer scientists about the dangers of unethical artificial intelligence have spurred interest in automated ethics, i.e., the development of machines that can perform ethical reasoning. However, prior work in automated ethics rarely engages with philosophical literature. Philosophers have spent centuries debating moral dilemmas, so automated ethics will be most nuanced, consistent, and reliable when it draws on that literature. In this paper, I present an implementation of automated Kantian ethics that is faithful to the Kantian philosophical tradition. I formalize Kant's categorical imperative in Dyadic Deontic Logic, implement this formalization in the Isabelle theorem prover, and develop a testing framework to evaluate how well my implementation coheres with expected properties of Kantian ethics. My system is an early step towards philosophically mature ethical AI agents, and because it is grounded in the philosophical literature it can make nuanced judgements in complex ethical dilemmas. Because I use an interactive theorem prover, my system's judgements are explainable.
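Dyadic Deontic Logic expresses conditional obligation as O{A | C}, read "A is obligatory given context C". A simplified rendering of the core idea behind such a formalization, paraphrasing Kant's Formula of Universal Law rather than quoting the author's exact axioms: a maxim M that cannot consistently be willed as a universal law is prohibited in its circumstances:

```latex
% O\{A \mid C\}: A is obligatory in context C (dyadic deontic operator).
% Paraphrase of the Formula of Universal Law, not the paper's exact axiom.
\neg\,\mathrm{universalizable}(M) \;\rightarrow\;
  O\{\neg\,\mathrm{act}(M) \mid \mathrm{circumstances}(M)\}
```

Encoding the imperative this way lets an interactive theorem prover both derive verdicts about concrete maxims and exhibit the proof behind each verdict, which is what makes the system's judgements explainable.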