The climate crisis requires responses that integrate scientific, ethical, social, and technological perspectives. Artificial intelligence (AI) has emerged as a powerful tool in climate modelling, environmental monitoring, and energy optimisation, yet its growing use also raises critical environmental, ethical, legal, and social questions. This contribution examines the ambivalent role of AI in the ecological crisis, addressing both its promises and its risks. On the one hand, AI supports improvements in climate forecasting, renewable energy management, and real-time detection of environmental degradation. On the other hand, the energy demands of data centres, resource-intensive hardware production, algorithmic bias, corporate concentration of power, and technocratic decision-making reveal contradictions that challenge its sustainability. The discussion explores these issues through interdisciplinary lenses, including environmental ethics, philosophy of technology, and legal governance, and concludes with recommendations for socially just, ecologically responsible, and democratically accountable uses of AI. Rather than assuming AI as an inherently sustainable solution, this analysis argues that its contribution to climate action depends fundamentally on the values, institutions, and power structures that shape its development.
Functional testing is essential for verifying that the business logic of mobile applications aligns with user requirements, serving as a primary methodology for quality assurance in software development. Despite its importance, functional testing remains heavily dependent on manual effort due to two core challenges. First, acquiring and reusing complex business logic from unstructured requirements remains difficult, which hinders the understanding of specific functionalities. Second, a significant semantic gap arises when adapting business logic to diverse GUI environments, which hinders the generation of test cases for specific mobile applications. To address these challenges, we propose LogiDroid, a two-stage approach that generates individual functional test cases by extracting business logic and adapting it to target applications. First, in the Knowledge Retrieval and Fusion stage, we construct a dataset to retrieve relevant cases and extract business logic for the target functionality. Second, in the Context-Aware Test Generation stage, LogiDroid jointly analyzes the extracted business logic and the real-time GUI environment to generate functional test cases. This design allows LogiDroid to accurately understand application semantics and use domain expertise to generate complete test cases with verification assertions. We assess the effectiveness of LogiDroid on two widely used datasets covering 28 real-world applications and 190 functional requirements. Experimental results show that LogiDroid successfully tested 40% of functional requirements on the FrUITeR dataset (an improvement of over 48% compared to state-of-the-art approaches) and 65% on the Lin dataset (an improvement of over 55%). These results demonstrate the effectiveness of LogiDroid in functional test generation.
Liyang Zhao, Olurotimi Seton, Himadeep Reddy Reddivari
et al.
The sales process involves sales functions converting leads or opportunities into customers and selling more products to existing customers. Optimizing the sales process is thus key to the success of any B2B business. In this work, we introduce a principled approach to sales optimization and business AI, namely Causal Predictive Optimization and Generation, which comprises three layers: 1) a prediction layer with causal ML; 2) an optimization layer with constraint optimization and contextual bandits; 3) a serving layer with generative AI and a feedback loop for system enhancement. We detail the implementation and deployment of the system at LinkedIn, showcasing significant wins over legacy systems and sharing learnings and insights broadly applicable to this field.
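The optimization layer's contextual-bandit component can be illustrated with a minimal epsilon-greedy sketch. The action names, scoring function, and epsilon value below are illustrative assumptions for exposition, not details of the deployed LinkedIn system.

```python
import random

# Minimal epsilon-greedy contextual bandit for the optimization layer:
# with probability eps we explore a random action, otherwise we exploit
# the action with the highest predicted score for the given context.
# The score function stands in for the causal-ML prediction layer.
def choose_action(context, actions, score, eps=0.1, rng=random):
    if rng.random() < eps:
        return rng.choice(actions)                             # explore
    return max(actions, key=lambda a: score(context, a))       # exploit

actions = ["upsell", "cross_sell", "renewal_outreach"]
score = lambda ctx, a: ctx.get(a, 0.0)   # hypothetical predicted lift per action
ctx = {"upsell": 0.2, "cross_sell": 0.7, "renewal_outreach": 0.4}
picked = choose_action(ctx, actions, score, eps=0.0)  # eps=0 -> pure exploitation
```

With `eps=0.0` the choice is deterministic, which makes the exploit path easy to inspect; a production system would instead tune the exploration rate from logged feedback.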
Jorge Cisneros, Timothy Wojan, Matthew Williams
et al.
Public-use microdata samples (PUMS) from the United States (US) Census Bureau on individuals have been available for decades. However, large increases in computing power and the greater availability of Big Data have dramatically increased the probability of re-identifying anonymized data, potentially violating the pledge of confidentiality given to survey respondents. Data science tools can be used to produce synthetic data that preserve critical moments of the empirical data but do not contain the records of any existing individual respondent or business. Developing public-use firm data from surveys presents unique challenges different from demographic data, because there is a lack of anonymity and certain industries can be easily identified in each geographic area. This paper briefly describes a machine learning model used to construct a synthetic PUMS based on the Annual Business Survey (ABS) and discusses various quality metrics. Although the ABS PUMS is currently being refined and results are confidential, we present two synthetic PUMS developed for the 2007 Survey of Business Owners, similar to the ABS business data. Econometric replication of a high impact analysis published in Small Business Economics demonstrates the verisimilitude of the synthetic data to the true data and motivates discussion of possible ABS use cases.
The increasing and widespread use of BPMN business processes, often also embodying DMN tables, requires tools and methodologies to verify their correctness. However, the most commonly used frameworks for building BPMN+DMN models only allow designers to detect syntactic errors, thus ignoring semantic (behavioural) faults. This forces business process designers to manually run single executions of their BPMN+DMN processes using proprietary tools in order to detect failures. Furthermore, how proprietary tools translate a BPMN+DMN process into a computer simulation is left unspecified. In this paper, we advance this state of the art by designing a tool, named BDTransTest, providing: i) a translation from a BPMN+DMN process B to a Java program P; ii) the synthesis and execution of a testing plan for B, which may require the business designer to disambiguate some input domains; iii) an analysis of the coverage achieved by the testing plan in terms of the nodes and edges of B. Finally, we provide an experimental evaluation of our methodology on BPMN+DMN processes from the literature.
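Coverage over the nodes and edges of a process B can be sketched as follows, assuming the process is abstracted as a directed graph and each test execution yields the sequence of nodes it visits. The graph and function names are illustrative, not BDTransTest's actual interface.

```python
# Node and edge coverage for a process graph: each run is a visited node
# sequence; consecutive pairs in a run are the traversed edges.
def coverage(nodes, edges, runs):
    visited_nodes, visited_edges = set(), set()
    for run in runs:
        visited_nodes.update(run)
        visited_edges.update(zip(run, run[1:]))
    return (len(visited_nodes & set(nodes)) / len(nodes),
            len(visited_edges & set(edges)) / len(edges))

# Hypothetical approval process with one exclusive gateway ("decide").
nodes = ["start", "decide", "approve", "reject", "end"]
edges = [("start", "decide"), ("decide", "approve"),
         ("decide", "reject"), ("approve", "end"), ("reject", "end")]
runs = [["start", "decide", "approve", "end"]]   # one test execution
node_cov, edge_cov = coverage(nodes, edges, runs)
```

A single run through the "approve" branch leaves the "reject" branch uncovered, which is exactly the signal a testing plan would use to demand a second execution.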
Research objectives and hypothesis/research questions
The paper focuses on four key areas influencing organizational resilience: hybrid and remote work models, business continuity management (BCM) strategies, applications of artificial intelligence (AI), and investments in cybersecurity. The aim of the analysis is to examine how these factors support organizations in adapting to changing conditions by enhancing flexibility, crisis resilience, and management effectiveness.
Research methods
The research method applied was theoretical analysis based on a review of relevant literature and industry reports. The findings indicate that flexible work models promote organizational adaptability in volatile conditions, while effectively implemented BCM strategies enhance operational resilience. AI improves adaptability and operational performance but requires ongoing risk monitoring related to its implementation. Cybersecurity, as a crucial component of organizational stability, is gaining significance in the face of increasing digital threats.
Main results
The conclusions drawn from the analysis emphasize the need to integrate traditional management models with modern technologies, as well as the importance of continued research into trust in AI-based solutions, their transparency, and ethical implications. Organizations should strive to build structures that are resistant to disruption by developing robust crisis management strategies and investing in intelligent systems that support decision-making and protect informational assets.
Implications for theory and practice
The publication highlights the growing importance of organizational flexibility and adaptability. Effective management requires integrating traditional models with modern technologies. AI not only automates processes but also influences decision-making, which necessitates further research on algorithmic trust, transparency, and AI ethics. Organizations should invest in advanced risk analysis tools and automated data recovery systems. Strategic planning and the implementation of contingency procedures enable rapid responses to crises and minimize their impact.
Management. Industrial management, Management information systems
Artificial intelligence (AI) has emerged as a powerful tool that has the potential to impact society on multiple levels. Increased adoption and employment of AI in new product development and business processes have led to heightened interest and optimism on the one hand, and increasing fears of potential negative societal consequences on the other. The ethics of AI has subsequently become a topical issue for academics, industry players, health practitioners and regulators, who have a goal and responsibility to protect the public and limit widening inequality. Despite the publication of numerous AI ethical frameworks, guidelines and regulations, none have specifically focused on nutrition and behaviour change. Advances in technology, including AI and machine learning, have opened up novel ways to deliver personalization to guide individuals towards healthier behaviours or to manage their conditions. This perspective synthesizes the key topics that intersect in nutrition and behaviour change where AI is leveraged to provide personalized advice. We propose a 7-pillar framework to guide the development of ethical and transparent AI solutions to build consumer and practitioner trust.
Laura Minkova, Jessica López Espejel, Taki Eddine Toufik Djaidja
et al.
As businesses increasingly rely on automation to streamline operations, the limitations of Robotic Process Automation (RPA) have become apparent, particularly its dependence on expert knowledge and inability to handle complex decision-making tasks. Recent advancements in Artificial Intelligence (AI), particularly Generative AI (GenAI) and Large Language Models (LLMs), have paved the way for Intelligent Automation (IA), which integrates cognitive capabilities to overcome the shortcomings of RPA. This paper introduces Text2Workflow, a novel method that automatically generates workflows from natural language user requests. Unlike traditional automation approaches, Text2Workflow offers a generalized solution for automating any business process, translating user inputs into a sequence of executable steps represented in JavaScript Object Notation (JSON) format. Leveraging the decision-making and instruction-following capabilities of LLMs, this method provides a scalable, adaptable framework that enables users to visualize and execute workflows with minimal manual intervention. This research outlines the Text2Workflow methodology and its broader implications for automating complex business processes.
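As a rough illustration of the output format, a workflow can be serialized as an ordered list of executable steps in JSON. The schema below (`action`, `target`, `params` fields) is a hypothetical sketch for exposition, not Text2Workflow's actual representation.

```python
import json

# Assemble a workflow dict from (action, target, params) tuples. Field names
# are illustrative assumptions about what an executable step might carry.
def build_workflow(name, steps):
    return {
        "workflow": name,
        "steps": [
            {"id": i, "action": a, "target": t, "params": p}
            for i, (a, t, p) in enumerate(steps, start=1)
        ],
    }

wf = build_workflow(
    "invoice_approval",
    [
        ("extract_fields", "inbox/invoice.pdf", {"fields": ["amount", "vendor"]}),
        ("branch_if", "amount > 1000", {"then": "manager_review", "else": "auto_approve"}),
        ("notify", "requester", {"channel": "email"}),
    ],
)
serialized = json.dumps(wf, indent=2)   # the JSON a downstream executor would consume
```

Representing steps as plain JSON keeps the generated workflow both machine-executable and easy for users to visualize and edit before running, which matches the paper's emphasis on minimal manual intervention.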
Quentin Romero Lauro, Jeffrey P. Bigham, Yasmine Kotturi
Small business owners stand to benefit from generative AI technologies due to limited resources, yet they must navigate increasing legal and ethical risks. In this paper, we interview 11 entrepreneurs and support personnel to investigate existing practices of how entrepreneurs integrate generative AI technologies into their business workflows. Specifically, we build on scholarship in HCI which emphasizes the role of small, offline networks in supporting entrepreneurs' technology maintenance. We detail how entrepreneurs resourcefully leveraged their local networks to discover new use cases of generative AI (e.g., by sharing accounts), assuage heightened techno-anxieties (e.g., by recruiting trusted confidants), overcome barriers to sustained use (e.g., by receiving wrap-around support), and establish boundaries of use. Further, we suggest how generative AI platforms may be redesigned to better support entrepreneurs, such as by taking into account the benefits and tensions of use in a social context.
Virginia Grande, Natalie Kiesler, Maria Andreina Francisco R
The advent of Large Language Models (LLMs) started a serious discussion among educators on how LLMs would affect, e.g., curricula, assessments, and students' competencies. Generative AI and LLMs also raised ethical questions and concerns for computing educators and professionals. This experience report presents an assignment within a course on professional competencies, including some related to ethics, that computing master's students need in their careers. For the assignment, student groups discussed the ethical process by Lennerfors et al. by analyzing a case: a fictional researcher considers whether to attend the real CHI 2024 conference in Hawaii. The tasks were (1) to participate in in-class discussions on the case, (2) to use an LLM of their choice as a discussion partner for said case, and (3) to document both discussions, reflecting on their use of the LLM. Students reported positive experiences with the LLM as a way to increase their knowledge and understanding, although some identified limitations. The LLM provided a wider set of options for action in the studied case, including unfeasible ones. The LLM would not select a course of action, so students had to choose themselves, which they saw as coherent. From the educators' perspective, there is a need for more instruction for students using LLMs: some students did not perceive the tools as such but rather as an authoritative knowledge base. Therefore, this work has implications for educators considering the use of LLMs as discussion partners or tools to practice critical thinking, especially in computing ethics education.
Jan von der Assen, Jasmin Hochuli, Thomas Grübl
et al.
Threat modeling has been successfully applied to model technical threats within information systems. However, a lack of methods focusing on non-technical assets and their representation can be observed in theory and practice. Prompted by calls from industry practitioners, this paper explored how to model insider threats based on business process models. Hence, this study developed a novel insider threat knowledge base and a threat modeling application that leverages Business Process Model and Notation (BPMN). Finally, to understand how well the theoretical knowledge and its prototype translate into practice, the study conducted a real-world case study of an IT provider's business process and an experimental deployment for a real voting process. The results indicate that, even without annotation, BPMN diagrams can be leveraged to automatically identify insider threats in an organization.
This research develops advanced methodologies for Large Language Models (LLMs) to better manage linguistic behaviors related to emotions and ethics. We introduce DIKE, an adversarial framework that enhances the LLMs' ability to internalize and reflect global human values, adapting to varied cultural contexts to promote transparency and trust among users. The methodology involves detailed modeling of emotions, classification of linguistic behaviors, and implementation of ethical guardrails. Our innovative approaches include mapping emotions and behaviors using self-supervised learning techniques, refining these guardrails through adversarial reviews, and systematically adjusting outputs to ensure ethical alignment. This framework establishes a robust foundation for AI systems to operate with ethical integrity and cultural sensitivity, paving the way for more responsible and context-aware AI interactions.
Obesity is a widespread problem in the United States, particularly affecting Black communities. It is a public health problem and a long-term, cumulative issue of economic and social justice and inequality for this demographic group. Thus, the key to solving it is to eliminate persistent structural root causes. According to the Centers for Disease Control and Prevention (CDC), in 2022 the prevalence of obesity among African American adults was 49.5%, with Black women having the highest prevalence compared to other racial and ethnic groups. The main purpose of this study is to provide a comprehensive literature review that examines the multifaceted factors contributing to obesity among African American women, and to systematise the determinants related to the economic well-being of the community, social factors, cultural patterns of lifestyle in the community, etc. The analysis revealed a clear inverse relationship between income and obesity among African Americans, with this trend being more pronounced among women than men and differing across age groups in Black communities. The economic determinants of obesity in Black women are related to the fact that low-income households have limited access to affordable and nutritious food and are regularly exposed to stress related to financial difficulties, with so-called “food swamps” and “food deserts” being common in low-income areas. The social determinants of obesity are related to the fact that Black women face higher levels of racism and sexism than other demographic groups, and unequal social conditions cause structural disparities in health, education and employment. Psycho-social and cultural determinants (cultural norms of body image, social influencers, religion, social networks and family upbringing, etc.)
play a key role in the emergence of the problem under study: Black women often model their eating and physical activity habits on cultural traditions, and those who struggle with overweight may face stigma, social isolation and discrimination. The article makes recommendations for reducing obesity among Black women, which primarily relate to the development of culturally sensitive nutrition education programs, community-based health promotion programs, community-centered food policy advocacy, technology-based health platforms, public-private partnerships for affordable healthy food retail, etc.
Black women in America continue to face formidable barriers in the pursuit of leadership roles, encountering both racial and gender-based systemic obstacles that deny them the benefit of the doubt, limit their access to influential networks, and restrict opportunities for career advancement to top executive leadership positions. Unlike their white counterparts, Black women seldom experience the privilege of being seen as “neutral” or “qualified” without added scrutiny, making their journey to leadership fraught with challenges. Resilience, the capacity to adapt and persist in adversity, is crucial in navigating these challenges. Kamala Harris’s 2024 presidential campaign exemplifies these barriers. Despite her qualifications, Harris lost to a white male opponent with a criminal record, highlighting systemic biases. Supported by the “Win With Black Women” initiative, Harris’s campaign relied on social capital and resilience to counteract entrenched prejudice. However, the limitations of social capital alone underscore the need for systemic change. This paper uses social capital theory and the newly proposed benefit of the doubt (BoD) theory to examine how Black women leverage networks and resilience to navigate barriers, highlighting the structural privileges that disadvantage them. A conceptual framework model that highlights the pathways through which social capital, BoD denial, and organizational evaluation practices intersect, impacting leadership opportunities for Black women, is included to visually demonstrate these dynamics and the interaction of key constructs in the leadership trajectory of Black women.
José Miguel Biscaia Fernández, María del Rocío González-Soltero, Carlos Julio Biscaia Fernández
et al.
Generative artificial intelligence (GAI), with applications such as ChatGPT, has become an interesting tool in the field of biomedical education. Among its strengths are the simplification and improvement of the teaching-learning process, supporting information retrieval, the creation and updating of content, the simulation of clinical scenarios, personalized attention, and immediate assessment. However, such a novel and disruptive tool carries many risks. For this reason, and with the aim of assessing these threats, we applied UNESCO's Ethical Principles and the European Union's Regulation on Artificial Intelligence, concluding that the main risk of these applications derives from their potential discriminatory capacity, especially with regard to the evaluation and categorization of students, as well as from the emergence of various technical biases, from a lack of equity, control, transparency and privacy, and from plagiarism and false authorship.
Medical philosophy. Medical ethics, Business ethics
Administrative office employees spend much time confined to their workspaces as they work to provide the critical support required for the overall performance of their organizations. As a result, their comfort should be given priority by their organizations, be they private or public. This article investigated administrative employees’ perceptions of the physical environment comfort of their offices in a public university. Different aspects of the physical environment, such as furniture, noise, office temperature, lighting, and space, were examined as variables influencing their performance. A systematization of the literature on arranging a comfortable physical environment in private organizations demonstrated that employee productivity depends significantly on the physical environment in which duties are performed. However, there is a scarcity of research conducted in public organizations, especially in the higher education sector. The methodological tool of the research was quantitative analysis, in which a questionnaire was used to collect data from 81 administrative staff of a public university with several campuses in South Africa. The findings indicated that many respondents generally perceived a comfortable physical environment as necessary for increased performance. These perceptions were, however, not always matched by what transpired at their offices. For instance, 63% of respondents viewed comfortable furniture as critical for the performance of their duties. Nevertheless, only 55% of respondents agreed that their university furniture was comfortable, with 24% disagreeing and 21% taking a neutral stance. The split in perceptions makes it imperative for the university to attend to areas of weakness and inequality in providing physical environment resources. A future study could examine which offices are more comfortable than others in university contexts.
In addition, a promising direction for future research would be to reconcile employees' perceptions of the comfort of the office physical environment with the results of direct observation, which would enrich the findings.
Rina Arum Prastyanti, Eiad Yafi, Kelik Wardiono
et al.
Pop-up advertisements have become prevalent on websites. When users click on a banner, they are taken to a separate window; banner and pop-up advertisements contain attractive audio-visual and animated graphics. This intrusive advertising is not explicitly regulated by Indonesia's current legislation, including the Electronic Transaction and Information Law 11/2008 (ITE Law). It is also exempt from the Indonesian Pariwara Ethics, the guidelines for advertising ethics and procedure in Indonesia. This study aimed to revisit consumer protection with regard to pop-up advertisements in Indonesia, with two main discussions. First, it discussed online consumers' perceptions of pop-up advertisements and the classification of their responses. Second, it enquired to what extent legal and ethical protection exists for online consumers in Indonesia. Using empirical legal research, this study concluded that the ITE Law prohibits anyone from spreading online information with content that violates morality or concerns gambling, content that is often found in pop-up advertisements. Through the lens of business ethics, pop-up advertisements are a new medium and should not be installed in such a way as to interfere with the freedom of internet users, given that pop-up advertisements do not reflect the ethics of honesty, trust, and advice in business.
Sven Weinzierl, Sebastian Dunzer, Sandra Zilker
et al.
Predictive business process monitoring (PBPM) techniques predict future process behaviour based on historical event log data to improve operational business processes. Concerning next activity prediction, recent PBPM techniques use state-of-the-art deep neural networks (DNNs) to learn predictive models for producing more accurate predictions in running process instances. Even though organisations measure process performance by key performance indicators (KPIs), the DNN's learning procedure is not directly affected by them. Therefore, the resulting next most likely activity predictions can be less beneficial in practice. Prescriptive business process monitoring (PrBPM) approaches assess predictions regarding their impact on the process performance (typically measured by KPIs) to prevent undesired process activities by raising alarms or recommending actions. However, none of these approaches recommends actual process activities as actions that are optimised according to a given KPI. We present a PrBPM technique that transforms the next most likely activities into the next best actions regarding a given KPI. Thereby, our technique uses business process simulation to ensure the control-flow conformance of the recommended actions. Based on our evaluation with two real-life event logs, we show that our technique's next best actions can outperform next activity predictions regarding the optimisation of a KPI and the distance from the actual process instances.
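The core idea of turning next most likely activities into a next best action can be sketched as KPI-aware re-ranking over control-flow-conformant candidates. The stand-in predictor and conformance check below are hypothetical placeholders, not the paper's simulation-based models.

```python
# Among candidate next activities, keep only those the conformance check
# (in the paper, business process simulation) allows, then pick the one
# with the best predicted KPI -- here, lowest predicted remaining time.
def next_best_action(candidates, is_conformant, predict_kpi):
    feasible = [a for a in candidates if is_conformant(a)]
    return min(feasible, key=predict_kpi) if feasible else None

# Hypothetical predicted remaining times (hours) per candidate activity.
kpi = {"escalate": 12.0, "approve": 3.5, "archive": 7.0}
allowed = {"approve", "archive"}   # e.g. simulation rules out "escalate" here
best = next_best_action(list(kpi), lambda a: a in allowed, lambda a: kpi[a])
```

The conformance filter is what separates this from plain prediction: the KPI-optimal activity is only recommended if the process model could actually execute it at this point.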