This article applies the insights of social practice theory to the study of proenvironmental behaviour change through an ethnographic case study (nine months of participant observation and 38 semi-structured interviews) of a behaviour change initiative — Environment Champions — that occurred in a workplace. In contrast to conventional, individualistic and rationalist approaches to behaviour change, social practice theory de-centres individuals from analyses, and turns attention instead towards the social and collective organization of practices — broad cultural entities that shape individuals’ perceptions, interpretations and actions within the world. By considering the planning and delivery of the Environment Champions initiative, the article suggests that practice theory provides a more holistic and grounded perspective on behaviour change processes as they occur in situ. In so doing, it offers up a wide range of mundane footholds for behavioural change, over and above individuals’ attitudes or values. At the same time, it reveals the profound difficulties encountered in attempts to challenge and change practices, difficulties that extend far beyond the removal of contextual ‘barriers’ to change and instead implicate the organization of normal everyday life. The article concludes by considering the benefits and shortcomings of a practice-based approach, emphasizing a need for it to develop a greater understanding of the role of social interactions and power relations in the grounded performance of practices.
This paper argues that the two leading AGI firms -- OpenAI and Anthropic -- construct sociotechnical imaginaries through a structurally consistent rhetorical strategy, despite meaningful differences in execution. Drawing on Jasanoff's (2015) framework of sociotechnical imaginaries, the paper analyzes two essays published in late 2024: Sam Altman's "The Intelligence Age" and Dario Amodei's "Machines of Loving Grace." Close comparative reading identifies four shared rhetorical operations: the self-exemption move, which disavows prophetic authority while exercising it; teleological naturalization, which embeds AGI's arrival in narratives of historical inevitability; qualified acknowledgment, which absorbs concessions to risk into an optimistic frame; and implicit indispensability, which positions each firm as central to the imagined future without naming it as a commercial actor. That two competing institutions with different cultures, risk philosophies, and leaders with notably different public personae converge on the same rhetorical architecture suggests the imaginary reflects not only firm-level strategy but the institutional position these firms occupy. The paper extends the sociotechnical imaginaries framework from nation-states to private firms at the frontier of transformative technology development, identifies the discursive mechanism through which corporate authority over technological futures is projected and stabilized, and demonstrates that this mechanism is at minimum structural rather than idiosyncratic. The findings raise the question of what institutional arrangements would make that authority contestable from outside the firms that produce it.
Alexander Dickerson, Cesare Robotti, Giulio Rossetti
Corporate bond factor research faces a replication crisis. The crisis stems from two sources that inflate reported factor premia: transaction prices whose measurement error enters both sorting signals and return denominators, creating a correlated errors-in-variables bias, and asymmetric ex-post return filtering that embeds future information into factor construction. Applying our framework to a 'factor zoo' of 108 signals across nine thematic clusters, we show that the majority of previously documented factors do not produce statistically significant bond CAPM alphas after correction. We provide an open-source framework via Open Bond Asset Pricing, including error-corrected TRACE data, bias-corrected factors, and software for reproducible research.
Ensuring that large language models (LLMs) respect diverse cultural values is crucial for social equity. However, existing approaches often treat cultural groups as homogeneous and overlook within-group heterogeneity induced by intersecting demographic attributes, leading to unstable behavior under varying persona granularity. We propose ACE-Align (Attribute Causal Effect Alignment), a causal-effect framework that aligns how specific demographic attributes shift different cultural values, rather than treating each culture as a homogeneous group. We evaluate ACE-Align across 14 countries spanning five continents, with personas specified by subsets of four attributes (gender, education, residence, and marital status) and granularity instantiated by the number of specified attributes. Across all persona granularities, ACE-Align consistently outperforms baselines. Moreover, it improves geographic equity by reducing the average alignment gap between high-resource and low-resource regions from 9.81 to 4.92 points, while Africa shows the largest average gain (+8.48 points). Code is available at https://github.com/Wells-Luo/ACE-Align.
Purpose – Based on the premise that young adults, as knowledge workers, are overstimulated by a constant bombardment of information from digital channels such as social media, this paper aims to explore how information overload, largely redundant and noisy, drains cognitive resources from the workers without providing meaningful interaction. The result is an aversive state of boredom, characterized by a desire to engage in any meaningful activity but an inability to do so, in both their private and organizational lives. Design/methodology/approach – A review of the literature on boredom, focusing on the meaning and attention components (MAC) model of boredom, is conducted to explain the phenomenon. This is followed by an exploration of information overload and the proposal of a dynamic spillover of boredom from the nonwork domains of workers to their organizational lives by integrating literature on work/life boundary spillover mechanisms. Findings – The propositions suggest that workers carry overstimulation and boredom into their organizations. Research limitations/implications – The moderating effects of personality traits and organizational contexts such as culture and digital infrastructure, which are outside the scope of this paper, can inform future research. Practical implications – The perilous ramifications of this spillover for both workers and their organizations are discussed, along with strategies for how organizations can help workers find meaning and purpose in their workspaces to reduce their propensity for boredom. Originality/value – This paper addresses and extends the limited research on the effects of information overload from social media on the organizational lives of knowledge workers.
The comparative case study examines the transformation of the former industrial neighborhoods Savamala (Belgrade) and NDSM Wharf (Amsterdam), both located on riverfronts. By employing a blend of network theoretical and empirical approaches, the research examines governance in urban regeneration programs. The research focuses on three objectives. The first objective is to explain the differences in governance in the regeneration between the two selected case studies. The research thus explores the urban policy formation in both cases: involvement of different stakeholders, the decision-making process, policy goals, and network dynamics. The network of stakeholders includes actors from the public and private sectors. The policy network theoretical and empirical approach is applied to explore the policy-making process. Likewise, the analytical approach explores the social-structural, cultural, and social-psychological contexts in which the actors are embedded, and is applied to individual and collective social actions, thus providing an explanation of how those actions have led to the creation of policy outputs. The second objective of the research is to explore policy implementation through the utilization of the network governance approach. The goal is to identify, distinguish, and explore the modes of governance and thus provide an explanation of the power relations in the implementation of regeneration programs in the selected urban environments. The third objective is to question the effectiveness of the governance modes that have been discovered, on two levels, namely, on the network (collective) and community level. This research thus provides answers to whether and why the network and community level goals have or have not been achieved, and to what extent. In the first case study, the research findings suggest the existence of two contrasting policy networks, each with different actor attributes, structural variables, and policy goals.
Those policies have also produced two different modes of governance. In the initial phase of the regeneration of Savamala, a fragmented-governed network mode is detected, whereas hierarchy is observed in the second phase of the regeneration process. Conversely, in the second case study, coherency in urban politics can be detected, and modes of network governance are discovered in both phases of the regeneration process. The results of the comparative analysis suggest that network governance modes generate a greater degree of overall effectiveness. Furthermore, the positive outcomes of the regeneration process can be discerned in the urban contexts that support the development of this type of governance structure. This underscores the significance of network governance theory, particularly in the investigation of the regeneration of former industrial riverfronts. Conversely, a governance mode such as hierarchy exhibits limited overall effectiveness, while a fragmented-governed network mode exhibits overall effectiveness to a great extent, but with substantial limitations. The former is not effective, as it is not inclusive and relies heavily on the interests of private actors and a handful of political elites, while the latter may lack the stability necessary to engender positive outcomes over the long term.
Organizational behaviour, change and effectiveness. Corporate culture
A new self-normalized CUSUM test is proposed for detecting changes in the mean of a locally stationary time series. For stationary data, self-normalization relies on the factorization of a constant long-run variance and a stochastic factor. In this case, the CUSUM statistic can be divided by another statistic proportional to the long-run variance, so that the latter cancels, avoiding estimation of the long-run variance. Under local stationarity, the partial sum process converges to $\int_0^t \sigma(x)\,dB_x$ and no such factorization is possible. To overcome this obstacle, a self-normalized test statistic is introduced, based on a bivariate partial-sum process. Weak convergence of the process is proven, and it is shown that the resulting self-normalized test attains asymptotic level $\alpha$ under the null hypothesis of no change, while being consistent against abrupt, gradual, and multiple changes under mild assumptions. Simulation studies show that the proposed test has accurate size and substantially improved finite-sample power relative to existing approaches. Two data examples illustrate practical performance.
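The self-normalization idea the abstract builds on can be sketched for the stationary baseline case. The following is a minimal illustration in the spirit of Shao and Zhang's construction, not the paper's bivariate locally stationary statistic; the function name and scaling constants are illustrative:

```python
import numpy as np

def sn_change_stat(x):
    """Self-normalized CUSUM statistic for a change in mean (stationary case).

    The long-run variance enters both the CUSUM numerator and the
    normalizer built from within-segment partial sums, so it cancels and
    never needs to be estimated. Illustrative sketch only.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.concatenate([[0.0], np.cumsum(x)])           # S_0, ..., S_n
    best = 0.0
    for k in range(2, n - 1):                           # candidate break point
        d = s[k] - k / n * s[n]                         # CUSUM value at k
        j1 = np.arange(1, k + 1)
        pre = s[1:k + 1] - j1 / k * s[k]                # bridge on first segment
        j2 = np.arange(k + 1, n + 1)
        post = (s[n] - s[j2 - 1]) - (n - j2 + 1) / (n - k) * (s[n] - s[k])
        v = (np.sum(pre ** 2) + np.sum(post ** 2)) / n ** 2
        if v > 0:
            best = max(best, d ** 2 / (n * v))
    return best
```

Because numerator and normalizer scale with the same variance factor, the statistic is invariant to rescaling the data; this is exactly the property that breaks under local stationarity, where $\sigma$ varies over time, motivating the paper's bivariate construction.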
The Yellow River is China's mother river and a cradle of human civilization. The ancient Yellow River culture is, moreover, an indispensable part of human art history. To conserve and inherit the ancient Yellow River culture, we designed RiverEcho, a real-time interactive system that responds to voice queries using a large language model (LLM) and a cultural knowledge dataset, delivering explanations through a talking-head digital human. Specifically, we built a knowledge database focused on the ancient Yellow River culture, including the collection of historical texts and the processing pipeline. Experimental results demonstrate that leveraging Retrieval-Augmented Generation (RAG) on the proposed dataset enhances the response quality of the LLM, enabling the system to generate more professional and informative responses. Our work not only diversifies the means of promoting Yellow River culture but also provides users with deeper cultural insights.
Taisei Yamamoto, Ryoma Kumon, Danushka Bollegala
et al.
As large language models (LLMs) are increasingly deployed worldwide, ensuring their fair and comprehensive cultural understanding is important. However, LLMs exhibit cultural bias and limited awareness of underrepresented cultures, while the mechanisms underlying their cultural understanding remain underexplored. To fill this gap, we conduct a neuron-level analysis to identify neurons that drive cultural behavior, introducing a gradient-based scoring method with additional filtering for precise refinement. We identify culture-general neurons that contribute to cultural understanding regardless of culture, and culture-specific neurons tied to an individual culture. Culture-general and culture-specific neurons account for less than 1% of all neurons and are concentrated in shallow to middle MLP layers. We validate their role by showing that suppressing them substantially degrades performance on cultural benchmarks (by up to 30%), while performance on general natural language understanding (NLU) benchmarks remains largely unaffected. Moreover, we show that culture-specific neurons support knowledge of not only the target culture, but also related cultures. Finally, we demonstrate that training on NLU benchmarks can diminish models' cultural understanding when we update modules containing many culture-general neurons. These findings provide insights into the internal mechanisms of LLMs and offer practical guidance for model training and engineering. Our code is available at https://github.com/ynklab/CULNIG
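Gradient-based neuron scoring of the kind described above is often approximated by a gradient-times-activation importance score. The toy example below computes such a score analytically for a one-hidden-layer network; the architecture and scoring rule are illustrative assumptions, not the paper's exact method or filtering step:

```python
import numpy as np

def neuron_scores(x, W1, W2, target):
    # Toy gradient-times-activation importance for hidden neurons:
    # score_j = |a_j * dL/da_j| for the cross-entropy loss at the target
    # class. Illustrative heuristic, not the paper's exact scoring.
    a = np.maximum(0.0, W1 @ x)                  # ReLU hidden activations
    logits = W2 @ a
    p = np.exp(logits - logits.max())
    p /= p.sum()                                 # softmax probabilities
    grad_logits = p.copy()
    grad_logits[target] -= 1.0                   # d(cross-entropy)/d(logits)
    grad_a = W2.T @ grad_logits                  # backprop to activations
    return np.abs(a * grad_a)                    # per-neuron importance
```

Ranking neurons by such a score and then suppressing the top-ranked ones (e.g. zeroing their activations) is the kind of intervention the abstract's validation experiments describe.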
Recent progress in Multimodal Large Language Models (MLLMs) has significantly enhanced the ability of artificial intelligence systems to understand and generate multimodal content. However, these models often exhibit limited effectiveness when applied to non-Western cultural contexts, which raises concerns about their wider applicability. To address this limitation, we propose the Traditional Chinese Culture understanding Benchmark (TCC-Bench), a bilingual (i.e., Chinese and English) Visual Question Answering (VQA) benchmark specifically designed for assessing the understanding of traditional Chinese culture by MLLMs. TCC-Bench comprises culturally rich and visually diverse data, incorporating images from museum artifacts, everyday life scenes, comics, and other culturally significant contexts. We adopt a semi-automated pipeline that utilizes GPT-4o in text-only mode to generate candidate questions, followed by human curation to ensure data quality and avoid potential data leakage. The benchmark also avoids language bias by preventing direct disclosure of cultural concepts within question texts. Experimental evaluations across a wide range of MLLMs demonstrate that current models still face significant challenges when reasoning about culturally grounded visual content. The results highlight the need for further research in developing culturally inclusive and context-aware multimodal systems. The code and data can be found at: https://tcc-bench.github.io/.
Naushaba Chowdhury, Pravin Balaraman, Jonathan Liu
et al.
Purpose The purpose of this paper is to examine the influences on employee perception of corporate social responsibility (CSR) in the Readymade Garment (RMG) industry. The RMG industry in Bangladesh has faced constant criticism of its working practices, and following some fatal incidents, the industry faced external pressure to implement CSR practices and policies. Manufacturers invested in and initiated CSR in their business and marketing strategy to survive in global competition. Employees are internal stakeholders who help to implement and disseminate strategies successfully; however, there is not enough knowledge in the area of employee perception of CSR. Design/methodology/approach The paper is an exploratory study using the quantitative data collection method. In total, 128 responses have been collected from participants who are employees of garment factories in Bangladesh to understand their perception of CSR. Regression analysis has been conducted to ascertain the relationships between the factors that influence employee perception. Theories of stakeholder management, organizational citizenship behaviour, social exchange theory and employee engagement have been used to analyse the factors that influence employee perception. Findings The findings show that the factors that influence perception of CSR are not confined to the stakeholder’s initiatives but are significantly dependent on the employees’ direct involvement, engagement and personal values as a beneficiary and an executioner. In addition to the stakeholder’s initiatives that are a key deliverable to the marketing strategy, the employees are influenced by their personal beliefs and practices that can be associated with influences of religion, culture and the wider social landscape. Research limitations/implications The data is limited to a small number of factories located near the capital, Dhaka; this is a small sample compared to the 4,000 factories in Bangladesh.
Further research can be conducted based on a larger data set, which could represent a wider range of employee perspectives from different factories relating to size, product category and geographical location. The study does not expand on the factors that influence employee perception specifically. Practical implications The findings of the study can help employers understand that the organization’s priority and participation are not the only factors that influence employees’ perceptions. The employees’ assessment of the stakeholder’s intentions for CSR, which are reflected in the organization’s priority, shapes employee perceptions that are influenced by their personal values and beliefs. Awareness of the factors that influence employees will enable organizations to motivate them and deliver on the expectations of business partners. Social implications Practices aimed at employees that enhance their engagement in CSR enable them to reciprocate, shaping their perception of the organization’s fair and genuine motives. The effectiveness of this aids the macro-marketing aspects of managing social concerns and the impact of businesses. Originality/value The data collected is primary data from employees of garment manufacturers. The hypothesized framework is developed by the authors, and the factors that influence employee perception of CSR are derived from the analysis conducted by the authors.
In this paper, we present the technical details and interim findings of our project, CareerAgent, which aims to build a generative simulation framework for a Holacracy organization using Large Language Model-based Autonomous Agents. Specifically, the simulation framework includes three phases: construction, execution, and evaluation, and it incorporates basic characteristics of individuals, organizations, tasks, and meetings. Through our simulation, we obtained several interesting findings. At the organizational level, an increase in the average values of management competence and functional competence can reduce overall members' stress levels, but it negatively impacts deeper organizational performance measures such as average task completion. At the individual level, both competences can improve members' work performance. From the analysis of social networks, we found that highly competent members selectively participate in certain tasks and take on more responsibilities. Over time, small sub-communities form around these highly competent members within the holacracy. These findings contribute theoretically to the study of organizational science and provide practical insights for managers to understand organizational dynamics.
Success in today's data-driven corporate climate requires a deep understanding of employee behavior. Companies aim to improve employee satisfaction, boost output, and optimize workflow. This research study delves into creating synthetic data, a powerful tool that allows us to comprehensively understand employee performance, flexibility, cooperation, and team dynamics. Synthetic data provides a detailed and accurate picture of employee activities while protecting individual privacy, thanks to cutting-edge methods such as agent-based models (ABMs), Generative Adversarial Networks (GANs), and statistical models. Through the creation of multiple scenarios, this method offers insightful viewpoints regarding increasing teamwork, improving adaptability, and accelerating overall productivity. We examine how synthetic data has evolved from a specialized field to an essential resource for researching employee behavior and enhancing management efficiency. Keywords: Agent-Based Model, Generative Adversarial Network, workflow optimization, organizational success
The field of quickest change detection (QCD) concerns the design and analysis of algorithms to estimate in real time the time at which an important event takes place, and to identify properties of the post-change behavior. It is shown in this paper that approaches based on reinforcement learning (RL) can be built on any "surrogate information state" that is adapted to the observations. Hence we are left to choose both the surrogate information state process and the algorithm. For the former, it is argued that there are many choices available, based on a rich theory of asymptotic statistics for QCD. Two approaches to RL design are considered: (i) Stochastic gradient descent based on an actor-critic formulation. Theory is largely complete for this approach: the algorithm is unbiased and will converge to a local minimum. However, it is shown that the variance of stochastic gradients can be very large, necessitating commensurately long run times; (ii) Q-learning algorithms based on a version of the projected Bellman equation. It is shown that the algorithm is stable, in the sense of bounded sample paths, and that a solution to the projected Bellman equation exists under mild conditions. Numerical experiments illustrate these findings and provide a roadmap for algorithm design in more general settings.
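As a concrete baseline for the QCD setting described above, Page's CUSUM recursion is the classical detection statistic whose asymptotic theory motivates surrogate information states. A minimal sketch follows; the threshold and the Gaussian mean-shift model in the usage note are illustrative assumptions, not the paper's design:

```python
import numpy as np

def page_cusum(llr):
    # Page's CUSUM recursion: W_t = max(0, W_{t-1} + llr_t), where llr_t is
    # the log-likelihood ratio of the post- vs pre-change models at time t.
    w, path = 0.0, []
    for z in llr:
        w = max(0.0, w + z)
        path.append(w)
    return np.array(path)

def detect(llr, threshold):
    # Declare a change at the first time the statistic crosses the threshold.
    hits = np.flatnonzero(page_cusum(llr) >= threshold)
    return int(hits[0]) if hits.size else None
```

For a unit mean shift in standard Gaussian noise, the log-likelihood ratio is llr_t = x_t - 1/2: the CUSUM path drifts toward zero before the change and upward after it, and the threshold trades false-alarm rate against detection delay.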
This paper examines the optimal organizational rules that govern the process of dividing a fixed surplus. The process is modeled as a sequential multilateral bargaining game with costly recognition. The designer sets the voting rule -- i.e., the minimum number of votes required to approve a proposal -- and the mechanism for proposer recognition, which is modeled as a biased generalized lottery contest. We show that for diverse design objectives, the optimum can be achieved by a dictatorial voting rule, which simplifies the game into a standard biased contest model.
Shaswata Mitra, Subash Neupane, Trisha Chakraborty
et al.
Security Operations Center (SoC) analysts gather threat reports from openly accessible global threat repositories and tailor the information to their organization's needs, such as developing threat intelligence and security policies. They also depend on internal organizational repositories, which act as private local knowledge databases. These local knowledge databases store credible cyber intelligence along with critical operational and infrastructure details. SoCs undertake the manual, labor-intensive task of utilizing these global threat repositories and local knowledge databases to create both organization-specific threat intelligence and mitigation policies. Recently, Large Language Models (LLMs) have shown the capability to process diverse knowledge sources efficiently. We leverage this ability to automate organization-specific threat intelligence generation. We present LocalIntel, a novel automated threat intelligence contextualization framework that retrieves zero-day vulnerability reports from global threat repositories and uses its local knowledge database to determine implications and mitigation strategies to alert and assist the SoC analyst. LocalIntel comprises two key phases: knowledge retrieval and contextualization. Quantitative and qualitative assessment has shown effectiveness in generating up to 93% accurate organizational threat intelligence with 64% inter-rater agreement.
Tobias Schimanski, Jingwei Ni, Roberto Spacey
et al.
To handle the vast amounts of qualitative data produced in corporate climate communication, stakeholders increasingly rely on Retrieval Augmented Generation (RAG) systems. However, a significant gap remains in evaluating domain-specific information retrieval, the basis for answer generation. To address this challenge, this work simulates the typical tasks of a sustainability analyst by examining 30 sustainability reports with 16 detailed climate-related questions. As a result, we obtain a dataset with over 8.5K unique question-source-answer pairs labeled by different levels of relevance. Furthermore, we develop a use case with the dataset to investigate the integration of expert knowledge into information retrieval with embeddings. Although we show that incorporating expert knowledge works, we also outline the critical limitations of embeddings in knowledge-intensive downstream domains like climate change communication.
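Embedding-based retrieval of the kind evaluated above, plus one simple way to fold in expert knowledge, can be sketched as follows; the query-blending scheme and the parameter `alpha` are illustrative assumptions, not the paper's method:

```python
import numpy as np

def retrieve(query_vec, passage_vecs, expert_vec=None, alpha=0.25, top_k=3):
    # Dense retrieval by cosine similarity. Optionally blend the query
    # embedding with an embedding of expert-written guidance before
    # scoring, one simple way to inject domain knowledge into retrieval.
    q = np.asarray(query_vec, dtype=float)
    if expert_vec is not None:
        q = (1.0 - alpha) * q + alpha * np.asarray(expert_vec, dtype=float)
    P = np.asarray(passage_vecs, dtype=float)
    sims = P @ q / (np.linalg.norm(P, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:top_k]           # indices of best passages
```

In a relevance-labeled benchmark like the one described, such a retriever would be scored by whether the labeled source passages appear among the top-k results, with and without the expert blending enabled.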
The remarkable ecological success of humans is often attributed to our ability to develop complex cultural artefacts that enable us to cope with environmental challenges. The evolution of complex culture (cumulative cultural evolution) is usually modelled as a collective process in which individuals invent new artefacts (innovation) and copy information from others (social learning). This classic picture overlooks the growing role of intelligent algorithms in the digital age (e.g. search engines, recommender systems and large language models) in mediating information between humans, with potential consequences for cumulative cultural evolution. Building on a previous model, we investigate the combined effects of network-based social learning and a simplistic version of algorithmic mediation on cultural accumulation. We find that algorithmic mediation significantly impacts cultural accumulation and that this impact grows as social networks become less densely connected. Cultural accumulation is most effective when social learning and algorithmic mediation are combined, and the optimal ratio depends on the network's density. This work is an initial step towards formalizing the impact of intelligent algorithms on cumulative cultural evolution within an established framework. Models like ours provide insights into mechanisms of human-machine interaction in cultural contexts, guiding hypotheses for future experimental testing.
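The interplay of network-based social learning and algorithmic mediation can be caricatured in a few lines. The model below is a deliberately simplistic sketch under assumed parameters (ring network, unit innovations, a single "global best" recommender), not the authors' model:

```python
import numpy as np

def mean_culture(n=30, steps=200, p_innovate=0.05, q_mediation=0.0, seed=1):
    # Toy cumulative-culture simulation: each step, an agent on a ring
    # either innovates (raises its own trait level by one) or learns
    # socially by copying the best neighbour; with probability q_mediation
    # the copy instead targets the population-wide best, standing in for
    # an algorithm that surfaces globally prominent content.
    rng = np.random.default_rng(seed)
    level = np.zeros(n)
    for _ in range(steps):
        new = level.copy()
        for i in range(n):
            if rng.random() < p_innovate:
                new[i] = level[i] + 1.0                    # innovation
            elif rng.random() < q_mediation:
                new[i] = level.max()                       # algorithmic mediation
            else:
                nb = max(level[(i - 1) % n], level[(i + 1) % n])
                new[i] = max(level[i], nb)                 # network social learning
        level = new
    return level.mean()                                    # mean final trait level
```

Because mediated copies target the global best rather than a sparse neighbourhood, accumulation with mediation weakly dominates the purely network-based run under a shared random seed; sweeping network density against q_mediation is the kind of comparison the abstract reports.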