arXiv Open Access 2025

Evaluating Intra-firm LLM Alignment Strategies in Business Contexts

Noah Broestl Benjamin Lange Cristina Voinea Geoff Keeling Rachael Lam

Abstract

Instruction-tuned Large Language Models (LLMs) are increasingly deployed as AI assistants in firms to support cognitive tasks. These AI assistants carry embedded perspectives that influence decision-making, collaboration, and organizational culture across the firm. This paper argues that firms must intentionally align the perspectives of these AI assistants with their objectives and values, framing alignment as a strategic and ethical imperative crucial for maintaining control over firm culture and intra-firm moral norms. The paper highlights how AI perspectives arise from biases in training data and from developers' fine-tuning objectives, and discusses their impact and ethical significance, foregrounding concerns such as automation bias and reduced critical thinking. Drawing on normative business ethics, particularly non-reductionist views of professional relationships, three distinct alignment strategies are proposed: supportive (reinforcing the firm's mission), adversarial (stress-testing ideas), and diverse (broadening moral horizons by incorporating multiple stakeholder views). The ethical trade-offs of each strategy and their implications for manager-employee and employee-employee relationships are analyzed, alongside their potential to shape the culture and moral fabric of the firm.

Topics & Keywords

Authors (5)

Noah Broestl
Benjamin Lange
Cristina Voinea
Geoff Keeling
Rachael Lam

Citation Format

Broestl, N., Lange, B., Voinea, C., Keeling, G., & Lam, R. (2025). Evaluating Intra-firm LLM Alignment Strategies in Business Contexts. arXiv. https://arxiv.org/abs/2505.18779

Journal Information
Publication Year
2025
Language
en
Database Source
arXiv
Access
Open Access ✓