The Algorithmic Blind Spot: Bias, Moral Status, and the Future of Robot Rights
Abstract
Contemporary debates in AI ethics increasingly foreground the prospective moral status of artificial intelligence and the possibility of extending moral or legal rights to artificial agents. While such discussions raise substantive philosophical questions, they often proceed with comparatively limited engagement with the empirically documented harms generated by algorithmic systems already embedded in social, legal, and economic institutions. We conceptualize this asymmetry as an algorithmic blind spot: a discursive-structural pattern in which disproportionate ethical investment in speculative future artificial agents marginalizes empirically documented, asymmetrically distributed harms affecting human populations. The paper analyzes prominent strands of the robot-rights literature and juxtaposes them with empirical evidence of algorithmic bias and harm across domains including employment, criminal justice, surveillance, and facial recognition. It demonstrates how ethical preoccupation with hypothetical future entities can obscure existing injustices, diffuse responsibility, and impede mechanisms of accountability and redress. Without rejecting philosophical inquiry into the moral status of artificial systems, the paper emphasizes the importance of ethical prioritization and temporal ordering within AI ethics. Addressing the algorithmic blind spot, we argue, requires re-centering ethical evaluation on human impacts, institutional responsibility, and the governance of algorithmic systems currently in operation. In doing so, the paper introduces a conceptual framework for critically assessing ethical discourse in AI and underscores the need to align ethical reflection more closely with its immediate social consequences.
Authors (2)
Rahulrajan Karthikeyan
Moses Boudourides
Quick Access
- Year of Publication: 2026
- Language: en
- Source Database: arXiv
- Access: Open Access ✓