Liability for artificial intelligence reasoning technologies – a cognitive autonomy that does not help
Abstract
Purpose — This paper aims to contribute to the discussion of the liability consequences of artificial intelligence (AI) in the context of its reasoning capabilities. Various liability regimes have been discussed so far. Depending on the liability regime, the character and severity of the materialized default, the point in time and the like, liability is attributed to the conceiver, designer, programmer, deployer or operator. None of these conceptual proposals is unquestioned or a priori correct.

Design/methodology/approach — The proposed contribution undertakes a comprehensive analysis of the most pressing issues and theoretical considerations, including the prospects of de lege ferenda proposals. In particular, it is determined and delimited by the known concepts of liability for harm in correlation with the field of human rights. The proposed top-down approach provides a trajectory for examining specific issues such as AI personhood.

Findings — To prevent the evasion of liability, transparency is vital, as it enables stakeholders to identify and address potential issues before they result in harm. The current landscape of AI liability is characterized by a reluctance to accept responsibility and a lack of a clear framework. Stakeholders at all levels must commit to developing comprehensive liability frameworks that provide clear guidance on responsibilities.

Originality/value — Emerging trends, such as the increasing integration of AI into critical infrastructure and the rise of autonomous systems, present new challenges for liability regimes. As AI technologies evolve, governance must adapt. The principle of respondeat superior must be reinterpreted to address the complexities of AI liability.
Author (1)
Tomasz Braun
Quick Access
- Publication Year
- 2025
- Language
- en
- Total Citations
- 4×
- Database Source
- CrossRef
- DOI
- 10.1108/cg-09-2024-0471
- Access
- Open Access ✓