If we don’t draw a clear boundary between human and AI decision authority, we’re not being serious about risk.
In safety-critical domains, this isn’t an “easy job” or a checkbox role. It carries real weight: decisions can affect life and liberty and cause irreversible harm.
Health and safety work is not just a function; it’s a responsibility that demands clarity, accountability, and disciplined judgment.
Preface: Governance is not a buzzword; it is the law behind decision-making. Governance without accountability is silly business.
Abstract of my latest paper!
Artificial intelligence (AI) systems are increasingly embedded in high‑stakes decision processes involving life, liberty, and large‑scale environmental risk. Existing debates on responsibility gaps and accountability in AI focus on attributing blame and liability but stop short of articulating a cross‑domain boundary on what AI may ultimately decide. This paper proposes a Core Ethical Boundary: AI systems must never hold final authority over decisions that can cause irreversible harm, such as death, permanent disability, indefinite loss of liberty, or non‑recoverable ecological damage. This boundary is justified through the concept of moral reversibility capacity, defined as the ability to bear responsibility, be sanctioned, and participate in moral repair. Because AI lacks such capacity, it cannot legitimately exercise ultimate authority over irreversible outcomes.
Having established this limit on AI authority, the paper argues that human decision‑making must also be constrained. Traditional appeals to “experience‑based judgment” are insufficient in rapidly changing organizational environments where past experience loses predictive value. Unstructured human judgment introduces inconsistency, hidden subjectivity, and non‑auditable reasoning. Therefore, the objective is not to restrict AI or human agents, but to allocate authority, accountability, and key performance indicators (KPIs) according to the functional strengths of each. AI systems should expand, structure, and audit the decision space, while humans should retain accountable authority within explicitly defined, non‑subjective decision procedures. This distribution ensures that neither AI nor human actors rely on opaque intuition; instead, both operate within a governance architecture designed to minimize arbitrariness, enhance traceability, and align decision authority with the nature of the task and the organization’s objectives.
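To make that allocation of authority concrete, here is a minimal sketch of a human‑in‑the‑loop decision gate. It is illustrative only, not an implementation from the paper: the `Option`, `DecisionRecord`, `ai_structure_decision_space`, and `human_decide` names, the `irreversible` flag, and the example scenario are all hypothetical assumptions introduced for this post. The point it shows is the division of labour described above: the AI component ranks and structures the decision space, the accountable human holds final authority, and every step is captured in an auditable record.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional
import json

# Hypothetical sketch (not from the paper): the AI may propose and rank
# options, but final authority over irreversible outcomes stays with an
# accountable human, and the whole decision is recorded for auditability.

@dataclass
class Option:
    label: str
    ai_score: float    # AI-assigned ranking within the structured decision space
    irreversible: bool  # does this option risk non-recoverable harm?

@dataclass
class DecisionRecord:
    task: str
    options: List[Option]
    ai_recommendation: Optional[str] = None
    final_choice: Optional[str] = None
    decided_by: Optional[str] = None  # accountable human identifier, never the AI
    rationale: Optional[str] = None   # explicit, non-subjective justification
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def ai_structure_decision_space(record: DecisionRecord) -> DecisionRecord:
    """The AI expands and ranks options, but never sets final_choice."""
    best = max(record.options, key=lambda o: o.ai_score)
    record.ai_recommendation = best.label
    return record

def human_decide(record: DecisionRecord, choice: str,
                 decided_by: str, rationale: str) -> DecisionRecord:
    """The accountable human makes the final call.

    Choices flagged as irreversible require a documented rationale,
    so the reasoning is traceable rather than opaque intuition.
    """
    chosen = next(o for o in record.options if o.label == choice)
    if chosen.irreversible and not rationale:
        raise ValueError("Irreversible choice requires a documented rationale.")
    record.final_choice = choice
    record.decided_by = decided_by
    record.rationale = rationale
    return record

if __name__ == "__main__":
    record = DecisionRecord(
        task="shutdown of pressurized unit",
        options=[
            Option("controlled shutdown", ai_score=0.92, irreversible=False),
            Option("continue operation", ai_score=0.35, irreversible=True),
        ],
    )
    record = ai_structure_decision_space(record)
    record = human_decide(
        record,
        choice="controlled shutdown",
        decided_by="duty_engineer_042",
        rationale="Pressure trend exceeds the documented shutdown threshold.",
    )
    # Emit an auditable trace: who decided what, on which AI recommendation.
    print(json.dumps(record.__dict__, default=lambda o: o.__dict__, indent=2))
```

Under these assumptions, the AI's output is advisory and fully logged, while the accountable human's choice and rationale are the only path to a final decision, which is the distribution of authority the abstract argues for.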
#CoreEthicalBoundary #IrreversibleHarm #MoralReversibilityCapacity #MoralRepair #UltimateAuthority #ResponsibilityGaps #GovernanceArchitecture #DecisionSpace #Traceability #FunctionalStrengths #UnstructuredHumanJudgment #AccountableAuthority #Auditability #OpaqueIntuition #NonSubjectiveProcedures
https://explore.openaire.eu/search/result?pid=10.5281/zenodo.19911612