POPL 2025
Sun 19 - Sat 25 January 2025, Denver, Colorado, United States

Machine learning models that automate decision-making are increasingly used in consequential areas such as loan approval, pretrial bail, hiring, and many more. Unfortunately, most of these models are black boxes, i.e., they do not reveal how they arrive at their prediction decisions. The need for transparency demands justification for such predictions, and an affected individual may desire an explanation of why a decision was made. Ethical and legal considerations may further require informing individuals of the changes needed to produce a desirable outcome. This paper addresses this problem through the automatic generation of counterfactual explanations. We propose Causally Constrained Counterfactual Generation (C3G), a framework that utilizes answer set programming (ASP) and the goal-directed s(CASP) ASP system to automatically generate counterfactual explanations from rules produced by rule-based machine learning (RBML) algorithms. Unlike traditional causality-based approaches such as MINT, which rely on Structural Causal Models (SCMs) with predefined structural equations, C3G leverages the flexibility of ASP to model causal dependencies through logical rules, allowing for broader applicability across domains. In our framework, we show how counterfactual explanations are computed and justified by imagining worlds in which some or all factual assumptions are altered. More importantly, we show how to navigate between these worlds, namely, how to go from the original world/scenario, where we obtain an undesired (negative) outcome, to an imagined world where we obtain the desired (positive) outcome.
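To make the idea concrete, the following is a minimal sketch (not taken from the paper) of how RBML-learned decision rules, causal dependencies, and flippable factual assumptions might be encoded for s(CASP). All predicate names (loan_approved/1, good_credit/1, paid_off_debt/1, cleared_default/1) are hypothetical, and the use of #abducible to mark assumptions the solver may alter is one possible encoding, not necessarily the one C3G uses.

% Hypothetical rules learned by an RBML algorithm for a loan decision.
loan_approved(X) :- good_credit(X), not recent_default(X).

% Causal dependency expressed as a logical rule rather than a
% predefined structural equation: paying off outstanding debt
% causes good credit.
good_credit(X) :- paid_off_debt(X).

% Factual world for applicant john: a default stands unless cleared.
recent_default(john) :- not cleared_default(john).

% Assumptions the solver may flip when imagining alternative worlds,
% declared as s(CASP) abducibles.
#abducible paid_off_debt(john).
#abducible cleared_default(john).

% Goal-directed query:
%   ?- loan_approved(john).
% s(CASP) justifies the positive answer by assuming paid_off_debt(john)
% and cleared_default(john), i.e., the interventions that move john
% from the negative-outcome world to the desired positive-outcome world.

Under this (assumed) encoding, the s(CASP) justification tree for the query doubles as the counterfactual explanation: the abduced literals are exactly the factual assumptions that must change to reach the desired outcome.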