Laziness has long been recognized as useful for inference in probabilistic programs [Koller et al. 1997, Pfeffer 2009, Kiselyov and Shan 2009, Pfeffer et al. 2015], as it can drastically reduce the space of possible program executions, automatically marginalizing random choices that cannot affect the program’s result. However, inference engines for modern PPLs have largely moved away from lazy evaluation. This abstract revisits the question of lazy inference in light of modern advances in PPL semantics [Dash et al. 2023] and implementation [Holtzen et al. 2020]. Concretely, we propose a new variant of knowledge compilation—a state-of-the-art approach to inference for discrete probabilistic programs [Holtzen et al. 2020, Kimmig et al. 2011]—that exploits lazy evaluation to significantly improve performance, and we prove it correct using the semantics for lazy, higher-order probabilistic programming of Dash et al. [2023]. Early experiments show that lazy knowledge compilation can deliver substantial performance gains, suggesting that the insights of early PPLs can be fruitfully combined with modern advances.
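To make the marginalization intuition concrete, the following toy sketch (our own illustration, not the knowledge-compilation algorithm proposed above; all function names are hypothetical) contrasts eager enumeration, which branches on every random choice, with a lazy strategy that forces only the choices the result actually demands—unused coins are marginalized for free.

```python
from fractions import Fraction
from itertools import product

def eager_prob(n_coins):
    """Eagerly enumerate every assignment to all n_coins fair coins,
    even though the result depends only on the first one."""
    paths = 0
    total = Fraction(0)
    for bits in product([0, 1], repeat=n_coins):
        paths += 1                          # 2**n_coins paths explored
        if bits[0]:                         # only bits[0] matters
            total += Fraction(1, 2) ** n_coins
    return total, paths

def lazy_prob():
    """Force only the first coin; the remaining coins are never
    demanded, so they are marginalized automatically."""
    paths = 0
    total = Fraction(0)
    for b in [0, 1]:
        paths += 1                          # only 2 paths explored
        if b:
            total += Fraction(1, 2)
    return total, paths

# Both strategies agree on the probability, but the lazy one explores
# exponentially fewer execution paths.
print(eager_prob(5))   # (Fraction(1, 2), 32)
print(lazy_prob())     # (Fraction(1, 2), 2)
```

The gap grows exponentially in the number of unused choices, which is the effect the abstract attributes to laziness: shrinking the space of executions an inference engine must consider.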