A decision-theoretic characterization of perfect calibration is that an agent seeking to minimize a proper loss in expectation cannot improve their outcome by post-processing a perfectly calibrated predictor. Hu and Wu (FOCS'24) use this characterization to define an approximate calibration measure called calibration decision loss (CDL), which measures the maximum improvement, over all proper losses, that an agent can gain by post-processing the predictions. Unfortunately, CDL turns out to be intractable even to weakly approximate in the offline setting, given only black-box access to the predictions and labels.
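To fix notation with an illustrative sketch (the formal definition of Hu and Wu is stated over suitably bounded decision tasks, so the normalization here is ours), for a predictor $f$ one may write
\[
\mathrm{CDL}(f) \;=\; \sup_{\ell\ \mathrm{proper}} \Big( \mathbb{E}\big[\ell(f(x), y)\big] \;-\; \inf_{\kappa : [0,1] \to [0,1]} \mathbb{E}\big[\ell(\kappa(f(x)), y)\big] \Big),
\]
where the supremum is over (bounded) proper losses $\ell$ and $\kappa$ ranges over arbitrary post-processings of the predicted probability.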
We suggest circumventing this intractability by restricting attention to structured families $K$ of post-processing functions. We define the calibration decision loss relative to $K$, denoted CDL$_K$, in which we still consider all proper losses but restrict post-processings to the structured family $K$. We develop a comprehensive theory of when CDL$_K$ is information-theoretically and computationally tractable, and use it to prove both upper and lower bounds for natural classes $K$. In addition to introducing new definitions and algorithmic techniques to the theory of calibration for decision making, our results give rigorous guarantees for some widely used recalibration procedures in machine learning.
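Under the same illustrative sketch, the restricted measure narrows the inner infimum to the family $K$:
\[
\mathrm{CDL}_K(f) \;=\; \sup_{\ell\ \mathrm{proper}} \Big( \mathbb{E}\big[\ell(f(x), y)\big] \;-\; \inf_{\kappa \in K} \mathbb{E}\big[\ell(\kappa(f(x)), y)\big] \Big),
\]
so that, under this formulation, CDL$_K(f) \le$ CDL$(f)$ for every family $K$ of post-processings.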