calibration

compute_bias(y_obs, y_pred, feature=None, weights=None, *, functional='mean', level=0.5, n_bins=10)

Compute generalised bias conditional on a feature.

This function computes and aggregates the generalised bias, i.e. the values of the canonical identification function, versus (grouped by) a feature. This is a good way to assess whether a model is conditionally calibrated or not. Well calibrated models have bias terms around zero. For the mean functional, the generalised bias is the negative residual y_pred - y_obs. See Notes for further details.

Parameters:

y_obs : array-like of shape (n_obs), required
    Observed values of the response variable. For binary classification, y_obs is
    expected to be in the interval [0, 1].
y_pred : array-like of shape (n_obs) or (n_obs, n_models), required
    Predicted values of the conditional expectation of Y, E(Y|X).
feature : array-like of shape (n_obs) or None, default=None
    Some feature column.
weights : array-like of shape (n_obs) or None, default=None
    Case weights. If given, the bias is calculated as the weighted average of the
    identification function with these weights. Note that the standard errors and
    p-values in the output are based on the assumption that the variance of the bias
    is inversely proportional to the weights. See the Notes section for details.
functional : str, default='mean'
    The functional that is induced by the identification function V. Options are:

      • "mean". Argument level is neglected.
      • "median". Argument level is neglected.
      • "expectile"
      • "quantile"
level : float, default=0.5
    The level of the expectile or quantile. (Often called \(\alpha\).) It must be
    0 < level < 1. level=0.5 and functional="expectile" gives the mean. level=0.5
    and functional="quantile" gives the median.
n_bins : int, default=10
    The number of bins for numerical features and the maximal number of (most
    frequent) categories shown for categorical features. Due to ties, the effective
    number of bins might be smaller than n_bins. Null values are always included in
    the output, accounting for one bin. NaN values are treated as null values.

Returns:

df : polars.DataFrame
    The result table contains at least the columns:

      • bias_mean: Mean of the bias
      • bias_count: Number of data rows
      • bias_weights: Sum of weights
      • bias_stderr: Standard error, i.e. standard deviation of bias_mean
      • p_value: p-value of the 2-sided t-test with null hypothesis: bias_mean = 0

Notes

A model \(m(X)\) is conditionally calibrated iff \(\mathbb{E}(V(m(X), Y)|X)=0\) almost surely with canonical identification function \(V\). The empirical version, given some data, reads \(\bar{V} = \frac{1}{n}\sum_i \phi(x_i) V(m(x_i), y_i)\) with a test function \(\phi(x_i)\) that projects on the specified feature. For a feature with only two distinct values "a" and "b", this becomes \(\bar{V} = \frac{1}{n_a}\sum_{i \text{ with }x_i=a} V(m(a), y_i)\) with \(n_a=\sum_{i \text{ with }x_i=a} 1\) and similarly for "b". With case weights, this reads \(\bar{V} = \frac{1}{\sum_i w_i}\sum_i w_i \phi(x_i) V(m(x_i), y_i)\). This generalises the classical residual (up to a minus sign) to target functionals other than the mean. See [FLM2022].

The standard error for \(\bar{V}\) is calculated in the standard way as \(\mathrm{SE} = \sqrt{\operatorname{Var}(\bar{V})} = \frac{\sigma}{\sqrt{n}}\) and the standard variance estimator for \(\sigma^2 = \operatorname{Var}(\phi(x_i) V(m(x_i), y_i))\) with Bessel correction, i.e. division by \(n-1\) instead of \(n\).

With case weights, the variance estimator becomes \(\operatorname{Var}(\bar{V}) = \frac{1}{n-1} \frac{1}{\sum_i w_i} \sum_i w_i (V(m(x_i), y_i) - \bar{V})^2\) with the implied relation \(\operatorname{Var}(V(m(x_i), y_i)) \sim \frac{1}{w_i} \). If your weights are for repeated observations, so-called frequency weights, then the above estimate is conservative because it uses \(n - 1\) instead of \((\sum_i w_i) - 1\).
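
The following is a minimal numpy/scipy sketch of these formulas, not the library's implementation, applied to the unweighted first example below (unit case weights):

    import numpy as np
    from scipy import special

    bias = np.array([-1.0, 1.0, 0.0, 1.0])  # y_pred - y_obs of the example below
    w = np.ones_like(bias)                   # unit case weights
    n = len(bias)
    bias_mean = np.average(bias, weights=w)                               # 0.25
    # Variance of the mean with Bessel correction (division by n - 1).
    var_mean = np.average((bias - bias_mean) ** 2, weights=w) / (n - 1)
    bias_stderr = np.sqrt(var_mean)                                       # 0.478714...
    # 2-sided t-test with null hypothesis bias_mean = 0.
    p_value = 2 * special.stdtr(n - 1, -np.abs(bias_mean / bias_stderr))  # 0.637618...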

References
[FLM2022]

T. Fissler, C. Lorentzen, and M. Mayer. "Model Comparison and Calibration Assessment". (2022) arxiv:2202.12780.

Examples:

>>> compute_bias(y_obs=[0, 0, 1, 1], y_pred=[-1, 1, 1 , 2])
shape: (1, 5)
┌───────────┬────────────┬──────────────┬─────────────┬──────────┐
│ bias_mean ┆ bias_count ┆ bias_weights ┆ bias_stderr ┆ p_value  │
│ ---       ┆ ---        ┆ ---          ┆ ---         ┆ ---      │
│ f64       ┆ u32        ┆ f64          ┆ f64         ┆ f64      │
╞═══════════╪════════════╪══════════════╪═════════════╪══════════╡
│ 0.25      ┆ 4          ┆ 4.0          ┆ 0.478714    ┆ 0.637618 │
└───────────┴────────────┴──────────────┴─────────────┴──────────┘
>>> compute_bias(y_obs=[0, 0, 1, 1], y_pred=[-1, 1, 1 , 2],
... feature=["a", "a", "b", "b"])
shape: (2, 6)
┌─────────┬───────────┬────────────┬──────────────┬─────────────┬─────────┐
│ feature ┆ bias_mean ┆ bias_count ┆ bias_weights ┆ bias_stderr ┆ p_value │
│ ---     ┆ ---       ┆ ---        ┆ ---          ┆ ---         ┆ ---     │
│ str     ┆ f64       ┆ u32        ┆ f64          ┆ f64         ┆ f64     │
╞═════════╪═══════════╪════════════╪══════════════╪═════════════╪═════════╡
│ a       ┆ 0.0       ┆ 2          ┆ 2.0          ┆ 1.0         ┆ 1.0     │
│ b       ┆ 0.5       ┆ 2          ┆ 2.0          ┆ 0.5         ┆ 0.5     │
└─────────┴───────────┴────────────┴──────────────┴─────────────┴─────────┘
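
As a further, hypothetical sketch (output omitted), case weights enter as a weighted average; with the weights below, bias_mean becomes (1*(-1) + 1*1 + 2*0 + 2*1)/6 = 1/3:

    compute_bias(
        y_obs=[0, 0, 1, 1],
        y_pred=[-1, 1, 1, 2],
        weights=[1.0, 1.0, 2.0, 2.0],  # bias_mean = 2/6 = 1/3
    )
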
Source code in src/model_diagnostics/calibration/identification.py
def compute_bias(
    y_obs: npt.ArrayLike,
    y_pred: npt.ArrayLike,
    feature: Optional[Union[npt.ArrayLike, pl.Series]] = None,
    weights: Optional[npt.ArrayLike] = None,
    *,
    functional: str = "mean",
    level: float = 0.5,
    n_bins: int = 10,
):
    r"""Compute generalised bias conditional on a feature.

    This function computes and aggregates the generalised bias, i.e. the values of the
    canonical identification function, versus (grouped by) a feature.
    This is a good way to assess whether a model is conditionally calibrated or not.
    Well calibrated models have bias terms around zero.
    For the mean functional, the generalised bias is the negative residual
    `y_pred - y_obs`.
    See Notes for further details.

    Parameters
    ----------
    y_obs : array-like of shape (n_obs)
        Observed values of the response variable.
        For binary classification, y_obs is expected to be in the interval [0, 1].
    y_pred : array-like of shape (n_obs) or (n_obs, n_models)
        Predicted values of the conditional expectation of Y, `E(Y|X)`.
    feature : array-like of shape (n_obs) or None
        Some feature column.
    weights : array-like of shape (n_obs) or None
        Case weights. If given, the bias is calculated as weighted average of the
        identification function with these weights.
        Note that the standard errors and p-values in the output are based on the
        assumption that the variance of the bias is inversely proportional to the
        weights. See the Notes section for details.
    functional : str
        The functional that is induced by the identification function `V`. Options are:

        - `"mean"`. Argument `level` is neglected.
        - `"median"`. Argument `level` is neglected.
        - `"expectile"`
        - `"quantile"`
    level : float
        The level of the expectile or quantile. (Often called \(\alpha\).)
        It must be `0 < level < 1`.
        `level=0.5` and `functional="expectile"` gives the mean.
        `level=0.5` and `functional="quantile"` gives the median.
    n_bins : int
        The number of bins for numerical features and the maximal number of (most
        frequent) categories shown for categorical features. Due to ties, the effective
        number of bins might be smaller than `n_bins`. Null values are always included
        in the output, accounting for one bin. NaN values are treated as null values.

    Returns
    -------
    df : polars.DataFrame
        The result table contains at least the columns:

        - `bias_mean`: Mean of the bias
        - `bias_count`: Number of data rows
        - `bias_weights`: Sum of weights
        - `bias_stderr`: Standard error, i.e. standard deviation of `bias_mean`
        - `p_value`: p-value of the 2-sided t-test with null hypothesis:
          `bias_mean = 0`

    Notes
    -----
    A model \(m(X)\) is conditionally calibrated iff
    \(\mathbb{E}(V(m(X), Y)|X)=0\) almost surely with canonical identification
    function \(V\).
    The empirical version, given some data, reads
    \(\bar{V} = \frac{1}{n}\sum_i \phi(x_i) V(m(x_i), y_i)\) with a test function
    \(\phi(x_i)\) that projects on the specified feature.
    For a feature with only two distinct values `"a"` and `"b"`, this becomes
    \(\bar{V} = \frac{1}{n_a}\sum_{i \text{ with }x_i=a} V(m(a), y_i)\) with
    \(n_a=\sum_{i \text{ with }x_i=a} 1\) and similarly for `"b"`.
    With case weights, this reads
    \(\bar{V} = \frac{1}{\sum_i w_i}\sum_i w_i \phi(x_i) V(m(x_i), y_i)\).
    This generalises the classical residual (up to a minus sign) to target functionals
    other than the mean. See `[FLM2022]`.

    The standard error for \(\bar{V}\) is calculated in the standard way as
    \(\mathrm{SE} = \sqrt{\operatorname{Var}(\bar{V})} = \frac{\sigma}{\sqrt{n}}\) and
    the standard variance estimator for \(\sigma^2 = \operatorname{Var}(\phi(x_i)
    V(m(x_i), y_i))\) with Bessel correction, i.e. division by \(n-1\) instead of
    \(n\).

    With case weights, the variance estimator becomes \(\operatorname{Var}(\bar{V})
    = \frac{1}{n-1} \frac{1}{\sum_i w_i} \sum_i w_i (V(m(x_i), y_i) - \bar{V})^2\) with
    the implied relation \(\operatorname{Var}(V(m(x_i), y_i)) \sim \frac{1}{w_i} \).
    If your weights are for repeated observations, so-called frequency weights, then
    the above estimate is conservative because it uses \(n - 1\) instead
    of \((\sum_i w_i) - 1\).

    References
    ----------
    `[FLM2022]`

    :   T. Fissler, C. Lorentzen, and M. Mayer.
        "Model Comparison and Calibration Assessment". (2022)
        [arxiv:2202.12780](https://arxiv.org/abs/2202.12780).

    Examples
    --------
    >>> compute_bias(y_obs=[0, 0, 1, 1], y_pred=[-1, 1, 1 , 2])
    shape: (1, 5)
    ┌───────────┬────────────┬──────────────┬─────────────┬──────────┐
    │ bias_mean ┆ bias_count ┆ bias_weights ┆ bias_stderr ┆ p_value  │
    │ ---       ┆ ---        ┆ ---          ┆ ---         ┆ ---      │
    │ f64       ┆ u32        ┆ f64          ┆ f64         ┆ f64      │
    ╞═══════════╪════════════╪══════════════╪═════════════╪══════════╡
    │ 0.25      ┆ 4          ┆ 4.0          ┆ 0.478714    ┆ 0.637618 │
    └───────────┴────────────┴──────────────┴─────────────┴──────────┘
    >>> compute_bias(y_obs=[0, 0, 1, 1], y_pred=[-1, 1, 1 , 2],
    ... feature=["a", "a", "b", "b"])
    shape: (2, 6)
    ┌─────────┬───────────┬────────────┬──────────────┬─────────────┬─────────┐
    │ feature ┆ bias_mean ┆ bias_count ┆ bias_weights ┆ bias_stderr ┆ p_value │
    │ ---     ┆ ---       ┆ ---        ┆ ---          ┆ ---         ┆ ---     │
    │ str     ┆ f64       ┆ u32        ┆ f64          ┆ f64         ┆ f64     │
    ╞═════════╪═══════════╪════════════╪══════════════╪═════════════╪═════════╡
    │ a       ┆ 0.0       ┆ 2          ┆ 2.0          ┆ 1.0         ┆ 1.0     │
    │ b       ┆ 0.5       ┆ 2          ┆ 2.0          ┆ 0.5         ┆ 0.5     │
    └─────────┴───────────┴────────────┴──────────────┴─────────────┴─────────┘
    """
    validate_same_first_dimension(y_obs, y_pred)
    n_pred = length_of_second_dimension(y_pred)
    pred_names, _ = get_sorted_array_names(y_pred)

    if weights is not None:
        validate_same_first_dimension(weights, y_obs)
        w = np.asarray(weights)
        if w.ndim > 1:
            msg = f"The array weights must be 1-dimensional, got weights.ndim={w.ndim}."
            raise ValueError(msg)
    else:
        w = np.ones_like(y_obs, dtype=float)

    df_list = []
    with pl.StringCache():
        is_categorical = False
        is_string = False

        if feature is None:
            feature_name = None
        else:
            feature_name = array_name(feature, default="feature")
            # The following statement, i.e. possibly the creation of a pl.Categorical,
            # MUST be under the StringCache context manager!
            feature = pl.Series(name=feature_name, values=feature)
            validate_same_first_dimension(y_obs, feature)
            if (feature.dtype == pl.Categorical) or (
                polars_version >= Version("0.20.0") and feature.dtype == pl.Enum
            ):
                # FIXME: polars >= 0.20.0
                is_categorical = True
            elif feature.dtype in [pl.Utf8, pl.Object]:
                # We could convert strings to categoricals.
                is_string = True
            # FIXME: polars >= 0.19.14
            # Then, just use Series.dtype.is_float()
            elif (hasattr(feature.dtype, "is_float") and feature.dtype.is_float()) or (
                not hasattr(feature.dtype, "is_float") and feature.is_float()
            ):
                # We treat NaN as Null values, numpy will see a Null as a NaN.
                feature = feature.fill_nan(None)
            else:
                # integers
                pass

            if is_categorical or is_string:
                # For categorical and string features, knowing the frequency table in
                # advance makes life easier in order to make results consistent.
                # Consider
                #     feature  count
                #         "a"      3
                #         "b"      2
                #         "c"      2
                #         "d"      1
                # with n_bins = 2. As we want the effective number of bins to be at
                # most n_bins, we want, in the above case, only "a" in the final
                # result. Therefore, we need to internally decrease n_bins to 1.
                if feature.null_count() == 0:
                    value_count = feature.value_counts(sort=True)
                    n_bins_ef = n_bins
                else:
                    value_count = feature.drop_nulls().value_counts(sort=True)
                    n_bins_ef = n_bins - 1

                if n_bins_ef >= value_count.shape[0]:
                    n_bins = value_count.shape[0]
                else:
                    # FIXME: polars >= 0.20
                    if polars_version >= Version("0.20.0"):
                        count_name = "count"
                    else:
                        count_name = "counts"
                    n = value_count[count_name][n_bins_ef]
                    n_bins_tmp = value_count.filter(pl.col(count_name) >= n).shape[0]
                    if n_bins_tmp > n_bins_ef:
                        n_bins = value_count.filter(pl.col(count_name) > n).shape[0]
                    else:
                        n_bins = n_bins_tmp

                if feature.null_count() >= 1:
                    n_bins += 1

                if n_bins == 0:
                    msg = (
                        "Due to ties, the effective number of bins is 0. "
                        f"Consider to increase n_bins>={n_bins_tmp}."
                    )
                    warnings.warn(msg, UserWarning, stacklevel=2)
            else:
                # Binning
                # We use method="inverted_cdf" (same as "lower") instead of the
                # default "linear" because "linear" produces as many unique values
                # as before.
                # If we have Null values, we should reserve one bin for it and reduce
                # the effective number of bins by 1.
                n_bins_ef = max(1, n_bins - (feature.null_count() >= 1))
                q = np.nanquantile(
                    feature,
                    # Improved rounding errors by using integers and dividing at the
                    # end as opposed to np.linspace with 1/n_bins step size.
                    q=np.arange(1, n_bins_ef) / n_bins_ef,
                    method="inverted_cdf",
                )
                q = np.unique(q)  # Some quantiles might be the same.
                # We want: bins[i-1] < x <= bins[i]
                f_binned = np.digitize(feature, bins=q, right=True)
                # Now, we insert Null values again at the original places.
                f_binned = (
                    pl.LazyFrame(
                        [
                            pl.Series("__f_binned", f_binned, dtype=feature.dtype),
                            feature,
                        ]
                    )
                    .select(
                        pl.when(pl.col(feature_name).is_null())
                        .then(None)
                        .otherwise(pl.col("__f_binned"))
                        .alias("bin")
                    )
                    .collect()
                    .get_column("bin")
                )

        for i in range(len(pred_names)):
            # Loop over columns of y_pred.
            x = y_pred if n_pred == 0 else get_second_dimension(y_pred, i)

            bias = identification_function(
                y_obs=y_obs,
                y_pred=x,
                functional=functional,
                level=level,
            )

            if feature is None:
                bias_mean = np.average(bias, weights=w)
                bias_weights = np.sum(w)
                bias_count = bias.shape[0]
                # Note: with Bessel correction
                bias_stddev = np.average((bias - bias_mean) ** 2, weights=w) / np.amax(
                    [1, bias_count - 1]
                )
                df = pl.DataFrame(
                    {
                        "bias_mean": [bias_mean],
                        "bias_count": pl.Series([bias_count], dtype=pl.UInt32),
                        "bias_weights": [bias_weights],
                        "bias_stderr": [np.sqrt(bias_stddev)],
                    }
                )
            else:
                df = pl.DataFrame(
                    {
                        "y_obs": y_obs,
                        "y_pred": x,
                        feature_name: feature,
                        "bias": bias,
                        "weights": w,
                    }
                )

                agg_list = [
                    pl.col("bias_mean").first(),
                    pl.count("bias").alias("bias_count"),
                    pl.col("weights").sum().alias("bias_weights"),
                    (
                        (
                            pl.col("weights")
                            * ((pl.col("bias") - pl.col("bias_mean")) ** 2)
                        ).sum()
                        / pl.col("weights").sum()
                    ).alias("variance"),
                ]

                if is_categorical or is_string:
                    groupby_name = feature_name
                else:
                    # See above for the creation of the binned feature f_binned.
                    df = df.hstack([f_binned])
                    groupby_name = "bin"
                    agg_list.append(pl.col(feature_name).mean())

                df = df.lazy().select(
                    [
                        pl.all(),
                        (
                            (pl.col("weights") * pl.col("bias"))
                            .sum()
                            .over(groupby_name)
                            / pl.col("weights").sum().over(groupby_name)
                        ).alias("bias_mean"),
                    ]
                )
                # FIXME: polars >= 0.19
                if polars_version >= Version("0.19.0"):
                    df = df.group_by(groupby_name)
                else:
                    df = df.groupby(groupby_name)
                df = (
                    df.agg(agg_list)
                    .with_columns(
                        [
                            pl.when(pl.col("bias_count") > 1)
                            .then(pl.col("variance") / (pl.col("bias_count") - 1))
                            .otherwise(pl.col("variance"))
                            .sqrt()
                            .alias("bias_stderr"),
                        ]
                    )
                    # With sort and head alone, we could lose the null value, but we
                    # want to keep it.
                    # .sort("bias_count", descending=True)
                    # .head(n_bins)
                    .with_columns(
                        pl.when(pl.col(feature_name).is_null())
                        .then(pl.max("bias_count") + 1)
                        .otherwise(pl.col("bias_count"))
                        .alias("__priority")
                    )
                    .sort("__priority", descending=True)
                    .head(n_bins)
                )
                # FIXME: When n_bins=0, the result should be an empty dataframe
                # (0 rows and some columns). For some unknown reason as of
                # polars 0.20.20, the following sort neglects the head(0) statement.
                # Therefore, we place an explicit collect here. This should not be
                # needed!
                if n_bins == 0 or feature.null_count() >= 1:
                    df = df.collect().lazy()
                df = df.sort(feature_name, descending=False)

                df = df.select(
                    [
                        pl.col(feature_name),
                        pl.col("bias_mean"),
                        pl.col("bias_count"),
                        pl.col("bias_weights"),
                        pl.col("bias_stderr"),
                    ]
                ).collect()

                # if is_categorical:
                #     # Pyarrow does not yet support sorting dictionary type arrays,
                #     # see
                #     # https://issues.apache.org/jira/browse/ARROW-14314
                #     # We resort to pandas instead.
                #     import pyarrow as pa
                #     df = df.to_pandas().sort_values(feature_name)
                #     df = pa.Table.from_pandas(df)

            # Add column with p-value of 2-sided t-test.
            # We explicitly convert "to_numpy", because otherwise we get:
            #   RuntimeWarning: A builtin ctypes object gave a PEP3118 format string
            #   that does not match its itemsize, so a best-guess will be made of the
            #   data type. Newer versions of python may behave correctly.
            stderr_ = df.get_column("bias_stderr")
            p_value = np.full_like(stderr_, fill_value=np.nan)
            n = df.get_column("bias_count")
            p_value[np.asarray((n > 1) & (stderr_ == 0), dtype=bool)] = 0
            mask = stderr_ > 0
            x = df.get_column("bias_mean").filter(mask).to_numpy()
            n = df.get_column("bias_count").filter(mask).to_numpy()
            stderr = stderr_.filter(mask).to_numpy()
            # t-statistic t (-|t| and factor of 2 because of 2-sided test)
            p_value[np.asarray(mask, dtype=bool)] = 2 * special.stdtr(
                n - 1,  # degrees of freedom
                -np.abs(x / stderr),
            )
            df = df.with_columns(pl.Series("p_value", p_value))

            # Add column "model".
            if n_pred > 0:
                model_col_name = "model_" if feature_name == "model" else "model"
                df = df.with_columns(
                    pl.Series(model_col_name, [pred_names[i]] * df.shape[0])
                )

            # Select the columns in the correct order.
            col_selection = []
            if n_pred > 0:
                col_selection.append(model_col_name)
            if feature_name is not None and feature_name in df.columns:
                col_selection.append(feature_name)
            col_selection += [
                "bias_mean",
                "bias_count",
                "bias_weights",
                "bias_stderr",
                "p_value",
            ]
            df_list.append(df.select(col_selection))

        df = pl.concat(df_list)
    return df

identification_function(y_obs, y_pred, *, functional='mean', level=0.5)

Canonical identification function.

Identification functions act as generalised residuals. See Notes for further details.

Parameters:

y_obs : array-like of shape (n_obs), required
    Observed values of the response variable. For binary classification, y_obs is
    expected to be in the interval [0, 1].
y_pred : array-like of shape (n_obs), required
    Predicted values of the functional, e.g. the conditional expectation of the
    response, E(Y|X).
functional : str, default='mean'
    The functional that is induced by the identification function V. Options are:

      • "mean". Argument level is neglected.
      • "median". Argument level is neglected.
      • "expectile"
      • "quantile"
level : float, default=0.5
    The level of the expectile or quantile. (Often called \(\alpha\).) It must be
    0 < level < 1. level=0.5 and functional="expectile" gives the mean. level=0.5
    and functional="quantile" gives the median.

Returns:

V : ndarray of shape (n_obs)
    Values of the identification function.

Notes

The function \(V(y, z)\) for observation \(y=y_{obs}\) and prediction \(z=y_{pred}\) is a strict identification function for the functional \(T\), or induces the functional \(T\) as:

\[ \mathbb{E}[V(Y, z)] = 0\quad \Leftrightarrow\quad z\in T(F) \quad \forall \text{ distributions } F \in \mathcal{F} \]

for some class of distributions \(\mathcal{F}\). Implemented examples of the functional \(T\) are mean, median, expectiles and quantiles.

| functional | strict identification function \(V(y, z)\)           |
| ---------- | ---------------------------------------------------- |
| mean       | \(z - y\)                                            |
| median     | \(\mathbf{1}\{z \ge y\} - \frac{1}{2}\)              |
| expectile  | \(2 \mid\mathbf{1}\{z \ge y\} - \alpha\mid (z - y)\) |
| quantile   | \(\mathbf{1}\{z \ge y\} - \alpha\)                   |

For level \(\alpha\).
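
For instance, a minimal sketch of the quantile row, assuming level \(\alpha = 0.8\):

    import numpy as np

    y_obs = np.array([0.0, 0.0, 1.0, 1.0])
    y_pred = np.array([-1.0, 1.0, 1.0, 2.0])
    alpha = 0.8
    # Quantile identification function: 1{z >= y} - alpha
    V = np.greater_equal(y_pred, y_obs) - alpha
    # V = array([-0.8,  0.2,  0.2,  0.2])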

References
[Gneiting2011]

T. Gneiting. "Making and Evaluating Point Forecasts". (2011) doi:10.1198/jasa.2011.r10138 arxiv:0912.0902

Examples:

>>> identification_function(y_obs=[0, 0, 1, 1], y_pred=[-1, 1, 1 , 2])
array([-1,  1,  0,  1])
Source code in src/model_diagnostics/calibration/identification.py
def identification_function(
    y_obs: npt.ArrayLike,
    y_pred: npt.ArrayLike,
    *,
    functional: str = "mean",
    level: float = 0.5,
) -> np.ndarray:
    r"""Canonical identification function.

    Identification functions act as generalised residuals. See [Notes](#notes) for
    further details.

    Parameters
    ----------
    y_obs : array-like of shape (n_obs)
        Observed values of the response variable.
        For binary classification, y_obs is expected to be in the interval [0, 1].
    y_pred : array-like of shape (n_obs)
        Predicted values of the `functional`, e.g. the conditional expectation of
        the response, `E(Y|X)`.
    functional : str
        The functional that is induced by the identification function `V`. Options are:

        - `"mean"`. Argument `level` is neglected.
        - `"median"`. Argument `level` is neglected.
        - `"expectile"`
        - `"quantile"`
    level : float
        The level of the expectile or quantile. (Often called \(\alpha\).)
        It must be `0 < level < 1`.
        `level=0.5` and `functional="expectile"` gives the mean.
        `level=0.5` and `functional="quantile"` gives the median.

    Returns
    -------
    V : ndarray of shape (n_obs)
        Values of the identification function.

    Notes
    -----
    The function \(V(y, z)\) for observation \(y=y_{obs}\) and prediction
    \(z=y_{pred}\) is a strict identification function for the functional \(T\), or
    induces the functional \(T\) as:

    \[
    \mathbb{E}[V(Y, z)] = 0\quad \Leftrightarrow\quad z\in T(F) \quad \forall
    \text{ distributions } F
    \in \mathcal{F}
    \]

    for some class of distributions \(\mathcal{F}\). Implemented examples of the
    functional \(T\) are mean, median, expectiles and quantiles.

    | functional | strict identification function \(V(y, z)\)           |
    | ---------- | ---------------------------------------------------- |
    | mean       | \(z - y\)                                            |
    | median     | \(\mathbf{1}\{z \ge y\} - \frac{1}{2}\)              |
    | expectile  | \(2 \mid\mathbf{1}\{z \ge y\} - \alpha\mid (z - y)\) |
    | quantile   | \(\mathbf{1}\{z \ge y\} - \alpha\)                   |

    For `level` \(\alpha\).

    References
    ----------
    `[Gneiting2011]`

    :   T. Gneiting.
        "Making and Evaluating Point Forecasts". (2011)
        [doi:10.1198/jasa.2011.r10138](https://doi.org/10.1198/jasa.2011.r10138)
        [arxiv:0912.0902](https://arxiv.org/abs/0912.0902)

    Examples
    --------
    >>> identification_function(y_obs=[0, 0, 1, 1], y_pred=[-1, 1, 1 , 2])
    array([-1,  1,  0,  1])
    """
    y_o: np.ndarray
    y_p: np.ndarray
    y_o, y_p = validate_2_arrays(y_obs, y_pred)

    if functional in ("expectile", "quantile") and (level <= 0 or level >= 1):
        msg = f"Argument level must fulfil 0 < level < 1, got {level}."
        raise ValueError(msg)

    if functional == "mean":
        return y_p - y_o
    elif functional == "median":
        return np.greater_equal(y_p, y_o) - 0.5
    elif functional == "expectile":
        return 2 * np.abs(np.greater_equal(y_p, y_o) - level) * (y_p - y_o)
    elif functional == "quantile":
        return np.greater_equal(y_p, y_o) - level
    else:
        allowed_functionals = ("mean", "median", "expectile", "quantile")
        msg = (
            f"Argument functional must be one of {allowed_functionals}, got "
            f"{functional}."
        )
        raise ValueError(msg)

plot_bias(y_obs, y_pred, feature=None, weights=None, *, functional='mean', level=0.5, n_bins=10, confidence_level=0.9, ax=None)

Plot model bias conditional on a feature.

This plots the generalised bias (residuals), i.e. the values of the canonical identification function, versus a feature. This is a good way to assess whether a model is conditionally calibrated or not. Well calibrated models have bias terms around zero. See Notes for further details.

For numerical features, NaN values are treated as Null values. Null values are always plotted as the rightmost value on the x-axis and marked with a diamond instead of a dot.

Parameters:

y_obs : array-like of shape (n_obs), required
    Observed values of the response variable. For binary classification, y_obs is
    expected to be in the interval [0, 1].
y_pred : array-like of shape (n_obs) or (n_obs, n_models), required
    Predicted values of the conditional expectation of Y, E(Y|X).
feature : array-like of shape (n_obs) or None, default=None
    Some feature column.
weights : array-like of shape (n_obs) or None, default=None
    Case weights. If given, the bias is calculated as the weighted average of the
    identification function with these weights. Note that the standard errors and
    p-values in the output are based on the assumption that the variance of the bias
    is inversely proportional to the weights. See the Notes section for details.
functional : str, default='mean'
    The functional that is induced by the identification function V. Options are:

      • "mean". Argument level is neglected.
      • "median". Argument level is neglected.
      • "expectile"
      • "quantile"
level : float, default=0.5
    The level of the expectile or quantile. (Often called \(\alpha\).) It must be
    0 < level < 1. level=0.5 and functional="expectile" gives the mean. level=0.5
    and functional="quantile" gives the median.
n_bins : int, default=10
    The number of bins for numerical features and the maximal number of (most
    frequent) categories shown for categorical features. Due to ties, the effective
    number of bins might be smaller than n_bins.
confidence_level : float, default=0.9
    Confidence level for error bars. If 0, no error bars are plotted. Value must
    fulfil 0 <= confidence_level < 1.
ax : matplotlib.axes.Axes or plotly Figure, default=None
    Axes object to draw the plot onto, otherwise uses the current Axes.

Returns:

ax : matplotlib.axes.Axes or plotly Figure
    Either the matplotlib axes or the plotly figure. This is configurable by setting
    the plot_backend via model_diagnostics.set_config or
    model_diagnostics.config_context.

Notes

A model \(m(X)\) is conditionally calibrated iff \(\mathbb{E}(V(m(X), Y)|X)=0\) almost surely. The empirical version, given some data, reads \(\frac{1}{n}\sum_i V(m(x_i), y_i)\). See [FLM2022].

References
[FLM2022]

T. Fissler, C. Lorentzen, and M. Mayer. "Model Comparison and Calibration Assessment". (2022) arxiv:2202.12780.
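
A minimal, hypothetical usage sketch (the data are made up and the import path is assumed from the source location below):

    import numpy as np
    import matplotlib.pyplot as plt
    from model_diagnostics.calibration import plot_bias  # assumed import path

    rng = np.random.default_rng(42)
    feature = rng.uniform(0, 1, size=200)
    y_obs = feature + rng.normal(scale=0.1, size=200)
    y_pred = feature  # calibrated by construction, so bias_mean should be around 0
    ax = plot_bias(y_obs=y_obs, y_pred=y_pred, feature=feature, n_bins=5)
    plt.show()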

Source code in src/model_diagnostics/calibration/plots.py
def plot_bias(
    y_obs: npt.ArrayLike,
    y_pred: npt.ArrayLike,
    feature: Optional[npt.ArrayLike] = None,
    weights: Optional[npt.ArrayLike] = None,
    *,
    functional: str = "mean",
    level: float = 0.5,
    n_bins: int = 10,
    confidence_level: float = 0.9,
    ax: Optional[mpl.axes.Axes] = None,
):
    r"""Plot model bias conditional on a feature.

    This plots the generalised bias (residuals), i.e. the values of the canonical
    identification function, versus a feature. This is a good way to assess whether
    a model is conditionally calibrated or not. Well calibrated models have bias terms
    around zero.
    See Notes for further details.

    For numerical features, NaN values are treated as Null values. Null values are
    always plotted as the rightmost value on the x-axis and marked with a diamond
    instead of a dot.

    Parameters
    ----------
    y_obs : array-like of shape (n_obs)
        Observed values of the response variable.
        For binary classification, y_obs is expected to be in the interval [0, 1].
    y_pred : array-like of shape (n_obs) or (n_obs, n_models)
        Predicted values of the conditional expectation of Y, `E(Y|X)`.
    feature : array-like of shape (n_obs) or None
        Some feature column.
    weights : array-like of shape (n_obs) or None
        Case weights. If given, the bias is calculated as weighted average of the
        identification function with these weights.
        Note that the standard errors and p-values in the output are based on the
        assumption that the variance of the bias is inversely proportional to the
        weights. See the Notes section for details.
    functional : str
        The functional that is induced by the identification function `V`. Options are:

        - `"mean"`. Argument `level` is neglected.
        - `"median"`. Argument `level` is neglected.
        - `"expectile"`
        - `"quantile"`
    level : float
        The level of the expectile or quantile. (Often called \(\alpha\).)
        It must be `0 < level < 1`.
        `level=0.5` and `functional="expectile"` gives the mean.
        `level=0.5` and `functional="quantile"` gives the median.
    n_bins : int
        The number of bins for numerical features and the maximal number of (most
        frequent) categories shown for categorical features. Due to ties, the effective
        number of bins might be smaller than `n_bins`.
    confidence_level : float
        Confidence level for error bars. If 0, no error bars are plotted. Value must
        fulfil `0 <= confidence_level < 1`.
    ax : matplotlib.axes.Axes or plotly Figure
        Axes object to draw the plot onto, otherwise uses the current Axes.

    Returns
    -------
    ax :
        Either the matplotlib axes or the plotly figure. This is configurable by
        setting the `plot_backend` via
        [`model_diagnostics.set_config`][model_diagnostics.set_config] or
        [`model_diagnostics.config_context`][model_diagnostics.config_context].

    Notes
    -----
    A model \(m(X)\) is conditionally calibrated iff
    \(\mathbb{E}(V(m(X), Y)|X)=0\) almost surely. The empirical version, given some
    data, reads \(\frac{1}{n}\sum_i V(m(x_i), y_i)\). See `[FLM2022]`.

    References
    ----------
    `[FLM2022]`

    :   T. Fissler, C. Lorentzen, and M. Mayer.
        "Model Comparison and Calibration Assessment". (2022)
        [arxiv:2202.12780](https://arxiv.org/abs/2202.12780).
    """
    if not (0 <= confidence_level < 1):
        msg = (
            f"Argument confidence_level must fulfil 0 <= level < 1, got "
            f"{confidence_level}."
        )
        raise ValueError(msg)
    with_errorbars = confidence_level > 0

    if ax is None:
        plot_backend = get_config()["plot_backend"]
        if plot_backend == "matplotlib":
            ax = plt.gca()
        else:
            import plotly.graph_objects as go

            fig = ax = go.Figure()
    elif isinstance(ax, mpl.axes.Axes):
        plot_backend = "matplotlib"
    elif is_plotly_figure(ax):
        import plotly.graph_objects as go

        plot_backend = "plotly"
        fig = ax
    else:
        msg = (
            "The ax argument must be None, a matplotlib Axes or a plotly Figure, "
            f"got {type(ax)}."
        )
        raise ValueError(msg)

    df = compute_bias(
        y_obs=y_obs,
        y_pred=y_pred,
        feature=feature,
        weights=weights,
        functional=functional,
        level=level,
        n_bins=n_bins,
    )

    if df["bias_stderr"].fill_nan(None).null_count() > 0 and with_errorbars:
        msg = (
            "Some values of 'bias_stderr' are null. Therefore no error bars are "
            "shown for that y_pred/model, despite the fact that confidence_level>0 "
            "was set to True."
        )
        warnings.warn(msg, UserWarning, stacklevel=2)

    if "model_" in df.columns:
        col_model = "model_"
    elif "model" in df.columns:
        col_model = "model"
    else:
        col_model = None

    if feature is None:
        # We treat the predictions from different models as a feature.
        feature_name = col_model
        feature_has_nulls = False
    else:
        feature_name = array_name(feature, default="feature")
        feature_has_nulls = df[feature_name].null_count() > 0

    is_categorical = False
    is_string = False
    feature_dtype = df.get_column(feature_name).dtype
    if (feature_dtype == pl.Categorical) or (
        polars_version >= Version("0.20.0") and feature_dtype == pl.Enum
    ):
        is_categorical = True
    elif feature_dtype in [pl.Utf8, pl.Object]:
        is_string = True

    n_x = df[feature_name].n_unique()

    # horizontal line at y=0
    if plot_backend == "matplotlib":
        ax.axhline(y=0, xmin=0, xmax=1, color="k", linestyle="dotted")
    else:
        fig.add_hline(y=0, line={"color": "black", "dash": "dot"}, showlegend=False)

    # bias plot
    if feature is None or col_model is None:
        pred_names = [None]
    else:
        # pred_names = df[col_model].unique() this automatically sorts
        pred_names, _ = get_sorted_array_names(y_pred)
    n_models = len(pred_names)
    with_label = feature is not None and (n_models >= 2 or feature_has_nulls)

    if (is_string or is_categorical) and feature_has_nulls:
        # We want the Null values at the end and therefore sort again.
        df = df.sort(feature_name, descending=False, nulls_last=True)

    for i, m in enumerate(pred_names):
        filter_condition = True if m is None else pl.col(col_model) == m
        df_i = df.filter(filter_condition)
        label = m if with_label else None

        if df_i["bias_stderr"].null_count() > 0:
            with_errorbars_i = False
        else:
            with_errorbars_i = with_errorbars

        if with_errorbars_i:
            # We scale bias_stderr by the corresponding value of the t-distribution
            # to get our desired confidence level.
            n = df_i["bias_count"].to_numpy()
            conf_level_fct = special.stdtrit(
                np.maximum(n - 1, 1),  # degrees of freedom, if n=0 => bias_stderr=0.
                1 - (1 - confidence_level) / 2,
            )
            df_i = df_i.with_columns(
                [(pl.col("bias_stderr") * conf_level_fct).alias("bias_stderr")]
            )

        if is_string or is_categorical:
            df_ii = df_i.filter(pl.col(feature_name).is_not_null())
            # We x-shift a little for a better visual.
            span = (n_x - 1) / n_x / n_models  # length for one cat value and one model
            x = np.arange(n_x - feature_has_nulls)
            if n_models > 1:
                x = x + (i - n_models // 2) * span * 0.5
            if plot_backend == "matplotlib":
                ax.errorbar(
                    x,
                    df_ii["bias_mean"],
                    yerr=df_ii["bias_stderr"] if with_errorbars_i else None,
                    marker="o",
                    linestyle="None",
                    capsize=4,
                    label=label,
                )
            else:
                fig.add_scatter(
                    x=x,
                    y=df_ii["bias_mean"],
                    error_y={
                        "type": "data",  # value of error bar given in data coordinates
                        "array": df_ii["bias_stderr"] if with_errorbars_i else None,
                        "width": 4,
                        "visible": True,
                    },
                    marker={"color": get_plotly_color(i)},
                    mode="markers",
                    name=label,
                )
        else:
            if with_errorbars_i:
                lower = df_i["bias_mean"] - df_i["bias_stderr"]
                upper = df_i["bias_mean"] + df_i["bias_stderr"]
                if plot_backend == "matplotlib":
                    ax.fill_between(
                        df_i[feature_name],
                        lower,
                        upper,
                        alpha=0.1,
                    )
                else:
                    # plotly has no equivalent of fill_between and needs a bit
                    # more coding
                    # FIXME: polars >= 0.20.0 use df_i[::-1, feature_name]
                    color = get_plotly_color(i)
                    fig.add_scatter(
                        x=pl.concat([df_i[feature_name], df_i[feature_name][::-1]]),
                        y=pl.concat([lower, upper[::-1]]),
                        fill="toself",
                        fillcolor=color,
                        hoverinfo="skip",
                        line={"color": color},
                        mode="lines",
                        opacity=0.1,
                        showlegend=False,
                    )
            if plot_backend == "matplotlib":
                ax.plot(
                    df_i[feature_name],
                    df_i["bias_mean"],
                    linestyle="solid",
                    marker="o",
                    label=label,
                )
            else:
                fig.add_scatter(
                    x=df_i[feature_name],
                    y=df_i["bias_mean"],
                    marker_symbol="circle",
                    mode="lines+markers",
                    line={"color": get_plotly_color(i)},
                    name=label,
                )

        if df_i[feature_name].null_count() > 0:
            # Null values are plotted as diamonds as rightmost point.
            df_i_null = df_i.filter(pl.col(feature_name).is_null())

            if is_string or is_categorical:
                x_null = np.array([n_x - 1])
            else:
                x_min = df_i[feature_name].min()
                x_max = df_i[feature_name].max()
                if n_x == 1:
                    # df_i[feature_name] is the null value.
                    x_null, span = np.array([0]), 1
                elif n_x == 2:
                    x_null, span = np.array([2 * x_max]), 0.5 * x_max / n_models
                else:
                    x_null = np.array([x_max + (x_max - x_min) / n_x])
                    span = (x_null - x_max) / n_models

            if n_models > 1:
                x_null = x_null + (i - n_models // 2) * span * 0.5

            if plot_backend == "matplotlib":
                color = ax.get_lines()[-1].get_color()  # previous line color
                ax.errorbar(
                    x_null,
                    df_i_null["bias_mean"],
                    yerr=df_i_null["bias_stderr"] if with_errorbars_i else None,
                    marker="D",
                    linestyle="None",
                    capsize=4,
                    label=None,
                    color=color,
                )
            else:
                fig.add_scatter(
                    x=x_null,
                    y=df_i_null["bias_mean"],
                    error_y={
                        "type": "data",  # value of error bar given in data coordinates
                        "array": df_i_null["bias_stderr"] if with_errorbars_i else None,
                        "width": 4,
                        "visible": True,
                    },
                    marker={"color": get_plotly_color(i), "symbol": "diamond"},
                    mode="markers",
                    showlegend=False,
                )

    if is_categorical or is_string:
        if df_i[feature_name].null_count() > 0:
            # Without cast to pl.Utf8, the following error might occur:
            # exceptions.ComputeError: cannot combine categorical under a global string
            # cache with a non cached categorical
            tick_labels = df_i[feature_name].cast(pl.Utf8).fill_null("Null")
        else:
            tick_labels = df_i[feature_name]
        x_label = feature_name
        if plot_backend == "matplotlib":
            ax.set_xticks(np.arange(n_x), labels=tick_labels)
        else:
            fig.update_layout(
                xaxis={
                    "tickmode": "array",
                    "tickvals": np.arange(n_x),
                    "ticktext": tick_labels,
                }
            )
    elif feature_name is not None:
        x_label = "binned " + feature_name
    else:
        x_label = ""

    if feature is None:
        title = "Bias Plot"
    else:
        model_name = array_name(y_pred, default="")
        # test for empty string ""
        title = "Bias Plot" if not model_name else "Bias Plot " + model_name

    if plot_backend == "matplotlib":
        ax.set(xlabel=x_label, ylabel="bias", title=title)
    else:
        fig.update_layout(xaxis_title=x_label, yaxis_title="bias", title=title)

    if with_label and plot_backend == "matplotlib":
        if feature_has_nulls:
            # Add legend entry for diamonds as Null values.
            # Unfortunately, the Null value legend entry often appears first, but we
            # want it at the end.
            ax.scatter([], [], marker="D", color="grey", label="Null values")
            handles, labels = ax.get_legend_handles_labels()
            if (labels[-1] != "Null values") and "Null values" in labels:
                i = labels.index("Null values")
                # i can't be the last index
                labels = labels[:i] + labels[i + 1 :] + [labels[i]]
                handles = handles[:i] + handles[i + 1 :] + [handles[i]]
            ax.legend(handles=handles, labels=labels)
        else:
            ax.legend()
    elif with_label and feature_has_nulls:
        fig.add_scatter(
            x=[None],
            y=[None],
            mode="markers",
            name="Null values",
            marker={"size": 7, "color": "grey", "symbol": "diamond"},
        )

    return ax

plot_reliability_diagram(y_obs, y_pred, weights=None, *, functional='mean', level=0.5, n_bootstrap=None, confidence_level=0.9, diagram_type='reliability', ax=None)

Plot a reliability diagram.

A reliability diagram or calibration curve assesses auto-calibration. It plots the conditional expectation given the predictions E(y_obs|y_pred) (y-axis) vs the predictions y_pred (x-axis). The conditional expectation is estimated via isotonic regression (PAV algorithm) of y_obs on y_pred. See Notes for further details.

Parameters:

y_obs : array-like of shape (n_obs), required
    Observed values of the response variable. For binary classification, y_obs is
    expected to be in the interval [0, 1].
y_pred : array-like of shape (n_obs) or (n_obs, n_models), required
    Predicted values of the conditional expectation of Y, E(Y|X).
weights : array-like of shape (n_obs) or None, default=None
    Case weights.
functional : str, default='mean'
    The functional that is induced by the identification function V. Options are:

      • "mean". Argument level is neglected.
      • "median". Argument level is neglected.
      • "expectile"
      • "quantile"
level : float, default=0.5
    The level of the expectile or quantile. (Often called \(\alpha\).) It must be
    0 < level < 1. level=0.5 and functional="expectile" gives the mean. level=0.5
    and functional="quantile" gives the median.
n_bootstrap : int or None, default=None
    If not None, then scipy.stats.bootstrap with n_resamples=n_bootstrap is used to
    calculate confidence intervals at level confidence_level.
confidence_level : float, default=0.9
    Confidence level for bootstrap uncertainty regions.
diagram_type : str, default='reliability'
      • "reliability": Plot a reliability diagram.
      • "bias": Plot roughly a 45 degree rotated reliability diagram. The resulting
        plot is similar to plot_bias, i.e. y_pred - E(y_obs|y_pred) vs y_pred.
ax : matplotlib.axes.Axes or plotly Figure, default=None
    Axes object to draw the plot onto, otherwise uses the current Axes.

Returns:

ax : matplotlib.axes.Axes or plotly Figure
    Either the matplotlib axes or the plotly figure. This is configurable by setting
    the plot_backend via model_diagnostics.set_config or
    model_diagnostics.config_context.

Notes

The expectation conditional on the predictions is \(E(Y|y_{pred})\). This object is estimated by the pool-adjacent-violators (PAV) algorithm, which has very desirable properties:

- It is non-parametric without any tuning parameter. Thus, the results are
  easily reproducible.
- Optimal selection of bins
- Statistically consistent estimator

For details, refer to [Dimitriadis2021].

References
[Dimitriadis2021]

T. Dimitriadis, T. Gneiting, and A. I. Jordan. "Stable reliability diagrams for probabilistic classifiers". In: Proceedings of the National Academy of Sciences 118.8 (2021), e2016191118. doi:10.1073/pnas.2016191118.
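
A minimal, hypothetical usage sketch (data made up, import path assumed from the source location below); for an auto-calibrated classifier, the curve should track the diagonal:

    import numpy as np
    from model_diagnostics.calibration import plot_reliability_diagram  # assumed path

    rng = np.random.default_rng(42)
    y_pred = rng.uniform(size=1000)
    y_obs = rng.binomial(n=1, p=y_pred)  # auto-calibrated by construction
    ax = plot_reliability_diagram(y_obs=y_obs, y_pred=y_pred, n_bootstrap=100)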

Source code in src/model_diagnostics/calibration/plots.py
def plot_reliability_diagram(
    y_obs: npt.ArrayLike,
    y_pred: npt.ArrayLike,
    weights: Optional[npt.ArrayLike] = None,
    *,
    functional: str = "mean",
    level: float = 0.5,
    n_bootstrap: Optional[int] = None,
    confidence_level: float = 0.9,
    diagram_type: str = "reliability",
    ax: Optional[mpl.axes.Axes] = None,
):
    r"""Plot a reliability diagram.

    A reliability diagram or calibration curve assesses auto-calibration. It plots the
    conditional expectation given the predictions `E(y_obs|y_pred)` (y-axis) vs the
    predictions `y_pred` (x-axis).
    The conditional expectation is estimated via isotonic regression (PAV algorithm)
    of `y_obs` on `y_pred`.
    See [Notes](#notes) for further details.

    Parameters
    ----------
    y_obs : array-like of shape (n_obs)
        Observed values of the response variable.
        For binary classification, y_obs is expected to be in the interval [0, 1].
    y_pred : array-like of shape (n_obs) or (n_obs, n_models)
        Predicted values of the conditional expectation of Y, `E(Y|X)`.
    weights : array-like of shape (n_obs) or None
        Case weights.
    functional : str
        The functional that is induced by the identification function `V`. Options are:

        - `"mean"`. Argument `level` is neglected.
        - `"median"`. Argument `level` is neglected.
        - `"expectile"`
        - `"quantile"`
    level : float
        The level of the expectile or quantile. (Often called \(\alpha\).)
        It must be `0 < level < 1`.
        `level=0.5` and `functional="expectile"` gives the mean.
        `level=0.5` and `functional="quantile"` gives the median.
    n_bootstrap : int or None
        If not `None`, then `scipy.stats.bootstrap` with `n_resamples=n_bootstrap`
        is used to calculate confidence intervals at level `confidence_level`.
    confidence_level : float
        Confidence level for bootstrap uncertainty regions.
    diagram_type : str
        - `"reliability"`: Plot a reliability diagram.
        - `"bias"`: Plot the reliability diagram rotated by roughly 45 degrees, i.e.
          `y_pred - E(y_obs|y_pred)` vs `y_pred`. The resulting plot is similar to
          `plot_bias`.
    ax : matplotlib.axes.Axes or plotly Figure
        Axes object to draw the plot onto, otherwise uses the current Axes.

    Returns
    -------
    ax :
        Either the matplotlib axes or the plotly figure. This is configurable by
        setting the `plot_backend` via
        [`model_diagnostics.set_config`][model_diagnostics.set_config] or
        [`model_diagnostics.config_context`][model_diagnostics.config_context].

    Notes
    -----
    The expectation conditional on the predictions is \(E(Y|y_{pred})\). This object is
    estimated by the pool-adjacent-violators (PAV) algorithm, which has very desirable
    properties:

    - It is non-parametric and has no tuning parameters. Thus, the results are
      easily reproducible.
    - It selects the bins optimally (no manual binning required).
    - It is a statistically consistent estimator.

    For details, refer to [Dimitriadis2021].

    References
    ----------
    `[Dimitriadis2021]`

    :   T. Dimitriadis, T. Gneiting, and A. I. Jordan.
        "Stable reliability diagrams for probabilistic classifiers".
        In: Proceedings of the National Academy of Sciences 118.8 (2021), e2016191118.
        [doi:10.1073/pnas.2016191118](https://doi.org/10.1073/pnas.2016191118).
    """
    if ax is None:
        plot_backend = get_config()["plot_backend"]
        if plot_backend == "matplotlib":
            ax = plt.gca()
        else:
            import plotly.graph_objects as go

            fig = ax = go.Figure()
    elif isinstance(ax, mpl.axes.Axes):
        plot_backend = "matplotlib"
    elif is_plotly_figure(ax):
        import plotly.graph_objects as go

        plot_backend = "plotly"
        fig = ax
    else:
        msg = (
            "The ax argument must be None, a matplotlib Axes or a plotly Figure, "
            f"got {type(ax)}."
        )
        raise ValueError(msg)

    if diagram_type not in ("reliability", "bias"):
        msg = (
            "Parameter diagram_type must be either 'reliability' or 'bias', "
            f"got {diagram_type}."
        )
        raise ValueError(msg)

    if (n_cols := length_of_second_dimension(y_obs)) > 0:
        if n_cols == 1:
            y_obs = get_second_dimension(y_obs, 0)
        else:
            msg = (
                "Array-like y_obs must be 1-dimensional or have a second dimension "
                f"of length 1, got y_obs.shape[1]={n_cols}."
            )
            raise ValueError(msg)

    y_min, y_max = get_array_min_max(y_pred)
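    # Reference line of a perfectly calibrated model: the diagonal y = x for the
    # reliability diagram, the horizontal line y = 0 for the bias diagram.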
    if diagram_type == "reliability":
        if plot_backend == "matplotlib":
            ax.plot([y_min, y_max], [y_min, y_max], color="k", linestyle="dotted")
        else:
            fig.add_scatter(
                x=[y_min, y_max],
                y=[y_min, y_max],
                mode="lines",
                line={"color": "black", "dash": "dot"},
                showlegend=False,
            )
    elif plot_backend == "matplotlib":
        # Horizontal line at y=0 for the bias diagram. We plot in data coordinates
        # via ax.hlines instead of axis coordinates via
        # ax.axhline(y=0, xmin=0, xmax=1).
        ax.hlines(0, xmin=y_min, xmax=y_max, color="k", linestyle="dotted")
    else:
        # horizontal line at y=0
        fig.add_hline(y=0, line={"color": "black", "dash": "dot"}, showlegend=False)

    if n_bootstrap is not None:
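        # Statistic handed to scipy.stats.bootstrap: refit the isotonic regression
        # on each resample and evaluate it on a fixed grid x_values so that all
        # bootstrap replicates are comparable point by point.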
        if functional == "mean":

            def iso_statistic(y_obs, y_pred, weights=None, x_values=None):
                iso_b = (
                    IsotonicRegression_skl(out_of_bounds="clip")
                    .set_output(transform="default")
                    .fit(y_pred, y_obs, sample_weight=weights)
                )
                return iso_b.predict(x_values)

        else:

            def iso_statistic(y_obs, y_pred, weights=None, x_values=None):
                iso_b = IsotonicRegression(functional=functional, level=level).fit(
                    y_pred, y_obs, sample_weight=weights
                )
                return iso_b.predict(x_values)

    n_pred = length_of_second_dimension(y_pred)
    pred_names, _ = get_sorted_array_names(y_pred)

    for i in range(len(pred_names)):
        y_pred_i = y_pred if n_pred == 0 else get_second_dimension(y_pred, i)

        if functional == "mean":
            iso = (
                IsotonicRegression_skl()
                .set_output(transform="default")
                .fit(y_pred_i, y_obs, sample_weight=weights)
            )
        else:
            iso = IsotonicRegression(functional=functional, level=level).fit(
                y_pred_i, y_obs, sample_weight=weights
            )

        # confidence intervals
        if n_bootstrap is not None:
            data: tuple[npt.ArrayLike, ...]
            data = (y_obs, y_pred_i) if weights is None else (y_obs, y_pred_i, weights)
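            # Paired bootstrap: resample observations, predictions (and weights)
            # jointly to preserve their correspondence.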

            boot = bootstrap(
                data=data,
                statistic=partial(iso_statistic, x_values=iso.X_thresholds_),
                n_resamples=n_bootstrap,
                paired=True,
                confidence_level=confidence_level,
                # Note: method="bca" might result in
                # DegenerateDataWarning: The BCa confidence interval cannot be
                # calculated. This problem is known to occur when the distribution is
                # degenerate or the statistic is np.min.
                method="basic",
            )

            # We make the interval conservatively monotone increasing by applying
            # np.maximum.accumulate etc.
            lower = -np.minimum.accumulate(-boot.confidence_interval.low)
            upper = np.maximum.accumulate(boot.confidence_interval.high)
            if diagram_type == "bias":
                lower = iso.X_thresholds_ - lower
                upper = iso.X_thresholds_ - upper
            if plot_backend == "matplotlib":
                ax.fill_between(iso.X_thresholds_, lower, upper, alpha=0.1)
            else:
                # plotly has no equivalent of fill_between and needs a bit more code
                color = get_plotly_color(i)
                fig.add_scatter(
                    x=np.r_[iso.X_thresholds_, iso.X_thresholds_[::-1]],
                    y=np.r_[lower, upper[::-1]],
                    fill="toself",
                    fillcolor=color,
                    hoverinfo="skip",
                    line={"color": color},
                    mode="lines",
                    opacity=0.1,
                    showlegend=False,
                )

        # reliability curve
        label = pred_names[i] if n_pred >= 2 else None

        y_plot = (
            iso.y_thresholds_
            if diagram_type == "reliability"
            else iso.X_thresholds_ - iso.y_thresholds_
        )
        if plot_backend == "matplotlib":
            ax.plot(iso.X_thresholds_, y_plot, label=label)
        else:
            fig.add_scatter(
                x=iso.X_thresholds_,
                y=y_plot,
                mode="lines",
                line={"color": get_plotly_color(i)},
                name=label,
            )

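    # Axis labels depend on the estimated functional and the diagram type.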
    xlabel_mapping = {
        "mean": "E(Y|X)",
        "median": "median(Y|X)",
        "expectile": f"{level}-expectile(Y|X)",
        "quantile": f"{level}-quantile(Y|X)",
    }
    ylabel_mapping = {
        "mean": "E(Y|prediction)",
        "median": "median(Y|prediction)",
        "expectile": f"{level}-expectile(Y|prediction)",
        "quantile": f"{level}-quantile(Y|prediction)",
    }
    xlabel = "prediction for " + xlabel_mapping[functional]
    if diagram_type == "reliability":
        ylabel = "estimated " + ylabel_mapping[functional]
        title = "Reliability Diagram"
    else:
        ylabel = "prediction - estimated " + ylabel_mapping[functional]
        title = "Bias Reliability Diagram"

    if n_pred <= 1 and len(pred_names[0]) > 0:
        title = title + " " + pred_names[0]

    if plot_backend == "matplotlib":
        if n_pred >= 2:
            ax.legend()
        ax.set_title(title)
        ax.set(xlabel=xlabel, ylabel=ylabel)
    else:
        if n_pred <= 1:
            fig.update_layout(showlegend=False)
        fig.update_layout(xaxis_title=xlabel, yaxis_title=ylabel, title=title)

    return ax
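
A hypothetical usage sketch (assuming the package's usual import path; y_obs and y_pred as in the example above):

from model_diagnostics.calibration import plot_reliability_diagram

# Reliability diagram with a 90% bootstrap uncertainty band from 1000 resamples.
ax = plot_reliability_diagram(
    y_obs,
    y_pred,
    n_bootstrap=1000,
    confidence_level=0.9,
)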