Valid Confidence Sets in Multiclass and Multilabel Prediction

Asking for valid confidence raises two questions:

  • “Valid”: valid for the test data, the training data, or the data-generating process?
  • “Confidence”: $P(Y \notin C(X)) \le \alpha$

To move beyond purely data-based validation toward guarantees about the generating process, Vovk et al. (2005) proposed a framework called conformal inference (a code sketch follows the list below):

  • $n$ observations,
  • desired confidence level $1-\alpha$,
  • construct confidence sets $C(x)$ using conformal methods so that the sets capture the underlying distribution:
    • for a new pair $(X_{n+1}, Y_{n+1})$ drawn from the same distribution,
    • $P(Y_{n+1}\in C(X_{n+1})) \ge 1-\alpha$
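A minimal sketch of split-conformal classification, assuming (not from the note) a scikit-learn classifier with `predict_proba`, the nonconformity score $1-\hat p(y\mid x)$, and synthetic data standing in for the $n$ observations:

```python
# Minimal split-conformal sketch for multiclass classification.
# Assumptions (not from the note): scikit-learn model with predict_proba,
# nonconformity score s(x, y) = 1 - p_hat(y | x), synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

alpha = 0.1  # desired miscoverage level, target coverage 1 - alpha

X, y = make_classification(
    n_samples=2000, n_classes=3, n_informative=6, random_state=0
)
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Fit any predictive model on the proper training split
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Nonconformity scores on the calibration split: 1 - probability of the true label
cal_probs = model.predict_proba(X_cal)
cal_scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

# Conformal threshold: the ceil((n + 1)(1 - alpha)) / n empirical quantile
n = len(cal_scores)
level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
q_hat = np.quantile(cal_scores, level, method="higher")

def confidence_set(x_new):
    """C(x): all labels whose nonconformity score is at most the threshold."""
    probs = model.predict_proba(x_new.reshape(1, -1))[0]
    return np.flatnonzero(1.0 - probs <= q_hat)

# For an exchangeable new pair, P(Y_{n+1} in C(X_{n+1})) >= 1 - alpha marginally
print(confidence_set(X_cal[0]))
```

The guarantee is marginal over the calibration data and the new pair; the set can be large when the model's scores are uninformative.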

