A Bayesian approach to machine learning model comparison
Type of publication: | Idiap-Com |
Citation: | Morais_Idiap-Com-01-2023 |
Number: | Idiap-Com-01-2023 |
Year: | 2023 |
Month: | 2 |
Institution: | Idiap |
Abstract: | Performance measures are an important component of machine learning algorithms. They are useful for evaluating the quality of a model, but also for helping the algorithm improve itself. Every need has its own metric. However, when the data set is small, these measures do not properly reflect the performance of the model. That is where confidence intervals and credible regions come in handy. Expressing the performance measures in a probabilistic setting lets us treat them as distributions, which we can then use to establish credible regions. We first address precision, recall and the F1-score, followed by accuracy, specificity and the Jaccard index, and we study the coverage of the credible regions computed from the posterior distributions. We then discuss the ROC curve, the precision-recall curve and k-fold cross-validation. Finally, we conclude with a short discussion of what could be done with dependent samples. |
Keywords: | |
Projects: | Idiap |
Authors: | |
Editors: | |
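The posterior-distribution construction described in the abstract can be illustrated with a short sketch. The version below is only an assumption of one common way to build such credible regions, not necessarily the report's own derivation: it places a uniform Dirichlet prior over the four confusion-matrix cell probabilities, draws posterior samples, evaluates each metric on every sample, and reads an equal-tailed 95% credible interval off the empirical quantiles. The confusion-matrix counts are made-up example numbers.

```python
# Minimal sketch: sampling-based credible intervals for confusion-matrix metrics,
# assuming a uniform Dirichlet prior over the cell probabilities (an illustrative
# choice, not taken from the report).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical confusion-matrix counts: TP, FP, FN, TN.
counts = np.array([40, 5, 8, 47])

# Uniform Dirichlet(1, 1, 1, 1) prior -> posterior is Dirichlet(counts + 1).
samples = rng.dirichlet(counts + 1, size=100_000)
tp, fp, fn, tn = samples.T  # posterior samples of the cell probabilities

# Evaluate each performance measure on every posterior sample.
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
accuracy = tp + tn                 # cell probabilities sum to 1
jaccard = tp / (tp + fp + fn)

# Equal-tailed 95% credible intervals from the empirical quantiles.
for name, metric in [("precision", precision), ("recall", recall),
                     ("F1", f1), ("accuracy", accuracy), ("Jaccard", jaccard)]:
    low, high = np.percentile(metric, [2.5, 97.5])
    print(f"{name:9s} 95% credible interval: [{low:.3f}, {high:.3f}]")
```

The coverage study mentioned in the abstract could then be checked empirically along the same lines: simulate many confusion matrices from a classifier with a known true metric value and count how often an interval built this way contains that true value.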