Week 3 – Metrics >>> How to Win a Data Science Competition: Learn from Top Kagglers
1. Suppose we solve a binary classification task and our solution is scored with logloss. Which predictions are preferable in terms of logloss if the true labels are y_true = [0, 0, 0, 0]?
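A minimal sketch of the intuition: when every true label is 0, logloss rewards predictions close to 0 (the helper below is an assumed reimplementation of binary logloss, not the grader's code):

```python
import numpy as np

def logloss(y_true, y_pred, eps=1e-15):
    """Mean binary cross-entropy; predictions are clipped away from 0 and 1."""
    p = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

y_true = [0, 0, 0, 0]
# Since every true label is 0, predictions closer to 0 give lower logloss.
print(logloss(y_true, [0.1, 0.1, 0.1, 0.1]))  # ~0.105
print(logloss(y_true, [0.5, 0.5, 0.5, 0.5]))  # ~0.693
```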
2. Suppose we solve a regression task and optimize the MSE error. If we managed to lower the MSE loss on either the train set or the test set, how did the Pearson correlation coefficient between the target vector and the predictions on the same set change?
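A sketch with made-up toy vectors showing that lowering MSE does not pin down the direction of the Pearson correlation change: it can move either way.

```python
import numpy as np

def mse(y, p):
    y, p = np.asarray(y, float), np.asarray(p, float)
    return float(np.mean((y - p) ** 2))

def pearson(y, p):
    return float(np.corrcoef(y, p)[0, 1])

y = [1.0, 2.0, 3.0, 4.0]

# Case 1: MSE goes down (1.0 -> 0.25) while correlation also goes down (1.0 -> ~0.94).
a, b = [2.0, 3.0, 4.0, 5.0], [1.0, 2.0, 3.0, 3.0]
print(mse(y, a), pearson(y, a))
print(mse(y, b), pearson(y, b))

# Case 2: MSE goes down (5.0 -> 0.0) while correlation goes up (-1.0 -> 1.0).
c, d = [4.0, 3.0, 2.0, 1.0], [1.0, 2.0, 3.0, 4.0]
print(mse(y, c), pearson(y, c))
print(mse(y, d), pearson(y, d))
```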
3. What would be the best constant prediction for the following multi-class classification task with 4 classes? The solution is scored with multi-class logloss. The number of objects of each class in the train set is: 18, 3, 15, 24.
Enter four comma-separated values. Round each to two decimal places and use a leading zero before the fractional part (e.g. “0.50”, not “.5”).
0.30, 0.05, 0.25, 0.40
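A quick check of the answer above: for multi-class logloss, the best constant probability vector is the vector of class frequencies in the train set, and any other constant scores worse (the helper below is an assumed sketch of constant-prediction logloss, not the grader's code):

```python
import numpy as np

counts = np.array([18, 3, 15, 24], dtype=float)
freqs = counts / counts.sum()
print(np.round(freqs, 2))  # [0.3  0.05 0.25 0.4 ]

def multiclass_logloss(const_probs, counts):
    """Mean multi-class logloss on a train set where counts[i] objects belong
    to class i, using the same constant probability vector for every object."""
    const_probs = np.asarray(const_probs, dtype=float)
    return float(-(counts * np.log(const_probs)).sum() / counts.sum())

best = multiclass_logloss(freqs, counts)
worse = multiclass_logloss([0.25, 0.25, 0.25, 0.25], counts)
print(best, worse)  # the frequency vector gives the lower loss
```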
4. What is the best constant predictor for the R-squared metric?
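A sketch illustrating that the mean of the target is the best constant for R-squared: since R² is a monotone transform of MSE, it is maximized by the same constant that minimizes MSE, namely the target mean, where R² equals exactly 0 (any other constant gives a negative R²). The target vector below is a made-up example.

```python
import numpy as np

def r_squared(y, p):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    ss_res = np.sum((y - p) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

y = np.array([3.0, -0.5, 2.0, 7.0])
mean_pred = np.full_like(y, y.mean())
print(r_squared(y, mean_pred))             # 0.0 — the best any constant can do
print(r_squared(y, np.full_like(y, 1.0)))  # negative for any other constant
```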
5. Select the correct statements.
6. Suppose the target metric is M1, and optimization loss is M2. We train a model and monitor its quality on a holdout set using metrics M1 and M2.