Overall precision of model, TP / (TP + FP)
Overall recall of model, TP / (TP + FN)
Overall F1 score of model, 2 / (1 / Precision + 1 / Recall)
AuROC of model
AuPR of model
Error of model
True positive count at Spark's default decision threshold (0.5)
True negative count at Spark's default decision threshold (0.5)
False positive count at Spark's default decision threshold (0.5)
False negative count at Spark's default decision threshold (0.5)
Metrics across different threshold values
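The formulas above can be sketched in plain Python. This is an illustrative sketch, not the Spark implementation; the names `confusion_counts` and `binary_metrics` are assumptions introduced here:

```python
def confusion_counts(scores, labels, threshold=0.5):
    """Confusion-matrix counts at a given decision threshold (0.5 by default)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return tp, tn, fp, fn

def binary_metrics(tp, tn, fp, fn):
    """Overall metrics from confusion counts, following the formulas above."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall: 2 / (1/Precision + 1/Recall)
    f1 = 2 / (1 / precision + 1 / recall) if precision and recall else 0.0
    error = (fp + fn) / (tp + tn + fp + fn)
    return {"Precision": precision, "Recall": recall, "F1": f1, "Error": error}
```

Computing `confusion_counts` over a sweep of thresholds, rather than only at 0.5, yields the per-threshold metrics mentioned above.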
Write this instance to a json string
should pretty print the json string
json string of the instance
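A minimal sketch of such a serialization, assuming the metrics are held in a plain dict; `to_json` here is a hypothetical stand-in, not the library's API:

```python
import json

def to_json(metrics, pretty=False):
    """Serialize a metrics map to a json string; `pretty` enables indentation."""
    return json.dumps(metrics, indent=2 if pretty else None)
```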
Convert metrics class to a map
a map from metric name to metric value
Convert metrics into Metadata for saving
skip unsupported values
Metadata representation of the metrics
RuntimeException in case of an unsupported value type
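The skip-or-fail behavior can be sketched as follows. This is an illustrative sketch of the idea, not the Spark Metadata API; the names `to_metadata`, `SUPPORTED`, and the use of a plain dict as the metadata container are assumptions:

```python
# Value types assumed representable in the metadata container
SUPPORTED = (int, float, str, bool)

def to_metadata(metrics, skip_unsupported=True):
    """Copy supported metric values into a metadata map.

    Unsupported value types are either silently skipped or,
    when skip_unsupported is False, reported via a RuntimeError.
    """
    out = {}
    for name, value in metrics.items():
        if isinstance(value, SUPPORTED):
            out[name] = value
        elif not skip_unsupported:
            raise RuntimeError(
                f"unsupported value type for {name}: {type(value).__name__}"
            )
    return out
```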
This instance as a json string
json string of the instance
Metrics for binary classification models