Binary-Classification Performance-Metrics Benchmarking Data

Published: 14 September 2020 | Version 4 | DOI: 10.17632/2g36672s5f.4
Contributors:
Gürol Canbek, Tugba Taskaya Temizel, Seref Sagiroglu

Description

This dataset provides the detailed test results of benchmarking binary-classification performance metrics. The three-stage benchmark was applied to 13 metrics, namely True Positive Rate, True Negative Rate, Positive Predictive Value, Negative Predictive Value, Accuracy, Informedness, Markedness, Balanced Accuracy, G, Normalized Mutual Information, F1, Cohen's Kappa, and Matthews Correlation Coefficient (MCC). The benchmarking method is described in Gürol Canbek, Tugba Taskaya Temizel, and Seref Sagiroglu, "BenchMetrics: A Systematic Benchmarking Method for Binary-Classification Performance Metrics", Neural Computing and Applications, 2020 (submitted). The related benchmarking API and an interactive ready-to-run experimentation platform are available online (see the references).
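For reference, the benchmarked metrics are standard functions of the four confusion-matrix counts (TP, FN, TN, FP). The sketch below computes a representative subset using their textbook definitions; it is illustrative only, not code from the BenchMetrics library, and G, Normalized Mutual Information, and Cohen's Kappa are omitted for brevity.

```python
from math import sqrt

def binary_metrics(tp, fn, tn, fp):
    """Compute a subset of the benchmarked metrics from the four
    confusion-matrix counts (textbook formulas, not BenchMetrics code)."""
    tpr = tp / (tp + fn)                    # True Positive Rate (sensitivity)
    tnr = tn / (tn + fp)                    # True Negative Rate (specificity)
    ppv = tp / (tp + fp)                    # Positive Predictive Value (precision)
    npv = tn / (tn + fn)                    # Negative Predictive Value
    acc = (tp + tn) / (tp + fn + tn + fp)   # Accuracy
    return {
        "TPR": tpr, "TNR": tnr, "PPV": ppv, "NPV": npv, "ACC": acc,
        "Informedness": tpr + tnr - 1,      # a.k.a. Youden's J
        "Markedness": ppv + npv - 1,
        "BACC": (tpr + tnr) / 2,            # Balanced Accuracy
        "F1": 2 * ppv * tpr / (ppv + tpr),  # harmonic mean of PPV and TPR
        "MCC": (tp * tn - fp * fn) / sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
    }

# Example: 90 true positives, 10 false negatives,
#          80 true negatives, 20 false positives
m = binary_metrics(tp=90, fn=10, tn=80, fp=20)
```

Each metric here summarizes the same 2x2 contingency table from a different angle, which is precisely why a systematic benchmark of their behavior is useful.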

Steps to reproduce

An open-source benchmarking library in R is also provided on GitHub at https://github.com/gurol/benchmetrics.

Categories

Machine Learning, Machine Learning Theory, Contingency Table Analysis, Performance Measurement, Accuracy Analysis, Classification (Machine Learning), Classifier Evaluation
