Say I have a cascade of classifiers for detecting fraud, C1 --> C2 --> ... --> CN, where if one classifier decides an item is fraud, that item is removed from the stream and not tested by the later classifiers. I have a performance curve for each classifier (an ROC curve, TPR vs. FPR). The order cannot be changed, since the first classifier is designed to look at only part of the data, the second at a larger part, and so on.

How can I set the threshold of each classifier to get optimal performance at the end of the chain, in terms of TPR (true positive rate) at a given FPR (false positive rate)?
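To make the question concrete, here is one way I can formalize it. Under the simplifying assumption that the stage decisions are independent (in reality each stage only sees the survivors of the earlier stages, so its conditional rates may differ from its measured ROC curve), an item is flagged if and only if at least one stage flags it, giving

$$\mathrm{TPR}(t_1,\dots,t_N) = 1 - \prod_{i=1}^{N}\bigl(1-\mathrm{TPR}_i(t_i)\bigr), \qquad \mathrm{FPR}(t_1,\dots,t_N) = 1 - \prod_{i=1}^{N}\bigl(1-\mathrm{FPR}_i(t_i)\bigr),$$

where $t_i$ is the threshold of stage $i$. The problem is then to maximize $\mathrm{TPR}(t_1,\dots,t_N)$ subject to $\mathrm{FPR}(t_1,\dots,t_N) \le \alpha$ for the target false positive rate $\alpha$.

A brute-force sketch of what I have in mind, in Python (the per-stage ROC curves are assumed to be sampled as lists of (FPR, TPR) operating points; the function names are just for illustration):

```python
import itertools
import numpy as np

def cascade_rates(points):
    """Combine per-stage (fpr, tpr) operating points under the
    independence assumption: an item is flagged iff any stage flags it."""
    fpr = 1.0 - np.prod([1.0 - f for f, _ in points])
    tpr = 1.0 - np.prod([1.0 - t for _, t in points])
    return fpr, tpr

def best_operating_points(roc_curves, fpr_budget):
    """roc_curves: one list of (fpr, tpr) pairs per stage, sampled from
    that stage's ROC curve. Exhaustively searches all combinations and
    returns (tpr, fpr, combo) maximizing the combined TPR subject to
    the combined FPR staying within fpr_budget."""
    best = None
    for combo in itertools.product(*roc_curves):
        fpr, tpr = cascade_rates(combo)
        if fpr <= fpr_budget and (best is None or tpr > best[0]):
            best = (tpr, fpr, combo)
    return best
```

This exhaustive search is exponential in the number of stages, so I suspect there is a more principled way to allocate the FPR budget across the stages, especially since the independence assumption is questionable here.

Thanks, HS.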