I have been looking into theoretical frameworks for method selection and have found very little systematic, mathematically motivated work. What I have found is substantial, if piecemeal, work on particular methods and their tuning (e.g. prior selection in Bayesian methods), and on method selection via bias selection (e.g. Inductive Policy: The Pragmatics of Bias Selection). I may be being unrealistic at this early stage of machine learning's development, but I was hoping to find something like what measurement theory does in prescribing admissible transformations and tests by scale type, only writ large in the arena of learning problems. Any suggestions?

Edit (for clarity): By 'method selection', I mean a framework for identifying the appropriate (or better, optimal) method with respect to a problem, or problem type. Informally, we do this all the time in data mining and statistics: we are presented with a problem (e.g. test, classify, predict), about which we have some background knowledge (e.g. independence among variables, data types), and for which auxiliary assumptions are made (e.g. normality, homoscedasticity), and we must select a method for solving it. There are mathematical prescriptions along the lines of convergence results, optimality guarantees, and time/space complexity, but, as far as I am aware, no framework for their systematic application. I realize that this is a hairy problem, but I do not think it ill-formed, and I was hoping that users more sophisticated than myself would have leads. This question may be sufficiently out of scope as to get modded out, but this forum seemed the best place to ask it.
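To make the informal practice concrete, here is a minimal sketch of what hand-rolled method selection looks like for one narrow problem type (two-sample location testing). The `Assumptions` fields and the decision rules are my own illustrative choices, not a framework from the literature; the question is whether such dispatch tables could be derived systematically rather than written by hand.

```python
# A minimal, illustrative sketch of informal "method selection" for
# two-sample location testing. The rules below are assumptions for the
# sake of example, not a published prescription.
from dataclasses import dataclass

from scipy import stats


@dataclass
class Assumptions:
    normal: bool          # samples plausibly normal?
    equal_variance: bool  # groups plausibly homoscedastic?


def select_two_sample_test(assume: Assumptions):
    """Map auxiliary assumptions to a test, mirroring the informal
    practice described above."""
    if assume.normal and assume.equal_variance:
        # Student's t-test under normality + homoscedasticity
        return lambda a, b: stats.ttest_ind(a, b, equal_var=True)
    if assume.normal:
        # Welch's t-test when variances may differ
        return lambda a, b: stats.ttest_ind(a, b, equal_var=False)
    # Nonparametric fallback when normality is doubted
    return stats.mannwhitneyu


# Usage: assumptions in, method out.
test = select_two_sample_test(Assumptions(normal=False, equal_variance=False))
```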
This is an interesting question. I don't think it's out-of-scope.