|
There was an earlier question on this forum about the most influential ideas from 1995-2005. Along similar lines, I'm curious what people think the current hot research topics in machine learning are. Bayesian nonparametrics, deep belief nets, and privacy-preserving learning are some that come to my mind (though they too have been around for quite some time now). What do others think, especially about areas in which not much prior work exists?
|
Algorithms related to large complex/social networks.
|
Decision making seems to be seeing a resurgence of interest. The contextual bandit (bandit with side information) problem is very important to all the major Internet companies (for serving ads) and is an active area of research. This year there have also been important advances in reinforcement learning in fully observable worlds. I think decision-making algorithms are the next frontier now that classification is good enough for a lot of problems.
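
To make the contextual bandit setting concrete, here is a minimal sketch assuming an epsilon-greedy policy with a per-arm linear reward model; the class name, parameters, and synthetic data are illustrative, not any particular published algorithm:

```python
import numpy as np

# Toy epsilon-greedy contextual bandit: one linear reward model per arm (e.g. per ad),
# updated online from observed rewards (clicks). All names/parameters are illustrative.

class EpsilonGreedyLinearBandit:
    def __init__(self, n_arms, n_features, epsilon=0.1, reg=1.0):
        self.epsilon = epsilon
        # Per-arm ridge-regression statistics: A = X'X + reg*I, b = X'y
        self.A = [reg * np.eye(n_features) for _ in range(n_arms)]
        self.b = [np.zeros(n_features) for _ in range(n_arms)]

    def select_arm(self, context):
        if np.random.rand() < self.epsilon:
            return np.random.randint(len(self.A))      # explore
        scores = [context @ np.linalg.solve(A, b)      # exploit: predicted reward per arm
                  for A, b in zip(self.A, self.b)]
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

# Usage on synthetic data: contexts are the "side information" for each round.
rng = np.random.default_rng(0)
bandit = EpsilonGreedyLinearBandit(n_arms=3, n_features=5)
true_weights = rng.normal(size=(3, 5))
for t in range(1000):
    x = rng.normal(size=5)                             # user/page features
    arm = bandit.select_arm(x)
    reward = true_weights[arm] @ x + rng.normal(scale=0.1)
    bandit.update(arm, x, reward)
```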
|
Look at recent NIPS and ICML tutorials to get an idea of what the (maybe already slightly cooling) hot areas are. I'd add to ogrisel's list learning to rank, the intersection between compressed sensing and machine learning (which is a bit more than sparsity methods), the use of human computation (as in Mechanical Turk) to improve learning systems, submodularity, and maybe topic modeling (although the fever might have passed for this as well). There are also hot things on a smaller scale; I saw three different papers this year on margin kernel perceptrons on a budget (margin perceptron with unlearning, PA on a budget, and Pegasos on a budget).
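
For readers unfamiliar with the "budget" idea, here is a rough sketch of a margin kernel perceptron that caps its number of support vectors by evicting the oldest one; this is only meant to illustrate the general idea, not the specific papers mentioned above, and the function names and defaults are made up for the example:

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    # Gaussian (RBF) kernel between two feature vectors.
    return np.exp(-gamma * np.sum((x - z) ** 2))

def budget_kernel_perceptron(X, y, budget=50, margin=0.5):
    support, alphas = [], []              # stored examples and their coefficients
    for x, label in zip(X, y):
        score = sum(a * rbf(s, x) for s, a in zip(support, alphas))
        if label * score <= margin:       # margin violation -> add a support vector
            support.append(x)
            alphas.append(label)
            if len(support) > budget:     # enforce the budget: evict the oldest one
                support.pop(0)
                alphas.pop(0)
    return support, alphas
```

The published variants differ mainly in *which* support vector they remove (or "unlearn") and how they compensate for the removal; oldest-first eviction is just the simplest choice.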
|
Here is my take: deep learning and sparsity-inducing algorithms, and their convergence with ideas from neuroscience researchers trying to reverse-engineer the brain using fMRI and MEG data.
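
As one small, concrete example of what "sparsity-inducing" means in practice, here is a minimal sketch of ISTA (iterative soft-thresholding) for the Lasso objective 0.5*||Xw - y||^2 + lam*||w||_1; the step size and iteration count are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 norm: shrinks coefficients toward exactly zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(X, y, lam=0.1, n_iter=500):
    n, d = X.shape
    step = 1.0 / np.linalg.norm(X, ord=2) ** 2   # 1 / Lipschitz constant of the gradient
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)                 # gradient of the smooth squared-error term
        w = soft_threshold(w - step * grad, step * lam)
    return w                                     # many coordinates end up exactly zero
```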