What are the key concepts needed to start machine learning? A textbook has too many topics, and I don't get a clear idea of which ones are most relevant. Any help is appreciated.
I sometimes find it helps to go the other way around. Grab a book or tutorial about Bayesian inference (for example), read it, and when you don't understand a concept, pick up your statistics/probability book and read up on that topic. Although to do any proper machine learning you really should take full classes on each of those topics (probability and statistics are sometimes merged into one).
I take a combined top-down and bottom-up approach. Do as you say: read about ML, then read up on the concepts you don't understand. After a while, you will realise that a lot of what you are reading has a common undertone; when I recognise that, I grab a book on that topic and read a bit more about it. There is always a danger of just skimming through statistics, which could lead to worse results than if you knew nothing! As an example, learning about EM led me to read about distributions and Bayes' theorem. After a couple of clustering algorithms used Bayes' theorem, I found a book on it and read that.
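As a concrete flavor of the Bayes' theorem reasoning mentioned above, here is a minimal sketch with made-up numbers (the clusters and probabilities are purely illustrative, not from any of the algorithms discussed):

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Toy question: how probable is it that a point belongs to cluster A,
# given that a certain feature was observed? (Numbers are invented.)
p_a = 0.3              # prior: P(cluster A)
p_b_given_a = 0.8      # likelihood: P(feature | cluster A)
p_b_given_not_a = 0.2  # P(feature | not cluster A)

# Law of total probability gives the evidence term P(feature).
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior: P(cluster A | feature).
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 3))  # 0.24 / 0.38 ≈ 0.632
```

Observing the feature roughly doubles the probability assigned to cluster A, from the 0.3 prior to about 0.63; soft-assignment clustering algorithms like EM apply exactly this kind of update for every point.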
(Jun 22 '11 at 03:19)
Robert Layton
I agree completely. I started out by implementing and playing with simple NLP n-gram and nonparametric Bayesian models on different data sets. I didn't understand the underlying statistics at all, but playing with well-thought-out simulated data gave me a feel for what the different structures in probability theory and statistics buy you. Later, when I learned formal probability and statistics, I found that I already had a lot of the intuitions needed to understand the models, so much of it was like learning a language to talk about what I already knew from playing with data. I also picked up many new intuitions and better understood the ones I had from before. Those new intuitions let me build new models, since a lot of machine learning is piecing together portions of different models.
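In the spirit of the "implement and play" advice above, a bigram model is about the simplest n-gram model one can build; a minimal sketch over a toy corpus (the sentences are invented for illustration):

```python
from collections import Counter, defaultdict

# Minimal bigram language model: count adjacent word pairs and use the
# raw counts as a maximum-likelihood estimate of P(next | current).
corpus = "the cat sat on the mat the cat ran".split()

bigram_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigram_counts[current][nxt] += 1

def p_next(current, nxt):
    """MLE of P(nxt | current) from the bigram counts."""
    total = sum(bigram_counts[current].values())
    return bigram_counts[current][nxt] / total if total else 0.0

print(p_next("the", "cat"))  # "the" is followed by "cat" in 2 of 3 cases
```

Even this tiny model already exposes the core statistical ideas — conditional probability and maximum likelihood — before you ever see them formally.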
(Jun 22 '11 at 17:22)
Joseph Austerweil
If you are starting out, I recommend you check the online course by Andrew Ng at Stanford University. It is free, and it has video lectures on different topics. You will probably also need to read something like Bishop's "Pattern Recognition and ML". I really recommend you start with these. If you have no idea of linear algebra, probability, or statistics, you HAVE TO read at least basic textbooks on those topics; any undergraduate textbook should do the trick. I will add that Ng's course is moderately intense mathematically, and if you find it intimidating (though you might not) there are many other offerings out there. He also spends the last quarter of the course on reinforcement learning, an interesting but specialized area of machine learning.
(Jun 22 '11 at 11:29)
Jacob Jensen
Indeed, I talked a bit with Zoubin.G and Wray Buntine about this, and they did agree that while the course is great, it somewhat neglects topics like graphical models and Bayesian inference, instead going for more MAP methods. I guess different people will have different takes on the same course.
(Jun 29 '11 at 04:26)
Leon Palafox ♦
I would start by watching the Khan Academy Linear Algebra videos. Each lecture is only about 10 minutes, and there is a fast-forward button so you can skip over what you already know. I would then pick up a copy of Machine Learning by Tom Mitchell; I found it a little more accessible than Bishop's "Pattern Recognition and ML". As you read the book, if you see something you don't understand, look it up on Wikipedia. If you then run into something on Wikipedia you don't understand, push the current topic onto your "stack" and look up the new one. Once you clear your stack and get back to the book, you will know much more than when you left it. After finishing Mitchell, I would do the same with Bishop, as it will give you a deeper understanding and you should be better prepared for it. Bishop's book is indeed pretty hardcore. Ethem Alpaydin's Introduction to Machine Learning is the one I was recommended, and I liked it as well.
(Jun 24 '11 at 12:50)
levesque
Skim through the following and use them as references as you read PRML, Hastie & Tibshirani, or Duda & Hart:
- Hogg, Probability and Statistical Inference
- Berger, Statistical Decision Theory and Bayesian Analysis
- Strang, Linear Algebra
- Boyd & Vandenberghe, Convex Optimization
Also see the online tutorials at http://doushen.org/Resource.aspx and the Quora question "How can you learn Mathematics for machine learning?". One piece of advice: convex optimisation is very important.
(May 08 '14 at 11:56)
Sergey Ten
The mathematics you need depends on the machine learning methods you use. To understand maximum-likelihood estimation and Bayesian methods, you will mainly need probability and statistics. You don't need much from linear algebra on this path; knowledge of matrix multiplication comes in only later, when you program something.
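To make the maximum-likelihood side of this concrete, here is a minimal sketch fitting a Gaussian by MLE (the data points are invented for illustration):

```python
import math

# Maximum-likelihood estimation for a 1-D Gaussian: the MLE of the mean
# is the sample mean, and the MLE of the variance is the sample variance
# divided by n (not n - 1, which would be the unbiased estimator).
data = [2.1, 1.9, 2.4, 2.0, 1.6]
n = len(data)

mu_hat = sum(data) / n                               # maximises the log-likelihood in mu
var_hat = sum((x - mu_hat) ** 2 for x in data) / n   # note the division by n

# Log-likelihood of the data under the fitted Gaussian, for reference.
log_lik = sum(
    -0.5 * math.log(2 * math.pi * var_hat) - (x - mu_hat) ** 2 / (2 * var_hat)
    for x in data
)
print(mu_hat, var_hat)
```

Note that no linear algebra appears anywhere: everything here is calculus and probability, which is exactly the point made above.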
If you read the excellent tutorial "A tutorial on kernel methods for categorization", you will find that an introduction to functional analysis is welcome. The kernel trick is rarely fully understood, although it is very useful (see RBF networks and nonlinear SVMs); one can get a deeper understanding through reproducing kernel Hilbert spaces and other mathematical tools. Another example is "Topological properties of generalized approximation spaces", where the authors use topological notions to discuss rough sets. Although I did not read this latter article, one of my math professors made some interesting connections between it and areas of Computational Intelligence.
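For a flavor of the kernel trick mentioned above, a minimal sketch of the RBF (Gaussian) kernel on toy vectors (the points and gamma value are invented for illustration):

```python
import math

# The kernel trick: evaluate an inner product in a (possibly infinite-
# dimensional) feature space without ever constructing that space.
# The RBF kernel k(x, y) = exp(-gamma * ||x - y||^2) corresponds to an
# inner product in a reproducing kernel Hilbert space (RKHS).
def rbf_kernel(x, y, gamma=0.5):
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-gamma * sq_dist)

x, y = [1.0, 2.0], [2.0, 0.0]
print(rbf_kernel(x, x))  # a point is maximally similar to itself: 1.0
print(rbf_kernel(x, y))  # similarity decays with squared distance
```

A nonlinear SVM only ever touches the data through such kernel evaluations, which is why understanding RKHS theory clarifies what the classifier is really doing.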