|
Over the last 2 years, I have taken a variety of classes in Reinforcement Learning, NLP, Robotics, Vision, Graphical Models, and Optimization in pursuit of a sort of "general model of intelligence" to pursue research in. Each domain has tools capable of doing specific tasks very well, and ones based on learning parameters from data improve drastically as data increases. However, what research is currently being conducted in integrating domain-specific knowledge into a more general system? |
|
I recommend that you have a look at AGI and BICA (for example BICA-08) and the "integrative track" at AAAI.

Thanks! I had no idea there were any conferences dedicated to Strong AI!
(Dec 30 '10 at 05:54)
Daniel Duckworth
|
|
I'm not surprised you haven't gotten many answers to this question, given the general stigma associated with AGI. That being said, I don't think it is unreasonable to revisit the question now, 30+ years after blocks world; this is not to imply it hasn't been worked on since then. As you said, such an approach would likely involve the integration of a variety of different techniques. In my mind, this would most likely involve some highly recurrent system composed of a lower-level system, or series of systems, which maps inputs to concepts/categories that are then manipulated by some higher-level 'agent' sitting on top of it all. I plan on pursuing such ideas in my graduate work, although likely under the guise of something slightly less ambitious/scary-sounding than AGI. Unfortunately, I can't provide much insight beyond links to current work which I find promising as components for such a system.
I'd love to hear what other people have to say on the subject.
I think your answer is misguided. Deep belief nets, as of now, are not known to necessarily outperform boosted decision trees ( http://event.cwi.nl/uai2010/papers/UAI2010_0282.pdf ) or simple k-means-based feature extractors ( http://robotics.stanford.edu/~ang/papers/nipsdlufl10-AnalysisSingleLayerUnsupervisedFeatureLearning.pdf ). They are promising, but still seem to require a lot of engineering to get right for each specific problem, so they shouldn't qualify (yet) as a general technique. HTMs are untested by the machine learning community and are not known to perform well on any interesting problem. Confabulation seems a natural artifact of probabilistic models, and I'm not sure one can say it extends to arbitrary problems without more engineering than simpler methods; the imagination engines make no sense to me. Also, someone else mentioned Bayesian methods; those seem to require even more engineering in practice, since a proper model structure needs to be chosen and then an approximate problem must be solved, and there are lots of easy ways to fail badly with this sort of technique.
(Dec 30 '10 at 06:08)
Alexandre Passos ♦
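For readers unfamiliar with the "simple k-means-based feature extractor" mentioned in the comment above (the Coates, Lee & Ng single-layer analysis paper), here is a minimal pure-Python sketch of the idea: cluster unlabeled data with plain k-means, then represent each point by the "triangle" soft assignment f_k(x) = max(0, mu - d_k), where d_k is the distance to centroid k and mu is the mean of those distances. The data, parameter values, and function names below are illustrative assumptions, not code from the paper.

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two points given as lists."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm; returns k centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[j].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if its cluster emptied
                centroids[i] = [sum(c) / len(members) for c in zip(*members)]
    return centroids

def triangle_features(point, centroids):
    """Soft assignment: f_k = max(0, mu - d_k), with mu the mean
    distance from the point to all centroids. Far centroids get 0."""
    d = [dist(point, c) for c in centroids]
    mu = sum(d) / len(d)
    return [max(0.0, mu - dk) for dk in d]

# Illustrative usage: two well-separated 2-D blobs of synthetic data.
rng = random.Random(1)
data = ([[rng.gauss(0, 0.1), rng.gauss(0, 0.1)] for _ in range(50)] +
        [[rng.gauss(3, 0.1), rng.gauss(3, 0.1)] for _ in range(50)])
centroids = kmeans(data, k=2)
feats = triangle_features([0.0, 0.0], centroids)
# The feature for the nearby centroid is positive; the far one is zero.
```

The point of the comment stands out in the sketch: the "learning" here is just k-means plus a fixed nonlinearity, which is why it is striking that such features can be competitive with deep belief nets on some benchmarks.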
|