Over the last two years, I have taken a variety of classes in Reinforcement Learning, NLP, Robotics, Vision, Graphical Models, and Optimization, in search of a sort of "general model of intelligence" to pursue research on. Each domain has tools capable of doing specific tasks very well, and those based on learning parameters from data improve drastically as more data becomes available. However, what research is currently being conducted on integrating domain-specific knowledge into a more general system?

asked Dec 28 '10 at 03:58

Daniel Duckwoth

edited Dec 31 '10 at 00:18


2 Answers:

I'm not surprised you haven't gotten many answers to this question, given the general stigma associated with AGI. That being said, I don't think it's unreasonable to revisit the question now, 30+ years after blocks world, which is not to imply it hasn't been worked on since then.

As you said, such an approach would likely require integrating a variety of different techniques. In my mind, it would most likely take the form of a highly recurrent system: a lower-level system, or series of systems, that maps inputs to concepts/categories, which are then manipulated by some higher-level 'agent' sitting on top of it all.

I plan on pursuing such ideas in my graduate work, although likely under the guise of something slightly less ambitious/scary sounding than AGI.

Unfortunately, I can't provide much insight except links to current work which I find promising as components for such a system.

  • Hinton's deep belief nets, and deep learning in general, seem to solve the problem of forming higher-level conceptual components rather well. The models are also generative, which is a bonus. (A minimal sketch of the RBM building block appears after this list.)
  • Hawkins' HTMs are interesting biologically inspired models that incorporate the temporal nature of data. Some of the initial results, such as noisy image classification, seem promising, and they've come a long way from there. There doesn't seem to be much work being done on the theory outside of his own company, Numenta, and I'm not really sure why that is. I've also read some comments claiming that HTMs and deep belief nets might be converging to the same type of model.
  • Confabulation Theory seems to offer a potential candidate for a higher-level component. Unfortunately, I haven't been able to find much in the way of applications.
  • S.L. Thaler's imagination engines have always fascinated me. I'm not sure where they fit in.
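
Since the deep belief net point is the most concrete one here, a minimal sketch of its building block may help: a binary restricted Boltzmann machine trained with one-step contrastive divergence (CD-1). This is only an illustrative numpy sketch, not Hinton's code; the learning rate, initialization scale, and single Gibbs step are arbitrary choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary restricted Boltzmann machine trained with CD-1 (illustrative)."""

    def __init__(self, n_visible, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr
        self.rng = rng

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        """One CD-1 update on a batch of binary visible vectors."""
        ph0 = self.hidden_probs(v0)                      # positive phase
        h0 = (self.rng.random(ph0.shape) < ph0) * 1.0    # sample hidden units
        pv1 = self.visible_probs(h0)                     # one Gibbs step down
        ph1 = self.hidden_probs(pv1)                     # and back up
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
```

A deep belief net is then built greedily: train one RBM on the data, freeze it, and train the next RBM on the hidden activations of the first, stacking as many layers as needed; the higher layers are where the higher-level conceptual components are supposed to emerge.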

I'd love to hear what other people have to say on the subject.

answered Dec 29 '10 at 14:35

tlake

edited Dec 29 '10 at 14:36

I think your answer is misguided.

Deep belief nets, as of now, are not known to reliably outperform boosted decision trees ( http://event.cwi.nl/uai2010/papers/UAI2010_0282.pdf ) or simple k-means-based feature extractors ( http://robotics.stanford.edu/~ang/papers/nipsdlufl10-AnalysisSingleLayerUnsupervisedFeatureLearning.pdf ). They are promising, but they still seem to require a lot of engineering to get right for each specific problem, so they shouldn't (yet) qualify as a general technique. (A rough sketch of the k-means pipeline from the second paper is below.)
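
For reference, the k-means pipeline in the second linked paper (Coates, Lee and Ng) is roughly: sample small image patches, normalize and whiten them, cluster them with k-means, and encode new patches with a soft "triangle" activation. This is a rough sketch assuming scikit-learn for the clustering step; the function names, patch size, dictionary size, and whitening epsilon are illustrative choices of mine, not the paper's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(images, patch_size=6, n_patches=10000, seed=0):
    """Sample random square patches from a stack of grayscale images (N, H, W)."""
    rng = np.random.default_rng(seed)
    n, h, w = images.shape
    patches = np.empty((n_patches, patch_size * patch_size))
    for i in range(n_patches):
        img = images[rng.integers(n)]
        r = rng.integers(h - patch_size)
        c = rng.integers(w - patch_size)
        patches[i] = img[r:r + patch_size, c:c + patch_size].ravel()
    return patches

def whiten(patches, eps=0.1):
    """Per-patch contrast normalization followed by ZCA whitening."""
    patches = patches - patches.mean(axis=1, keepdims=True)
    patches = patches / (patches.std(axis=1, keepdims=True) + 1e-8)
    cov = np.cov(patches, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    zca = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return patches @ zca, zca

def learn_dictionary(patches, k=100):
    """Cluster whitened patches; the centroids act as the learned features."""
    return KMeans(n_clusters=k, n_init=10).fit(patches).cluster_centers_

def triangle_encode(patches, centroids):
    """Soft 'triangle' activation: max(0, mean distance - distance to each centroid)."""
    dists = np.linalg.norm(patches[:, None, :] - centroids[None, :, :], axis=2)
    return np.maximum(0.0, dists.mean(axis=1, keepdims=True) - dists)
```

The point of the comparison is that this whole pipeline is a few dozen lines with no gradient-based training, yet the paper found it competitive with far more elaborate feature learners.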

HTMs are untested by the machine learning community, and are not known to perform well on any interesting problem.

Confabulation seems like a natural artifact of probabilistic models, and I'm not sure one can say it extends to arbitrary problems without more engineering than simpler methods; the imagination engines make no sense to me.

Also, someone else mentioned Bayesian methods; those seem to require even more engineering in practice, since a proper model structure needs to be chosen and then an approximate inference problem must be solved, and there are lots of easy ways to fail badly with this sort of technique (a small sketch below illustrates this).
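
To make the engineering point concrete: even a small Bayesian model forces you to commit to a structure (here a Gaussian prior and a logistic likelihood, chosen only for illustration) and then to solve an approximate inference problem, e.g. with a random-walk Metropolis sampler whose step size and chain length you also have to tune. A minimal sketch, with hypothetical function names:

```python
import numpy as np

def log_posterior(w, X, y, prior_var=1.0):
    """Bayesian logistic regression: Gaussian prior + Bernoulli likelihood."""
    logits = X @ w
    log_lik = np.sum(y * logits - np.logaddexp(0.0, logits))
    log_prior = -0.5 * np.dot(w, w) / prior_var
    return log_lik + log_prior

def metropolis(X, y, n_samples=5000, step=0.1, seed=0):
    """Random-walk Metropolis: the 'approximate problem' that has to be solved."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    lp = log_posterior(w, X, y)
    samples = []
    for _ in range(n_samples):
        w_prop = w + step * rng.standard_normal(w.shape)
        lp_prop = log_posterior(w_prop, X, y)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            w, lp = w_prop, lp_prop
        samples.append(w.copy())
    return np.array(samples)
```

Every piece of this (the prior, the likelihood, the proposal, the number of samples, how to assess convergence) is a decision point, which is the sense in which these methods demand more engineering than simpler off-the-shelf techniques.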

(Dec 30 '10 at 06:08) Alexandre Passos ♦

I recommend that you have a look at AGI and BICA (for example BICA-08) and the "integrative track" at AAAI.

answered Dec 30 '10 at 05:48

Łukasz Stafiniak

Thanks! I had no idea there were any conferences dedicated to Strong AI!

(Dec 30 '10 at 05:54) Daniel Duckwoth