OK, so this is a soft question that will hopefully generate some interesting responses, because it's always fun to think about "big picture" things like this once in a while. Do you think machine learning is an accurate model of the human cognitive process? And what do you see as the future of machine learning/AI in general?
I think the negative responses you're getting here are because no one (OK, almost no one) thinks that machine learning approaches represent an "accurate" model of human cognition.
Indeed, the phrasing of your question feels a bit naive: nobody here would think that SVMs or MLPs are accurate models of human cognition. They weren't invented to model human cognition in the first place, either. However, some machine learning researchers claim that now is the time to start thinking about scaling machine learning algorithms to wider problems and to focus on AI as the long-term objective. For instance, read the last chapter of Large-Scale Kernel Machines, by Bengio and LeCun: "Scaling Learning Algorithms towards AI". ...even though cognitive science loved MLPs because they could "model the behavior of a neuron". But you're not likely to see anyone making that claim anymore.
(Jul 19 '10 at 16:32)
Andrew Rosenberg
@Andrew: Don't knock the humble perceptron too hard. It's on its third wave now (third: deep learning; second: MLPs; first: use as a linear classifier). Anything that can be at the root of three different fads (I use the term in the benign sense) must have something to it.
(Jul 22 '10 at 19:38)
Jacob Jensen
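Since the perceptron's "first wave" as a linear classifier keeps coming up, here is a minimal sketch of the classic perceptron update rule. All data and names are invented for illustration; this is not taken from any paper mentioned in the thread.

```python
def perceptron_train(samples, labels, epochs=10):
    """Classic perceptron: samples are feature tuples, labels are +1/-1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified: nudge the boundary toward x
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

# Linearly separable toy data: label +1 when the first feature dominates.
X = [(2.0, 1.0), (3.0, 0.5), (1.0, 2.0), (0.5, 3.0)]
y = [1, 1, -1, -1]
w, b = perceptron_train(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in X]
```

On separable data like this the update rule converges after a couple of passes; the later "waves" (MLPs, deep learning) stack this basic unit with nonlinearities.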
Some people (Tom Griffiths included) have been having some success with Bayesian models of some cognitive processes, based on the idea of bounded rationality. There was a NIPS workshop last year, if you're interested.
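For readers unfamiliar with the flavor of model being referred to, here is a toy Beta-Bernoulli update in the spirit of Bayesian models of cognition: an agent revises its belief about a coin's bias as observations arrive. This is purely illustrative and not taken from Griffiths' work.

```python
def beta_update(alpha, beta, observations):
    """Conjugate Beta-Bernoulli update: each 1 increments alpha, each 0 increments beta."""
    for obs in observations:
        if obs:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

# Start from a uniform prior Beta(1, 1), then observe 3 heads and 1 tail.
alpha, beta = beta_update(1, 1, [1, 1, 0, 1])
posterior_mean = alpha / (alpha + beta)  # shifts toward the observed frequency
```

The appeal of this family of models in cognitive science is exactly this incremental, rational belief revision under uncertainty.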
OK, if we're taking this question a little bit seriously... Brain-computer interface research -- classifying behaviors or thoughts based on EEG or fMRI data -- attempts to model secondary effects (electrical impulses on the surface of the head, or blood flow in the brain) of the human cognitive process. This isn't quite making the leap to saying that the model used in the classification is modeling the brain process, but it's a step closer.

The whole question strikes at a fundamental distinction in AI. Should the goal of AI be to model human cognition (or, more broadly, the perception/cognition/action process)? Or should we identify problems that we believe require human levels of intelligence, and try to discover solutions that solve these problems as well as possible? Much of computer science, almost assuredly including the vast majority of the statistical modeling/optimization types who are going to be on this site, has been focused on the latter.

That said, there are still people working on modeling human cognitive processes, or thinking about machine learning w.r.t. human cognition. There are some probabilistic models of human language acquisition, and evidence of human-like processing of garden-path sentences in probabilistic parsers. But still... no one thinks your ears have an HMM in them, despite HMMs' successful application to speech recognition.
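To make the HMM remark concrete, here is a tiny forward-algorithm sketch: computing the probability of an observation sequence under a hidden Markov model, the workhorse behind classical speech recognition. States, probabilities, and observations below are all made up for illustration.

```python
def forward(init, trans, emit, obs):
    """P(obs) under an HMM; init[s], trans[s][s2], emit[s][o] are nested dicts."""
    # Initialize with the first observation.
    alpha = {s: init[s] * emit[s][obs[0]] for s in init}
    # Propagate probability mass through the transition matrix for each step.
    for o in obs[1:]:
        alpha = {
            s2: sum(alpha[s1] * trans[s1][s2] for s1 in alpha) * emit[s2][o]
            for s2 in init
        }
    return sum(alpha.values())

init = {"A": 0.6, "B": 0.4}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
p = forward(init, trans, emit, ["x", "y"])
```

The point of the "no HMM in your ears" quip is that this machinery works remarkably well for speech without anyone claiming the auditory system computes anything like it.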
I would disagree with the negative responses. Some AI techniques definitely evolved from theories of human cognition -- for example, ART neural networks and confabulation theory. They may not be "accurate", but they did come from a desire to model human thinking.

However, most of machine learning as a discipline started as an (implicit? explicit?) rejection of the goals and methodology of traditional AI. You can see this in many different ways, but, for example, of all the sub-areas of computer science, machine learning is one of the most empirically grounded (many COLT papers have experiments, for example, which is not at all common at other theoretical conferences in CS), while traditional AI focused a lot more on intuition and interesting ideas than on objectively measurable concerns. So even traditional AI techniques that are used in machine learning have been "reborn". "Confabulation", for example, has been mostly replaced by either sampling or regularization.
(Jul 19 '10 at 17:56)
Alexandre Passos ♦
I do not disagree. However, I feel my main point is still valid: some machine learning came from theories of human cognition. Thus, I do not think the original question is as naive as a few of these posts seem to imply or state outright.
(Jul 19 '10 at 18:04)
TR FitzGibbon
@Alexandre: Even then, some principles, such as regularization, can still be understood (at least at some level) as rough models of a principle underlying human cognition: "simpler models should be favored over more complicated ones", which is roughly what Occam's razor states.
(Jul 19 '10 at 18:17)
spinxl39
@spinxl39: True, that is one interpretation of regularization. However, it's not the first interpretation, which is that it turns ill-posed problems into well-posed problems, nor the most common one, which is that it can allow for a very rough sort of VC generalization bound -- which has less to do with intuition than with small-sample statistics. But still, my point is that while it has a grounding intuition, that's not the main motivator, nor the characteristic that will make it acceptable and get papers published on it (better performance on real-world data is such a characteristic).
(Jul 19 '10 at 18:24)
Alexandre Passos ♦
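To illustrate the regularization being debated above, here is a sketch of one-dimensional ridge regression in closed form, where the penalty shrinks the fitted slope toward zero (toy data; the no-intercept simplification is mine, for brevity).

```python
def ridge_slope(xs, ys, lam):
    """argmin over w of sum((y - w*x)^2) + lam * w^2, no intercept.

    Setting the derivative to zero gives w = (sum x*y) / (sum x^2 + lam).
    """
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]           # exact slope 2 with no penalty
unregularized = ridge_slope(xs, ys, 0.0)   # ordinary least squares
shrunk = ridge_slope(xs, ys, 14.0)         # penalty pulls the slope toward 0
```

Both readings coexist in this one formula: the "Occam" reading (prefer smaller weights) and the technical reading (the `+ lam` term makes the denominator strictly positive, so the problem is well-posed even with degenerate data).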
@Alexandre: "it's not the main motivator or the characteristic that will make it acceptable and get papers published on it (better performance on real-world data is such a characteristic)." I definitely agree with that!
(Jul 19 '10 at 18:39)
TR FitzGibbon
Is there a theory of cognition that claims that human cognition follows Occam's razor? Occam's razor is more of a scientific/empirical reasoning heuristic than a theory itself. Appeals to parsimony show up all over machine learning, but my impression is that it's not to echo human cognition, but rather to keep models small and more easily understandable, avoid over-fitting and the "curse of dimensionality", etc.
(Jul 19 '10 at 19:07)
Andrew Rosenberg
@Alexandre: "However, most of machine learning as a discipline started as an (implicit? explicit?) rejection of the goals and methodology of traditional AI [...] while traditionally AI was focused a lot more on intuition and interesting ideas than on objectively measurable concerns." This may or may not be true, but I do not feel it is the most important distinction between AI and ML. I think the most important distinction actually supports the original question and my response: that ML is related to human cognition. In my experience, "traditional AI" is often used as a label for methods that focused on coding large sets of rules and knowledge. Machine learning is a rejection of those goals and methods, and an adoption of methods that attempt to acquire knowledge by learning from data. Some of those methods are thought to be much more similar to how biological brains work than to how knowledge was represented in "traditional AI".
(Jul 20 '10 at 16:48)
TR FitzGibbon
I'd say "cognition" in a vague way, but never "human cognition". Very few of the ML papers I read make any claims whatsoever about human cognition, and most explicitly reject more "human-friendly" approaches in favor of other approaches that work better. This is a valid criticism of most modern ML, and it is common to hear it, so I don't think it makes sense to claim that ML as it is is related to human cognition. See my comment on Aria's answer for an example.
(Jul 20 '10 at 16:53)
Alexandre Passos ♦
At this point, I'd say we're probably just discussing matters of degree. And on that point, I would probably agree with you more than not. Much of ML research does not model cognition at all, and none of it does so accurately. However, I certainly have read papers that start from theories of human cognition (see my previous examples), though there are very few of them, so I cannot go so far as to say "never human cognition". Thanks for the discussion! (Though I guess we're supposed to be answering questions, not discussing?)
(Jul 20 '10 at 17:41)
TR FitzGibbon
Nah, discussions are sometimes more illuminating than just question-answering.
(Jul 20 '10 at 17:44)
Alexandre Passos ♦
Well, not all, but some models (or paradigms) are, to a large (or at least some) extent, motivated by how humans think, understand, or reason about things (or phenomena) in the real world. For example, Bayesian learning, or the very general principle of Occam's razor.
So here's my take on the relationship between ML and human cognition. There are plenty of exceptions to this, but my alarm bells go off when I hear a piece of ML being compared to human cognition. That's a learned response I have as an AI practitioner, because I grew up post-AI-winter, when one of the many lessons AI learned was that we had dramatically oversold what we did as being related to human cognition.

That all being said, as someone who works in natural language processing, my day-to-day research is influenced by the factors and cues that I think we, as speakers, use to make language analysis decisions. The particular learning mechanisms that NLP/ML use are, I think, exploited far better by machine learning methodology, but the cues themselves are far simpler and shallower than the ones I use to make a decision. I think the best AI research (whether in language or vision) is about thinking about how I, as a human, solve these problems, and operationalizing those cues in the language of, right now, machine learning.
Also, I think an important advance in machine learning is the willingness to do things differently from what a person would do, if it seems more appropriate to the problem, is easier to implement, or works better on a training set. I've joked with some friends about how easy it is to forget that, given hundreds of bags of meaningless opaque tokens grouped into categories, it would be really hard to manually make sense of them so as to generalize as well as a state-of-the-art classifier -- and yet a lot of applied machine learning consists of reducing all kinds of problems to exactly that (which, looked at from this angle, feels quite unintuitive and against many principles of psychology, starting from the assumptions of opaque, meaningless tokens and clear-cut categories). I think this focus on structures that work empirically first, and that model or imitate human reasoning second, is one of the best conceptual advances of machine learning as a discipline (together with the focus on generalization).
(Jul 19 '10 at 19:01)
Alexandre Passos ♦
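The "bags of meaningless opaque tokens" framing above can be made concrete with a minimal multinomial Naive Bayes classifier over arbitrary token IDs. Everything here (token names, labels, counts) is invented for illustration; class priors are omitted since the toy classes are balanced.

```python
from collections import Counter
from math import log

def train_nb(docs, labels, vocab, smoothing=1.0):
    """Per-class smoothed log-probabilities for each token in the vocabulary."""
    counts = {c: Counter() for c in set(labels)}
    for doc, c in zip(docs, labels):
        counts[c].update(doc)
    logprob = {}
    for c, cnt in counts.items():
        total = sum(cnt.values()) + smoothing * len(vocab)
        logprob[c] = {t: log((cnt[t] + smoothing) / total) for t in vocab}
    return logprob

def classify(logprob, doc):
    """Pick the class with the highest summed token log-likelihood."""
    return max(logprob, key=lambda c: sum(logprob[c][t] for t in doc))

vocab = ["t0", "t1", "t2"]
docs = [["t0", "t0", "t1"], ["t2", "t2", "t1"]]
labels = ["spam", "ham"]
model = train_nb(docs, labels, vocab)
pred = classify(model, ["t0", "t0"])
```

Note that the tokens carry no meaning at all for the classifier -- which is exactly the psychologically unintuitive point being made above.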
I guess machine learning, at the highest level, has the main goal of building some predictive model that corrects itself, either in a supervised sense (it has outputs) or an unsupervised sense (it explains the data). This looks like how humans try to fit into the world, but to achieve that goal ML doesn't have to mimic the human cognitive process. Yeah... I mean, birds flap their wings to fly... does that mean we can fly only by flapping wings? There are, after all, aerodynamics and lift...
(Aug 30 '10 at 08:39)
kpx