Has anyone here had any experience with Hierarchical Temporal Memory? wiki: http://en.wikipedia.org/wiki/Hierarchical_temporal_memory

I'm thinking of using it to build a set of rules from a large dataset, as it appears more suitable than the neural-net approach I am taking at the moment. Has anyone used this for their research work? If so, would you recommend it?

It seems to be an interesting idea but currently I don't really know too much about it, so I'm looking for any papers or learning resources. Any suggestions would be appreciated!

asked Oct 17 '10 at 03:37

Janis


6 Answers:

I've read the papers and Hawkins's book; it seems like an interesting and intuitive idea, but I think it ends there. No one (AFAIK) has picked up on this research, and everyone I've talked to in the ML community is pretty dismissive of the work in general. You do have to be careful, because it's not clear whether this work gets its level of exposure because of Hawkins's fame (from unrelated fields) or because it is a useful technology.

answered Oct 17 '10 at 04:38

anonymous_4235


Take a look at Hinton's Deep Belief Networks. Numenta seems to be converging toward them. Hawkins recently acknowledged that representing a pattern by a single number is limiting; in the future, Numenta is going to use sparse distributed representations.

You can experiment with Deep Belief Networks by following the Deep Learning Tutorials.

answered Oct 24 '10 at 07:19


Ivo Danihelka

edited Oct 24 '10 at 07:52

The now-published video and paper show an interesting model of biological neurons. It is good that somebody is looking to biology for inspiration. Perhaps a theory explaining the performance will follow later.

(Dec 16 '10 at 12:05) Ivo Danihelka

Until recently, Hinton's work did not take the temporal/sequential dimension into account. However, this is changing with the latest work from his Toronto team. So indeed, both architectures are converging to some extent.

(Dec 17 '10 at 18:49) ogrisel

Yes! There are people who have experience with HTMs!
I'd include myself in this category, although I should disclose that my experience is limited and outdated. The bulk of my "work" with the NuPIC platform was to create a video game for a contest organized by Numenta a few years ago. I never submitted the game because of its roughness around the edges, but its HTM training/inferring module worked rather well.
I am very much looking forward to digging into HTMs again, with renewed focus, when the software release with the new cortical learning algorithms becomes available in early 2011.

There is a small community of HTM hackers and researchers, as can be seen, for example, on Numenta's forums.
These forums are not vibrant by most standards, but their sustained and on the whole serious traffic is indicative of a small but motivated following.

More visible are a few commercial applications and various proofs of concept (beyond those produced by Numenta). For example:

  • Vitamin D produced professional-grade video-monitoring software based on Numenta's Vision Toolkit.
  • Tech Center Labs has an iPhone/iPad app which provides an interface to server-side HTM networks readily trained for image or voice recognition.

It's not surprising that many of the early successes of HTMs are in the area of vision. It is also fair to note that several of the effects/features supported by HTM networks in these applications can be achieved by other means. Nevertheless, these applications provide validation of the concepts put forth by Jeff Hawkins. As indicated in Ivo Danihelka's answer, Numenta's approach to HTMs and Hinton's research on Deep Belief Networks seem to converge in several regards; this observation, too, can be interpreted as validation.

I hope this answer provides a bit of balance, as some of the other answers and comments seemed, IMHO, a bit quick to dismiss the HTM approach. Certainly Hawkins's personality and fame, along with the biomimetic nature of the concepts he champions so eloquently (see for example this video presentation), all contribute to a certain WOW factor.
We shouldn't, however, fail to see the fundamental value of the proposal because of its "cool factor".

answered Dec 23 '10 at 04:33

ecotone


I can understand why Hinton's models work: they are probabilistic models, and training attempts to maximize the likelihood of the seen examples, which works for environments with a similar inductive bias. I miss that grounding in HTM. Improving HTM would require a neuroscience background or blind experimenting.

(Dec 23 '10 at 07:39) Ivo Danihelka

I think Ivo makes a really important point - as I understand it, NuPIC uses swarming (PSO) for optimisation instead of gradient descent, because gradient descent requires a mathematically sound model to start with. Is that fair to say?
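For readers unfamiliar with the technique, here is a minimal particle swarm optimisation sketch on a toy 2-D quadratic. It only illustrates the gradient-free search style being contrasted with gradient descent; NuPIC's actual swarming code, and all names and parameters below, are not from this thread.

```python
import random

def pso(objective, dim=2, n_particles=20, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimise `objective` with a basic particle swarm: no gradients,
    only function evaluations and attraction toward personal/global bests."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [objective(p) for p in pos]
    gi = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[gi][:], pbest_val[gi]   # swarm's best so far
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
                if val < gbest_val:
                    gbest_val, gbest = val, pos[i][:]
    return gbest, gbest_val

best, best_val = pso(lambda p: sum(x * x for x in p))
```

Note that nothing here ever differentiates the objective, which is the point of the comment above: swarming only needs the model to be evaluable, not mathematically smooth.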

(Oct 31 '14 at 16:09) Ben Ritchie

I've had the same experience as anonymous_4235: i.e. people tend to be fairly dismissive of the technology. Despite that, I find the ideas intriguing and I like that Numenta is exploring a different approach to the goal many of us share.

I'll point you to: http://www.numenta.com/htm-overview/education.php

They just released a PDF detailing their new algorithms that use sparse distributed representations. They provide a lot of pseudo-code, so you can probably do what I tried to do and implement the whole thing yourself (it's not trivial, because they are a bit vague about many parameter settings), but they also claim that they will release some source code early next year.
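To give a feel for the representations the PDF builds on, here is a minimal sketch of sparse distributed representations: wide binary vectors with only a few active bits, compared by overlap. The names and the vector size are illustrative, not Numenta's code.

```python
def make_sdr(active_indices, size=2048):
    """Binary vector of `size` bits with a small set of active bits."""
    sdr = [0] * size
    for i in active_indices:
        sdr[i] = 1
    return sdr

def overlap(a, b):
    """Number of bits active in both SDRs -- the basic similarity measure."""
    return sum(x & y for x, y in zip(a, b))

a = make_sdr({3, 17, 100, 512, 1999})
b = make_sdr({3, 17, 100, 700, 1500})
overlap(a, b)  # shared active bits: 3
```

Because activity is sparse, two unrelated SDRs almost never overlap by chance, which is what makes overlap a meaningful match score.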

answered Dec 18 '10 at 15:30

karpathy


Actually, there's a company that was founded on this idea and is actively building the core technology and applications. It's called Numenta. http://www.numenta.com/

answered Oct 17 '10 at 20:24


Kevin Canini

Founded by Hawkins, on Hawkins's dime. There's really no independent validation that any of his work performs as well as he claims. He certainly hasn't attempted to publish it in any peer-reviewed venue.

(Dec 17 '10 at 17:06) David Warde Farley ♦

Sort of like the Black-Scholes equation: if you were sitting on a gold mine like that, would you have published it?

(Dec 17 '10 at 17:16) Brian Vandenberg

I'm actually implementing it now. To tell you the truth, you get the basic idea from Hawkins, but this realm of computing is pretty wild west: you have to come up with your own ideas, because even Hawkins doesn't know the whole thing. If you want to take HTM as a starting point, for my part I'm interested in interactive video; that's my reason for studying this.

Here's the gist of mine so far.

There are two different kinds of synapses, proximal and distal. The proximal synapses go from the sensory data into the layer of nodes and back again. Getting them to go in and out means each node has to store a list of shapes that it compares the input to, and it picks the closest shape to modify (or teach); there can be 16-256 or more of them per node. I don't know how important the sparse representation is, but at the moment it's in my version. My reasoning is that it saves space (more free proximal dendrites connecting each node to the previous layer) to store more video.

You need more than one shape per node, because otherwise you have an overwriting problem: you need more "space" inside the node to store more feedback outputs, because you've pooled all the data into one node, and that's how you restore it.

Then there are the distals. These, I think, are done in a similar way: you pick the closest temporal sequence whose synapses you increment or decrement, and that way you don't overwrite temporality either.

If you have both of these things, you should be able to pump video into your network and then play it back out of it. ;)
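The "pick the closest stored shape and teach only it" idea above can be sketched as follows. This is a hypothetical toy version, not the poster's actual code: each node keeps a pool of stored patterns, matches the input against them, and nudges only the best match toward the input, so the other stored shapes are not overwritten.

```python
def distance(a, b):
    """Squared Euclidean distance between two equal-length patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class Node:
    def __init__(self, patterns):
        # the stored shapes; the answer suggests 16-256 or more per node
        self.patterns = patterns

    def learn(self, inp, rate=0.1):
        # find the closest stored shape ...
        i = min(range(len(self.patterns)),
                key=lambda k: distance(self.patterns[k], inp))
        best = self.patterns[i]
        # ... and move only that shape toward the input, leaving the
        # rest untouched (this avoids the overwriting problem)
        for d in range(len(best)):
            best[d] += rate * (inp[d] - best[d])
        return i

node = Node([[0.0, 0.0], [1.0, 1.0]])
node.learn([0.9, 1.1])  # teaches the second shape, not the first
```

The distal (temporal) case described above would follow the same pattern, with stored sequences in place of stored shapes.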

My best advice: unfortunately, all of us scientists are on our own when it comes to this "brain" stuff, but whatever you do, don't be afraid to come up with your own ideas; it's the secret of learning.

One other thing: if the eye is small enough, it runs in real time really quickly; it's shocking.

answered Dec 07 '12 at 11:17


Magnus Wootton

edited Dec 07 '12 at 11:24



User submitted content is under Creative Commons: Attribution - Share Alike; Other things copyright (C) 2010, MetaOptimize LLC.