Is the SIFT algorithm scalable? Most questions I've seen about SIFT focus on a simple comparison between two images. Instead of determining how similar two images are, would it be practical to use SIFT to find the N closest matching images out of a collection of thousands of images?
For example, would it be practical to use SIFT to generate keypoints for a batch of images, store the keypoints in a database, and then find the ones that have the shortest Euclidean distance to the keypoints generated for a "query" image?
When calculating the Euclidean distance, would you ignore the x, y, scale, and orientation parts of the keypoints, and only look at the descriptor?
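To make the idea concrete, here is a minimal sketch of the pipeline I have in mind. It assumes the 128-dimensional SIFT descriptors have already been extracted and stored (random arrays stand in for real descriptors here), and it ranks database images by the number of descriptor matches that pass Lowe's ratio test, using Euclidean distance on the descriptor vectors only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical precomputed SIFT descriptors: each image yields an
# (num_keypoints, 128) float array; random data stands in for real output.
database = {f"img{i}": rng.random((50, 128)).astype(np.float32) for i in range(5)}
query = rng.random((40, 128)).astype(np.float32)

def match_score(query_desc, img_desc, ratio=0.75):
    """Count ratio-test matches: for each query descriptor, accept its
    nearest database descriptor only if it is clearly closer than the
    second nearest (distances computed on the descriptor vectors only,
    ignoring x, y, scale, and orientation)."""
    # Pairwise Euclidean distances, shape (len(query_desc), len(img_desc)).
    d = np.linalg.norm(query_desc[:, None, :] - img_desc[None, :, :], axis=2)
    d.sort(axis=1)  # per query descriptor: nearest in column 0, 2nd in column 1
    return int(np.sum(d[:, 0] < ratio * d[:, 1]))

# Rank images by match count and keep the N best.
N = 3
scores = {name: match_score(query, desc) for name, desc in database.items()}
top_n = sorted(scores, key=scores.get, reverse=True)[:N]
print(top_n)
```

The brute-force distance computation above is only for illustration; at the scale of thousands of images, my understanding is that one would index the descriptors with an approximate nearest-neighbor structure (e.g. a k-d tree or FLANN-style index) instead of comparing against every stored descriptor.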