Hello, I am interested in recent papers or books on image compression using machine-learning-based techniques. I know about Jiang's survey on image compression using neural networks, but that article seems a little outdated (1999). Do you know of more recent resources (articles, books) that cover this subject? Thanks, Lucian
If lossy compression is acceptable, k-means clustering (or other methods for vector quantization) can achieve compression at a level dictated by k (or the number of codewords), by replacing each pixel with its cluster center (or codeword).
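Here is a minimal sketch of that idea in Python, using scikit-learn's KMeans and a synthetic random image as a stand-in for real data; the codebook size k = 16 is just an illustrative choice.

```python
# Lossy compression via k-means vector quantization: each RGB pixel is replaced
# by its nearest cluster centre, so only the k centres plus one small index per
# pixel need to be stored.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in for a real image

k = 16                                                  # size of the codebook
pixels = image.reshape(-1, 3).astype(float)
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)

codebook = kmeans.cluster_centers_.astype(np.uint8)     # k RGB codewords
indices = kmeans.labels_.astype(np.uint8)               # one index per pixel

reconstructed = codebook[indices].reshape(image.shape)  # lossy reconstruction

# Rough storage estimate: log2(k) bits per pixel plus the codebook,
# versus 24 bits per pixel for raw RGB.
print(f"~{np.log2(k):.1f} bits/pixel + codebook, vs 24 bits/pixel uncompressed")
```

The rate/distortion trade-off is controlled entirely by k: fewer codewords mean fewer bits per pixel but larger reconstruction error.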
One hurdle in applying machine learning techniques to lossy compression of images, video or audio is that you need to take properties of human perception into account in order to get good results.
(Dec 16 '10 at 13:55)
Oscar Täckström
The sequence memoizer (Wood et al., "A Stochastic Memoizer for Sequence Data") works rather well for compression. You can also use almost any probabilistic sequence model as a compression algorithm, but most models studied in the ML literature fare rather poorly. If you're interested in this, you should really check out the compression chapters in David MacKay's book, Information Theory, Inference, and Learning Algorithms.
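To make the model-to-compressor connection concrete, here is a small sketch (not the sequence memoizer itself) of the standard argument from MacKay's book: an arithmetic coder driven by a model's predictive probabilities uses roughly -sum(log2 p) bits in total, so a better sequence model gives shorter codes. The adaptive bigram model below is a deliberately simple toy.

```python
# Ideal code length of a byte string under an adaptive order-1 (bigram) model
# with add-one smoothing, i.e. roughly what an arithmetic coder driven by this
# model would spend. Both encoder and decoder can update the counts identically.
import math
from collections import defaultdict

def ideal_code_length_bits(data: bytes) -> float:
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    bits = 0.0
    prev = None
    for symbol in data:
        # predictive probability of `symbol` given the previous symbol
        p = (counts[prev][symbol] + 1) / (totals[prev] + 256)
        bits += -math.log2(p)
        # update the model only after coding the symbol
        counts[prev][symbol] += 1
        totals[prev] += 1
        prev = symbol
    return bits

text = b"abracadabra abracadabra abracadabra"
print(f"{len(text) * 8} raw bits -> ~{ideal_code_length_bits(text):.1f} model bits")
```

On such a short toy string the savings are modest; stronger models (e.g. the sequence memoizer) and longer inputs are where the gap becomes large.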
John Paisley has a bunch of papers using Bayesian non-parametric techniques to learn sparse image representations. The application I saw was inpainting, but these techniques could be applied directly to image compression.
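As a rough illustration of the general idea (not Paisley's Bayesian non-parametric model), the sketch below learns a patch dictionary with scikit-learn and codes each patch with a handful of sparse coefficients; the image is synthetic and the parameter values are only placeholders.

```python
# Sparse coding of image patches: learn a dictionary, then represent each patch
# by a few coefficients instead of raw pixel values.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
image = rng.random((64, 64))                       # stand-in for a real grayscale image

patches = extract_patches_2d(image, (8, 8), max_patches=500, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)                 # code only the patch structure

dico = MiniBatchDictionaryLearning(
    n_components=64, alpha=1.0, batch_size=32, random_state=0,
    transform_algorithm="omp", transform_n_nonzero_coefs=5,
).fit(X)

codes = dico.transform(X)                          # at most 5 nonzeros per patch
reconstruction = codes @ dico.components_

nonzeros = np.count_nonzero(codes, axis=1).mean()
error = np.mean((reconstruction - X) ** 2)
print(f"avg nonzero coefficients per patch: {nonzeros:.1f}, reconstruction MSE: {error:.4f}")
```

Storing the shared dictionary once plus a few (index, value) pairs per patch is what makes sparse representations attractive for compression.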