I'm sorry if the title isn't descriptive enough; I'm a little uncertain how to phrase what I'm asking. As I've worked on my algorithm implementations, one thing that would be nice to have is a means of writing unit tests, as you would in more mainstream software engineering. As an example, take a feed-forward pass on a neural net: if you have a set of weights and inputs, and known/expected values for the intermediate and output units, you can put together a simple unit test to verify that the feed-forward pass is working correctly.

Now put this in the context of writing (or reading) a research paper. You want to provide your readers a means of checking each step of their algorithm implementation to ensure it is written correctly. I'm imagining something loosely on par with a signature, an md5 hash, etc., that could be published either in the paper or as supplemental material -- along with the method for calculating the signature.

Any ideas? If this hasn't been done before, what would be a good name for it? If it has been done, what is it called? Are there simple ways to pull it off?
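To make this concrete, here's a toy sketch of the kind of test and "signature" I'm imagining. Everything in it is made up for illustration: the network, the weights, the hand-computed expected activations, and the choice of md5-over-rounded-outputs as the signature.

```python
import hashlib
import numpy as np

# Toy 2-2-1 network with tanh units; all weights and inputs are made up.
def feed_forward(x, W1, W2):
    h = np.tanh(W1 @ x)      # hidden activations
    y = np.tanh(W2 @ h)      # output activations
    return h, y

W1 = np.array([[0.1, -0.2],
               [0.4,  0.3]])
W2 = np.array([[0.5, -0.5]])
x = np.array([1.0, 2.0])

h, y = feed_forward(x, W1, W2)

# Unit-test-style checks against expected values worked out by hand
# (publishable in the paper or as supplemental material).
np.testing.assert_allclose(h, [-0.29131, 0.76159], atol=1e-4)
np.testing.assert_allclose(y, [-0.48267], atol=1e-4)

# One possible "signature": hash the outputs after rounding, so small
# floating-point differences across platforms don't change the digest.
sig = hashlib.md5(np.round(y, 6).tobytes()).hexdigest()
print("signature:", sig)
```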
Regarding the unit-testing part of your question, something I frequently do to convince myself I've implemented something correctly is gradient checking with numerical approximation: the analytic gradient should agree with a finite-difference approximation of the objective to within a small tolerance. The Stanford UFLDL tutorial gives an example in the context of backpropagation here.
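Here's a minimal sketch of the idea, using a linear least-squares model as an illustrative stand-in (the model, loss, and tolerances are assumptions, not taken from the tutorial):

```python
import numpy as np

# Illustrative model: linear least squares. The loss and its analytic
# gradient are stand-ins; the checking recipe is what matters.
def loss(w, X, y):
    return 0.5 * np.sum((X @ w - y) ** 2)

def grad(w, X, y):
    return X.T @ (X @ w - y)

def numerical_grad(f, w, eps=1e-5):
    # Centered differences: (f(w + eps*e_i) - f(w - eps*e_i)) / (2*eps)
    g = np.zeros_like(w)
    for i in range(w.size):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[i] += eps
        w_minus[i] -= eps
        g[i] = (f(w_plus) - f(w_minus)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = rng.normal(size=20)
w = rng.normal(size=5)

analytic = grad(w, X, y)
numeric = numerical_grad(lambda v: loss(v, X, y), w)

# Rule of thumb: a relative error around 1e-8 or below means the analytic
# gradient is almost certainly right; around 1e-2 or above suggests a bug.
rel_err = np.linalg.norm(analytic - numeric) / np.linalg.norm(analytic + numeric)
print("relative error:", rel_err)
```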
I agree with Andreas. You could do something like what Steyvers and Griffiths did here in their LDA paper: the toy example they explain in the paper is one of the demos, and it is implemented in clean, well-documented code that is straightforward to follow.
How about providing your code along with an example? That has been done quite often ;) If someone wants to reimplement your algorithm, this is incredibly helpful, since they can run the original implementation on their data and compare each step of the algorithm.
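As a toy illustration of the "compare each step" part: if both implementations return their intermediate values, the comparison is easy to automate. Both functions below are hypothetical stand-ins that compute a data covariance in two ways.

```python
import numpy as np

# Hypothetical "original" implementation, exposing intermediate steps.
def reference_impl(X):
    centered = X - X.mean(axis=0)            # step 1: center the data
    cov = centered.T @ centered / len(X)     # step 2: covariance
    return {"centered": centered, "cov": cov}

# Hypothetical reimplementation of the same algorithm.
def my_impl(X):
    centered = X - np.mean(X, axis=0)
    cov = np.cov(centered, rowvar=False, bias=True)
    return {"centered": centered, "cov": cov}

X = np.random.default_rng(1).normal(size=(50, 3))
ref, mine = reference_impl(X), my_impl(X)
for step in ref:
    ok = np.allclose(ref[step], mine[step])
    print(f"{step}: {'match' if ok else 'MISMATCH'}")
```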
That would be nice, but in practice people don't do that very often. In fact, they're usually quite vague in describing their algorithms -- much to my chagrin.
(Aug 05 '11 at 11:02)
Brian Vandenberg
If you think it would be nice, then that's the best reason to do it yourself, right? Also, I think it depends on the community whether code is available; I have the impression that in computer vision many people are willing to share their work.
(Aug 05 '11 at 11:05)
Andreas Mueller
@Andreas: If/when I manage to publish anything, I fully intend to publish source code with it. But that's not necessarily what I was shooting for with my question. I suppose I'm asking whether there's a mathematical theory for identifying an algorithm.
(Aug 05 '11 at 19:49)
Brian Vandenberg
I don't know if I'm right, but maybe you should be looking in a different area. I remember a professor back in undergrad doing research on design patterns: http://en.wikipedia.org/wiki/Design_pattern_(computer_science) -- it kinda sounds like what you want.
(Aug 05 '11 at 23:07)
Leon Palafox ♦
I am not sure if that is what you are looking for, but I feel that the description of an algorithm (for example, in pseudocode) IS a mathematical way of describing it. Alternatively, you could identify it by certain invariants and a stopping criterion, and then prove that your implementation respects the invariants -- see the sketch at the end of this thread for one way to check an invariant at runtime.
(Aug 06 '11 at 04:42)
Andreas Mueller
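As a concrete (made-up) illustration of the invariant idea from the last comment: below is a bare-bones k-means sketch that asserts, on every iteration, the invariant that the objective never increases. The implementation is purely illustrative, not anyone's published code.

```python
import numpy as np

# Bare-bones k-means, purely illustrative. The invariant checked here:
# the objective (sum of squared distances to assigned centers) must be
# non-increasing from one iteration to the next.
def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    prev_obj = np.inf
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        obj = d[np.arange(len(X)), labels].sum()
        # The invariant, with a small tolerance for floating point.
        assert obj <= prev_obj + 1e-9, "invariant violated: objective rose"
        prev_obj = obj
        # Update step: move each center to its cluster mean
        # (keep the old center if the cluster is empty).
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

X = np.random.default_rng(2).normal(size=(100, 2))
centers, labels = kmeans(X, k=3)
print("invariant held on every iteration; centers:\n", centers)
```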