|
It seems that structural parameters in most types of neural networks are chosen by trial and error, yet most papers give little detail about why they chose a particular structure for their comparisons. What do researchers usually do when they want to compare one neural network to another network/machine/method? I've found cases where, when attempting to reproduce results from papers, I've gotten much better results with neural networks than the papers claimed.
|
Comparisons are actually pretty poor. Researchers often just use the largest network their resources allow, compare it to a standard backprop network with similar computational demands (and maybe a few other methods), and offer these as their results. This means that most "method x beats method y" statements are conditional on the particular architectures, budgets, and tuning effort involved.
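One way to make such comparisons less conditional is to evaluate each method under cross-validation rather than a single hand-picked split, and to report variability alongside the mean. A minimal sketch, assuming scikit-learn; the models, sizes, and synthetic dataset here are illustrative choices, not anything from a specific paper:

```python
# Hedged sketch: compare models of different capacities under the same
# cross-validation protocol. Dataset and model choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "mlp_small": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                               random_state=0),
    "mlp_large": MLPClassifier(hidden_layer_sizes=(100,), max_iter=2000,
                               random_state=0),
    "logreg": LogisticRegression(max_iter=2000),
}

results = {}
for name, model in models.items():
    # Same folds and metric for every method, so differences are not an
    # artifact of a favorable train/test split.
    scores = cross_val_score(model, X, y, cv=5)
    results[name] = (scores.mean(), scores.std())
    print(f"{name}: mean={scores.mean():.3f} +/- {scores.std():.3f}")
```

Reporting the spread across folds makes it easier to judge whether a claimed win for one method over another is real or within noise.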