It seems that structural parameters in most types of neural networks are chosen by trial and error. Yet many, if not most, papers give little detail about why they chose the particular structure used in their comparisons.

What do most researchers usually do when they want to compare one neural network to another network/machine/method?

In some cases, when attempting to reproduce results from papers, I've gotten much better results with neural networks than the papers claimed.

asked Apr 04 '11 at 14:39

crdrn

edited Apr 04 '11 at 14:40


One Answer:

Comparisons are actually pretty poor. People often just use the largest network they can get away with (given their resources), compare it to a standard backprop net that makes similar computational demands, and perhaps a few other methods, and report these as their results. This means that most "method x beats method y" statements are conditional.
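To make the "similar computational demands" idea concrete, here is a minimal sketch of one way to match two fully connected architectures by parameter count before comparing them. The helper functions and the specific layer sizes are my own illustrative assumptions, not anything from a particular paper:

```python
# Hypothetical sketch: matching a deep net and a one-hidden-layer baseline
# by total parameter count, so a comparison is roughly "compute-fair".
# All sizes below are made-up examples for illustration.

def param_count(layer_sizes):
    """Total weights + biases of a fully connected net with these layer sizes."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

def matched_hidden_size(n_in, n_out, budget):
    """Largest single hidden layer whose parameter count stays within budget."""
    h = 1
    while param_count([n_in, h + 1, n_out]) <= budget:
        h += 1
    return h

# A deep net under comparison: 100-64-64-10
deep = [100, 64, 64, 10]
budget = param_count(deep)  # 11274 parameters

# Size a one-hidden-layer baseline to a similar parameter budget
h = matched_hidden_size(100, 10, budget)
print(budget, h, param_count([100, h, 10]))
```

One could go further and match training FLOPs or wall-clock time instead of parameter count; the point is only that the baseline's budget is stated explicitly rather than left implicit.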

answered Apr 04 '11 at 15:08

Jacob Jensen

