Paul Barba, Jan 24 '11 at 13:35

1. Generally, GAs are applied to instances of a problem, with the structure of the genome conveying expectations about the form of a solution. GP is usually applied to get a generic solution to a class of problems, with less guidance toward the desired form of the solution. However, there's a gray area in the middle: very expressive GAs on one side, and GP with tight, domain-specific operators/functions on the other.
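As a minimal sketch of the GA side of this distinction (my own illustration, not part of the original answer): the fixed-length bit-string genome below already encodes the expected shape of a solution to one specific problem instance, here a hypothetical toy 0/1 knapsack. The instance data and parameter values are made up for the demo.

```python
import random

# Hypothetical knapsack instance: genome position i means "take item i".
WEIGHTS = [10, 20, 15, 7, 12, 5]
VALUES = [60, 100, 90, 30, 70, 20]
CAPACITY = 40

def fitness(genome):
    # Total value of selected items; infeasible genomes score zero.
    w = sum(wi for wi, g in zip(WEIGHTS, genome) if g)
    v = sum(vi for vi, g in zip(VALUES, genome) if g)
    return v if w <= CAPACITY else 0

def evolve(pop_size=30, generations=50, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in WEIGHTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(a))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Note how much problem knowledge lives in the genome layout and the fitness function; a GP approach would instead evolve a program, giving far less guidance about the solution's form.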

2. Again speaking in generalities, GAs aren't very good for very unstructured or very general problems (you wouldn't evolve a solution to arbitrary robot motion planning with a single GA, although you could use one to solve a single, instantiated motion problem). GP can in principle solve anything a GA can, but in practice the computational cycles needed to find solutions can be prohibitive, so whenever you have an optimization problem (which assignment of n variables is best?) you're likely to prefer a GA. Theoretically, any solvable problem could eventually be solved by a properly instantiated GP/GA, but in practice several properties make the techniques untenable: poor or misleading discrimination by the evaluation function (e.g., no way to differentiate a poor solution from a so-so one), high complexity (don't expect GP to evolve a 1M-line application), and expensive evaluation functions (e.g., when humans need to manually evaluate every output).
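To make the GP side concrete, here is a deliberately stripped-down sketch (my own illustration, with a made-up function set and target): candidate programs are expression trees, and search proceeds by random subtree mutation with hill climbing rather than a full population, to keep the example short. Real GP would add crossover and a population, which is part of why its compute cost grows so quickly.

```python
import random

# Hypothetical function and terminal sets for the demo.
FUNCS = {'+': lambda a, b: a + b,
         '-': lambda a, b: a - b,
         '*': lambda a, b: a * b}
TERMS = ['x', 1.0]

def random_tree(rng, depth=3):
    # Grow a random expression tree; terminals close off branches.
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMS)
    op = rng.choice(list(FUNCS))
    return (op, random_tree(rng, depth - 1), random_tree(rng, depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if not isinstance(tree, tuple):
        return tree  # numeric constant
    op, a, b = tree
    return FUNCS[op](evaluate(a, x), evaluate(b, x))

def error(tree, cases):
    return sum((evaluate(tree, x) - y) ** 2 for x, y in cases)

def mutate(tree, rng, depth=2):
    # Replace a randomly chosen subtree with a fresh random one.
    if not isinstance(tree, tuple) or rng.random() < 0.3:
        return random_tree(rng, depth)
    op, a, b = tree
    if rng.random() < 0.5:
        return (op, mutate(a, rng, depth), b)
    return (op, a, mutate(b, rng, depth))

def hill_climb(cases, steps=2000, seed=0):
    rng = random.Random(seed)
    best = random_tree(rng)
    best_err = error(best, cases)
    for _ in range(steps):
        cand = mutate(best, rng)
        cand_err = error(cand, cases)
        if cand_err <= best_err:
            best, best_err = cand, cand_err
    return best, best_err

target = lambda x: x * x + x
cases = [(x, target(x)) for x in range(-5, 6)]
tree, err = hill_climb(cases)
```

Notice that the search space here is arbitrary programs over the function set, not a fixed-length assignment of variables; that generality is exactly what makes GP more expensive than a GA on a plain optimization problem.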

  3. (answered in another answer)

  4. You need enough data that a generalization can be obtained. Too little can miss local behaviors of a function or lead to overfitting. In general, if the complexity of the problem is so great that you need huge quantities of data, you're probably going to have to encode more knowledge into the genome or function set so that less data is needed. Additionally, at least in GP, it's often desirable to evaluate each iteration on a sparse sample of the data (changing the sample each iteration to avoid overfitting, etc.): this can be an additional source of randomness that lets novel code that performs well on some small subset of the problem space reproduce, in the hope that mutation/crossover will generalize that contribution or make use of it for that portion of the problem.
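The per-iteration sparse sampling mentioned in point 4 can be sketched as follows (my own hedged illustration; the data, sample size, and candidate-program representation are all hypothetical): each generation draws a fresh random subset of the training cases, so fitness is measured on different data every time.

```python
import random

def make_sampler(cases, sample_size, seed=0):
    # Returns a callable that yields a fresh random subset per generation.
    rng = random.Random(seed)
    def next_sample():
        return rng.sample(cases, sample_size)
    return next_sample

# Hypothetical training data: (input, expected output) pairs.
all_cases = [(x, 2 * x + 1) for x in range(100)]
sampler = make_sampler(all_cases, sample_size=10)

def fitness_on(individual, cases):
    # individual is assumed to be a callable candidate program.
    return -sum((individual(x) - y) ** 2 for x, y in cases)

gen1 = sampler()  # evaluation cases for generation 1
gen2 = sampler()  # a different random subset for generation 2
```

Because an individual that happens to fit one subset well may survive into later generations, resampling gives partially correct programs a chance to reproduce before they are judged on the whole data set.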

