# Expected value of a function with a high dimensional domain

I would like to compute the expected value of a function f(x1,x2): R^n x R^n -> [0,1] for a fixed x1, where f has the form f(x1,x2) = |f2(x1) - f2(x2)| for some f2: R^n -> [0,1]. This will be done for thousands of different values of x1 in a context where speed is important. Three problems:

1. f could be slow to compute
2. n is relatively large (> 100)
3. the input space is continuous

I can see a few approaches:

1. Discretize the input space for x2 and exhaustively compute f (a bad idea, since f is slow and the number of grid points grows exponentially in n)
2. Perform dimensionality reduction on x2 and compute the expected value in the reduced space (eh...)
3. Learn the expected value as a function of x1 through regression (this feels dirty)
4. Use some form of sampling

Any other ideas? Could someone familiar with the sampling literature recommend papers that deal with this kind of problem?
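One observation in favor of option 4: since f depends on x1 only through f2(x1), a plain Monte Carlo estimate can reuse a single fixed sample of x2 values across all queries. Precompute f2 on that sample once; then each new x1 costs one evaluation of f2 plus O(m) absolute differences, rather than m evaluations of f. A minimal sketch, assuming x2 ~ N(0, I) and a placeholder logistic f2 (both are illustrative assumptions, not from the question):

```python
import numpy as np

def f2(x):
    # Hypothetical stand-in for the expensive f2: R^n -> [0, 1].
    # Replace with the real function.
    return 1.0 / (1.0 + np.exp(-x.sum()))

def make_estimator(sample_x2, f2):
    """Build a fast estimator of E[|f2(x1) - f2(X2)|].

    f2 is evaluated on the fixed sample of x2 values ONCE here;
    each later call for a new x1 needs only one f2 evaluation.
    """
    f2_vals = np.array([f2(x) for x in sample_x2])  # expensive part, done once

    def expected_f(x1):
        # Monte Carlo average over the precomputed sample.
        return np.mean(np.abs(f2(x1) - f2_vals))

    return expected_f

# Usage: draw m samples of x2 from its distribution (assumed Gaussian here).
rng = np.random.default_rng(0)
n, m = 100, 10_000
sample = rng.standard_normal((m, n))
est = make_estimator(sample, f2)
value = est(rng.standard_normal(n))  # estimate in [0, 1]
```

Using the same sample for every x1 also means the estimates for different x1 values share their Monte Carlo noise (common random numbers), which helps if you mainly care about comparing the expected values across x1.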

User submitted content is under Creative Commons: Attribution - Share Alike; Other things copyright (C) 2010, MetaOptimize LLC.