Appendix I: Tips for low cost MAD projects

  • Use a steady-state or very inexpensive transient forward model (e.g., 1D, simple dynamics, a coarsely discretized domain)

Aim for less than 3 seconds of wall-clock time per forward model run (without the FM GUI); a rough cost sketch follows below
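
To see why this budget matters, here is a minimal Python sketch (not MAD# code; the toy 1D forward model and the realization count are purely illustrative) of how per-run wall-clock time multiplies across realizations. At 3 seconds per run, 10,000 realizations already cost over 8 hours of simulation.

```python
import time
import numpy as np

def forward_model_1d(conductivity, n_cells=100):
    """Hypothetical stand-in for a cheap steady-state 1D forward model."""
    k = np.full(n_cells, conductivity)   # uniform conductivity column
    return n_cells / np.sum(1.0 / k)     # harmonic-mean effective conductivity

start = time.perf_counter()
forward_model_1d(1.0e-4)
per_run_seconds = time.perf_counter() - start

n_realizations = 10_000                  # illustrative realization count
total_hours = per_run_seconds * n_realizations / 3600.0
print(f"per run: {per_run_seconds:.4f} s; "
      f"estimated total for {n_realizations} realizations: {total_hours:.2f} h")
```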

  • Maintain low dimensionality of the parameter domain (only a few [3-5] structural parameters and anchors)

Use anchors to “pinpoint” information where local characterization is most important

Use structural parameters when global characterization is most important (a minimal parameter sketch follows below)
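
As a rough illustration only (the names and container below are hypothetical, not MAD# syntax), a low-dimensional parameterization might combine three structural parameters that describe the field globally with two anchors that pin it at locations where local detail matters:

```python
from dataclasses import dataclass, field

@dataclass
class MADParameters:
    """Hypothetical container: 3 structural parameters plus a few anchors."""
    mean_logK: float            # structural: global mean of log-conductivity
    variance_logK: float        # structural: global variance
    correlation_length: float   # structural: spatial correlation scale
    anchors: dict = field(default_factory=dict)  # {(x, y): value} at key locations

theta = MADParameters(
    mean_logK=-9.0,
    variance_logK=1.0,
    correlation_length=50.0,
    anchors={(120.0, 35.0): -8.5, (310.0, 60.0): -9.4},  # pin local detail near targets
)
# Total dimension: 3 structural parameters + 2 anchors = 5, within the suggested range.
```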

  • Maintain low dimensionality of the likelihood vector (Zb)

Use aggregation tools to collapse long measurement series into a few summary quantities (see the sketch below)
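
For example (a toy Python sketch, not a MAD# aggregation tool), a 500-point simulated measurement series can be collapsed into three summary quantities, so the likelihood is inferred over a 3-dimensional Zb rather than a 500-dimensional one:

```python
import numpy as np

t = np.linspace(0.0, 100.0, 500)
series = np.exp(-0.5 * ((t - 40.0) / 8.0) ** 2)   # toy simulated measurement series

zb = np.array([
    series.max(),                    # peak value
    t[np.argmax(series)],            # time of the peak
    np.sum(series) * (t[1] - t[0]),  # area under the curve (total signal)
])
print(zb)   # Zb now has dimension 3 instead of 500
```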

  • Utilize parametric likelihood function inference instead of non-parametric [coming soon]

In low dimensions, the speed of the inferential step itself will not change drastically; what is reduced is the number of simulations required to reach likelihood function convergence, which lowers the overall simulation cost (the sketch below contrasts the two approaches).
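
The contrast can be sketched as follows (illustrative Python using SciPy, not MAD# internals; the Gaussian parametric family and the toy Zb samples are assumptions). The parametric route estimates only a mean vector and covariance matrix, while the non-parametric route fits a kernel density estimate that generally needs many more realizations to converge:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for forward-model outputs at the measurement locations (2-D Zb).
simulated_zb = rng.multivariate_normal([1.0, 2.0], [[0.5, 0.1], [0.1, 0.3]], size=200)
observed_zb = np.array([1.1, 1.9])

# Parametric: estimate a mean vector and covariance matrix, then evaluate
# a multivariate normal density at the observed Zb.
mu = simulated_zb.mean(axis=0)
cov = np.cov(simulated_zb, rowvar=False)
likelihood_parametric = stats.multivariate_normal(mu, cov).pdf(observed_zb)

# Non-parametric: kernel density estimate over the same realizations; typically
# requires many more realizations to converge, especially in higher dimensions.
likelihood_nonparametric = stats.gaussian_kde(simulated_zb.T)(observed_zb)[0]

print(likelihood_parametric, likelihood_nonparametric)
```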

  • Use convergence tools to set the number of realizations

There is no statistical penalty for creating too many realizations; more realizations always improve the quality of your estimates. There is, however, a point of diminishing returns where the improvement is no longer worth the computational cost of attaining it. Find a balance between accuracy and cost (a simple convergence check is sketched below).
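
One simple way to picture such a check (a hedged sketch, not the actual MAD# convergence tool; the tolerance, realization schedule, and toy Zb samples are arbitrary) is to track how the likelihood estimate changes as realizations are added and stop once the relative change falls below a tolerance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
observed_zb = np.array([1.1, 1.9])

previous = None
for n in [100, 200, 400, 800, 1600]:
    # Stand-in for the likelihood estimate obtained from n forward-model realizations.
    simulated_zb = rng.multivariate_normal([1.0, 2.0], [[0.5, 0.1], [0.1, 0.3]], size=n)
    estimate = stats.gaussian_kde(simulated_zb.T)(observed_zb)[0]
    if previous is not None:
        change = abs(estimate - previous) / previous
        print(f"n={n}: likelihood={estimate:.4f}, relative change={change:.3f}")
        if change < 0.02:   # beyond this point, extra realizations are not worth the cost
            break
    previous = estimate
```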

 

Hollow cost reductions (apparent savings that yield little or no real reduction in cost)

  • Interchanging random field generators will not usually result in noticeable computational savings

The exception is when a random field generator is very inefficient for a given configuration

  • Using only aggregated data or only measurement series data

The computational cost of likelihood function inference is always proportional to the dimensionality of Zb; the type of information it contains does not matter (a toy comparison is sketched below).
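
The point can be illustrated with a toy comparison (plain Python/SciPy, not MAD# internals; the sample sizes are arbitrary): a 3-dimensional Zb built from aggregated summaries and a 3-dimensional Zb built from raw series points take essentially the same time to fit and evaluate.

```python
import time
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 5000
series = rng.normal(size=(n, 500))                      # toy simulated measurement series

zb_aggregated = np.column_stack([series.mean(axis=1),   # three aggregated summaries
                                 series.max(axis=1),
                                 series.std(axis=1)])
zb_raw_points = series[:, [50, 250, 450]]                # three raw points from the series

for label, zb in [("aggregated", zb_aggregated), ("raw series points", zb_raw_points)]:
    start = time.perf_counter()
    stats.gaussian_kde(zb.T)(zb[:100].T)                 # fit and evaluate a 3-D KDE
    print(f"{label}: {time.perf_counter() - start:.3f} s")
```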

  • Loading the minimum number of samples from the prior

There is no cost penalty for loading more samples from the prior and not estimating the posterior for some of them. Because the samples are used to infer the shape of the prior, loading additional samples improves MAD#’s ability to reconstruct your prior distribution in the graphics without increasing your computation time.

  • Not saving the forward model data

This yields only marginal savings during each write step of MAD#. If you later decide that you want to look at other information from the time series at the measurement locations used in the likelihood function, you will need a brand new set of simulations.

