
Appendix K: Working with measurement error in multivariate likelihood functions

In this derivation, the use of Gaussian likelihood functions is extended to problems of more than one dimension. Note that bold face indicates a vector throughout.

The starting point is the formulation of the likelihood function, i.e. the probability of the observed Type-B data zb given a model parametrization θ that incorporates measurement device error ε:
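The equation referenced here is not reproduced in this version of the page; one reconstruction consistent with the surrounding text (marginalizing the joint density over the error vector) would be:

```latex
f(\mathbf{z}_b \mid \theta)
  = \int_{\mathbb{R}^M} f(\mathbf{z}_b \mid \theta, \boldsymbol{\varepsilon})\,
    f(\boldsymbol{\varepsilon} \mid \theta)\, d\boldsymbol{\varepsilon}
\tag{1}
```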

Both the observed data and measurement device error vectors are of length M, hence the subscript on the integration; f() indicates a probability distribution function. Note that θ is written as a one-dimensional model parameter, but the dimension of the parameter space is of no importance in this derivation, and the subsequent proof applies to higher-dimensional model parametrizations as well. Further, Equation 1 can be simplified by assuming that the model parametrization and the measurement device error are statistically independent variables, which means f(ε|θ) = f(ε) and yields
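A reconstruction of Equation 2, applying the independence assumption f(ε|θ) = f(ε) to Equation 1, would be:

```latex
f(\mathbf{z}_b \mid \theta)
  = \int_{\mathbb{R}^M} f(\mathbf{z}_b \mid \theta, \boldsymbol{\varepsilon})\,
    f(\boldsymbol{\varepsilon})\, d\boldsymbol{\varepsilon}
\tag{2}
```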

Note that in Equations 1 & 2 and hereafter a single integral bar is used for simplicity, but there are really M integrals to perform.

Next, we assume that the error-conditional likelihood function f(zb|θ,ε) can be approximated using a multivariate kernel density estimator. The second-order, Gaussian, multivariate kernel density estimator is
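The estimator itself is not shown here; a reconstruction consistent with the symbol definitions given below (kernels centered on the simulated values, shifted by the error ε) would be:

```latex
f_N(\mathbf{z}_b \mid \theta, \boldsymbol{\varepsilon})
  = \frac{1}{N} \sum_{i=1}^{N} \prod_{j=1}^{M}
    \frac{1}{\sqrt{2\pi}\, h_j}
    \exp\!\left( -\frac{\left( z_{bj} - \varepsilon_j - z_{bj}^{(i)} \right)^2}{2 h_j^2} \right)
\tag{3}
```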

In Equation 3, fN() indicates an estimated probability using N repetitions of the appropriate random experiment (called ‘realizations’ in the MAD# software), zbj(i) is the ith simulation of the jth measured Type-B datum, hj is a fixed bandwidth that is a function of the variation in the jth dimension of the simulation ensemble zbj(1)...zbj(N), and j is the index over the dimensionality of the problem, M.

The other assumption needed to proceed with the integration in Equation 2 is a definition of f(ε). In this proof, we assume that the measurement device errors are statistically independent of each other and Gaussian distributed with zero mean and finite variance (which need not be constant for all M measurements), σ²εj < ∞. Thus we write
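Under the stated independence and Gaussian assumptions, a reconstruction of Equation 4 would be:

```latex
f(\boldsymbol{\varepsilon})
  = \prod_{j=1}^{M} \frac{1}{\sqrt{2\pi}\,\sigma_{\varepsilon_j}}
    \exp\!\left( -\frac{\varepsilon_j^2}{2\sigma_{\varepsilon_j}^2} \right)
\tag{4}
```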

Substitution of Equations 3 & 4 into Equation 2 gives
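Substituting the reconstructed forms of Equations 3 and 4 into Equation 2 (with a separate dummy index k for the error density, later combined with j) would give:

```latex
f_N(\mathbf{z}_b \mid \theta)
  = \int_{\mathbb{R}^M}
    \frac{1}{N} \sum_{i=1}^{N} \prod_{j=1}^{M}
      \frac{1}{\sqrt{2\pi}\, h_j}
      \exp\!\left( -\frac{\left( z_{bj} - \varepsilon_j - z_{bj}^{(i)} \right)^2}{2 h_j^2} \right)
    \prod_{k=1}^{M}
      \frac{1}{\sqrt{2\pi}\,\sigma_{\varepsilon_k}}
      \exp\!\left( -\frac{\varepsilon_k^2}{2\sigma_{\varepsilon_k}^2} \right)
    d\boldsymbol{\varepsilon}
\tag{5}
```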
After commuting constants, summations, and products outside the integral; combining equivalent dummy indices; and re-writing products of exponential terms as summations of exponential arguments, Equation 5 becomes
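Carrying out those manipulations on the reconstructed Equation 5 would give:

```latex
f_N(\mathbf{z}_b \mid \theta)
  = \frac{1}{N (2\pi)^M \prod_{j=1}^{M} h_j \sigma_{\varepsilon_j}}
    \sum_{i=1}^{N} \int_{\mathbb{R}^M}
    \exp\!\left( -\sum_{j=1}^{M} \left[
      \frac{\left( z_{bj} - \varepsilon_j - z_{bj}^{(i)} \right)^2}{2 h_j^2}
      + \frac{\varepsilon_j^2}{2\sigma_{\varepsilon_j}^2}
    \right] \right) d\boldsymbol{\varepsilon}
\tag{6}
```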

Focusing only on the integral from Equation 6, note that it can be re-written as follows.
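Because the exponent separates term-by-term in ε, the M-dimensional integral factors into a product of one-dimensional integrals; a reconstruction of Equation 7 would be:

```latex
\int_{\mathbb{R}^M}
  \exp\!\left( -\sum_{j=1}^{M} \left[
    \frac{\left( z_{bj} - \varepsilon_j - z_{bj}^{(i)} \right)^2}{2 h_j^2}
    + \frac{\varepsilon_j^2}{2\sigma_{\varepsilon_j}^2}
  \right] \right) d\boldsymbol{\varepsilon}
= \prod_{j=1}^{M} \int_{-\infty}^{\infty}
  \exp\!\left(
    -\frac{\left( z_{bj} - \varepsilon_j - z_{bj}^{(i)} \right)^2}{2 h_j^2}
    - \frac{\varepsilon_j^2}{2\sigma_{\varepsilon_j}^2}
  \right) d\varepsilon_j
\tag{7}
```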

Note that each integral on the right-hand side of Equation 7 has the same structure. Therefore, using the following identity
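The identity in question is the standard Gaussian integral for a general quadratic exponent:

```latex
\int_{-\infty}^{\infty} e^{-\left( a x^2 + b x + c \right)}\, dx
  = \sqrt{\frac{\pi}{a}}\,
    \exp\!\left( \frac{b^2 - 4ac}{4a} \right),
  \qquad a > 0
\tag{8}
```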

and Equation 7, the generic coefficients aj, bj, and cj can be determined as:
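Expanding the exponent of one factor of Equation 7 as -(aⱼεⱼ² + bⱼεⱼ + cⱼ) would give:

```latex
a_j = \frac{1}{2 h_j^2} + \frac{1}{2\sigma_{\varepsilon_j}^2},
\qquad
b_j = -\frac{z_{bj} - z_{bj}^{(i)}}{h_j^2},
\qquad
c_j = \frac{\left( z_{bj} - z_{bj}^{(i)} \right)^2}{2 h_j^2}
\tag{9}
```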

Using Equations 8 and 9, the right-hand side of Equation 7 is solved as
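Evaluating the identity of Equation 8 with the coefficients of Equation 9 (and simplifying (b² − 4ac)/4a) would yield, for each j:

```latex
\int_{-\infty}^{\infty}
  \exp\!\left(
    -\frac{\left( z_{bj} - \varepsilon_j - z_{bj}^{(i)} \right)^2}{2 h_j^2}
    - \frac{\varepsilon_j^2}{2\sigma_{\varepsilon_j}^2}
  \right) d\varepsilon_j
= \sqrt{\frac{2\pi\, h_j^2 \sigma_{\varepsilon_j}^2}{h_j^2 + \sigma_{\varepsilon_j}^2}}\,
  \exp\!\left( -\frac{\left( z_{bj} - z_{bj}^{(i)} \right)^2}{2\left( h_j^2 + \sigma_{\varepsilon_j}^2 \right)} \right)
\tag{10}
```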

Inserting Equation 10 into Equation 6 gives
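Inserting the reconstructed Equation 10 into Equation 6 would give:

```latex
f_N(\mathbf{z}_b \mid \theta)
  = \frac{1}{N (2\pi)^M \prod_{k=1}^{M} h_k \sigma_{\varepsilon_k}}
    \sum_{i=1}^{N} \prod_{j=1}^{M}
    \sqrt{\frac{2\pi\, h_j^2 \sigma_{\varepsilon_j}^2}{h_j^2 + \sigma_{\varepsilon_j}^2}}\,
    \exp\!\left( -\frac{\left( z_{bj} - z_{bj}^{(i)} \right)^2}{2\left( h_j^2 + \sigma_{\varepsilon_j}^2 \right)} \right)
\tag{11}
```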

Pulling out multiplicative constants that do not depend on the summation over i; pushing the product over j inside the exponential function; and collecting the dummy indices j and k gives
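One reconstruction of Equation 12 consistent with these steps (the h and σ constants cancel against the square-root factors, and the remaining normalization enters the exponential as a log term) would be:

```latex
f_N(\mathbf{z}_b \mid \theta)
  = \frac{1}{N (2\pi)^{M/2}}
    \sum_{i=1}^{N}
    \exp\!\left( -\sum_{j=1}^{M} \left[
      \frac{\left( z_{bj} - z_{bj}^{(i)} \right)^2}{2\left( h_j^2 + \sigma_{\varepsilon_j}^2 \right)}
      + \ln \sqrt{h_j^2 + \sigma_{\varepsilon_j}^2}
    \right] \right)
\tag{12}
```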

Finally, determining a common denominator inside the exponential gives
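Combining the two terms of the reconstructed Equation 12 over the common denominator 2(hₖ² + σ²εₖ) would give:

```latex
f_N(\mathbf{z}_b \mid \theta)
  = \frac{1}{N (2\pi)^{M/2}}
    \sum_{i=1}^{N}
    \exp\!\left( -\sum_{k=1}^{M}
      \frac{\left( z_{bk} - z_{bk}^{(i)} \right)^2
        + \left( h_k^2 + \sigma_{\varepsilon_k}^2 \right)
          \ln\left( h_k^2 + \sigma_{\varepsilon_k}^2 \right)}
      {2\left( h_k^2 + \sigma_{\varepsilon_k}^2 \right)}
    \right)
\tag{13}
```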

Converting the summation over k to a product outside the exponential and moving the summation over l inside of the summation over i turns Equation 13 into the familiar form, first seen in Equation 3:
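Unwinding the log-normalization terms back into a product outside the exponential would recover the kernel form of Equation 3, with each bandwidth inflated by the error variance:

```latex
f_N(\mathbf{z}_b \mid \theta)
  = \frac{1}{N} \sum_{i=1}^{N} \prod_{j=1}^{M}
    \frac{1}{\sqrt{2\pi \left( h_j^2 + \sigma_{\varepsilon_j}^2 \right)}}
    \exp\!\left( -\frac{\left( z_{bj} - z_{bj}^{(i)} \right)^2}{2\left( h_j^2 + \sigma_{\varepsilon_j}^2 \right)} \right)
\tag{14}
```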

Thus, in a similar fashion to the univariate rescaling of the bandwidth, the multivariate kernel density with integration over M independent error distributions is equivalent to a rescaling of each of the M bandwidths.
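As a numerical sanity check of the result, the sketch below (illustrative only; function names are not part of MAD#) integrates a single one-dimensional Gaussian kernel of bandwidth h against a zero-mean Gaussian error density with standard deviation σ, and compares the result with the closed-form kernel whose bandwidth has been rescaled to √(h² + σ²):

```python
import math

def kernel_with_error_numeric(z, z_sim, h, sigma, n_grid=20001, half_width=50.0):
    """Riemann-sum approximation of the integral over the error:
    integral of K_h(z - eps - z_sim) * f(eps) d eps,
    where K_h is a Gaussian kernel and f is a N(0, sigma^2) density."""
    total = 0.0
    d_eps = 2.0 * half_width / (n_grid - 1)
    for k in range(n_grid):
        eps = -half_width + k * d_eps
        kern = math.exp(-(z - eps - z_sim) ** 2 / (2.0 * h ** 2)) / (math.sqrt(2.0 * math.pi) * h)
        err = math.exp(-eps ** 2 / (2.0 * sigma ** 2)) / (math.sqrt(2.0 * math.pi) * sigma)
        total += kern * err * d_eps
    return total

def kernel_rescaled(z, z_sim, h, sigma):
    """Closed form from the derivation: a Gaussian kernel with
    the bandwidth rescaled to sqrt(h^2 + sigma^2)."""
    v = h ** 2 + sigma ** 2
    return math.exp(-(z - z_sim) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

numeric = kernel_with_error_numeric(1.3, 0.2, h=0.5, sigma=0.8)
closed = kernel_rescaled(1.3, 0.2, h=0.5, sigma=0.8)
print(abs(numeric - closed) < 1e-9)
```

The two results agree to well within numerical quadrature error, which is the one-dimensional version of the equivalence stated above; the multivariate case is simply a product of M such factors.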

Last edited Nov 8, 2013 at 9:58 PM by frystacka, version 3
