
    Epistemic Sensitivity Analysis Based on

    the Concept of Entropy

    B. Krzykacz-Hausmann

    Gesellschaft für Anlagen- und Reaktorsicherheit, Germany

Contents:

- What is epistemic sensitivity analysis?

- Motivation for introducing the concept of entropy

- Derivation of the entropy-based epistemic sensitivity measure and its interpretations

- A simple analytical example (= linear normal case)

- A simple estimator from samples

- Numerical/graphical examples and comparison with standard sensitivity measures

Y = M(U_1, U_2, ..., U_n)

M : computational model

Y : a scalar model output

U_1, U_2, ..., U_n : uncertain parameters subject to "epistemic" uncertainty
(= "lack-of-knowledge", "subjective", "reducible" uncertainty)

Probability distributions over the parameters U = (U_1, ..., U_n)
quantify the degree/state of knowledge

Subjectivistic interpretation of probability
(= degree of belief)

To be distinguished from "aleatory" uncertainty
(= "stochastic", "random", "population heterogeneity", "irreducible", "ontological"):

Probability distributions over the variables V = (V_1, ..., V_n)
describe random laws

Frequentistic interpretation of probability
(= relative frequency)

Principal goal of Epistemic Sensitivity Analysis:

To identify the most important contributors among the parameters U_1, U_2, ..., U_n
to the epistemic uncertainty in the output Y.

(This indicates where the important uncertainty sources are and how to reduce the output
uncertainty in the most effective way, i.e. where to make more investigations, perform
more experiments, consult more experts, etc.)

Epistemic sensitivity measure (SM),
(uncertainty importance indicator):

indicates the "degree of impact" of the epistemic uncertainty in a parameter U_i
on the epistemic uncertainty in the output Y
(a global SM)

It does not indicate how sensitive the value of the output Y is to small variations
of the parameter U_i around a nominal value.

Standard correlation/regression-related SMs:

CC, SRC, PCC, RCC, SRRC, PRCC

(may not be appropriate for highly non-linear, non-monotonic models)
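As a point of reference, here is a minimal sketch (Python with NumPy/SciPy) of two of these standard measures, the ordinary correlation coefficient CC and the rank correlation coefficient RCC, computed from a plain Monte Carlo sample; the toy model and sample size are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def model(u1, u2):
    # assumed toy model: non-monotonic in U1, linear in U2
    return np.sin(3 * u1) + 0.5 * u2

n = 1000
u1 = rng.uniform(0, 1, n)          # sample of epistemic parameter U1
u2 = rng.uniform(0, 1, n)          # sample of epistemic parameter U2
y = model(u1, u2)

for name, u in [("U1", u1), ("U2", u2)]:
    cc = stats.pearsonr(u, y)[0]    # CC: ordinary (Pearson) correlation coefficient
    rcc = stats.spearmanr(u, y)[0]  # RCC: rank (Spearman) correlation coefficient
    print(f"{name}: CC = {cc:.3f}, RCC = {rcc:.3f}")
```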

Variance-based approach:

var(Y) as the "overall scalar measure of the epistemic uncertainty in Y"

(1)  s_i = var(Y) - E(var[Y|U_i])

         = var(Y) - ∫ var(Y|U_i = u_i) f_i(u_i) du_i

     = "amount of uncertainty/variance of the output Y that is expected to be
        removed if the true value of the parameter U_i becomes known"

     (principle of "expected reduction in uncertainty/variance")

     normalized version = [var(Y) - E(var[Y|U_i])] / var(Y)

     (correlation ratio η², 1st-order sensitivity index)
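A minimal sketch of how (1) and its normalized version might be estimated from a single Monte Carlo sample, by binning U_i and averaging the within-bin variances of Y; the toy model, sample size and bin count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(u1, u2):                      # assumed toy model
    return np.sin(3 * u1) + 0.5 * u2

n, n_bins = 20000, 40
u1 = rng.uniform(0, 1, n)
u2 = rng.uniform(0, 1, n)
y = model(u1, u2)

def first_order_variance_index(u, y, n_bins):
    """Correlation ratio [var(Y) - E(var[Y|U_i])] / var(Y), estimated by binning U_i."""
    bins = np.quantile(u, np.linspace(0, 1, n_bins + 1))   # equal-probability bins
    idx = np.digitize(u, bins[1:-1])                       # bin index 0 .. n_bins-1
    # E(var[Y|U_i]) ~ probability-weighted average of the within-bin variances of Y
    e_cond_var = sum((idx == b).mean() * y[idx == b].var() for b in range(n_bins))
    return (y.var() - e_cond_var) / y.var()

print("S_1 =", round(first_order_variance_index(u1, y, n_bins), 3))
print("S_2 =", round(first_order_variance_index(u2, y, n_bins), 3))
```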

(2)  ts_i = E(var[Y | U_1, ..., U_{i-1}, U_{i+1}, ..., U_n])

          = ∫...∫ var(Y | U_1 = u_1, ..., U_{i-1} = u_{i-1}, U_{i+1} = u_{i+1}, ..., U_n = u_n)
                  f(u_1, ..., u_{i-1}, u_{i+1}, ..., u_n) du_1 ... du_{i-1} du_{i+1} ... du_n

     = "amount of uncertainty/variance of the output Y that is expected to remain
        if the true values of all parameters except parameter U_i become known"

     normalized version = E(var[Y | U_1, ..., U_{i-1}, U_{i+1}, ..., U_n]) / var(Y)

     = global sensitivity index for the "total effect of U_i"

     (Homma, Saltelli 1996, for independent parameters)
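As a hedged sketch (not part of the slides), the normalized total-effect measure (2) can be estimated with the standard pick-freeze sampling design and Jansen's estimator, E(var[Y|U_~i]) ≈ (1/2)·mean[(Y_A − Y_AB_i)²]; the toy model and sample size below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(u):                           # assumed toy model, u has shape (n, 3)
    return np.sin(3 * u[:, 0]) + 0.5 * u[:, 1] + 0.1 * u[:, 0] * u[:, 2]

n, d = 20000, 3
A = rng.uniform(0, 1, (n, d))           # two independent sample matrices
B = rng.uniform(0, 1, (n, d))
yA = model(A)

total = []
for i in range(d):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]                # resample only parameter i
    yAB = model(AB_i)
    # Jansen: E(var[Y|U_~i]) ~ (1/2) * mean[(Y_A - Y_AB_i)^2], then divide by var(Y)
    total.append(0.5 * np.mean((yA - yAB) ** 2) / yA.var())

print("total-effect indices:", np.round(total, 3))
```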

Inconsistencies of variance as a measure of epistemic uncertainty

1. The uniform distribution represents the highest degree of epistemic uncertainty
   over a bounded interval.

   [Figure: two distributions (1) and (2) over the same bounded interval]

   (1) expresses more epistemic uncertainty than (2), however:
   (2) has greater variance than (1) !!!

2. Y = the not precisely known price of a product
   (checked numerically in the sketch after this list)

   (a) Y = 1$, 2$ or 3$, each with subjective probability 1/3:
       var(Y) = 0.66, entropy = ln 3

   (b) Y = 1$ or 3$, each with subjective probability 1/2:
       var(Y) = 1.00, entropy = ln 2

   Distribution (a) expresses more epistemic uncertainty than (b), however:
   distribution (b) has greater variance than (a) !!!

3. Joint epistemic uncertainty for vectors (U_1, ..., U_n) ?

4. Distribution with maximum epistemic uncertainty ?
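A small numeric check of example 2 above, computing the variance and the entropy Σ p_i ln(1/p_i) of the two price distributions (a) and (b).

```python
import numpy as np

def discrete_var_and_entropy(values, probs):
    values, probs = np.asarray(values, float), np.asarray(probs, float)
    mean = np.sum(probs * values)
    var = np.sum(probs * (values - mean) ** 2)
    entropy = np.sum(probs * np.log(1.0 / probs))
    return var, entropy

# (a) Y in {1$, 2$, 3$} with probabilities 1/3 each
# (b) Y in {1$, 3$}     with probabilities 1/2 each
for label, vals, probs in [("(a)", [1, 2, 3], [1/3] * 3),
                           ("(b)", [1, 3], [1/2] * 2)]:
    var, ent = discrete_var_and_entropy(vals, probs)
    print(f"{label}: var = {var:.2f}, entropy = {ent:.3f}")
print("ln 3 =", round(np.log(3), 3), ", ln 2 =", round(np.log(2), 3))
```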

Entropy H(Y) (Shannon 1948; Shannon: 1916-2001):

            Σ_i p_i ln(1/p_i)        (discrete Y)
   H(Y) =
            ∫ f(y) ln(1/f(y)) dy     (continuous Y)

   (a probabilistic parameter of a distribution)

Interpretations:

Entropy: scalar measure of the degree of "concentration", "heterogeneity",
   "diversity", "surprise", "indeterminacy", "ignorance", "lack of knowledge", etc.

Entropy: scalar measure of "lack of knowledge",
   appropriate to represent epistemic uncertainty

Variance: scalar measure of "random variability",
   appropriate to represent aleatory uncertainty
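To illustrate the contrast, a minimal sketch comparing variance with a histogram plug-in estimate of the differential entropy H(Y) = ∫ f(y) ln(1/f(y)) dy for two assumed sample distributions on [0, 1]: the uniform distribution, and a Beta(0.2, 0.2) distribution concentrated near the endpoints (larger variance, smaller entropy). The distributions and bin count are assumptions for illustration.

```python
import numpy as np

def differential_entropy_hist(y, n_bins=50):
    """Histogram estimate of H(Y) = ∫ f(y) ln(1/f(y)) dy."""
    counts, edges = np.histogram(y, bins=n_bins)
    widths = np.diff(edges)
    p = counts / counts.sum()                 # bin probabilities
    f = p / widths                            # piecewise-constant density estimate
    nz = p > 0
    return np.sum(p[nz] * np.log(1.0 / f[nz]))

rng = np.random.default_rng(3)
y_uniform = rng.uniform(0, 1, 100_000)        # maximum entropy on [0, 1]
y_beta = rng.beta(0.2, 0.2, 100_000)          # mass near the endpoints of [0, 1]

for name, y in [("uniform", y_uniform), ("Beta(0.2, 0.2)", y_beta)]:
    # the Beta sample has the larger variance but the smaller (more negative) entropy
    print(f"{name}: var = {y.var():.3f}, entropy ≈ {differential_entropy_hist(y):.3f}")
```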

Entropy in Uncertainty Analysis:

The "Maximum Entropy Principle" has long been used to find appropriate probability
distributions and correlations at the parameter level.
(Meeuwissen & Bedford 1993)

Derivation of an entropy-based sensitivity index:

   H(Y) = ∫ f(y) ln(1/f(y)) dy

   = unconditional entropy of Y
   (= total uncertainty in Y coming from all parameters)

   H(Y|U_i = u_i) = ∫ f(y|U_i = u_i) ln(1/f(y|U_i = u_i)) dy

   = conditional entropy of Y given U_i = u_i
   (= uncertainty in Y that remains if the true value of parameter U_i is known to be u_i)

   H(Y|U_i) = ∫ H(Y|U_i = u_i) f_i(u_i) du_i

            = ∫ [ ∫ f(y|U_i = u_i) ln(1/f(y|U_i = u_i)) dy ] f_i(u_i) du_i

   = expected conditional entropy of Y given U_i
   (= uncertainty in Y that is expected to remain if the true value of parameter U_i becomes known)

   H(Y) - H(Y|U_i):

   "amount of epistemic uncertainty (entropy) of the output Y that is expected to be
   removed if the true value of parameter U_i becomes known"

   (principle of "expected reduction in uncertainty/entropy")

   (1st-order entropy-based SM)
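A minimal sketch of estimating H(Y) - H(Y|U_i) from a Monte Carlo sample by discretizing U_i and Y on a 2-D grid and applying the definitions above; the binned difference approximates the continuous quantity because the bin widths cancel. The toy model, sample size and bin count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(u1, u2):                          # assumed toy model
    return np.sin(3 * u1) + 0.5 * u2

n = 50_000
u1, u2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
y = model(u1, u2)

def entropy_based_sm(u, y, n_bins=30):
    """Estimate H(Y) - H(Y|U_i) on a binned (u, y) grid, following the derivation above."""
    joint, _, _ = np.histogram2d(u, y, bins=n_bins)
    p_uy = joint / joint.sum()              # p(u-bin, y-bin)
    p_u = p_uy.sum(axis=1)                  # marginal of U_i
    p_y = p_uy.sum(axis=0)                  # marginal of Y
    h_y = np.sum(p_y[p_y > 0] * np.log(1.0 / p_y[p_y > 0]))       # H(Y)
    # H(Y|U_i) = sum_u p(u) * H(Y | U_i = u)   (expected conditional entropy)
    h_y_given_u = 0.0
    for k in range(n_bins):
        if p_u[k] > 0:
            cond = p_uy[k] / p_u[k]
            cond = cond[cond > 0]
            h_y_given_u += p_u[k] * np.sum(cond * np.log(1.0 / cond))
    return h_y - h_y_given_u

print("H(Y) - H(Y|U1) ≈", round(entropy_based_sm(u1, y), 3))
print("H(Y) - H(Y|U2) ≈", round(entropy_based_sm(u2, y), 3))
```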

Properties of the entropy-based measure H(Y) - H(Y|U_i):

- ≥ 0 (non-negative)

- = 0 if and only if Y and U_i are independent

- symmetric in U_i and Y

- invariant under bijective transformations of Y and U_i

- can be extended to vector-valued U and Y

to be compared with the variance-based measure var(Y) - E(var[Y|U_i])

An alternative representation + interpretation:

   H(Y) - H(Y|U_i) = ... = H(Y) + H(U_i) - H(U_i, Y)

   H(Y) + H(U_i) = measure of the joint uncertainty of U_i and Y if they were independent

   H(U_i, Y) = measure of the joint uncertainty of U_i and Y as it actually is

   = "measure of the degree of dependence between parameter U_i and output Y"
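A small numeric check, on an assumed 2x3 discrete joint distribution, that H(Y) - H(Y|U_i) and H(Y) + H(U_i) - H(U_i, Y) give the same value.

```python
import numpy as np

def H(p):
    """Entropy sum p ln(1/p) of a discrete distribution."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return np.sum(p * np.log(1.0 / p))

# Assumed 2x3 joint distribution p(u, y), for illustration only
p_uy = np.array([[0.20, 0.10, 0.05],
                 [0.05, 0.25, 0.35]])
p_u = p_uy.sum(axis=1)                      # marginal of U_i
p_y = p_uy.sum(axis=0)                      # marginal of Y

# H(Y|U_i) = sum_u p(u) * H(Y | U_i = u)
h_y_given_u = sum(p_u[i] * H(p_uy[i] / p_u[i]) for i in range(len(p_u)))

lhs = H(p_y) - h_y_given_u                  # H(Y) - H(Y|U_i)
rhs = H(p_y) + H(p_u) - H(p_uy.ravel())     # H(Y) + H(U_i) - H(U_i, Y)
print(round(lhs, 6), round(rhs, 6))         # both print the same value
```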

Another alternative representation + interpretation:

   H(Y) = ∫ f(y) ln(1/f(y)) dy

   H(Y|U_i) = ∫ f(u) [ ∫ (f(u,y)/f(u)) ln( f(u)/f(u,y) ) dy ] du

   H(Y) - H(Y|U_i) = ... = ∫∫ f(u,y) ln[ f(u,y) / (f(u) f(y)) ] du dy

"Kullback-Leibler discrepancy/distance" measure (1951):

   ∫ g(y) ln( g(y)/h(y) ) dy

   ("directed divergence", "cross-entropy")

= "Kullback-Leibler discrepancy" of the joint density of parameter U_i and output Y
   from the product of the marginal densities of U_i and Y

= "degree of departure of parameter U_i and output Y from being independent"

= "degree of dependence between parameter U_i and output Y"

(an alternative approach to the entropy-based sensitivity measure
without explicitly using the concept of entropy)
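A short sketch of the same identity in Kullback-Leibler form: the generic discrepancy Σ g ln(g/h), applied with g = joint distribution of (U_i, Y) and h = product of the marginals, reproduces H(Y) - H(Y|U_i). The discrete joint distribution is an assumption for illustration.

```python
import numpy as np

def kl(g, h):
    """Kullback-Leibler discrepancy sum_y g(y) ln( g(y)/h(y) )  (discrete form)."""
    g, h = np.asarray(g, float).ravel(), np.asarray(h, float).ravel()
    nz = g > 0
    return np.sum(g[nz] * np.log(g[nz] / h[nz]))

# Assumed discrete joint distribution p(u, y), for illustration only
p_uy = np.array([[0.20, 0.10, 0.05],
                 [0.05, 0.25, 0.35]])
p_u = p_uy.sum(axis=1, keepdims=True)       # marginal of U_i
p_y = p_uy.sum(axis=0, keepdims=True)       # marginal of Y

# KL discrepancy of the joint from the product of the marginals = H(Y) - H(Y|U_i)
print(round(kl(p_uy, p_u * p_y), 6))
```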

Normalizing transformations of H(Y) - H(Y|U_i):

Discrete version:

   [H(Y) - H(Y|U_i)] / H(Y)

   (measure of dependence in contingency tables)

Continuous version:

   η_H = [exp(2 H(Y)) - exp(2 H(Y|U_i))] / exp(2 H(Y))

       = 1 - exp( -2 [H(Y) - H(Y|U_i)] )

   "entropic/informational measure of correlation"
   [Linfoot 1957, Kent 1983]
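A minimal sketch of both normalizing transformations, applied to assumed example values of H(Y) and H(Y|U_i) (e.g. as obtained from the binned estimator sketched earlier); the numbers are illustrative only.

```python
import numpy as np

def normalized_discrete(h_y, h_y_given_u):
    """Discrete version: [H(Y) - H(Y|U_i)] / H(Y)."""
    return (h_y - h_y_given_u) / h_y

def normalized_continuous(h_y, h_y_given_u):
    """Continuous version: 1 - exp(-2 [H(Y) - H(Y|U_i)])
       = [exp(2 H(Y)) - exp(2 H(Y|U_i))] / exp(2 H(Y))."""
    return 1.0 - np.exp(-2.0 * (h_y - h_y_given_u))

# Assumed example entropy values, for illustration only
h_y, h_y_given_u = 1.20, 0.85
print(round(normalized_discrete(h_y, h_y_given_u), 3))    # ≈ 0.292
print(round(normalized_continuous(h_y, h_y_given_u), 3))  # ≈ 0.503
```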
