Here is the list of readings I have gone through over the last few days:
[1] Der Kiureghian and Ditlevsen (2009) Aleatory or epistemic? Does it matter?, Structural Safety, 31(2): 105-112.
[2] Helton et al. (2006) Survey of sampling-based methods for uncertainty and sensitivity analysis, RESS, 91(10-11): 1175-1209.
[3] Der Kiureghian (2008) Analysis of structural reliability under parameter uncertainties, Probabilistic Engineering Mechanics, 23: 351-358.
[4] Faber (2005) On the treatment of uncertainties and probabilities in engineering decision analysis, Journal of Offshore Mechanics and Arctic Engineering, ASME, 127: 243-248.
[5] Hora (1996) Aleatory and epistemic uncertainty in probability elicitation with an example from hazardous waste management, RESS, 54: 217-223.
If you haven't read [3], I strongly recommend reading it. [1] and [5] hold very similar opinions on the separation of the two types of uncertainty, although one deals with structural reliability and the other with expert elicitation.
In terms of computing the epistemic uncertainty, [2] advocates Latin hypercube sampling, while [3] recommends an approach developed by Wen and Chen in 1987, the so-called fast integration method.
Now it seems that the most difficult issue is still the separation of the uncertainties: in your language, which variables go into the inner loop and which into the outer loop. Using an importance measure (or sensitivity index) may be a solution, but my guess is that we will probably still need to do the separation a priori.
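To make the inner/outer structure concrete, below is a minimal double-loop Monte Carlo sketch in Python (a brute-force alternative to the fast integration method of [3], which I do not attempt to sketch). The limit state g = R - S, all distribution parameters, and the sample sizes are illustrative assumptions of mine, not taken from any of [1]-[5]; the only deliberate echo of [2] is that the outer (epistemic) samples are drawn by Latin hypercube sampling.

```python
# Double-loop (nested) Monte Carlo: outer loop over epistemic parameter
# samples, inner loop over aleatory variables. Every number here is an
# assumed toy setup, chosen only to illustrate the structure.
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(0)
n_outer, n_inner = 200, 20_000

# Outer (epistemic) loop: the mean resistance mu_R is not known exactly;
# represent that ignorance by a normal distribution and sample it by LHS.
u = qmc.LatinHypercube(d=1, seed=0).random(n_outer)   # LHS points in [0, 1)
mu_R = norm.ppf(u[:, 0], loc=5.0, scale=0.3)          # epistemic samples

pf = np.empty(n_outer)
for i, m in enumerate(mu_R):
    # Inner (aleatory) loop: crude Monte Carlo with parameters held fixed.
    R = rng.normal(m, 0.5, n_inner)                   # resistance
    S = rng.normal(3.5, 0.8, n_inner)                 # load
    pf[i] = np.mean(R - S < 0.0)                      # conditional P(g < 0)

# The spread of pf across the outer samples is the epistemic uncertainty;
# each individual pf value has already integrated out the aleatory part.
print("median pf:", np.median(pf))
print("90% credibility band on pf:", np.quantile(pf, [0.05, 0.95]))
```

Note how the code makes the separation question painfully explicit: which variables sit in the outer loop is decided before any sampling starts, which is exactly why I suspect the separation must be done a priori.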
I look forward to hearing your opinion.
A
------------------
M: Hello again,
Just to continue this vein of discussion: I found the following four special issues of RESS very interesting and worth devoting some time to. I believe you have read some of them before, e.g., Hofer (1996), who discussed the difference between a covered die and a die yet to be thrown.
RESS, Volume 23, Issue 4, pp. 247–323 (1988): On the Meaning of Probability
RESS, Volume 54, Issues 2–3, pp. 91–262 (November–December 1996): Treatment of Aleatory and Epistemic Uncertainty
RESS, Volume 57, Issue 1, pp. 1–105 (July 1997): The Role of Sensitivity Analysis in the Corroboration of Models and its Links to Model Structural and Parametric Uncertainty
RESS, Volume 85, Issues 1–3, pp. 1–376 (July–September 2004): Alternative Representations of Epistemic Uncertainty
The last one involves diverse discussions on uncertainty modeling, which you might not be interested in at this moment.
A
-------------
Arnold,
Thanks. I have read most of the first two. I cannot get hold of the third, and I did not know about the last one. For now, I am only talking about uncertainty in the probability context, but we should investigate the alternative approaches.
I read through the Der Kiureghian (2008) paper [3] and agree that we should confine the aleatory and epistemic uncertainties within the model. This brings back memories: I believe we discussed this before, and I should have read the paper then, but I did not pay much attention, because in the end the aleatory and epistemic uncertainties again seem to be integrated into one predictive failure probability/reliability index.
I think he also classified model uncertainty as aleatory in other papers, although he said model uncertainty can be reduced. If the model is fixed, he is right, and the parameter/statistical uncertainty will then be the only epistemic uncertainty.
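For concreteness, the predictive quantity being collapsed to is, if I recall [3]'s notation correctly, something like

\tilde{p}_f = \int P[\, g(\mathbf{X}, \boldsymbol{\theta}) \le 0 \,]\, f(\boldsymbol{\theta})\, d\boldsymbol{\theta}, \qquad \tilde{\beta} = -\Phi^{-1}(\tilde{p}_f),

i.e., the failure probability conditional on the parameters, averaged over the epistemic distribution f(theta) and then reported as a single reliability index, so the separation disappears in the final number.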
I need to talk to you to make this clear.
There are some other special issues on sensitivity analysis:
Computer Physics Communications, 1999, 117(1-2).
Journal of Multicriteria Decision Analysis, 1999, Vol. 8.
Journal of Statistical Computation and Simulation, 1997, 57(1-4).
------------------
Hi M,
Happy Chinese Thanksgiving!
There are three types of epistemic uncertainty:
(1) Unidentified/unquantifiable unknowns. It is hard to give an example; perhaps the 9/11 events, before 9/11. Model error might belong to this category.
(2) Unknown facts. The quantity is not random; we simply do not know it. Examples: the covered coin, the failure rate of the installed pipes of a nuclear power plant, the defect ratio of a specific collection of items, etc.
(3) Uncertainty derived from aleatory uncertainty and/or unidentified unknowns. This includes sampling errors and modeling errors; modeling errors are caused by both aleatory uncertainty and model uncertainty.
Let's keep debating!
---------------
Hello M:
I think the major issues are:
(1) How do we separate (categorize) the uncertainties? From an operational perspective, Der Kiureghian and Ditlevsen may be correct, but the separation must also lend itself to an easy interpretation of the results; if the results are hard to understand, it is not a good categorization. Conceptually, people are still inclined to think of aleatory uncertainty as inherent uncertainty (instead of asking whether it can be further reduced) and thus to interpret the results from an objective-probability perspective, whereas epistemic uncertainty tends to be understood from a subjective-probability perspective.
(2) A solid understanding of the three layers of uncertainty:
(a) aleatory uncertainty
(b) epistemic uncertainty
(c) assumptions and knowledge background (model deficiency)
Because of (c), I tend to accept the concept of 'risk-informed' rather than 'risk-based' decision/regulation, etc.
(3) The choice of terminology: confidence interval vs. credible interval; do we report 95% 'confidence' or 95% 'credibility'? (A toy numerical contrast follows this list.)
(4) Your suggestion of a sensitivity analysis among the 'grey-area' random variables (aleatory vs. epistemic) is very good, particularly for illustrating the confidence/credibility level; see the sketch after this list.
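On (3), here is a toy numerical contrast between the two notions for an unknown failure probability. The data (3 failures in 100 trials) and the uniform Beta(1, 1) prior are assumptions of mine, purely for illustration.

```python
# Contrast a frequentist confidence interval with a Bayesian credible
# interval for a failure probability p, given k failures in n trials.
# The numbers and the uniform prior are assumed for illustration only.
from scipy import stats

k, n = 3, 100  # assumed: 3 failures observed in 100 trials

# 95% Clopper-Pearson confidence interval (a long-run coverage statement)
lo = stats.beta.ppf(0.025, k, n - k + 1)
hi = stats.beta.ppf(0.975, k + 1, n - k)
print(f"95% confidence interval: ({lo:.4f}, {hi:.4f})")

# 95% equal-tailed credible interval from the Beta(1+k, 1+n-k) posterior
# (a direct probability statement about p, given the data and the prior)
post = stats.beta(1 + k, 1 + n - k)
print(f"95% credible interval:   ({post.ppf(0.025):.4f}, {post.ppf(0.975):.4f})")
```

The two intervals come out numerically close here, but the statements they license differ: the first is about the procedure's long-run coverage, the second is a direct probability statement about p, which is exactly the epistemic (subjective-probability) reading from issue (1).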
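On (4), one concrete way to rank the grey-area variables before assigning them to the inner or outer loop is a variance-based (Sobol') sensitivity index. Below is a minimal pick-freeze estimator on an assumed linear toy model of mine; for that model the exact first-order indices are a_i^2 / sum_j a_j^2, so the estimate can be checked against the truth.

```python
# Pick-freeze estimation of first-order Sobol' indices: S_i measures the
# share of output variance explained by input X_i alone. The linear toy
# model and its coefficients are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
a = np.array([3.0, 1.5, 0.5])        # assumed coefficients

def model(x):                        # stand-in for a real limit-state g(X)
    return x @ a

n, d = 100_000, 3
A = rng.standard_normal((n, d))      # two independent blocks of input samples
B = rng.standard_normal((n, d))
yA, yB = model(A), model(B)
var_y = yA.var()

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # replace column i of A with B's column i
    Si = np.mean(yB * (model(ABi) - yA)) / var_y   # Saltelli-type estimator
    print(f"S_{i+1} ~ {Si:.3f}   (exact: {a[i]**2 / np.sum(a**2):.3f})")
```

Variables with large indices are the ones whose classification (aleatory vs. epistemic) actually changes the reported confidence/credibility band; for the rest, the choice of loop arguably matters little.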