By most recent count, Sentinel RT currently houses 20 Bayesian networks: some simple, some complex, and some naïve. We will go into each network in detail over the coming weeks and months, but first, a non-drilling introduction to Bayesian networks (also sometimes called Bayesian belief networks) to set the stage.
The core ideas behind Bayesian belief networks are below:
1) In modeling a belief network, we establish beliefs about whether a certain event A will occur, given other events B, C, etc. The secret (the weights) of a belief network lies in the conditional probability tables that capture these relationships.
2) Slide 2 shows two types of tables: conditional probability tables (orange) and prior probability tables (green). The conditional probability tables are quite self-explanatory (see if you can follow the ones on slide 2). These are generally learned from data; when data is not available, theory helps us arrive at them. The goal is to capture the uncertainty to the extent possible.
3) The green tables are priors: the probabilities a belief network falls back on when no evidence is available. These too are generally available from data, and if data is not available, a uniform distribution (i.e., equal probabilities) may be assumed without causing too much heartache.
4) Finally, if we have evidence of whether Jimmy overslept and know about the traffic situation, we don't need to know whether he set the alarm or went to a late-night party to infer whether he missed the flight. That is, inference is possible with partial evidence (see the sketch after this list).
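To make this concrete, here is a minimal sketch in plain Python of the "missed flight" example. It assumes a structure in which Party and Alarm feed into Overslept, and Overslept and Traffic feed into MissedFlight; all probability values are made up for illustration and are not the actual tables from the slides. The sketch shows both kinds of tables (priors and conditional probability tables) and demonstrates point 4: once Overslept and Traffic are observed, Alarm and Party are simply summed out and have no effect on the answer.

```python
# A minimal, self-contained sketch of the "missed flight" belief network.
# The structure and all probability values are illustrative assumptions,
# not the actual tables from the slides.
from itertools import product

# Prior probability tables (the "green" tables): used when no evidence exists.
P_party = {True: 0.3, False: 0.7}      # went to a late-night party
P_alarm = {True: 0.9, False: 0.1}      # set the alarm
P_traffic = {True: 0.2, False: 0.8}    # heavy traffic

# Conditional probability tables (the "orange" tables):
# P(Overslept | Party, Alarm) and P(MissedFlight | Overslept, Traffic).
P_overslept = {  # keyed by (party, alarm) -> P(overslept = True)
    (True, True): 0.3, (True, False): 0.8,
    (False, True): 0.05, (False, False): 0.4,
}
P_missed = {     # keyed by (overslept, traffic) -> P(missed = True)
    (True, True): 0.95, (True, False): 0.7,
    (False, True): 0.4, (False, False): 0.05,
}

def joint(party, alarm, overslept, traffic, missed):
    """Joint probability of one full assignment, factored along the network."""
    p = P_party[party] * P_alarm[alarm] * P_traffic[traffic]
    p *= P_overslept[(party, alarm)] if overslept else 1 - P_overslept[(party, alarm)]
    p *= P_missed[(overslept, traffic)] if missed else 1 - P_missed[(overslept, traffic)]
    return p

def prob_missed(**evidence):
    """P(MissedFlight = True | evidence), summing out any unobserved variables."""
    names = ["party", "alarm", "overslept", "traffic"]
    num = den = 0.0
    for values in product([True, False], repeat=4):
        world = dict(zip(names, values))
        if any(world[k] != v for k, v in evidence.items()):
            continue  # skip worlds inconsistent with the observed evidence
        den += joint(**world, missed=True) + joint(**world, missed=False)
        num += joint(**world, missed=True)
    return num / den

# Inference with partial evidence: knowing Overslept and Traffic is enough;
# Alarm and Party are summed out and do not change the answer.
print(prob_missed(overslept=True, traffic=False))  # -> 0.7
print(prob_missed(overslept=True))                 # traffic unknown -> 0.75
```

A real implementation would typically use a library rather than brute-force enumeration over every world, but the enumeration makes the role of each table, and of the evidence, explicit.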
If you were able to follow some of the above, that is all you need to know for now. Next week, we will describe the Bayesian network we use to detect washouts.
Click below for additional slides on the topic:
Bayesian Networks in Sentinel RT