Drawing inferences from general lower probabilities on finite possibility spaces
usually involves solving linear programming problems. For some applications this
may be too computationally demanding. Some special classes of lower
probabilities, however, admit computationally less demanding techniques. One
such class is formed by the completely monotone lower probabilities, for which
inferences can be drawn efficiently once their Möbius transform has been
calculated. One option is therefore to draw approximate inferences by using a
completely monotone approximation to a general lower probability; this must be
an outer approximation to avoid drawing inferences that are not implied by the
approximated lower probability. In this presentation, we will discuss existing
and new algorithms for performing this approximation, discuss their relative
strengths and weaknesses, and illustrate how each one works and performs.
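As a brief illustration of the kind of computation involved, the sketch below computes the Möbius transform of a lower probability on a small finite possibility space and uses it to evaluate a lower expectation via the resulting masses. The dictionary representation and function names are illustrative choices for this sketch, not the algorithms discussed in the presentation.

```python
from itertools import combinations

def subsets(s):
    """All subsets of frozenset s, as frozensets (including the empty set)."""
    items = list(s)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def moebius_transform(lower_prob, space):
    """Möbius transform m(A) = sum over B subseteq A of (-1)^{|A\\B|} P(B).

    lower_prob maps each event (frozenset) to its lower probability;
    missing events are treated as having lower probability 0.
    The lower probability is completely monotone exactly when all
    resulting masses m(A) are nonnegative.
    """
    m = {}
    for A in subsets(frozenset(space)):
        m[A] = sum((-1) ** (len(A) - len(B)) * lower_prob.get(B, 0.0)
                   for B in subsets(A))
    return m

def lower_expectation(m, f):
    """Lower expectation from the Möbius masses:
    E(f) = sum over nonempty A of m(A) * min_{x in A} f(x)."""
    return sum(mass * min(f[x] for x in A) for A, mass in m.items() if A)
```

For a completely monotone lower probability (a belief function), the transform recovers the nonnegative masses on the focal sets, after which lower expectations reduce to the simple weighted sum above instead of a linear program.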