In probability theory, it is (as far as I am aware) universal to equate "probability" with a probability measure in the sense of measure theory (possibly a particularly well-behaved one, but never mind). In particular, we assume $\sigma$-additivity, but nothing more (say, additivity over families of cardinality $\mathfrak{c}$, which would of course make things break down).
For me, as a mathematician, this is completely satisfactory, and until recently I hardly realised that it may not be entirely obvious that probability should behave thus. A sufficiently convincing justification for working with measures is that integration theory is precious: we want to be able to use integrals to compute expected values, variances, moments and so on. And we cannot require any "stronger" kind of additivity, since things already fall apart for the uniform distribution on $[0,1]$.
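To spell out that last point: if we demanded additivity over families of cardinality $\mathfrak{c}$, the uniform distribution on $[0,1]$ would give an immediate contradiction, since every singleton is null:
$$1 = P([0,1]) = P\Bigl(\bigcup_{x\in[0,1]}\{x\}\Bigr) = \sum_{x\in[0,1]} P(\{x\}) = \sum_{x\in[0,1]} 0 = 0.$$
So countable additivity is the strongest form of additivity one can sensibly keep.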
However, recently I have had some interactions with non-mathematicians, who approach "higher" mathematics with some understandable uncertainty, but who still find the notion of probability relevant. One thing these interactions made me realise is that I myself am not fully aware why, in principle, we define things this way and not otherwise. Hence, after this overlong introduction, here is the question. Is there a fundamental reason why measure theory is the "only right way" to deal with probabilities (as opposed to, say, declaring probabilities to be merely finitely additive)? If so, is there a "spectacular" example showing why any other approach would not work? If not, is there an alternative approach (with any research behind it)?