Blog | 06 October 2022
Moving on from the Pre-Paradigm of Catastrophe Modelling
A reformation of foundational methodologies and analytical outputs from catastrophe models is necessary to achieve real world resilience and value
A scientific paradigm is a framework, or worldview, within which scientists and researchers can use established methodologies to solve novel problems, with an aim to increase our understanding of the world. It is in the application of newfound knowledge that we, as a society, can evolve for the better. Paradigm shifts occur when the current scientific paradigm is proven to be irreconcilably at odds with new information. At Maximum Information, we believe that catastrophe (cat) modelling currently exists in a “pre-paradigm” phase, as defined by Thomas Kuhn[1]:
“The pre-paradigm period, in particular, is regularly marked by frequent and deep debates over legitimate methods, problems, and standards of solution... Throughout the pre-paradigm period when there is a multiplicity of competing schools, evidence of progress, except within schools, is very hard to find.”
These statements hold true within the field of contemporary cat modelling. Because each cat model vendor offers its own distinct approach to building and using its model framework, there is little commonality or agreement between them. As a result, we have a multiplicity of competing schools, and little evidence from which to independently quantify the progress of knowledge.
By directly addressing the challenges of the pre-paradigm phase itself, and by encouraging a paradigm shift that establishes well-understood, multi-institution/multi-school standards, we, as risk managers, will drive cat modelling forward to create a more resilient society.
Some may ask: if this paradigm shift is a necessity, why has it not occurred naturally? After all, Kuhn himself argues that paradigm shifts occur spontaneously; actively forcing these shifts may not be ideal or even possible. While this belief may hold for pure (and fully open) scientific disciplines, it does not account for the more opaque realities of the world of contemporary cat modelling.
The chief complication in achieving a spontaneous paradigm shift in a field such as cat modelling is its private-market-driven development since its inception in the late 1980s, which prioritises business efficiency and market viability over scientific truth and accuracy.
It is only in the past few years – when the cat modelling pre-paradigm has been severely challenged in its responses to questions on climate change, and its limited ability to serve users beyond traditional re/insurance – that our industry has been able to shine a bright light on the cracks that exist in the pre-paradigm state.
The potential value of cat modelling lies exclusively in it being, first and foremost, a defensible science, and not simply a market-serving tool. We argue that while cat modelling ingests credible and often cutting-edge science, the exercise as a whole – at least in its current private-vendor market form – is fundamentally unscientific.
Our definition here of what it is to be scientific derives from Karl Popper[2], later adapted and iterated by Imre Lakatos[3] as “falsificationism”: science cannot be proven right – only proven wrong[4]. Under this definition, scientific work only has value if it can be theoretically and, when necessary, practically falsified.
At present, as model users we have no independently structured, practical way to prove cat models wrong – nor even a theoretical method for doing so. Contemporary cat models provide risk estimates that we (as users) accept are highly uncertain[5], and they evolve with every major loss event and vendor update cycle, such that by the time targeted questions about a specific model can be structured and formulated, that model has often fallen by the wayside and been largely forgotten.
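For illustration only – this is a sketch of what one such falsification test could look like, not an existing industry method, and every number in it is hypothetical – consider the simplest falsifiable claim a cat model makes: an annual probability of losses exceeding some threshold. Given enough observed years, that claim can be tested against the count of actual exceedances with an exact binomial test:

```python
from math import comb

def binomial_two_sided_p(k: int, n: int, p: float) -> float:
    """Exact two-sided binomial test p-value: the total probability of
    all outcomes no more likely than the observed count k, given n
    trials with per-trial probability p."""
    pmf = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    observed = pmf[k]
    # Small tolerance guards against floating-point ties.
    return sum(x for x in pmf if x <= observed + 1e-12)

# Hypothetical example: a model claims a 1-in-100 annual probability
# (p = 0.01) of losses exceeding a given threshold, and over 30
# observed years the threshold was exceeded 3 times.
p_value = binomial_two_sided_p(k=3, n=30, p=0.01)

# A small p-value counts as evidence against the model's stated
# exceedance probability; a large one means the claim survives
# this particular test.
print(f"p-value: {p_value:.4f}")
```

The practical obstacle the preceding paragraph describes is precisely that this kind of test is rarely possible: the observation window needed to accumulate meaningful counts is far longer than the lifespan of any single model version.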
While regulators attempt to provide a framework for validating and deploying cat models appropriately, at no point have we collectively – as cat model licensees and users – stood up and formally demanded an answer to the question “how can I know if my cat model is providing real information to the specific decisions I make?”. This severely hampers efficient development and deployment of these risk management tools in their current form. Consequently, there is widespread scepticism about the value of cat models, not just within the cat model user community, but even in the non-technical communities of C-Suite executives far downstream of the actual analytics.
It is time, as a model user community, for us to push the cat modelling pre-paradigm forward by defining tests to show what, where and why cat models can and should have real-world value, and to accept that this may lead us to cat modelling that looks very different to the analytics provided to the broad market today.
[1] Thomas S. Kuhn, The Structure of Scientific Revolutions: https://cte.univ-setif2.dz/moodle/pluginfile.php/13602/mod_glossary/attachment/1620/Kuhn_Structure_of_Scientific_Revolutions.pdf
[2] Popper being a philosopher of science who famously was publicly presented as opposing Kuhn’s views of how scientific knowledge progresses, though largely their views were shown to reconcile without severe and significant disagreement in Criticism and the Growth of Knowledge: https://cursosfilos.files.wordpress.com/2015/08/proceedings-of-the-international-colloquium-in-the-philosophy-of-science-london-1965-volume-4-imre-lakatos-ed-alan-musgrave-ed-criticism-and-the-growth-of-knowledge-cambridge.pdf
[3] “Falsification and the Methodology of Scientific Research Programmes”, in Criticism and the Growth of Knowledge.
[4] While there are many definitions of what it is to be scientific, this is one of the least cumbersome and simplest to apply, which makes it attractive as a first step in the evolution of a pre-paradigmatic field of research.
[5] And are consequently unlikely to accurately estimate losses with a high degree of precision.