GARP Climate Risk Podcast: 30 years of catastrophe modelling

Podcast 26.05.2022

Fathom’s Chief Operating Officer, Dr Andrew Smith, and Chief Product Officer, Dr Matthew Jones, join Jo Paisley on the GARP Climate Risk Podcast to discuss the history of catastrophe modelling within the insurance industry.

Andrew Smith and Matthew Jones explore the history of catastrophe modelling, as well as the challenges and future opportunities arising from climate-related flood risk.

There is an increasing need for insurance firms to adopt appropriate catastrophe modelling that also factors in climate risk. This need is underlined by the Bank of England’s Climate Biennial Exploratory Scenario (CBES) update published earlier this week, which stated that firms must act sooner rather than later to avoid the unnecessary financial losses that come from a late response to the effects of climate change.

Key points include:

  • The development of physical risk modelling, including natural catastrophes
  • The need for financial professionals to utilise sophisticated climate analytics
  • Innovation and the importance of transparency within the market of catastrophe model vendors

Podcast transcript

 

Jo: Hello and welcome to GARP’s Climate Risk Podcast series, where we’ll be investigating how climate change is impacting the world of business and finance, and what this means for risk management.

Today we’re going to be looking at modelling the physical risks from climate change, with a particular focus on flood risk, and at incorporating climate change into risk models. This is one of the key challenges facing risk professionals today. Modelling physical risks, particularly from extreme events, is challenging, requiring specialist knowledge and complex datasets. Insurance companies and reinsurers, for example, have been modelling natural catastrophes for decades.

That’s why in today’s episode we’ll be looking at what lessons there are for risk professionals from the evolution of nat cat models, as well as the development of new and sophisticated climate analytics providers. We’ll see how an ecosystem has built up over a number of years, involving new data and modelling standards, multi-vendor software platforms and a culture of collaboration and innovation. So, without further ado, let’s start the show.

I’m your host, Jo Paisley, and in today’s instalment of the series I’m delighted to be joined by Andy Smith and Matt Jones from Fathom – a global leader in flood risk intelligence.

Andy is Chief Operating Officer at Fathom, which he co-founded in 2013 while undertaking his PhD in large-scale flood modelling at the University of Bristol. After completing his PhD, Andy held a postdoctoral research position at the university before becoming Chief Operating Officer at Fathom full-time in 2016.

Matt joined Fathom in 2022, leaving his position as Head of Catastrophe Risk Product at Nasdaq. Previously, Matt founded Cat Risk Intelligence, a catastrophe risk management consultancy. Before that, he was the Global Head of Catastrophe Management for the Zurich Insurance Group. Matt has a PhD in oceanography and remote sensing from University College London and is a co-author of ‘Natural Catastrophe Risk Management and Modelling: A Practitioner’s Guide’.

Andy, Matt, it’s great to have you both on the show. Thanks so much for joining us today.

Matt: It’s great to be here, Jo.

Andy: Yeah, great to be here, Jo. Thanks for the invite.

Jo: Fantastic. So I think the logical thing to do is to start with you, Matt, because your work at Nasdaq is really where this whole story begins.

Just to bring our audience up to speed, could you give us a brief history of nat cat models and how they’ve developed in relation to the insurance industry?

Matt: Sure. So you mentioned nat cat modelling. Just in case people haven’t come across that before, it refers to natural catastrophe modelling, which uses a type of model called ‘catastrophe models’ that has been used by the insurance industry for some 30 years now.

People started thinking about catastrophe modelling in the 1970s. Around that time, Don Friedman pioneered an approach of describing catastrophes in terms of events and thinking about the impact a large event can have on an insurance company. However, it wasn’t until the late 80s/early 90s that software was developed that enabled insurance companies to assess their risk.

This was driven by Hurricane Andrew, an extreme hurricane event in 1992, which caused a number of insolvencies. That gave impetus to catastrophe modelling as people started to think more about the risk management of significant events like hurricanes, earthquakes and floods. 

From there, insurers started to adopt these models and use them in their business. The thing that differentiates nat cat models from other types of models is this concept of a very large event that can impact many locations at roughly the same time, resulting in significant amounts of loss to a book of business. 

The other thing is that these nat cat models gave insurance companies not just the ability to understand their accumulation risk; the models also come with a financial calculator. The first thing that a catastrophe model calculates is how big an extreme event can get, what areas it might impact and what the probability is of it impacting those areas.

The next step is understanding the loss that flows from these events and how that interacts with the insurance structures.

Therefore these companies have two things that they are able to do when leveraging nat cat models: understand the science and utilise the financial calculator. 
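To make that two-step idea concrete, here is a minimal sketch, in Python, of how a catastrophe model’s financial module can turn a stochastic event set into an annual exceedance-probability curve and apply a simple policy structure. The event rates, losses, deductible and limit below are entirely hypothetical, and the calculation is heavily simplified compared with any real vendor model.

```python
import numpy as np

# Hypothetical stochastic event set: each event has an annual rate (expected
# occurrences per year) and a ground-up loss to the insured portfolio.
event_rates = np.array([0.002, 0.005, 0.01, 0.02, 0.05])   # events per year
ground_up   = np.array([500e6, 200e6, 80e6, 30e6, 5e6])     # loss in USD

def apply_structure(loss, deductible=10e6, limit=250e6):
    """Apply a simple per-occurrence deductible and limit (illustrative only)."""
    return np.clip(loss - deductible, 0.0, limit)

gross = apply_structure(ground_up)

def exceedance_probabilities(losses, rates):
    """Annual probability that at least one event exceeds each loss level,
    assuming events arrive as independent Poisson processes."""
    order = np.argsort(losses)[::-1]        # largest loss first
    cum_rate = np.cumsum(rates[order])      # rate of exceeding each loss level
    prob = 1.0 - np.exp(-cum_rate)          # Poisson rate -> annual probability
    return losses[order], prob

loss_levels, annual_ep = exceedance_probabilities(gross, event_rates)
for loss, p in zip(loss_levels, annual_ep):
    print(f"Loss >= {loss/1e6:6.1f}m USD with annual probability {p:.3f}")
```

Real models do this over tens of thousands of simulated events and far richer policy terms, but the shape of the calculation (hazard probabilities first, financial terms second) is the same.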

In the late 1980s/early 90s, two vendors really emerged to offer this service and they have dominated the market ever since. So for 30 years or so we have been in a bit of a duopolised market in terms of nat cat modelling. There are now around 15 to 20 cat modelling firms, of which Fathom is one. So there’s a bit of history there, Jo, I hope that’s helpful.

Jo: Yeah, that’s great. Thanks, Matt, you’ve set the scene very nicely. So these cat models are really looking at tail risks and providing the insurers with an estimate of the impact and the probability of these events. From there, they can price their insurance appropriately.

So let’s take the listeners to your work at Nasdaq, because I believe you set up a platform which allowed a variety of vendors to put their models onto it, and that has given insurance companies more of a choice. Is that right?

Matt: Yeah, absolutely. To talk about the work at Nasdaq, I first need to talk about something called the Oasis Loss Modelling Framework.

I mentioned just now that for most of its history the cat modelling market has been a bit of a duopoly: two firms have dominated and been very successful. I should give credit where credit’s due, as a lot of the thinking and the language about cat modelling that has emerged over the last 30 years is thanks to the work that these two firms have done.

However, having a choice of models and a choice of firms is a good thing in any market. The industry recognised back in about 2012 that it would be a good thing to open up the market, encourage different views of risk and encourage more model providers.

This entity called the Oasis Loss Modelling Framework was formed in 2012. It’s led by Dickie Whittaker, it was funded by insurance and reinsurance companies, and Lloyd’s of London was a key part of the thinking behind it. The whole aim of Oasis is to provide a technological standard, if you like, both for building cat models to a certain framework and for deploying them. One of the pain points of moving to a new platform is that you’ve got to train people up in the technology and so on.

If lots of different cat model developers like Fathom can pivot towards one open-source platform like Oasis, then it makes it a lot easier for insurers and reinsurers to adopt a whole new set of models through one framework and one standard, if you like. That’s what Oasis brought to the industry and still brings to the industry. It’s all open-source; the technology and software are on GitHub, where anyone can download them.

At Fathom, we have developed our models to the Oasis standard. If people want to run an Oasis model they can download the software, set up a platform and run it themselves. So what we did at Nasdaq was take that freely available software and create a service from it. To do that, you need to create a very good user experience and also to standardise things. One of the great things about the platform is that there is a variety of different cat modelling companies on it. There were 12 different cat modelling companies when I left, and loads of different models covering hundreds of different countries and perils. We found that every vendor had its own documentation, so we had to standardise that in order to make the user experience really good.

As a result, the user interface is slick and the performance of the platform is seamless and very usable. Creating that sort of community of vendors and creating a really good user experience was a key part of what we did at Nasdaq.

Another key thing we did at Nasdaq was to identify the main issues that people had using the Oasis platform. One of these was around input data, which sounds trivial. But actually, for any model to be useful, you have to put in good data and good information, which feeds into how catastrophe models consider variables such as insured properties, the locations of those properties and building characteristics. All of this information needs to be structured somehow.

There wasn’t actually an open and well-defined input format for the Oasis platform a few years back, yet that’s the first thing people wanted improved when we started working on it. To fix this, we started an industry working group that developed the Open Exposure Data standard, which was then syndicated with a broader industry group. That group then released the standard openly on GitHub to the market.

After a year and a half, Nasdaq handed that back to Oasis to curate and govern. Oasis have since taken it to another level.
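As an aside, the kind of structured exposure input Matt describes can be pictured with a toy record like the one below. The field names are simplified, illustrative stand-ins rather than the actual Open Exposure Data schema, which defines many more fields and strict coding schemes.

```python
from dataclasses import dataclass

@dataclass
class Location:
    """Simplified, hypothetical exposure record; real standards such as OED
    define far more fields and formal coding schemes."""
    account_id: str
    latitude: float
    longitude: float
    occupancy: str          # e.g. residential, commercial, industrial
    construction: str       # e.g. masonry, timber frame, steel frame
    num_storeys: int
    building_value: float   # insured value of the structure
    contents_value: float   # insured value of the contents

portfolio = [
    Location("ACC-001", 51.4545, -2.5879, "commercial", "masonry", 3, 4.0e6, 1.5e6),
    Location("ACC-002", 29.7604, -95.3698, "industrial", "steel frame", 1, 12.0e6, 8.0e6),
]

# With a shared format, every model vendor on a platform can consume the same
# records instead of each requiring its own bespoke import structure.
total_tiv = sum(loc.building_value + loc.contents_value for loc in portfolio)
print(f"Portfolio total insured value: {total_tiv/1e6:.1f}m")
```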

Jo: Fascinating. You’ve just outlined, in a way, a blueprint for how to democratise model creation, haven’t you? I feel like other parts of the financial system could probably learn quite a lot from this. I mean, insurers on the physical risk side are probably well ahead of the rest of the market. So the models at the moment: do they look ahead to how climate change is affecting the probabilities of these events, or their severity? Has that been fully incorporated, or is that still a work in progress?

Matt: Yeah, it’s a good question, Jo. Before I answer it, I want to just touch on what you said a minute ago about the insurance community.

I think you’re right. The insurance community has been using these models for about 30 years now, and I think other communities and other industries can probably use these tools too. For example, the banking sector could leverage these models in several different ways, and there’s a lot of information and knowledge embedded in the last 30 years.

So anyone interested in the impacts of large events, and in the financial quantification of those events, would find catastrophe models really useful.

In terms of climate change, that’s one of the really exciting areas right now. We’re just seeing catastrophe models emerge that are beginning to think about the future risk posed by climate change. You’ll be well aware, and I think your listeners will be well aware, that the regulation in this area is really emerging just now.

There are a number of different proposals out for consultation, so it is a really interesting and topical area. The Bank of England is to some extent leading the way here. There was the Climate Biennial Exploratory Scenario (CBES) last year, and Fathom released a climate-conditioned version of its UK flood catastrophe model specifically to help insurers and banks respond to the CBES requirements, particularly those around physical climate risk.

The Fathom model was, to my knowledge, the first climate-conditioned catastrophe model of its kind specifically tailored to help people meet their regulatory requirements. So I’d say it’s not yet prevalent, but people are absolutely working on climate-conditioned cat models, including Fathom.

It’s an exciting area, but obviously the climate requirement goes much wider than just insurance. I think we’ll see a whole new breed of people wanting to use and gain benefit from catastrophe models.

Jo: That is interesting, isn’t it… the ecosystem includes the regulators as well.

So, Andy, I need to turn to you now to discuss Fathom and how you have been building models. Maybe we could start with a little bit of background about you and, of course, how you came to set up Fathom?

Andy: Sure. I’m a flood risk scientist and I moved to the University of Bristol about 10 years ago, maybe a little more. I moved there because the University of Bristol has one of the best research groups in the world at building computational flood models. You can trace the heritage of dynamic flood models back to the University of Bristol.

That’s really where the idea for Fathom emerged, and it came out of our connections with the insurance market. Back when I was doing my PhD, a bunch of our research was funded by insurance companies trying to understand their risk and their exposure. One of the things that we realised was that the models we were building at the university actually had real-world application, and that was really spurred on by a single event.

In 2011 there was a really big flood in Thailand. It was a so-called unmodelled loss, where nobody saw it coming, and it cost billions of dollars. People listening may remember that the price of hard drives around the world went up astronomically in the year or two after the event. That’s because a single warehouse got flooded, and it had a huge proportion of the world’s manufactured hard drives in it, and they all got wiped out.

We realised back then that the things we were building were useful, and that our models were picking up some of these exposures. So that’s where the idea for the company emerged. At the time I was actually working on coupling flood models with climate models in a slightly more blue-sky science way, but I quickly started to pivot towards asking: okay, how do we build models in really data-poor areas?

Then my other co-founder Chris Sampson, who was also doing his PhD, and I started to have conversations with some of our insurance partners. A few of them loosely said that if we built these kinds of models they might license them, so that spurred us on to start thinking about perhaps starting a company, and that’s what we did.

Our PhD supervisor, Paul Bates, who’s also one of the co-founders, is a Fellow of the Royal Society and one of the most eminent flood scientists in the world. He was really encouraging about our idea and gave us the green light.

We’re now at about 35 people, so my time has moved slightly away from model building and is geared more towards steering the ship and running the company.

Jo: Brilliant, thanks for that overview. To model flood risk across the entire world is pretty ambitious, but you started in the US, is that right?

Andy: It was actually the other way around. We started off thinking about modelling the whole planet, and the reason for that is that we kind of assumed there would already be good provision of hazard and risk information in places like the United States.

I know this does sound ambitious, and in hindsight maybe it was just the naivety of youth, but we thought, yeah, we’ll build models of the whole planet, and that is still the main focus of our organisation. We want to build models for the whole planet that cover everywhere.

Our ability to do this has grown massively. Even in the last five years the models have become way better than they were in the beginning. When we formed the company, we built these global models and licensed them to organisations like the World Bank. But our insurance partners were then saying to us: that’s great, you’ve built a global model, but can you build a better model of the United States, because actually we don’t understand hazard and risk there very well?

So we started to build models in the US and ended up building some really amazing ones. The US is a great place to build models because you can actually test them quite well. There’s a lot of observational data against which you can check whether the models are sensible, and we’ve done that; we’ve published lots of papers outlining how we build the models.

In fact, some of the implications of our models in the US have also had a lot of press coverage. Our data has been on the front page of the New York Times and in many news outlets around the world. A recent paper from a few weeks ago was on Sky News, so there are a lot of significant implications from the results of these models.

Jo: Brilliant. I mean, I think a lot of people would think a flood was a flood, really, but I’m assuming that there are many different factors that go into thinking about floods? It would be great for our audience to understand some of the nuances, not just on the hazard side, but also how an insurance company or a bank thinks about the damage, the financial implications and the vulnerability of structures.

So maybe you could talk about the types of floods, and then how you think about vulnerability?

Andy: That’s a great question, and I could honestly spend an hour talking about just the hazard bit, but I’ll break it down to what I think is the highest level, and that is the different types of floods, because we have to model different types of floods in different ways.

The first of the primary climate-driven forms of flooding is fluvial flood risk. That’s when a river channel fills up, overtops its banks and then floods nearby floodplains. There’s also something called pluvial flooding, or flash flooding, and that’s where you get really intense rainfall over a short period of time; it’s the overland flow process itself, and very small flow pathways getting filled up. Urban flash flooding is something that people are familiar with, and we have to model that in a different way.

Then there’s also coastal risk, which is typically driven by high tides and storm surges: really high tides inundating coastal regions. All those climate-driven forms of flooding are things that we attempt to tackle here at Fathom. We’re building models of those processes for the whole planet.

The climate-driven element is what we focus on here at Fathom, but even if we have a perfect picture of hazard in a cat model, and even if we understand the hazard perfectly, it’s only one part of a risk model.

You also need to know about exposure and understand where the things that could be impacted by a flood are. If nothing is being impacted by a flood then it’s not a disaster. A disaster only occurs when you have the hazard interacting with humans and assets.

Understanding exposure itself, even just where things are, is frankly really difficult. I published a paper a couple of years ago working with Facebook, who have been building new population maps, mapping where people live around the world at higher and higher fidelity, or resolution. You get very different answers on the number of people exposed to flooding if you have better exposure datasets.

Then you also mentioned vulnerability. Let’s assume we have perfect hazard information, so we know where the flood is going to occur, and perfect information on where assets are. The interaction of the flood with the asset is what we term vulnerability: what is the resulting damage from a flood when it interacts with a certain building type? Even that component of a flood model is uncertain.

In fact, I would argue it is the most uncertain part, because when we look at observations, often the relationship between a depth of water and damage just isn’t there. So we have to try and treat those things probabilistically, which makes it very nuanced and complex.
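As an illustration of what treating vulnerability probabilistically can mean in practice, the sketch below draws the damage ratio at a given water depth from a distribution rather than from a single curve. The depth-damage curve and the spread are invented for illustration; they are not Fathom’s calibrated vulnerability functions.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_damage_ratio(depth_m):
    """Illustrative depth-damage curve: the damage ratio rises with depth and
    saturates towards 1 (total loss). Not a calibrated curve."""
    return 1.0 - np.exp(-0.6 * np.maximum(depth_m, 0.0))

def sample_damage_ratio(depth_m, n=10_000, spread=4.0):
    """Sample damage ratios from a Beta distribution centred on the mean curve,
    reflecting the large scatter seen between depth and observed damage."""
    m = np.clip(mean_damage_ratio(depth_m), 1e-3, 1 - 1e-3)
    a, b = m * spread, (1.0 - m) * spread
    return rng.beta(a, b, size=n)

for depth in (0.5, 1.5, 3.0):
    samples = sample_damage_ratio(depth)
    print(f"depth {depth:3.1f} m: mean damage {samples.mean():.2f}, "
          f"5th-95th percentile {np.percentile(samples, 5):.2f}-{np.percentile(samples, 95):.2f}")
```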

Jo: Yeah, so thinking about different types of events and the damage they may cause, presumably it would make a difference whether it was freshwater or saltwater, and how long the water has been around: the nature of the event.

I mean, it gets very complicated very quickly, doesn’t it? I’m curious where you’ve got to in terms of incorporating climate change into these models?

Andy: That’s a great question. I’m always somewhat tentative when talking about building flood models. I spent my PhD working on this stuff and have a really good understanding of just how uncertain it is. One thing I’ll say first is that building flood models at large scales is itself a very juvenile space. We’ve not been doing it for very long, so understanding risk right now is really hard. We’re only just beginning to do that well.

Trying to couple flood models with climate models becomes really difficult, frankly, because the climate models themselves currently aren’t designed to answer the questions that we care about as risk modellers. So there’s a big gap between the information they give us and the things that we require, especially as flood modellers, where we need really detailed, high-resolution information on rare events. They don’t really provide that, so you have to try and bridge that gap in some way.

We are doing that here at Fathom. We’re working with some of the leading climate groups in the world, trying to plug climate models into flood models. It is really difficult to do and there are really big uncertainties in doing so. Communicating that uncertainty can sometimes be quite difficult, because if you really understand the uncertainty then making decisions on it can be hard.

One of the other things to say about where we’re at right now in coupling flood models with climate models is that some of the research we’re doing here, working with the next generation of climate models, shows us that the answers will change in time. We will see different views of hazard and risk as climate models evolve, which is a reason why I think we should be really transparent about the models, how we build them and the methods that we apply.

It’s really important to do that so you can communicate the uncertainties and also give an understanding of why the answers may change over time. But we are doing it.

We’re building future flood risk models for the whole planet, and the reason we’re doing that is that people are requiring those answers; there’s a good reason why people should care about future flood risk. If you break it down to its simplest physical level, as the atmosphere warms up it will hold more water. There’s a relationship called the Clausius–Clapeyron law, and it dictates that as the air gets hotter it will hold more water, though it is more nuanced than that.
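For reference, the Clausius–Clapeyron relation Andy mentions can be written (for saturation vapour pressure $e_s$, latent heat of vaporisation $L_v$ and the gas constant for water vapour $R_v$) as:

```latex
\frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T} \;=\; \frac{L_v}{R_v T^2}
\;\approx\; \frac{2.5\times10^{6}\,\mathrm{J\,kg^{-1}}}{(461\,\mathrm{J\,kg^{-1}\,K^{-1}})\,(288\,\mathrm{K})^{2}}
\;\approx\; 0.065\ \mathrm{K^{-1}}
```

In other words, the atmosphere’s capacity to hold water vapour rises by roughly 6–7% for each degree of warming near typical surface temperatures, which is the simplest physical driver of amplified rainfall extremes.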

There is a lot of variation in space, so it’s not that things will get worse everywhere, but generally we do think that climate change will result in flood risk amplifying in the future. We are trying to model that, we’re doing it for the whole planet, and we’re being driven in some ways by the regulators, who are saying you need to understand this: you need to understand risk both now and in the future.

Fathom are building those kinds of models. Just to make it even more complex, I’ll round off future risk modelling with a point on changing exposure. If you’re concerned about future risk, then you need to care about the hazard but also about the exposure, and one of the things that we know is that exposure is changing hugely.

We’ve recently published a paper on the US, in which we looked at a future hazard model in combination with a future population model produced by the US government. What you see is that the uptick in future risk is driven largely by changing exposure; the thing that’s dominating the future change signal is the fact that people are living in riskier and riskier areas.

It’s a phenomenon that we see across the whole planet: people are still moving to urban centres, and all the best places to live are already occupied, so people are forced into fringe areas. That’s really going to drive a lot of risk.

My final point on future risk is on vulnerability, and that is that vulnerability itself is changing. This is a good news story.

One of the things that we are seeing is that buildings and assets are becoming more resilient over time, so the vulnerability part of a risk model seems to be improving. Again, it’s really nuanced and there’s lots of uncertainty, but we are attempting to do it.

Jo: That was super clear, thank you. I think it’s interesting that we’re living and building in places that we probably wouldn’t if we were better informed from a risk standpoint, which brings us to incorporating risks into pricing.

I want to pick up on the US insurance market, because I know there are a lot of changes going on there: it’s been opened up to private insurance companies and there’s a greater focus on trying to price the risk. I think our audience would be really interested in your thoughts on this as a case study?

Andy: I can certainly talk to how risk has been viewed in the US historically and how insurance has operated there. Flood insurance has actually been provided through a government programme, the National Flood Insurance Program, which provides insurance to those living in government flood zones. In the US, the government flood maps have been built in a way where engineers go out and essentially procure and build small-scale flood models of individual river reaches and individual parts of cities. That is really great: those models, where they exist, are what I would call ‘gold standard’ models, and they can cost hundreds of thousands of dollars for a view of hazard on a single river reach.

Unfortunately, the US is a huge place and those models are often out of date, or simply don’t exist. If you want a comprehensive view of risk in the US, you perhaps need to use those data in combination with large-scale models like ours.

If you don’t have that comprehensive view of hazard in the US, then pricing risk, particularly outside of those government flood zones, becomes really hard, because you have nothing to go on. So we are seeing a lot of demand for our hazard information in the US.

The other point here is that where private companies can compete for business in these government flood zones, often those maps are really out of date, having been built by engineers sometimes stretching back to the 1970s. So having an alternative view alongside those historical maps can actually allow you to be more competitive: you can try to identify points in those flood zones where perhaps it’s not as risky as the map suggests, and you can price competitively.

Jo: Hmm… that’s interesting. So what are the key differences between how your models are built versus those gold standard models in the US?

Andy: Really, I think the key word in describing the difference from an engineer going out and building a small-scale model is automation. Fathom’s whole modelling process is really a process of automation, so the models somewhat build themselves; we’re talking many hundreds of thousands of lines of code.

A good example is one specific feature of a flood model where there’s a big difference between an engineer doing it and Fathom doing it, and that is how you estimate the size of river channels. An engineer will go out and actually measure it, and that can be an expensive thing to do: you have to literally survey the channel, and you simply cannot do that across the whole of the US.

The US government has spent over 10 billion dollars building flood models and they are nowhere near having comprehensive coverage. The way we do it here at Fathom is to link the size of the river itself to an estimate of how often it fills up. We will say, for example, that in a given area this river will fill up once every two years. We have this thing called a channel solver; it’s like a pre-simulation that simulates how big the channel needs to be to convey the one-in-two-year flow, and in that way we can define how big the channel is based on some well-known geomorphic relationships. Then we can do that for every single channel in America, so we have estimates of how big every single river channel in the US is, from the biggest channels like the Mississippi all the way down to really small streams in cities.
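To give a feel for what a channel solver can look like, here is a minimal sketch that sizes a rectangular channel so that it just conveys an assumed one-in-two-year discharge, using Manning’s equation and a simple bisection search. The discharge, width, slope and roughness values are hypothetical, and Fathom’s actual solver and geomorphic relationships are more sophisticated; the point is only the principle of solving for the geometry that carries the bankfull flow.

```python
def manning_discharge(depth_m, width_m, slope, n_roughness):
    """Discharge (m^3/s) of a rectangular channel via Manning's equation:
    Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    area = depth_m * width_m
    wetted_perimeter = width_m + 2.0 * depth_m
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n_roughness) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

def solve_bankfull_depth(q_2yr, width_m, slope, n_roughness=0.035,
                         lo=0.01, hi=30.0, tol=1e-4):
    """Bisection for the depth at which the channel just conveys the
    one-in-two-year flow (the assumed bankfull discharge)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if manning_discharge(mid, width_m, slope, n_roughness) < q_2yr:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical reach: two-year flow of 150 m^3/s, 40 m wide, slope of 0.0005
depth = solve_bankfull_depth(q_2yr=150.0, width_m=40.0, slope=0.0005)
print(f"Estimated bankfull channel depth: {depth:.2f} m")
```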

Jo: Wow, that’s very powerful. Do you lose anything on accuracy, or can it update as you get more information? Is that the way it works?

Andy: We will lose something on accuracy. I think the biggest statement I can make about engineering models is that we validate our model against them, even though they are models themselves and all models are wrong. There’s a famous line: ‘all models are wrong, but some are useful’.

We think they are a kind of gold standard benchmark to validate our models against. One of the things that we did when we built the first version of our US model was to check how it compared against the FEMA catalogue, where it exists.

We did this in a research paper with Google, and the results of that paper were really quite profound, because what they showed is that our model can largely replicate the FEMA catalogue. Something built by a small group of researchers with, frankly, not a lot of resources could largely replicate a model that cost billions of dollars to produce.

We do lose something on accuracy, and there is some variation in different parts of the US. In some climates our model seems to perform worse and in others it performs better, but largely it can replicate the FEMA data.
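For readers curious how such model-to-model comparisons are typically scored, a common approach in flood model validation is to compare binary flood extents with a hit-rate or critical success index. The toy example below is purely illustrative and is not the exact metric or data used in the paper Andy mentions.

```python
import numpy as np

def critical_success_index(model_wet, benchmark_wet):
    """Critical Success Index (CSI) between two binary flood-extent maps:
    CSI = hits / (hits + misses + false alarms). 1.0 is a perfect match."""
    model_wet = np.asarray(model_wet, dtype=bool)
    benchmark_wet = np.asarray(benchmark_wet, dtype=bool)
    hits = np.sum(model_wet & benchmark_wet)
    misses = np.sum(~model_wet & benchmark_wet)
    false_alarms = np.sum(model_wet & ~benchmark_wet)
    return hits / (hits + misses + false_alarms)

# Toy 1D "maps": 1 = flooded cell, 0 = dry cell (entirely illustrative)
large_scale_model = [1, 1, 1, 0, 0, 1, 0, 0]
benchmark_map     = [1, 1, 0, 0, 1, 1, 0, 0]
print(f"CSI: {critical_success_index(large_scale_model, benchmark_map):.2f}")
```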

Jo: That’s interesting. So you sort of needed the FEMA data to be able to build your models, you need some of those gold standards, but then to make it generalisable, if you like, or to cover a bigger area, you come in with your techniques and get greater coverage. It seems to work very well; it’s a kind of very powerful ecosystem?

Andy: Actually, that’s a great way to put it. We need things like the FEMA models to validate ours, but they’re also needed in their own right, because they do provide very granular views of hazard in individual locations. Thinking about our model almost like a screening process is a really great way to think about it: we can provide a view of hazard and risk anywhere in the world. If you’re really concerned about the results that our model suggests, then you can delve in and perhaps work with an engineering company, or with organisations that operate with a more eyes-on-the-asset kind of approach.

Jo: Interesting. Well, we’ve covered a lot, haven’t we? We’ve had the history of cat modelling, we’ve talked about data standards, the Oasis Loss Modelling Framework, the Nasdaq platform, this whole ecosystem, and now we’ve dived into floods as well…

I suppose we should end, because I think I could go on for a long time on this subject. Any words of advice for people listening to the podcast who work in risk or in finance and know that their firms need to understand more about flood risk? Where should they start? Matt, do you want to come in on this?

Matt: Happy to, Jo. I think it’s tricky at the moment, isn’t it, because regulation is emerging and regulation does drive a lot of what’s needed here, but the starting point has got to be prioritisation. There’s so much that you can do in this area: we’ve covered one aspect of physical climate risk, and there are transition risks and liability risks to think about as well.

When we’re thinking about climate change, prioritisation is the most important thing in the context of physical climate risk: working out which perils matter to the business, which timescales matter and which scenarios matter. For the moment, firms will have to be quite flexible, because the regulation is still emerging, so it’s not quite clear yet.

So yeah, the main things are working out which things matter the most and also which metrics matter. It’s not completely clear yet which are the best metrics to track or monitor. These are the most important things. Once you’ve done that, of course, you can establish reporting guidelines, limits and tolerances, and monitor and report against those. It’s an emerging area, but a very topical and focused one.

Jo: Very wise words. Focus on what matters. Andy, any last words of advice from you?

Andy: Words of advice. I’d probably pick two, and that is: be cautious and be inquisitive. I urge caution because this is, frankly, a juvenile space; this is new science. I’d say be inquisitive with the solutions that are offered to you, because there are lots of solutions emerging in this space. There’s a new demand, a new market, frankly, for climate risk information, which is driving lots of new service providers to the market.

Delve into these solutions and understand what they’re telling you. It’s the reason why we’re so keen here at Fathom to be very transparent with our methods, publish everything and let you know exactly how we do it. I think that’s going to be a critically important part of these climate service providers having credibility. So caution and inquisitiveness would be my words of advice.

Jo: Fantastic, great words to end on, and thank you for taking us on this journey; we’ve covered an awful lot of ground here and it’s been fantastic. So good luck in the future and thanks for joining us today. I really enjoyed the conversation.

Matt: Thanks very much, thanks, Jo.

Andy: Thanks, Jo, cheers, it’s been great.