Fundamental limits to flood inundation modeling

Publication

Nature Water

Large-scale models · 03.07.2023

In this article, Fathom’s Chairman Professor Paul Bates explores the limits to flood inundation modeling, and explains why it’s still difficult to accurately predict flood hazards at the scale of individual buildings.

Flood inundation modeling is easy, at least in theory. The physical forces and basic equations describing flood waves have been known since the 1870s, and computer models that solve them require only three data sets as input: terrain, boundary conditions and friction. Once a model is built, the only other requirement is independent observations of water level and flood extent to check that its predictions are adequate.
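To make the structure above concrete, here is a minimal sketch (not Fathom's code, and the names are hypothetical): a container for the three model inputs, and a standard check of predicted against observed flood extent using the critical success index widely used in flood-extent validation.

```python
# Illustrative sketch: the three inputs an inundation model needs, plus a
# check of predicted vs. observed wet/dry cells.

from dataclasses import dataclass

@dataclass
class ModelInputs:
    terrain: list[list[float]]  # ground elevation grid (m)
    boundary_inflow: float      # upstream discharge boundary condition (m^3/s)
    friction: float             # channel/floodplain roughness, e.g. Manning's n

def fit_statistic(predicted: list[bool], observed: list[bool]) -> float:
    """Critical success index over wet cells.

    F = hits / (hits + misses + false alarms); 1.0 is a perfect match.
    """
    hits = sum(p and o for p, o in zip(predicted, observed))
    misses = sum(o and not p for p, o in zip(predicted, observed))
    false_alarms = sum(p and not o for p, o in zip(predicted, observed))
    denom = hits + misses + false_alarms
    return hits / denom if denom else 1.0

# Two cells agree as wet, one is a false alarm -> F = 2/3
print(fit_statistic([True, True, False, True], [True, False, False, True]))
```

The point of the sketch is how little the model formally needs: everything else — the difficulty — lies in the quality of those three inputs and of the observations used to validate against.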

The past 20 years have seen huge improvements in terrain mapping. The advent of airborne laser scanning (LiDAR) in the early 2000s brought a new ‘gold standard’ of elevation input for inundation models. However, LiDAR is freely available for less than 10% of the Earth’s surface. To fill the gaps, digital elevation model (DEM) data sets must be used, and these in their own right allow reasonably skilful inundation prediction at large scales. One such DEM is Fathom’s own Forest and Buildings removed Copernicus DEM, FABDEM, which has been scrutinized by independent scientists, who ranked it as the “best 1 arcsecond global DEM”.
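The gap-filling described above can be sketched as a simple per-cell merge: take LiDAR elevations where a survey exists, and fall back to a global DEM such as FABDEM elsewhere. This is a hypothetical illustration of the idea, not an actual Fathom workflow; NaN marks cells with no LiDAR coverage.

```python
# Hypothetical gap-filling: prefer LiDAR elevation per cell, fall back to a
# global DEM (e.g. FABDEM) where no LiDAR survey exists (marked as NaN).

import math

def merge_elevations(lidar: list[float], global_dem: list[float]) -> list[float]:
    """Per cell: LiDAR value when present, otherwise the global DEM value."""
    return [g if math.isnan(l) else l for l, g in zip(lidar, global_dem)]

nan = float("nan")
# Middle cell has no LiDAR, so the global DEM value is used there.
print(merge_elevations([10.2, nan, 9.8], [10.5, 11.0, 9.9]))  # [10.2, 11.0, 9.8]
```

In practice the merge is done on large raster grids with reprojection and vertical-datum alignment, but the priority logic is the same.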

Learn more about Fathom’s 30 m global map of elevation with forests and buildings removed, FABDEM

Despite these advances, our ability to simulate inundation over large areas remains limited, and progress in improving the quality of the required data has been slow. Advances are on the horizon in certain areas: the Surface Water and Ocean Topography (SWOT) satellite, launched in December 2022, promises valuable survey data on all the Earth’s oceans and inland water bodies – measurements never before taken simultaneously and at this level of detail. The UKSA Enabling Technologies Programme has funded a joint Fathom and University of Bristol project to explore how SWOT data can be used to parametrize, validate and be integrated into global flood models. But the results themselves are years away.

Want to learn more about SWOT? Read our article on ‘Explaining SWOT’ to discover its history, purpose and the valuable data it will collect.

Undeterred by – or perhaps more accurately, unaware of – all the constraints on flood modeling, the insurance, banking and financial sectors are only increasing their calls for models that can provide precise water levels on the scale of individual buildings. 

So how do flood modelers work with these fundamental limitations, without overpromising to clients? The key is transparency. Informed decision making relies on flood modelers highlighting where uncertainties lie.

Read the full piece in Nature Water 

Or download a readable version

At Fathom, we believe that transparency is key to working with the inescapable uncertainties found in flood models. To facilitate this, Fathom’s Metadata allows users to understand how these influencing factors affect the certainty of risk estimates.

Related research

Research papers:

Caravan – A global community dataset for large-sample hydrology

Use of hydrological models in global stochastic flood modeling

Uneven burden of urban flooding

Assessing flooding impact to riverine bridges: an integrated analysis

Increased population exposure to Amphan-scale cyclones under future climates

A 30 m global map of elevation with forests and buildings removed