Flood forecasting meets machine learning
Last week, we attended the Google ‘Flood Forecasting Meets Machine Learning’ workshop in Tel Aviv, where our Chairman Paul Bates delivered the Faculty Keynote address. In recent years Google have begun work on flood forecasting, with initial efforts focused on India.
This workshop represented an opportunity for the wider flood community to get together with Google and exchange knowledge and ideas. Perhaps unsurprisingly, the Google team have a unique skill-set in Machine Learning and AI techniques and are truly world-leading in those fields. The wider flood community, ourselves included, has spent many years developing these kinds of models and has a wealth of contextual knowledge in building them. The combination of these two skill-sets is, we think, an exciting prospect.
One way in which the kinds of data that Google are producing could help in building better flood risk products lies in the development of the input datasets used in our models. In recent years, our research has demonstrated that when good input data are available, good flood risk models can be built; when input data are poor, model quality quickly declines. The importance of good input data is not limited to the hazard model. At the workshop, we presented recent research outlining the importance of reliable exposure datasets when undertaking flood risk calculations. Using new population data, produced by Facebook using Machine Learning and AI, we showed that current estimates of populations exposed to flooding misrepresent risk owing to the poor quality of exposure data.
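At its simplest, an exposure calculation of the kind described above overlays a flood hazard layer with a population layer and sums the population in cells where modelled flooding exceeds a threshold. The sketch below illustrates the idea on toy grids; the arrays, depth threshold, and values are illustrative assumptions, not Fathom's actual datasets or method:

```python
import numpy as np

# Hypothetical gridded inputs: modelled flood depth (metres) and
# population count per cell. Values are invented for illustration.
flood_depth = np.array([
    [0.0, 0.3, 1.2],
    [0.0, 0.0, 0.6],
    [0.8, 0.1, 0.0],
])
population = np.array([
    [120, 450, 300],
    [ 80, 900, 150],
    [200,  60,  40],
])

# A cell counts as "exposed" if modelled depth exceeds an assumed cut-off.
DEPTH_THRESHOLD = 0.5  # metres; an arbitrary choice for this sketch

exposed_population = population[flood_depth > DEPTH_THRESHOLD].sum()
print(exposed_population)  # 300 + 150 + 200 = 650
```

The quality of both grids drives the result: a coarse or misplaced population layer changes which cells coincide with flooding, which is exactly why better exposure data shifts national-scale risk estimates.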
“The importance of having good input data is clear, both in terms of modelling hazard and exposure. There is now a strong case to suggest that enabling better estimates of flood risk globally is no longer a modelling problem, it is a data problem. The work we’ve undertaken with the Facebook population data demonstrates how AI may be utilised to enable more robust modelling. Efforts by organisations such as Google have the potential to take this even further. The development of better input data, such as better terrain datasets, would truly revolutionise our capabilities globally, and we should all be excited that Google are making efforts in this space.”
Dr Andrew Smith, Fathom’s COO
New estimates of flood exposure in developing countries using high-resolution population data
A Nature Communications publication in which we demonstrate the critical importance of having both high-resolution hazard and high-resolution population data when assessing at-risk populations.