Report on guidelines for treatment of variance in forecasts, structural uncertainty, risk communication and acceptable levels

Ecosystem-based fisheries management (EBFM), and indeed much fisheries management around the world today, depends on predictions from often-complex computer models. We need models to help us understand the present (for example, how large a fish stock currently is) as well as the future (for example, whether a fish stock could shrink due to rising sea temperatures). We can then draw on these predictions to develop management strategies and measures designed to keep fish stocks and their wider ecosystems healthy, and to benefit the livelihoods and communities that fish stocks support.

But as with all science, modelling involves uncertainties. The design of a model, our understanding of the system we are trying to simulate and the quality of the data fed into the model all affect how closely the resulting predictions match present or future reality, and how certain we can be of those predictions. This means it is essential to critically assess our models and everything that goes into them. It also means we need to communicate the uncertainty in our predictions to the people who use them, so that they can make fully informed decisions. In this report, part of our Communication work theme, SEAwise leads set out best practice guidelines for using our models and communicating their uncertainties.

The report covers important considerations and guidance for: model design and inputs; estimating uncertainty in models and their results; and communicating that uncertainty. To develop this guidance, we reviewed the existing literature on ways to assess the “predictive capability” of models (that is, a model’s ability to predict future events from past information) and held several workshops with SEAwise project participants and members of the wider scientific and stakeholder communities. Their input was critical to developing our ‘uncertainty guidelines’, which form a key pillar of the project’s output quality assurance process.

We identified a range of sources of uncertainty that affect a model’s predictive capability. These include “observation uncertainty” (limitations in the data we have), “model uncertainty” (the difference between the model and the ‘real world’ system it represents) and “process uncertainty” (natural year-to-year variation in ecosystem processes, such as plankton availability, and in species’ life histories, such as the number of eggs a population produces).
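
To make these categories concrete, here is a minimal, purely illustrative sketch in Python (a toy model of our own, not one from the report): a simple stock whose true trajectory wobbles with process uncertainty, and which we only ever see through noisy survey data, i.e. observation uncertainty.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Toy logistic ("surplus production") stock model: purely illustrative,
# not a model from the SEAwise report.
r, K, years = 0.4, 1000.0, 30          # growth rate, carrying capacity, horizon
true_biomass = np.empty(years)
true_biomass[0] = 500.0

for t in range(1, years):
    growth = r * true_biomass[t - 1] * (1 - true_biomass[t - 1] / K)
    # Process uncertainty: real ecosystem processes (plankton supply,
    # egg production, and so on) vary naturally from year to year.
    process_noise = rng.normal(0.0, 30.0)
    true_biomass[t] = max(true_biomass[t - 1] + growth + process_noise, 1.0)

# Observation uncertainty: surveys never measure the stock exactly, so the
# data we feed into a model is only a noisy view of the true biomass.
observed = true_biomass * rng.lognormal(mean=0.0, sigma=0.2, size=years)

print(observed.round(1))
```

Model uncertainty is what remains even with perfect data: the toy growth equation above is itself only an approximation of any real stock’s dynamics.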

One tempting response to all these uncertainties is to build ever more detailed models. Is more complexity always better, then? Not quite. The more complex the model, the less likely it is to overlook important real-world processes and interactions, but the more likely it is to run into computational limits or large data requirements. Does this mean models are useless? Not at all. But it does mean that we have to do everything we can to minimise uncertainties and communicate those that remain. SEAwise’s best practice recommendations to achieve this are to:

  • Define the purpose of the model and evaluate its predictive capability, in the context of the spatial and/or temporal scale required by the research question.
  • Build on lessons learned from other fields and tools, given the value in taking an interdisciplinary approach and learning from other modelling communities such as climate modelling.
  • Document the data and model, to ensure transparency and that future modellers can try to replicate research findings as needed.
  • Identify relevant aspects of uncertainty and how much uncertainty is acceptable, by engaging with stakeholders to find out which uncertainties most concern them, what range of uncertainty is “acceptable” for decision-making, and how best to communicate it.
  • Consider a range of tools for communicating uncertainty, including visual ones such as traffic light systems (illustrated in the sketch after this list), in consultation with stakeholders who are likely to use the modelling results.
  • Increase transparency around uncertainty, by making this information openly available in readily understandable language, including the model’s intended purpose, what it should not be used for, and any related data gaps.
  • Conduct meaningful peer review, drawing on diverse peer reviewer panels formed of both scientists and other stakeholders, and focusing on the performance of models as well as the quality of uncertainty communication.
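
As an illustration of the traffic-light idea mentioned above, here is a hypothetical sketch; the rule, thresholds and function name are our own assumptions rather than anything prescribed in the report. It maps a stock estimate and its confidence interval onto a colour relative to a management reference point.

```python
def traffic_light(estimate: float, ci_width: float, reference: float) -> str:
    """Map a stock estimate and its uncertainty to a traffic-light colour.

    Hypothetical rule: red if even the optimistic edge of the confidence
    interval falls below the reference point, green if even the pessimistic
    edge sits above it, amber otherwise.
    """
    lower = estimate - ci_width / 2
    upper = estimate + ci_width / 2
    if upper < reference:
        return "red"      # stock almost certainly below the reference point
    if lower > reference:
        return "green"    # stock almost certainly above the reference point
    return "amber"        # uncertainty spans the reference point

# Example: an estimate of 950 kt with a 300 kt confidence interval,
# against a reference point of 1000 kt, comes out "amber".
print(traffic_light(950.0, 300.0, 1000.0))
```

The point of such a rule is that the colour reflects not just the best estimate but how confidently it sits above or below the reference point; in practice the thresholds would be agreed with stakeholders.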

The best practice guidelines in this report are a living document that we will add to and refine over time. We’re already applying our own recommendations to all modelling work we undertake, aiming for the “sweet spot” where a model is complex enough to be realistic yet certain enough to be useful to people making decisions. The SEAwise project is committed to a co-creation approach to developing and using models, with active consideration of what fisheries stakeholders think, know, and need. Doing so will ensure that our research outputs can effectively inform the implementation of EBFM in Europe.

Read the full report here.
