Effects of Meteorological and Ancillary Data, Temporal Averaging, and Evaluation Methods on Model Performance and Uncertainty in a Land Surface Model


Bibliographic Details
Published in: Journal of Hydrometeorology, December 2015, Vol. 16 (6), pp. 2559-2576
Main Authors: Ménard, Cécile B., Ikonen, Jaakko, Rautiainen, Kimmo, Aurela, Mika, Arslan, Ali Nadir, Pulliainen, Jouni
Format: Article
Language: English
Subjects: R&D
Description
Summary: A single-model 16-member ensemble is used to investigate how external model factors affect model performance. Ensemble members are constructed with the Joint UK Land Environment Simulator (JULES) land surface model (LSM), with different choices of meteorological forcing [in situ, NCEP Climate Forecast System Reanalysis (CFSR)/CFSv2, or Water and Global Change (WATCH) Forcing Data ERA-Interim (WFDEI)] and ancillary datasets (in situ or remotely sensed), and with four time step modes. Effects of temporal averaging are investigated by comparing hourly, daily, monthly, and seasonal ensemble performance against snow depth and snow water equivalent, soil temperature and moisture, and latent and sensible heat fluxes from one forest site and one clearing in the boreal ecozone of Finnish Lapland. Results show that meteorological data are the largest source of uncertainty; differences in ancillary data have little effect on model results. Although generally informative and representative, aggregated performance metrics fail to identify "right results for the wrong reasons"; to do so, scrutiny of the time series and of the interactions between variables is necessary. Temporal averaging over longer intervals improves most metrics, with the notable exception of bias, which increases, by reducing the effects of internal data and model variability on the model response. Model evaluation during the shoulder seasons (fall and spring) identifies weaknesses in the reanalysis datasets that conventional seasonal evaluation (winter and summer) neglects. Given the influence of snow on the range of results obtained with the same model, let alone with identical simulations evaluated under different temporal averaging, it is recommended that systematic evaluation and quantification of errors and uncertainties in snow-covered regions be incorporated into future efforts to standardize LSM evaluation methods.
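The abstract's point about temporal averaging can be illustrated with a minimal synthetic sketch (the series, error magnitudes, and window length below are illustrative assumptions, not data from the study): averaging model and observations over longer, equal-length windows cancels much of the random component of the error, so RMSE falls, while the mean bias is mathematically unchanged by equal-window averaging and therefore accounts for a growing share of the remaining error.

```python
import random

random.seed(0)

N_DAYS, PER_DAY = 100, 24  # hypothetical 100-day hourly record

# Synthetic "observations" and a "model" with a systematic warm bias
# (+0.5) plus random hourly error (sd = 3); values are arbitrary.
obs = [10.0 + random.gauss(0, 2) for _ in range(N_DAYS * PER_DAY)]
mod = [o + 0.5 + random.gauss(0, 3) for o in obs]

def rmse(a, b):
    """Root-mean-square error between two equal-length series."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def bias(a, b):
    """Mean error (model minus observation)."""
    return sum(x - y for x, y in zip(a, b)) / len(a)

def daily_mean(x):
    """Average an hourly series into non-overlapping daily windows."""
    return [sum(x[i:i + PER_DAY]) / PER_DAY
            for i in range(0, len(x), PER_DAY)]

hourly_rmse = rmse(mod, obs)
daily_rmse = rmse(daily_mean(mod), daily_mean(obs))
hourly_bias = bias(mod, obs)
daily_bias = bias(daily_mean(mod), daily_mean(obs))

# Daily averaging shrinks RMSE (random error cancels within each day),
# but the mean bias is identical at both time scales, so the systematic
# error dominates the aggregated metric.
print(f"hourly RMSE = {hourly_rmse:.2f}, daily RMSE = {daily_rmse:.2f}")
print(f"hourly bias = {hourly_bias:.2f}, daily bias = {daily_bias:.2f}")
```

This is one simple reading of the abstract's remark: as averaging intervals lengthen, metrics such as RMSE improve while the bias component does not shrink, so its relative weight in the evaluation increases.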
ISSN: 1525-755X, 1525-7541