Arthur Bayle and 3 more

Remote sensing is an invaluable tool for tracking decadal-scale changes in vegetation greenness in response to climate and land use changes. While the Landsat archive has been widely used to explore these trends and their spatial and temporal complexity, its inconsistent sampling frequency over time and space raises concerns about its ability to provide reliable estimates of annual vegetation indices such as the annual maximum NDVI, commonly used as a proxy for plant productivity. Here we demonstrate, for seasonally snow-covered ecosystems, that greening trends derived from the annual maximum NDVI can be significantly overestimated because the number of available Landsat observations increases over time, and, most importantly, that the magnitude of this overestimation varies along environmental gradients. Typically, areas with a short growing season and few available observations show the largest bias in greening trend estimates. We show that these conditions are met in late-snowmelt habitats in the European Alps, which are known to be particularly sensitive to temperature increases and present conservation challenges. In this critical context, almost 50% of the magnitude of the estimated greening can be explained by this bias. Our study calls for greater caution when comparing greening trend magnitudes between habitats with different snow conditions and observation availability. At a minimum, we recommend reporting information on the temporal sampling of the observations, including the number of observations per year, in long-term studies based on Landsat observations.
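To make the mechanism concrete, the sketch below (not from the paper; all curve parameters, noise levels, and observation counts are hypothetical) simulates annual-maximum compositing of a fixed seasonal NDVI curve. As the number of acquisitions per year grows, the observed annual maximum creeps upward and a spurious positive trend emerges even though the underlying vegetation never changes.

```python
import numpy as np

rng = np.random.default_rng(42)

def seasonal_ndvi(doy, peak=0.6, peak_doy=200.0, width=40.0):
    """Idealized bell-shaped seasonal NDVI curve (hypothetical parameters)."""
    return peak * np.exp(-0.5 * ((doy - peak_doy) / width) ** 2)

years = np.arange(1984, 2024)
# Observation count grows over time, mimicking the densifying Landsat archive.
n_obs = np.linspace(4, 20, years.size).round().astype(int)

max_ndvi = []
for n in n_obs:
    doys = rng.uniform(120, 280, n)                       # random acquisition dates
    obs = seasonal_ndvi(doys) + rng.normal(0.0, 0.02, n)  # plus sensor/atmosphere noise
    max_ndvi.append(obs.max())                            # annual-maximum composite

# A positive trend appears although the underlying curve never changed.
slope_per_decade = 10 * np.polyfit(years, max_ndvi, 1)[0]
print(f"Spurious greening trend: {slope_per_decade:+.4f} NDVI per decade")
```

The bias is an extreme-value effect: the maximum of n noisy samples of a peaked curve increases with n, both because denser sampling is more likely to hit the true seasonal peak and because the largest noise excursion grows with sample size. It is strongest where the seasonal peak is narrow, i.e., where the snow-free growing season is short.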

Hafsa Bouamri and 5 more

Estimating snow water equivalent (SWE) and snowmelt in semi-arid mountain ranges is an important but challenging task, due to the large spatial variability of the seasonal snow cover and the scarcity of field observations. Solar radiation is often added as a snowmelt predictor in empirical snow models to account for topographically induced variations in melt rates, at the cost of increased model complexity. This study examines the added value of different treatments of solar radiation within empirical snowmelt equations. Three spatially distributed, enhanced temperature-index models that respectively include potential clear-sky direct radiation (HTI), incoming solar radiation (ETIA), and net solar radiation (ETIB) were compared with a classical temperature-index model (TI) to simulate SWE in the Rheraya basin in the Moroccan High Atlas. Simulated snow cover area (SCA) was extensively validated against blended MODIS snow cover products over the 2003-2016 period. We found that the models enhanced with a radiation term, particularly ETIB, which includes net solar radiation, explain the observed SCA variability better than the TI model. However, differences in model performance were overall small, as were the differences in basin-averaged simulated SWE and melt rates. SCA variability was found to be dominated by elevation, which the TI model captures well, while the ETIB model best explained the additional SCA variability. The small differences in model performance for predicting spatiotemporal SCA variations are interpreted to result from the averaging out of the topographically induced variations in melt rates simulated by the enhanced models, a situation favored by the rather uniform distribution of slope aspects in the basin. Moreover, aggregating simulated SCA from the 100 m model resolution to the MODIS resolution (500 m) suppresses key spatial variability related to solar radiation, which further attenuates the differences between the TI and the radiative models.
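For reference, the sketch below writes out the four melt formulations in the standard form of temperature-index models (Hock-type for HTI, Pellicciotti-type for the ETI variants); the coefficient values and forcing data are hypothetical placeholders, not the calibrated parameters of this study.

```python
def melt_ti(t, ddf=4.0, t_thresh=0.0):
    """Classical temperature-index model: M = DDF * max(T - Tt, 0),
    with DDF in mm w.e. degC-1 day-1 (hypothetical value)."""
    return ddf * max(t - t_thresh, 0.0)

def melt_hti(t, i_pot, mf=2.0, rf=0.01):
    """Hock-type enhanced model: the melt factor is augmented by potential
    clear-sky direct radiation I_pot (W m-2), which encodes slope, aspect,
    and shading effects."""
    return (mf + rf * i_pot) * max(t, 0.0)

def melt_etia(t, g, tf=1.5, srf=0.01, t_thresh=1.0):
    """Enhanced TI with incoming solar radiation G (W m-2)."""
    return tf * t + srf * g if t > t_thresh else 0.0

def melt_etib(t, g, albedo, tf=1.5, srf=0.01, t_thresh=1.0):
    """Enhanced TI with net solar radiation (1 - albedo) * G: melt slows
    when fresh, bright snow reflects most of the shortwave input."""
    return tf * t + srf * (1.0 - albedo) * g if t > t_thresh else 0.0

# One grid cell, one day, with hypothetical forcing values.
t, g, i_pot, albedo = 5.0, 250.0, 300.0, 0.7
for name, m in [("TI", melt_ti(t)), ("HTI", melt_hti(t, i_pot)),
                ("ETIA", melt_etia(t, g)), ("ETIB", melt_etib(t, g, albedo))]:
    print(f"{name}: {m:5.1f} mm w.e. per day")
```

The only difference between ETIA and ETIB is the albedo factor: ETIB scales the shortwave term by (1 - albedo), so simulated melt responds to snow aging and hence varies more strongly with aspect and shading, which is consistent with ETIB explaining the most additional SCA variability.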