Model fitting
I fitted the models to the lag time data in a Bayesian framework in R
version 4.0.3 (R Core Team 2017), using a Metropolis-Hastings algorithm
(modified from the R package MHadaptive; Chivers 2012) to estimate the
posterior distributions of the parameters. For all parameters I specified
relatively uninformative priors: a normal distribution with mean zero and
standard deviation 10.
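As an illustrative sketch of this step (not the exact code used for the analysis), a random-walk Metropolis-Hastings sampler with independent Normal(0, 10) priors could be written as below, where log_lik() is a placeholder for the model-specific log-likelihood of the lag time data and MASS::mvrnorm() draws the multivariate normal proposals.

```r
# Log-posterior: independent Normal(0, 10) priors plus the model log-likelihood
log_posterior <- function(pars, data) {
  log_prior <- sum(dnorm(pars, mean = 0, sd = 10, log = TRUE))
  log_prior + log_lik(pars, data)  # log_lik() is model-specific (placeholder)
}

# Random-walk Metropolis-Hastings sampler returning a matrix of posterior draws
run_mh <- function(init, data, n_iter, prop_sigma) {
  draws <- matrix(NA_real_, nrow = n_iter, ncol = length(init))
  current <- init
  current_lp <- log_posterior(current, data)
  for (i in seq_len(n_iter)) {
    proposal <- MASS::mvrnorm(1, mu = current, Sigma = prop_sigma)
    proposal_lp <- log_posterior(proposal, data)
    if (log(runif(1)) < proposal_lp - current_lp) {  # Metropolis acceptance rule
      current <- proposal
      current_lp <- proposal_lp
    }
    draws[i, ] <- current
  }
  draws
}
```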
I ran the models in an adaptive phase for 20000 iterations, using the
final 2000 iterations to determine the proposal distribution parameters
and the starting values for the subsequent chains.
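The adaptive phase could then look something like the following sketch, in which the empirical covariance of the final 2000 pilot draws is used as the proposal covariance; n_pars, lag_data and the pilot proposal scale are illustrative placeholders rather than the values used in the analysis.

```r
# Pilot (adaptive) run of 20000 iterations with a simple diagonal proposal
pilot <- run_mh(init = rep(0, n_pars), data = lag_data,
                n_iter = 20000, prop_sigma = diag(0.1, n_pars))

pilot_tail <- pilot[18001:20000, ]  # final 2000 iterations of the pilot run
prop_sigma <- cov(pilot_tail)       # empirical covariance as the proposal covariance
start_vals <- pilot_tail[2000, ]    # last pilot draw as the starting values
```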
I then ran three chains for 10000 iterations using the proposal
distribution and starting values from the adaptive phase, and checked
the resulting chains for convergence using the Gelman-Rubin statistic
(Gelman & Rubin 1992), which was less than 1.1 for all parameters,
indicating adequate convergence.
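A minimal sketch of the sampling phase and the convergence check, reusing the objects from the adaptive-phase sketch above, might use the coda package, whose gelman.diag() function computes the Gelman-Rubin statistic (potential scale reduction factor) across chains.

```r
library(coda)

# Three chains of 10000 iterations using the tuned proposal and starting values
chains <- lapply(1:3, function(i) {
  run_mh(init = start_vals, data = lag_data,
         n_iter = 10000, prop_sigma = prop_sigma)
})

# Gelman-Rubin diagnostic: point estimates below 1.1 taken as adequate convergence
chains_mcmc <- do.call(mcmc.list, lapply(chains, mcmc))
gelman.diag(chains_mcmc)
```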
I identified the best-performing model using the approximate
leave-one-out cross-validation (LOO) criterion (Vehtari et al. 2017),
which estimates the predictive accuracy of each model. LOO is considered
an improvement on other widely used information-criterion-based model
selection measures such as AIC, WAIC and DIC (see Vehtari et al. 2017
for details). I used Pareto smoothed importance sampling to estimate LOO
(PSIS-LOO) and compared models by calculating the difference in expected
predictive accuracy (ΔPSIS-LOO) between each model and the
best-performing model on the deviance scale using the loo package in R
(Vehtari et al. 2019), with smaller PSIS-LOO values indicating better
predictive accuracy.
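The comparison could be sketched with the loo package as follows, where log_lik_m1 and log_lik_m2 are hypothetical matrices of pointwise log-likelihood values (posterior draws by observations) for two candidate models; the r_eff argument of loo() can additionally be supplied for MCMC draws but is omitted here for brevity.

```r
library(loo)

# PSIS-LOO for each candidate model from its pointwise log-likelihood matrix
loo_m1 <- loo(log_lik_m1)
loo_m2 <- loo(log_lik_m2)

loo_compare(loo_m1, loo_m2)    # differences in expected predictive accuracy (elpd)
loo_m1$estimates["looic", ]    # PSIS-LOO on the deviance scale (-2 * elpd_loo)
```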