Lesson preamble

Learning objectives

  • Learn the benefits of model selection and how it differs from traditional inferential statistics
  • Understand the use of AIC and AICc
  • Use AICc to perform model selection on the RIKZ data
  • Use AICc to perform model selection and model averaging on a more complicated ecological dataset

Lesson outline

Total lesson time: 2 hours

  • Brief intro to model selection (10 min)
  • Understanding AIC and AICc (20 mins)
  • Model selection of RIKZ dataset (30 mins)
  • Model selection and model averaging of more complicated ecological data (60 mins)

Setup

  • install.packages("dplyr") (or tidyverse)
  • install.packages("ggplot2") (or tidyverse)
  • install.packages("lme4")
  • install.packages("lmerTest")
  • install.packages("MuMIn")

Introduction to model selection

Up to now, when faced with a biological question, we have formulated a null hypothesis, generated a model to test that null hypothesis, summarized the model to get the value of a test statistic (e.g. t-statistic, F-value, etc.), and rejected the null hypothesis when the probability of observing a test statistic at least that extreme under the null is below some arbitrary threshold (e.g. P < 0.05). This low probability allows us to reject the null hypothesis in favour of the more biologically interesting alternative hypothesis. This is an inferential, or frequentist, approach to statistics.

An alternative approach is to simultaneously test multiple competing hypotheses, with each hypothesis being represented in a separate model. This is what model selection allows and it is becoming increasingly used in ecology and evolutionary biology. It has a number of advantages:

  1. It does not rely on a single model.
  2. Models can be ranked and weighted according to their fit to the observed data.
  3. The best-supported models can be averaged to obtain parameter estimates.

The most challenging part of model selection is coming up with a series of hypothesis-driven models that adequately capture the processes and patterns you are interested in representing. In an ideal world, this would be based on detailed knowledge of the system you are working in and on prior work or literature reviews. However, detailed knowledge is often unavailable for many ecological systems, and alternative approaches exist. For example, we can compare models with all possible combinations of the predictors of interest (AKA the all-subsets approach) rather than constructing models with only particular combinations of those predictors. Note this approach has been criticized as “data-dredging” or “fishing” (Burnham and Anderson 2002, but see Symonds and Moussalli 2011) and is nicely summarized by this often-quoted line from Burnham and Anderson (2002):

“‘Let the computer find out’ is a poor strategy and usually reflects the fact that the researcher did not bother to think clearly about the problem of interest and its scientific setting” (Burnham and Anderson, 2002).

The next step is to decide how we select the “best” model or set of best models. One approach would be to use a measure of model fit or explanatory power. For example, we could use the model R2, which we covered in lecture 9 and which represents the amount of variation in the response variable explained by the predictor variables in the model. However, R2 is not a parsimonious criterion: adding predictors can never decrease it, so it inevitably favours more complex models (i.e. models with more predictors). What we really need is an approach that rewards model fit while simultaneously penalizing model complexity.

Information Theory and model selection criteria

There are numerous model selection criteria based on mathematical information theory that we can use to choose among a set of candidate models. These criteria also let us compare the relative support for different models and use multiple models for inference. The most commonly used information criteria in ecology and evolution are Akaike’s Information Criterion (AIC), the corrected AICc (corrected for small sample sizes), and the Bayesian Information Criterion (BIC, also known as the Schwarz Criterion) (Johnson and Omland, 2004). Here we will focus on AIC and AICc. AIC is calculated as follows:

\[ AIC = -2\ln\mathcal{L} + 2p \]

The lower the AIC value, the better the model. \(-2\ln\mathcal{L}\) is twice the negative log-likelihood of the model and measures the model’s lack of fit to the observed data: lower values indicate a better fit of the model to the data. \(2p\) is a penalty term that increases AIC with the number of parameters (\(p\)) in the model, guarding against overly complex models.

Similar to AIC is AICc, which corrects for small sample sizes. It is recommended to use AICc when \(n/k\) is less than 40, with \(n\) being the sample size (i.e. total number of observations) and \(k\) being the total number of parameters in the most saturated model (i.e. the model with the most parameters), including both fixed and random effects, as well as intercepts (Symonds and Moussalli 2011). AICc is calculated as follows:

\[ AIC_c = AIC+\frac{2k(k+1)}{n-k-1} \]
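
As a quick illustration of the arithmetic, the AICc correction can be computed by hand for any fitted model and checked against MuMIn’s AICc() function. The toy lm() fit to R’s built-in mtcars data below is purely illustrative.

```r
library(MuMIn)

# Toy model, used only to illustrate the AIC/AICc arithmetic
m <- lm(mpg ~ wt + hp, data = mtcars)

n <- nobs(m)              # sample size
k <- length(coef(m)) + 1  # number of parameters, counting the residual variance
aic  <- -2 * as.numeric(logLik(m)) + 2 * k
aicc <- aic + (2 * k * (k + 1)) / (n - k - 1)

c(AIC = aic, AICc = aicc, MuMIn_AICc = AICc(m))
```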

Example of model selection

In lecture 8, we used mixed-effects models to determine whether species richness was influenced by NAP (i.e. the height of a sampling location relative to mean tidal level) across 9 beaches. We generated 3 models: (1) a random-intercept model with NAP as a fixed effect and a random effect allowing the intercept (i.e. species richness) to vary by beach; (2) a random-intercept-and-slope model with NAP as a fixed effect and a random effect allowing both the intercept (i.e. richness) and the slope (i.e. the response of richness to NAP) to vary across beaches; and (3) an intercept-only model with no fixed effects but allowing for variation in richness across beaches. In this lecture, we’ll create similar models with random intercepts and random intercepts + slopes, but we’ll additionally include Exposure and the NAP-by-Exposure interaction as fixed effects. Let’s load in the RIKZ data.
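
Below is a minimal sketch of the setup; the file name (rikz_data.csv) is an assumption, so adjust the path to wherever you saved the data from lecture 8. Beach is converted to a factor so it can be used as a grouping (random-effect) variable.

```r
library(dplyr)
library(ggplot2)
library(lme4)
library(lmerTest)
library(MuMIn)

# File name assumed; the RIKZ data from lecture 8
rikz_data <- read.csv("rikz_data.csv")

# Beach is coded 1-9 but is a grouping variable, not a number
rikz_data <- rikz_data %>%
  mutate(Beach = as.factor(Beach))

str(rikz_data)
```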

Given a set of candidate models like these, we may be interested in knowing which one best fits our observed data so that we can interpret that model to draw inferences about our population. To do this, we can follow the guidelines laid out in Zuur et al. (2009):

  1. Create a saturated model that includes all fixed effects (and their interactions) and random effects. If you can’t include all fixed effects, you should select those that you think are most likely to be important based on your knowledge of the system at hand.
  2. Using the saturated model, optimize the random-effect structure. Keeping the saturated fixed-effect structure, compare models that differ only in their random-effect structure. These models should be fit using Restricted Maximum Likelihood (i.e. REML = TRUE). The optimal random-effect structure is the one that gives the lowest AIC. Note that some common sense is needed here: you should not remove random effects that are included specifically to account for non-independence in your data (i.e. nestedness, see lecture 8).
  3. Optimize the fixed-effect structure by comparing models that share the optimized random effects but differ in their fixed-effect structure. These models should be fit with Maximum Likelihood (i.e. REML = FALSE), because models with different fixed effects cannot be validly compared when fit with REML. Models can be selected on the basis of AIC (lowest is best) or by comparing nested models using likelihood ratio tests (LRTs). Important: you cannot include an interaction in a model unless the main effects involved in that interaction are also present.
  4. Run the final model with optimized fixed and random effects using REML.

Note that this approach to model selection can also be applied to models that lack random effects (e.g. simple linear regression). In such cases, you don’t need to worry about random effects and can go ahead and just optimize the fixed effects. You also don’t need to worry about ML vs. REML.

Let’s try this with some real data. We’ll create 2 models, but this time we’ll include Exposure and the interaction between NAP and Exposure as additional fixed effects.
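
A sketch of the two candidate models is shown below, using the object names that appear in the output later in this section (mixed_model_IntOnly and mixed_model_IntSlope). Both are fit with REML, as required when comparing random-effect structures.

```r
# Random intercept only: baseline richness varies among beaches
mixed_model_IntOnly <- lmer(Richness ~ NAP * Exposure + (1 | Beach),
                            data = rikz_data, REML = TRUE)

# Random intercept and slope: both richness and its response to NAP vary among beaches
mixed_model_IntSlope <- lmer(Richness ~ NAP * Exposure + (1 + NAP | Beach),
                             data = rikz_data, REML = TRUE)
```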

Step 1: Create saturated model

This is already done. The saturated model is mixed_model_IntSlope created in the code chunk above.

Step 2: Optimize random-effect structure

To optimize the random effects, we compare the mixed_model_IntSlope with the mixed_model_IntOnly. This will determine whether including a random slope for each beach improves the fit of the model to the observed data. Note: we are not testing the mixed_model_IntOnly model against one in which there is no random effect, since including a random intercept for each beach is required to account for the non-independence in our data. Let’s get the AICc for these two models below. We use AICc because \(n/k\) is well below 40 here: with 45 observations and 8 parameters in the saturated model, \(n/k = 45/8 \approx 5.6\).
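
A minimal sketch of the comparison, using MuMIn’s AICc() helper, which accepts several fitted models and returns their degrees of freedom and AICc values:

```r
AICc(mixed_model_IntOnly, mixed_model_IntSlope)
```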

##                      df     AICc
## mixed_model_IntOnly   6 235.2327
## mixed_model_IntSlope  8 237.2527

Based on the output above, the model with only a random intercept is a better fit to the data (i.e. it has the lower AICc). The optimal random-effect structure is thus one that includes a random intercept for each beach but no random slope.

Step 3: Optimize the fixed effect structure

We now need to refit the model with the optimal random-effect structure using ML and compare different fixed effect structures. Let’s fit these models below and check their AICcs.
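
A sketch of the candidate fixed-effect structures, using the model names that appear in the output below. Each model keeps the random intercept for Beach and is fit with maximum likelihood (REML = FALSE):

```r
# NAP, Exposure, and their interaction
mixed_model_IntOnly_Full <- lmer(Richness ~ NAP * Exposure + (1 | Beach),
                                 data = rikz_data, REML = FALSE)

# Main effects only
mixed_model_IntOnly_NoInter <- lmer(Richness ~ NAP + Exposure + (1 | Beach),
                                    data = rikz_data, REML = FALSE)

# NAP only
mixed_model_IntOnly_NAP <- lmer(Richness ~ NAP + (1 | Beach),
                                data = rikz_data, REML = FALSE)

# Exposure only
mixed_model_IntOnly_Exp <- lmer(Richness ~ Exposure + (1 | Beach),
                                data = rikz_data, REML = FALSE)

# No fixed effects
mixed_model_IntOnly_NoFix <- lmer(Richness ~ 1 + (1 | Beach),
                                  data = rikz_data, REML = FALSE)

# Compare all five models on AICc
AICc(mixed_model_IntOnly_Full, mixed_model_IntOnly_NoInter,
     mixed_model_IntOnly_NAP, mixed_model_IntOnly_Exp, mixed_model_IntOnly_NoFix)
```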

##                             df     AICc
## mixed_model_IntOnly_Full     6 236.5947
## mixed_model_IntOnly_NoInter  5 238.1467
## mixed_model_IntOnly_NAP      4 250.8291
## mixed_model_IntOnly_Exp      4 261.7996
## mixed_model_IntOnly_NoFix    3 269.8889

Based on the output above, it looks like the model that includes NAP, Exposure, and their interaction provides the best fit to the data.

Step 4: Re-run the final model with REML and interpret the output

Summarizing the output, we see that increasing both NAP and Exposure results in a decrease in species richness (P < 0.05). There is also a nearly significant interaction between NAP and Exposure (I wouldn’t interpret this since P > 0.05). Finally, while Beach is included in our model as a random effect, notice how little variation is attributed to differences between beaches relative to the model we ran in lecture 8. The main difference is that our current model includes Exposure (and its interaction with NAP) as fixed effects. This suggests that much of the among-beach variation in lecture 8 was attributable to differences in exposure, which is now captured by the fixed effects.
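
Following step 4 of the protocol above, here is a sketch of refitting the optimized model with REML and summarizing it (the object name final_model is illustrative; with lmerTest loaded, summary() reports Satterthwaite-approximated P-values):

```r
final_model <- lmer(Richness ~ NAP * Exposure + (1 | Beach),
                    data = rikz_data, REML = TRUE)
summary(final_model)
```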

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Richness ~ NAP * Exposure + (1 | Beach)
##    Data: rikz_data
## 
## REML criterion at convergence: 221
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -1.4139 -0.4394 -0.1064  0.1511  4.3635 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  Beach    (Intercept) 0.3061   0.5533  
##  Residual             8.7448   2.9572  
## Number of obs: 45, groups:  Beach, 9
## 
## Fixed effects:
##              Estimate Std. Error       df t value Pr(>|t|)    
## (Intercept)   40.7149     5.6162   7.9355   7.250 9.18e-05 ***
## NAP          -13.5865     5.4298  36.5641  -2.502 0.016947 *  
## Exposure      -3.3385     0.5485   7.9964  -6.087 0.000294 ***
## NAP:Exposure   1.0625     0.5279  36.6831   2.013 0.051505 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) NAP    Exposr
## NAP         -0.300              
## Exposure    -0.996  0.301       
## NAP:Exposur  0.302 -0.997 -0.306

A more realistic example

In Assignment 5, you used data from Santangelo et al. (2019) who were interested in understanding how insect herbivores and plant defenses influence the expression of plant floral traits. While that was one component of the study, the main question was whether herbivores, pollinators, and plant defenses alter the shape and strength of natural selection on plant floral traits. In other words, which of these 3 agents of selection (plant defenses, herbivores, or pollinators) are most important in driving the evolution of floral traits in plants?

The motivation for that experiment came a few years earlier, in 2016, when Thompson and Johnson (2016) published an experiment examining how plant defenses alter natural selection on plant floral traits. They found some interesting patterns, but it was unclear whether these were driven by the plants’ interactions with herbivores, pollinators, or both. This was because they didn’t directly manipulate these agents: pollination was not quantified in their experiment, and herbivore damage was measured observationally, so the results were correlative. However, their experimental data provide a prime opportunity to apply model selection in ecology and evolution.

The data consist of 140 observations (rows). Each row in the dataset corresponds to the mean trait values of one plant genotype (they had replicates for each genotype but took the average across these replicates) grown in a common garden. They measured 8 traits and quantified the total mass of seeds produced by plants as a measure of absolute fitness. Genotypes were either “cyanogenic” (i.e. containing plant defenses) or “acyanogenic” (i.e. lacking plant defenses). In addition, they quantified the amount of herbivore damage (i.e. percent leaf area lost) on each plant twice throughout the growing season, although here we will only focus on the effects of plant defenses and avoid their herbivore damage measurements. We are going to conduct a genotypic selection analysis to quantify natural selection acting on each trait (while controlling for other traits) and assess whether plant defenses alter the strength or direction of natural selection imposed on these traits. While this may sound complicated, it turns out that a single multiple regression is all that’s required: relative fitness is the response variable, and standardized traits (i.e. mean of 0 and standard deviation of 1), treatments, and their interactions are the predictors. This multiple regression approach is a common way of measuring natural selection in nature (see Lande and Arnold 1983, Rausher 1992, Stinchcombe 2002).

Let’s start by loading in the data.
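
A minimal sketch of loading and inspecting the data. The file name (Thompson-Johnson_2016_data.csv) is an assumption; the object name Thompson_data matches the model calls shown below.

```r
# File name assumed; adjust the path to wherever you saved the data
Thompson_data <- read.csv("Thompson-Johnson_2016_data.csv")

# Quick overview of the variables
dplyr::glimpse(Thompson_data)
```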

## Observations: 140
## Variables: 38
## $ Genotype     <dbl> 2, 4, 5, 6, 8, 9, 10, 11, 13, 15, 16, 17, 18, 19, 21, 22…
## $ cyanide      <dbl> 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0,…
## $ linamarin    <dbl> 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0,…
## $ linamarase   <dbl> 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0,…
## $ VegBiomass   <dbl> 1.16840, 4.42100, 0.79500, 4.44740, 1.17575, 2.70160, 2.…
## $ LatSpread    <dbl> 283.500, 995.400, 298.000, 1019.450, 271.000, 887.800, 7…
## $ FrstFlwr     <dbl> 2.200000, 11.250000, 7.666667, 32.000000, 8.500000, 6.80…
## $ BnrLgth      <dbl> 6.326250, 6.472500, 5.712500, 5.422500, 5.565000, 5.8862…
## $ BnrWdt       <dbl> 3.568750, 3.700000, 2.961250, 3.222500, 3.350000, 3.2937…
## $ InflHt       <dbl> 23.21750, 20.62000, 20.09500, 25.13500, 14.93000, 19.711…
## $ InflWdt      <dbl> 22.67250, 17.72833, 21.44250, 25.01000, 15.37000, 19.391…
## $ InflNum      <dbl> 4.20, 4.00, 3.00, 1.00, 1.75, 3.40, 2.20, 1.50, 1.80, 12…
## $ FlwrCt       <dbl> 32.75000, 29.50000, 21.50000, 59.25000, 56.00000, 35.444…
## $ SeedBm       <dbl> 0.030000, 0.038625, 0.004375, 0.014960, 0.025875, 0.0309…
## $ AvgDMG1      <dbl> 43.26000, 21.95000, 43.17500, 21.72000, 30.16250, 27.400…
## $ AvgDMG2      <dbl> 21.3700, 17.5500, 24.4500, 15.9200, 37.3375, 20.2400, 22…
## $ VegGrowth    <dbl> -1.12177796, 0.88565970, -1.23438613, 0.92433034, -1.134…
## $ RFSeed       <dbl> 0.9282121, 1.1950731, 0.1353643, 0.4628684, 0.8005829, 0…
## $ FrstFlwr.T   <dbl> 1.483240, 3.354102, 2.768875, 5.656854, 2.915476, 2.6076…
## $ InflHt.T     <dbl> 539.0523, 425.1844, 403.8090, 631.7682, 222.9049, 388.54…
## $ SeedBm.T     <dbl> 0.17320508, 0.19653244, 0.06614378, 0.12231108, 0.160857…
## $ InflNum.T    <dbl> 1.6486586, 1.6094379, 1.3862944, 0.6931472, 1.0116009, 1…
## $ AvgDmg1.T    <dbl> 6.577233, 4.685083, 6.570769, 4.660472, 5.492040, 5.2345…
## $ AvgDmg2.T    <dbl> 4.622770, 4.189272, 4.944694, 3.989987, 6.110442, 4.4988…
## $ VegGrowth.T  <dbl> 0.7926046, 1.6234715, 0.7180626, 1.6353380, 0.7845251, 1…
## $ VegBiomass.T <dbl> 1.0809255, 2.1026174, 0.8916277, 2.1088860, 1.0843201, 1…
## $ LatSpread.T  <dbl> 16.83746, 31.54996, 17.26268, 31.92883, 16.46208, 29.795…
## $ FlwrCt.S     <dbl> -0.51866068, -0.84578217, -1.65100431, 2.14863763, 1.821…
## $ BnrLgth.S    <dbl> 0.98570003, 1.28769651, -0.28165272, -0.88048334, -0.586…
## $ BnrWdt.S     <dbl> 0.91616830, 1.41223922, -1.37993138, -0.39251403, 0.0893…
## $ InflHt.S     <dbl> 0.7190552, -0.3496320, -0.5502468, 1.5892241, -2.2480905…
## $ InflWdt.S    <dbl> 0.83894602, -0.80610308, 0.42969396, 1.61669129, -1.5907…
## $ FrstFlwr.S   <dbl> -1.54639903, 0.34224409, -0.24854531, 2.66688196, -0.100…
## $ SeedBm.S     <dbl> 0.08114206, 0.44458767, -1.58689750, -0.71179816, -0.111…
## $ InflNum.S    <dbl> 0.94431501, 0.85825229, 0.36860437, -1.15238099, -0.4535…
## $ AvgDmg1.S    <dbl> 0.66702301, -1.29690435, 0.66031291, -1.32244855, -0.459…
## $ AvgDmg2.S    <dbl> -0.29425351, -0.87159469, 0.13449201, -1.13700594, 1.687…
## $ VegGrowth.S  <dbl> -1.18918210, 0.91239380, -1.37772698, 0.94240864, -1.209…

We will now generate the global model. Remember, this should be a saturated model with all of the fixed effects and their interactions. We are including the presence of hydrogen cyanide (HCN, cyanide in model below), all standardized traits and the trait by HCN interactions as fixed effects in this model. There are no random effects in this model so we can go ahead and use lm().
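
A sketch of the global model, using the formula shown in the dredge() output below (the object name global_model is illustrative). We also set na.action = "na.fail" first, because dredge() refuses to work with a global model that silently drops rows with missing values.

```r
# Required by dredge(): do not silently drop missing values
options(na.action = "na.fail")

global_model <- lm(RFSeed ~ VegGrowth.S * cyanide + BnrLgth.S * cyanide +
                     BnrWdt.S * cyanide + FrstFlwr.S * cyanide +
                     InflNum.S * cyanide + FlwrCt.S * cyanide +
                     InflWdt.S * cyanide + InflHt.S * cyanide,
                   data = Thompson_data)
```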

Next, we will perform our model selection based on AICc (due to the low sample size). We automate this process using the dredge() function from the MuMIn package. dredge() offers a ton of flexibility in how model selection is done. You can customize the criterion used (i.e. AIC, BIC, etc.), how the output is reported, what’s included in the output (e.g. do you want F-stats and R2 to be included?), whether some terms should be present in all models, and even whether some terms should only be included when other terms are present (AKA a dependency chain). For our purposes, we will perform an all-subsets model selection, comparing models with all combinations of predictors (but not those where main effects are absent despite the presence of an interaction!). I warned earlier that this approach has been criticized. However, in this case it’s reasonable: we know from work in other systems that all of these traits could conceivably experience selection, and that selection on them could vary with plant defenses. In other words, all terms in this model represent biologically real hypotheses. Let’s go ahead and dredge.
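
A minimal sketch of the dredge call; GTmodel_dredge matches the object name used in the model-averaging step later on.

```r
# All-subsets model selection ranked by AICc. Models containing an interaction
# without its component main effects are not generated.
GTmodel_dredge <- dredge(global_model, rank = "AICc")
```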

Let’s have a look at the first few lines returned by dredge(). Let’s also print out how many models were compared.
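
A dredge result behaves like a model selection table, so head() shows the top-ranked models and nrow() gives the number of candidate models compared:

```r
head(GTmodel_dredge)   # top-ranked models
nrow(GTmodel_dredge)   # total number of models compared
```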

## Global model call: lm(formula = RFSeed ~ VegGrowth.S * cyanide + BnrLgth.S * cyanide + 
##     BnrWdt.S * cyanide + FrstFlwr.S * cyanide + InflNum.S * cyanide + 
##     FlwrCt.S * cyanide + InflWdt.S * cyanide + InflHt.S * cyanide, 
##     data = Thompson_data)
## ---
## Model selection table 
##        (Int)  BnL.S    BnW.S     cyn  FlC.S    FrF.S  InN.S   VgG.S BnL.S:cyn
## 3920  0.9783 0.2297 -0.09086 0.04292 0.1671          0.4958 0.07497   -0.3245
## 3664  0.9668 0.2597 -0.10380 0.05271 0.1756          0.5099           -0.3513
## 2894  0.9895 0.1861          0.02887 0.1721          0.4916 0.09900   -0.1837
## 69456 0.9809 0.2151 -0.08489 0.04193 0.1624          0.4992 0.10790   -0.3086
## 3936  0.9825 0.2238 -0.09101 0.04230 0.1687 -0.02619 0.4860 0.08231   -0.3215
## 2910  0.9943 0.1794          0.02828 0.1739 -0.02989 0.4806 0.10700   -0.1834
##       BnW.S:cyn cyn:FlC.S cyn:VgG.S df  logLik  AICc delta weight
## 3920     0.2273    0.2328           11 -85.102 194.3  0.00  0.254
## 3664     0.2633    0.2437           10 -86.353 194.4  0.14  0.237
## 2894               0.2200            9 -87.650 194.7  0.42  0.206
## 69456    0.2391    0.2521   -0.0917 12 -84.648 195.8  1.49  0.121
## 3936     0.2238    0.2311           12 -84.880 196.2  1.95  0.096
## 2910               0.2181           10 -87.370 196.4  2.18  0.086
## Models ranked by AICc(x)
## [1] 6817

The output tells us the original (global) model call and then provides a rather large table with many rows and columns. The rows are different models with different combinations of predictors (n = 6,817 models). The columns are the different terms from our model, which dredge() has abbreviated. The numbers in the cells are the estimates (i.e. beta coefficients) for each term present in that model; blank cells mean the term was not included. The last 5 columns are important: they give us the degrees of freedom for the model (a function of the number of terms in the model), the log-likelihood of the model, the AICc score, the delta AICc, and the Akaike weight. The delta AICc is the difference between a model’s AICc and the AICc of the top-ranked model. The weight can be thought of as the probability that the model is the best model, given the candidate set included in the model selection procedure.

Given this output, we may be interested in retrieving the top model and interpreting it. Let’s go ahead and do this. We can retrieve the top model using the get.models() function, specifying that we want the top model using the subset argument. We need to further subset this output since get.models() returns a list.
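
A sketch, assuming the object name top_model; subset = 1 asks for the first-ranked model, and [[1]] pulls the fitted lm object out of the list that get.models() returns.

```r
top_model <- get.models(GTmodel_dredge, subset = 1)[[1]]
top_model
```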

## 
## Call:
## lm(formula = RFSeed ~ BnrLgth.S + BnrWdt.S + cyanide + FlwrCt.S + 
##     InflNum.S + VegGrowth.S + BnrLgth.S:cyanide + BnrWdt.S:cyanide + 
##     cyanide:FlwrCt.S + 1, data = Thompson_data)
## 
## Coefficients:
##       (Intercept)          BnrLgth.S           BnrWdt.S            cyanide  
##           0.97834            0.22967           -0.09086            0.04292  
##          FlwrCt.S          InflNum.S        VegGrowth.S  BnrLgth.S:cyanide  
##           0.16709            0.49578            0.07497           -0.32449  
##  BnrWdt.S:cyanide   cyanide:FlwrCt.S  
##           0.22730            0.23284

The output above shows us the top model from our dredging. What if we want to interpret this model? No problem!
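
Because the extracted object is an ordinary lm fit, it can be summarized as usual:

```r
summary(top_model)
```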

## 
## Call:
## lm(formula = RFSeed ~ BnrLgth.S + BnrWdt.S + cyanide + FlwrCt.S + 
##     InflNum.S + VegGrowth.S + BnrLgth.S:cyanide + BnrWdt.S:cyanide + 
##     cyanide:FlwrCt.S + 1, data = Thompson_data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.20717 -0.24783 -0.02084  0.26346  1.25135 
## 
## Coefficients:
##                   Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        0.97834    0.05049  19.376  < 2e-16 ***
## BnrLgth.S          0.22967    0.05807   3.955 0.000125 ***
## BnrWdt.S          -0.09086    0.05758  -1.578 0.116972    
## cyanide            0.04292    0.08416   0.510 0.610918    
## FlwrCt.S           0.16709    0.05256   3.179 0.001845 ** 
## InflNum.S          0.49578    0.04865  10.191  < 2e-16 ***
## VegGrowth.S        0.07497    0.04898   1.531 0.128237    
## BnrLgth.S:cyanide -0.32449    0.11197  -2.898 0.004409 ** 
## BnrWdt.S:cyanide   0.22730    0.10567   2.151 0.033313 *  
## cyanide:FlwrCt.S   0.23284    0.09003   2.586 0.010806 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4612 on 130 degrees of freedom
## Multiple R-squared:  0.6198, Adjusted R-squared:  0.5935 
## F-statistic: 23.55 on 9 and 130 DF,  p-value: < 2.2e-16

But how much evidence do we actually have that this is the best model? We compared over 6,000 models, so it’s unlikely that only one model explains the data well. From the dredge output we can see there is little difference in the AICc scores and weights of the first few models. Is there really much of a difference between two models whose AICc values differ by only 0.14 units? How do we decide which model(s) to interpret? Statisticians have thought about this problem, and a common rule of thumb is that models within 2 AICc units of the top model are essentially as well supported as the top model and thus shouldn’t be discounted. Alternatively, we could use the weights: if a model has a weight greater than or equal to 0.95, it can be treated as the best model; otherwise we can generate a “credibility” set consisting of all models whose cumulative AICc weights sum to 0.95. In any case, the point is that we have no good reason to interpret only the top model when the models ranked just below it are likely to be nearly as good. To get around this, we can perform what’s called model averaging (AKA multi-model inference), which averages the parameter estimates across multiple models and thereby accounts for model-selection uncertainty. Let’s do this below by averaging all models with a delta AICc <= 2.
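
A sketch of the averaging step, matching the call shown in the output below; subset = delta <= 2 restricts the averaging to models within 2 AICc units of the top model.

```r
summary(model.avg(GTmodel_dredge, subset = delta <= 2))
```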

## 
## Call:
## model.avg(object = GTmodel_dredge, subset = delta <= 2)
## 
## Component model call: 
## lm(formula = RFSeed ~ <5 unique rhs>, data = Thompson_data)
## 
## Component models: 
##                       df logLik   AICc delta weight
## 1/2/3/4/6/7/8/9/10    11 -85.10 194.27  0.00   0.28
## 1/2/3/4/6/8/9/10      10 -86.35 194.41  0.14   0.26
## 1/3/4/6/7/8/10         9 -87.65 194.68  0.42   0.23
## 1/2/3/4/6/7/8/9/10/11 12 -84.65 195.75  1.49   0.13
## 1/2/3/4/5/6/7/8/9/10  12 -84.88 196.22  1.95   0.10
## 
## Term codes: 
##           BnrLgth.S            BnrWdt.S             cyanide            FlwrCt.S 
##                   1                   2                   3                   4 
##          FrstFlwr.S           InflNum.S         VegGrowth.S   BnrLgth.S:cyanide 
##                   5                   6                   7                   8 
##    BnrWdt.S:cyanide    cyanide:FlwrCt.S cyanide:VegGrowth.S 
##                   9                  10                  11 
## 
## Model-averaged coefficients:  
## (full average) 
##                      Estimate Std. Error Adjusted SE z value Pr(>|z|)    
## (Intercept)          0.978654   0.051169    0.051637  18.953  < 2e-16 ***
## BnrLgth.S            0.225054   0.062200    0.062681   3.590  0.00033 ***
## BnrWdt.S            -0.072942   0.064461    0.064835   1.125  0.26058    
## cyanide              0.042088   0.084783    0.085570   0.492  0.62282    
## FlwrCt.S             0.169975   0.052873    0.053364   3.185  0.00145 ** 
## InflNum.S            0.497923   0.049505    0.049957   9.967  < 2e-16 ***
## VegGrowth.S          0.066116   0.060037    0.060342   1.096  0.27321    
## BnrLgth.S:cyanide   -0.297258   0.124060    0.124926   2.379  0.01734 *  
## BnrWdt.S:cyanide     0.186530   0.137538    0.138125   1.350  0.17687    
## cyanide:FlwrCt.S     0.235112   0.091214    0.092057   2.554  0.01065 *  
## cyanide:VegGrowth.S -0.012129   0.047872    0.048135   0.252  0.80106    
## FrstFlwr.S          -0.002749   0.015496    0.015604   0.176  0.86018    
##  
## (conditional average) 
##                     Estimate Std. Error Adjusted SE z value Pr(>|z|)    
## (Intercept)          0.97865    0.05117     0.05164  18.953  < 2e-16 ***
## BnrLgth.S            0.22505    0.06220     0.06268   3.590  0.00033 ***
## BnrWdt.S            -0.09420    0.05800     0.05853   1.609  0.10753    
## cyanide              0.04209    0.08478     0.08557   0.492  0.62282    
## FlwrCt.S             0.16997    0.05287     0.05336   3.185  0.00145 ** 
## InflNum.S            0.49792    0.04951     0.04996   9.967  < 2e-16 ***
## VegGrowth.S          0.08921    0.05295     0.05341   1.670  0.09487 .  
## BnrLgth.S:cyanide   -0.29726    0.12406     0.12493   2.379  0.01734 *  
## BnrWdt.S:cyanide     0.24090    0.10646     0.10743   2.242  0.02494 *  
## cyanide:FlwrCt.S     0.23511    0.09121     0.09206   2.554  0.01065 *  
## cyanide:VegGrowth.S -0.09170    0.10015     0.10110   0.907  0.36438    
## FrstFlwr.S          -0.02619    0.04092     0.04131   0.634  0.52601    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The first part of the output breaks down which terms are part of which component models and gives some descriptive statistics for those models. The next part is the important bit: the model-averaged parameter estimates, averaged across all models with a delta AICc <= 2. Note there are two sets of estimates: the “full” average sets a term’s coefficient to 0 in every model that does not include that term before averaging, whereas the “conditional” average only averages over the models in which the term actually appears. The “full” coefficients are therefore shrunk toward zero and are the more conservative choice; it is best practice to interpret these. Finally, the term codes near the top of the output tell us which terms were included in which of the averaged models.

Caveats to model selection

  1. Depends on the models included in the candidate set. You can’t identify a model as being the “best” fit to the data if you didn’t include the model to begin with!
  2. The parameter estimates and predictions arising from the “best” model or set of best models should be biologically meaningful.
  3. You need to decide whether to use model selection or traditional inferential statistics (e.g. based on P-values). Techniques that combine both approaches are possible (e.g. variable selection followed by averaging of the top models, as in the example provided above).

Formal test between two models

Throughout the earlier parts of the lecture, we made qualitative decisions about which model was best based on AIC scores. However, it is also possible to statistically test whether one model fits the data better than another using a likelihood ratio test (LRT). This test compares the goodness of fit of two models by comparing twice the difference in their log-likelihoods against a Chi-squared distribution. Importantly, this approach requires that the two models be nested (i.e., one model must be a reduced version of the other). It can be implemented in R using the anova(model1, model2) syntax.
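
As a sketch, using two of the nested ML-fitted RIKZ models from earlier (the main-effects-only model is nested within the full model containing the NAP-by-Exposure interaction):

```r
# Likelihood ratio test: does adding the NAP:Exposure interaction improve the fit?
anova(mixed_model_IntOnly_NoInter, mixed_model_IntOnly_Full)
```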

Additional reading

  1. Johnson, J. and Omland, K. 2004. Model selection in ecology and evolution. Trends in Ecology and Evolution 19: 101-108.
  2. Burnham K.P., Anderson D.R. 2002. Model selection and multimodel inference, 2nd edn. Springer, New York.
  3. Symonds, M. and Moussalli, A. 2011. A brief guide to model selection, multimodel inference and model averaging in behavioural ecology using Akaike’s information criterion. Behavioural Ecology and Sociobiology 65: 13-21.
  4. Zuur, A. et al. 2009. Mixed effects models and extensions in ecology with R. Springer, New York.
  5. Thompson, K.A. and Johnson, M.T.J. 2016. Antiherbivore defenses alter natural selection on plant reproductive traits. Evolution 70: 796-810.
  6. Lande, R. and Arnold, S. 1983. The measurement of selection on correlated characters. Evolution 37: 1210-1226.
  7. Rausher, M. 1992. The measurement of selection on quantitative traits: biases due to environmental covariances between traits and fitness. Evolution 46: 616-626.
  8. Stinchcombe, J.R. et al. 2002. Testing for environmentally induced bias in phenotypic estimates of natural selection: theory and practice. Am. Nat. 160: 511-523.

This work is licensed under a Creative Commons Attribution 4.0 International License. See the licensing page for more details about copyright information.