Ask An Expert

Taking Stock of New England Fish: Part 3

Figure 1. This plot shows both the fall and spring indices of abundance derived from the Northeast Fisheries Science Center’s bottom trawl survey for Gulf of Maine haddock. The shaded areas represent the 80% confidence interval, which basically means that there is an 80% probability that the true value falls within the shaded band. The width of the band indicates the level of precision, or uncertainty, in the data; wider bands imply greater uncertainty and indicate imprecisely estimated values. If these data are used in a model that can incorporate this uncertainty, the model will use this information to determine how closely to fit the data; years that are more precisely estimated will carry greater weight during the model fitting process.

Mike Palmer is a Research Fisheries Biologist in the Population Dynamics Branch of the Northeast Fisheries Science Center.

TalkingFish.org: Despite our best efforts, there must always be some uncertainty about how many fish there are in the ocean. How does this uncertainty come into play in stock assessments, and how is it accounted for?

Mike Palmer: There are two primary types of uncertainty in stock assessments: data uncertainty and model uncertainty. Data uncertainty describes the level of certainty we have that the data we are using for the assessment are correct. It can often be quantified, and in some cases directly incorporated into and used by the model to determine how closely to fit the data. For example, say a model has been constructed that uses two survey time series: one that is very precisely estimated and another that is less precise. When using a model that can incorporate the level of precision in the survey data, the model will tend to ‘believe’ the survey that has less uncertainty and allow more flexibility in fitting the survey series with higher uncertainty.
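As a rough illustration of how this weighting works (a minimal sketch with made-up numbers, not the actual assessment software), the snippet below compares the likelihood penalty that the same misfit receives under a precise survey (CV = 0.15) and a noisy one (CV = 0.40) when each survey is fit with a lognormal likelihood:

```python
import numpy as np

# Hypothetical example: the same 20% misfit between a model prediction and an
# observed survey index, penalised under a lognormal likelihood for a precise
# survey (CV = 0.15) and a noisy one (CV = 0.40). All numbers are made up.

def misfit_penalty(percent_misfit, cv):
    """Negative log-likelihood penalty for one year of misfit of the given size."""
    sigma = np.sqrt(np.log(1.0 + cv ** 2))   # log-scale SD implied by the CV
    resid = np.log(1.0 + percent_misfit)     # log-scale residual
    return 0.5 * (resid / sigma) ** 2

for cv in (0.15, 0.40):
    print(f"survey CV = {cv:.2f}: penalty for a 20% misfit = {misfit_penalty(0.20, cv):.2f}")

# Prints roughly:
#   survey CV = 0.15: penalty for a 20% misfit = 0.75
#   survey CV = 0.40: penalty for a 20% misfit = 0.11
```

Because the same misfit costs several times more against the precise survey, the fitting procedure gains more by matching that series closely and tolerates larger residuals on the noisy one.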

When data uncertainty can’t be directly incorporated into a model, we can often characterize the sensitivity of the model to the uncertain data. A good example of this is the assumption about what percentage of fish survive after they are discarded. Suppose the ‘best’ model assumes that 20% of the discarded fish survive. If the true survival is somewhere between 0% and 40%, we could evaluate how sensitive the model is to this assumption by running alternate model runs using the upper and lower ends of that range. If the results of the sensitivity runs are similar, this shows us that the model is insensitive to the alternate assumptions. It’s important that when we evaluate a sensitivity analysis we examine a wide range of model outputs and diagnostics. A different assumption may have little impact on one particular output, say spawning stock biomass, yet have a considerable impact on some other result, such as fishery selectivity.
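A sketch of how such sensitivity runs might be organized is shown below; the toy ‘assessment’ and every number in it are invented purely for illustration, and in practice each run would be a full refit of the stock assessment model under the alternative assumption:

```python
# Sensitivity runs on the assumed discard survival rate (toy example).

def run_assessment(discard_survival):
    """Stand-in for a full model run; returns a dict of key outputs."""
    landings, discards = 5000.0, 1200.0                  # hypothetical catch (mt)
    dead_catch = landings + discards * (1.0 - discard_survival)
    # A made-up relationship, used only so the runs return different numbers:
    ssb = 40000.0 * (dead_catch / 6000.0)
    return {"dead_catch": dead_catch, "ssb": ssb}

base = run_assessment(discard_survival=0.20)             # the 'best' model assumption
for survival in (0.0, 0.40):                             # lower and upper bounds
    alt = run_assessment(discard_survival=survival)
    for output, value in alt.items():
        change = 100.0 * (value - base[output]) / base[output]
        print(f"survival = {survival:.0%}: {output} changes by {change:+.1f}%")
```

If the percentage changes stay small across all of the outputs examined, the model is relatively insensitive to the assumption; a large change in even one output would flag it for closer attention.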

Model uncertainty describes the level of certainty we have in the model results given the data used. Generally speaking, the best models are both accurate and precise. It’s also possible for a model to be precise, but not accurate. Consider a baseball pitcher who is attempting to throw the ball in the strike zone, but instead throws it a foot above the strike zone on every pitch. The pitcher is precise because the ball is always thrown to the same spot, but not very accurate because his pitches are biased high, well out of the strike zone. A model that is consistently inaccurate in the same direction is also said to be biased.
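The pitcher analogy can be put into numbers with a quick simulation (hypothetical values only): one estimator that is tightly clustered but centered in the wrong place, and another that is centered correctly but scattered:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0   # the 'strike zone': the true stock size

# Two hypothetical estimators, each applied 1,000 times to simulated data:
precise_but_biased = rng.normal(loc=120.0, scale=2.0, size=1000)      # tight, but high
accurate_but_imprecise = rng.normal(loc=100.0, scale=15.0, size=1000) # centered, but scattered

for name, est in [("precise but biased", precise_but_biased),
                  ("accurate but imprecise", accurate_but_imprecise)]:
    bias = est.mean() - true_value   # accuracy: how far off on average
    spread = est.std()               # precision: how scattered the estimates are
    print(f"{name}: bias = {bias:+.1f}, spread = {spread:.1f}")
```

The first has a small spread but a large bias; the second has little bias but a wide spread. A good assessment model aims to keep both small.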

Model precision can be evaluated by running the model many times, often a thousand or more, and evaluating the range of results achieved, much like asking a pitcher to throw 1,000 pitches and looking at the distribution of those pitches. If the results of the model runs are very similar, then we consider the model results to be precise and will typically have a high degree of confidence in them.
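In practice this kind of precision check often takes the form of a bootstrap: resample the data, refit the model, and examine the spread of the resulting estimates. The sketch below uses a trivially simple stand-in ‘model’ (the mean of a made-up survey series) purely to show the bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical survey data: ten years of biomass indices (kt).
observed = np.array([52.0, 48.5, 55.1, 60.3, 47.8, 51.2, 58.6, 49.9, 53.4, 56.7])

def fit_model(indices):
    """Stand-in for a model fit; here the 'estimate' is just the mean index."""
    return indices.mean()

# Refit the 'model' 1,000 times on bootstrap resamples of the data and look at
# the spread of the resulting estimates.
estimates = np.array([
    fit_model(rng.choice(observed, size=observed.size, replace=True))
    for _ in range(1000)
])

lo, hi = np.percentile(estimates, [10, 90])   # an 80% interval, as in Figure 1
print(f"point estimate: {fit_model(observed):.1f} kt")
print(f"80% interval from repeated runs: {lo:.1f} - {hi:.1f} kt")
# A narrow interval means the repeated runs agree, i.e. the estimate is precise.
```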

Determining the accuracy of an assessment model is a little more difficult. In baseball, the strike zone is the target. In stock assessments, the true population size is the target. While we know exactly where the strike zone is, we don’t know what the true population size is, which is why we use a model to estimate it. We can get an idea of the accuracy of a model by looking at how it performs as more information on the size of fish cohorts is added to it. This analysis is called a ‘retrospective analysis’. If we observe that the estimate of population size drops every time another year of data is added, we know that the model has a tendency to consistently overestimate stock size, so the information on the current year’s stock size that we’re getting from the model is likely to be biased high.
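The bookkeeping behind a retrospective analysis can be sketched as follows, using made-up terminal-year estimates; Mohn’s rho, computed at the end, is a commonly used summary of the average size and direction of the retrospective pattern:

```python
# Retrospective analysis sketch with invented numbers. Each 'peel' drops one
# more year of data and refits the model; the terminal-year biomass estimate
# from each peel is compared with the estimate for that same year from the
# model fit to the full data set.

# Hypothetical biomass estimates (kt) for 2015-2019 from the full model:
full_model = {2015: 42.0, 2016: 40.5, 2017: 38.0, 2018: 36.2, 2019: 35.0}
# Terminal-year estimates from peels using data only through each year:
peels = {2015: 50.1, 2016: 47.9, 2017: 44.6, 2018: 41.8}

rel_diffs = [(peels[y] - full_model[y]) / full_model[y] for y in peels]
mohns_rho = sum(rel_diffs) / len(rel_diffs)
print(f"Mohn's rho = {mohns_rho:+.2f}")
# A consistently positive rho (here roughly +0.18) means each peel's terminal
# estimate was revised downward once more data arrived, i.e. the model tends
# to overestimate current stock size.
```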

Stock assessment reports document the sources of uncertainty, along with a quantitative description of its magnitude and direction, and fishery managers should take that uncertainty into account when setting catch limits.
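One widely used way that uncertainty enters the catch-setting step is a probability-based buffer, sometimes called the P-star approach: the acceptable biological catch (ABC) is set at a percentile of the overfishing-limit (OFL) distribution so that the probability of overfishing stays below an agreed risk level. The sketch below uses entirely hypothetical numbers:

```python
from math import exp, log, sqrt
from statistics import NormalDist

# Hypothetical P-star calculation: the ABC is the 40th percentile of a
# lognormal OFL distribution whose spread reflects scientific uncertainty.

ofl_median = 10000.0   # median overfishing limit (mt) from the assessment
cv = 0.60              # CV describing scientific uncertainty in the OFL
p_star = 0.40          # accepted probability of overfishing

sigma = sqrt(log(1.0 + cv ** 2))     # lognormal SD implied by the CV
z = NormalDist().inv_cdf(p_star)     # z-score of the chosen percentile
abc = ofl_median * exp(z * sigma)    # 40th percentile of the OFL distribution

print(f"OFL (median) = {ofl_median:.0f} mt, ABC at P* = {p_star:.0%}: {abc:.0f} mt")
# Prints an ABC of roughly 8,700 mt, about 13% below the 10,000 mt OFL.
```

Greater scientific uncertainty (a larger CV) widens the OFL distribution and pushes the ABC further below the OFL, producing a larger precautionary buffer.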

