Opinion

The Beer-Reviewed Stock Assessment: A Fisheries Phenomenon

Peer-reviewed data show Gulf of Maine cod stocks at severely depleted levels, but "beer review" says there are plenty of fish.

This article was originally posted on Charles Witek’s blog One Angler’s Voyage. Charles Witek is a recreational fisherman and a former member of the Mid-Atlantic Fishery Management Council.

We’ve all heard about peer-reviewed stock assessments.  That’s what you get when a team of biologists assesses the health of one stock of fish, and another panel of expert scientists, unrelated to the first, reviews that team’s work and determines whether it is good enough to use for fisheries management purposes. If it is, it represents a sort of “gold standard” for fisheries managers, who can then establish regulations based on the assessment, and be reasonably certain that they’re doing the right thing.

However, if you go down to the docks, pick up a press release put out by one of the anglers’ rights groups or read some of the comments on Internet chat boards, you’ll find that a lot of people don’t give the peer-reviewed assessments, or the scientists who provide them, much weight.

In those venues, the folks with the most authority—which generally equates to the guy with the loudest voice in the bar, the underemployed guy who spends his whole day on chat boards and the guy who publishes the local outdoor magazine—prefer a somewhat different analysis of a stock’s health, which might be called a “beer-reviewed stock assessment,” given where such contrary assessments, once issued, are often discussed.

A proper beer-reviewed assessment begins with complete contempt for everything that’s required to pass peer review. Beer-reviewed assessments lack any sort of numbers, objective data or population models, which makes them pretty easy to put together, and just as easy to understand.

Even though they’re usually wrong.

For example, in 2011, a peer-reviewed stock assessment of Gulf of Maine cod determined that “the stock is overfished and overfishing is occurring,” and found that “the stock does not rebuild by the current rebuilding date of 2014.” The same assessment determined that while it was safe to harvest a little under 20% of the cod population each year, fishermen were actually taking more—perhaps much more—than 55% of the population annually.  As a result, a population that should comprise about 61,000 metric tons had been reduced to somewhere between 9,500 and 16,500 metric tons, and thus was badly overfished.
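
To see why the gap between the allowable and actual harvest rates matters so much, it helps to look at the simple compounding involved.  The sketch below is a purely illustrative toy, not the assessment’s actual age-structured model; the 25% annual surplus-production rate and the five-year horizon are assumptions chosen only to show the pattern, while the 61,000 metric ton figure is the target cited above.

```python
# Toy illustration (NOT the real assessment model): why removing ~55% of a
# stock each year craters it while ~20% can be sustainable. The 25% annual
# "surplus production" rate and 5-year horizon are illustrative assumptions.

def project_biomass(biomass_mt: float, removal_rate: float,
                    surplus_rate: float, years: int) -> float:
    """Project biomass forward with simple surplus production and harvest."""
    for _ in range(years):
        biomass_mt *= (1.0 + surplus_rate)   # assumed annual net production
        biomass_mt *= (1.0 - removal_rate)   # fraction removed by fishing
    return biomass_mt

if __name__ == "__main__":
    target_mt = 61_000  # metric tons, target biomass cited in the assessment
    for rate in (0.20, 0.55):
        final = project_biomass(target_mt, rate, surplus_rate=0.25, years=5)
        print(f"Removing {rate:.0%} per year for 5 years: {final:,.0f} mt")
```

Under those assumed numbers, a 20% removal rate holds the stock steady, while a 55% removal rate strips away most of the biomass within a few years—the same direction of travel the assessment described, even though the real model is far more sophisticated.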

In response to that assessment, the National Marine Fisheries Service cut harvest by 77% in an attempt to rebuild the stock and prevent its collapse.  And that’s when the beer-review began, with reviewers claiming that the harvest cuts were “based on flawed science.”

Because, as any beer-reviewer knows, the science is always “flawed” (or “bad”) when it doesn’t let you kill enough fish. It doesn’t matter what species is involved, what state you’re in or whether the guy who’s talking runs a charter boat or a commercial trawler.  They’ll explain that [fill in the precise words and the type of fish as needed…] “We’re out there every day.  There’s a lot of commercial fishermen who will tell you they’ve had a hard time staying away from the codfish.  The charter boats do not see a downfall on the codfish.  In fact, it’s probably some of the best fishing in 2011 and 2010 that I’ve seen in 35 years.”

Yes, they’ll tell you that.  No numbers or population models or analysis needed.  Just take their word for it, there’s plenty of cod…

And, of course, cod aren’t the only species subject to beer-reviewed assessments.  You can mention just about any stock of fish out there, and if NMFS wants to conserve or rebuild it, the beer-review panel will tell us that the peer-review panel was wrong.

For another example, we can look at red snapper in the Gulf of Mexico, where last year’s peer-reviewed stock assessment panel found that the population, though recovering, still has a lot of rebuilding to do.  The spawning stock is just about half of the target level, and far below where it was as recently as 1970 or so—far, far below where it was before commercial exploitation began in the late 1800s.

However, the beer-reviewed assessments said otherwise, saying “…red snapper are more plentiful, bigger and further spread out than at any time in the past,” and “They don’t really know what’s off the Alabama coast and on our reefs.  I’ve been fishing in the Gulf 25 years and I can tell you what’s not out there anymore:  there are no beeliners, no triggerfish and very few if any grouper.  All you can catch out there right now are snapper.”

In the South Atlantic, where stock assessments indicate that the species is far more depleted than it is in the Gulf, they said the same thing about red snapper a few years ago, noting “They still keep claiming that we’re fishing at eight times the sustainable rate, and I just don’t see that being true.  If that was the case … the amount of fish we’re catching per trip would fall. It would not be consistently the same over the last 10 years. And it definitely would not be getting better.”

That’s similar to this complaint about South Atlantic black sea bass, where “[A charter boat captain] takes issue with data used to close black sea bass fishing, saying the species is actually overpopulated because of past closures. He claims that is resulting in black sea bass eating different species of fish, like grouper, and causing disruption for other ecosystems.”

Does anybody notice a pattern here?

The essence of science—including peer review—is that it is objective, data-driven and subject to verification; that is, other folks can take the same data and come up with the same results or, at the worst, confirm that the data doesn’t include any calculation or sampling errors, and so is statistically valid. Any biases that might be included (in fisheries management, they show up as “retrospective changes” in the population model when new data is added) are recognized and accounted for.

The essence of the beer review is that it is subjective, not based on data and cannot be verified by independent, objective observers.  Sampling is biased—that is, fishermen who issue beer-reviewed assessments don’t take random samples or try to test the “null hypothesis” that the peer-reviewed assessment was right, but rather go to the places where they are most likely to catch fish, with the express intent of showing why the peer-reviewed assessment is wrong.  And since the beer-reviewers will probably increase their incomes—or at least their catch, if purely recreational—by discrediting the science, their motive to do so is strong and the likelihood of bias, which they never admit to, is even greater.

The other problem with beer-reviewed assessments is that they take the very localized experiences of individuals, who may have relatively little historical knowledge of a fishery, and try to extrapolate that limited experience to the entire stock.

Even when a stock is badly depleted, a chance combination of circumstances can lead to pockets of local abundance, and some folks will still be putting plenty of fish on the dock when everyone else is suffering through a real drought.  That can be particularly true in commercial fisheries, where the skill of the fishermen, coupled with their willingness to switch ports in order to follow what’s left of the resource, can keep catches high and convince them that there’s still plenty of fish left to catch, even when the overall population is down.

However, it can also occur in recreational fisheries.  Perhaps the best example occurred decades ago; the Atlantic striped bass stock was beginning to collapse, and fishermen from Maine to North Carolina were beginning to notice the decline.  But on Cape Cod, at the core of the striper’s summer range, a lot of big fish were still available, and many fishermen denied the truth of stock assessments showing that the bass were in rapid decline, merely because their limited experience didn’t support it.

Something similar can occur when a badly depleted stock—particularly one that’s been down for a very long time—begins to recover.  Fishermen start encountering a lot more fish than they had before, and begin to declare that the stock’s health is good based on their own subjective experiences, when the objective truth is very different.  Roy Crabtree, Director of NMFS’ Southeast Regional Office, noted this phenomenon with respect to South Atlantic red snapper. “It looks like we had a strong year class about five years ago or so where we had good reproductive success by red snapper and so we have a lot of young fish that are being caught right now.  But that’s a far cry from recovery and rebuilding.”

Yet to fishermen who never saw so many fish before, all is well.

Such misconceptions can do real harm to fisheries management efforts, because fishermen often question solid, peer-reviewed stock assessments and accept beer-reviewed versions that lead to more palatable conclusions.

A lot of that results from a natural tendency to cherry-pick information that supports what you want to believe. I think that there are very few fishermen who want to go out and intentionally fish a stock into collapse; after all, that would destroy either their business or their avocation.

However, there is little question that fishermen want to go out and catch fish, and faced with government regulators who insist that fish stocks are in bad shape, and fishermen (and fishermen’s organizations) who use their reputations and experience on the water to add credibility to claims that the stocks are OK, they tend to credit the folks who they know—and who say what they want to hear—over the remote and sometimes standoffish “experts” from some lab in another state.

And then, of course, there are also the people, businesses and organizations who elevate their own short-term concerns over the long-term health of the stocks, and thus profit from sowing distrust and even contempt for professional fisheries managers.

So the question is, what are fisheries managers, conservation advocates and well-intentioned fishermen to do about beer-reviewed assessments that turn public opinion against science and threaten the health of fish stocks?

For fishermen, the best approach is to take up the scientists’ strongest weapon—simple skepticism.  Don’t take anyone’s word for anything; make them show you the data.  When somebody tells you that “there are more red snapper than there’s been for 100 years,” as the irate employee of an advocacy outfit wrote to me just a few days ago, the response should be “show me the numbers,” which, of course, indicate just the opposite.  Anglers may not be able to confirm the results of a stock assessment from the data provided—I know that I lack the math skills to do it—but they can take comfort from the fact that it has passed a rigorous and detailed peer review.

Conservation groups that engage in fisheries issues are the natural allies of fishermen; both want healthy and abundant stocks.  But, sorry to say, the trust isn’t there.  Some of that is due to a few fishing industry groups, mostly based in the north, who have spent the last couple of decades trying to poison the waters and foster angler distrust of conservation efforts.  Some of it is due to the conservation groups themselves, who have occasionally embarked on campaigns that made anglers feel threatened (marine protected areas, anyone?), failed to engage in sufficient consultation and collaboration, and so lent credence to claims that they were “anti-fishing” and intended to force folks off the water.  A reconciliation, involving real and sincere outreach, is badly needed here.

The problem with fisheries managers is much the same.  Although some managers are better than others when it comes to user-group outreach (the Mid-Atlantic Fishery Management Council deserves particular plaudits here), too many times agencies don’t make enough of an effort to explain—in simple terms and in the right forums—why unpopular but necessary decisions are made.  Such standoffishness serves to make fishermen hostile, and more willing to believe the demagogues’ claims that the fisheries management system, which has already rebuilt many stocks and is poised to rebuild even more, is “broken” and needs to be scrapped.

For scrapping the system would be a bad thing.  If that happens, the beer-reviewed assessments will give us a hangover that we may not cure in our lifetimes.

