
Saturday, 19 July 2014

Apocalypse in world fisheries? The reports of their death are greatly exaggerated - another view from ICES

Another report - this time from ICES, whose data were used for the 'scraping the barrel' report - and a cautionary tale!


"The catch-based methods underlying the forecast that by 2048 all commercially exploited stocks will have collapsed have been severely criticized, and a recent and more-elaborate analysis by a group of scientists that included the lead author of the original article has led to a quite different interpretation. Nonetheless, the 2006 forecast of a forthcoming apocalypse in the oceans is still uncritically referred to by critics of current management and fisheries science. In the title, the quote by Mark Twain is paraphrased to underline the fact that this prediction is both technically and conceptually flawed: (i) any series of random numbers subjected to the algorithm underlying the prediction will show a pattern similar to that observed in catch statistics; (ii) this pattern should be accounted for in making predictions; and (iii) interpreting the period of maximum harvest in a time-series as generally reflecting a period during which a stock was fully exploited is incorrect, because history often has shown that these maximum yields were taken during a period of overexploitation and could not have been sustainable.

There is worldwide public concern, supported by scientific publications, that fisheries are depleting marine resources and that fisheries management is globally ineffective at halting this process. We concur that there is ample hard evidence that unsustainable fishing and management practices are widespread (Worm et al., 2009; Hutchings et al., 2010; FAO, 2011), but the solution will not be found in soundbites based on unsound evidence and erroneous interpretation. The political response to evidence of technical mistakes in the report of the Intergovernmental Panel on Climate Change, and the media attention these errors received, illustrate the counterproductive impact of even relatively small scientific errors when the subject matter is important to policy.

The prediction by Worm et al. (2006) that by 2048 all commercially exploited stocks will have collapsed through continuing overexploitation became a focal point for public and media concern about the state of world fisheries. Several basic objections have been raised against the methods underlying that prediction (Hilborn, 2007; Hölker et al., 2007; Jaenike, 2007; Longhurst, 2007; Wilberg and Miller, 2007; Branch, 2008), and more recently, the interpretation of stock-status development has been revised substantially after collaborative analyses by some of the original authors and their critics (Worm et al., 2009). Nonetheless, rebuttals published with the criticisms of the 2006 prediction continued to argue that evaluation of trends in stock status based only on trends in catch statistics is scientifically sound (Worm et al., 2007; Froese and Kesner-Reyes, 2009). Moreover, the original findings are still being used to ring alarm bells for rapidly dwindling marine resources (Pauly, 2007, 2008, 2009; Pauly et al., 2008; Zeller et al., 2008), without paying due attention to the objections raised.

Worm et al. (2006) based their evaluation on the premise that a reported catch that is 10% of the historical maximum is a valid criterion for designating a stock as being in a collapsed state. A full description of the algorithm for a more elaborate catch-based stock classification (of which “collapsed status” was only one component) has been published by Froese and Kesner-Reyes (2002) and again by Zeller et al. (2008). This classification interprets the catch in a particular year (relative to the historical maximum in a time-series) as being indicative of stock status, taking into account whether that year happened to be before or after the year of the maximum catch. We contend that the method is both technically and conceptually flawed and that any predictions derived from it represent flawed prophecies.

Technical flaws

The algorithm used by Zeller et al. (2008) for defining five levels of stock status is simple. Representing the catch (reported landings) in year Y_C by C_Y, and the maximum landings by C_max, taken in year Y_Cmax, the following definitions apply (a short code sketch of the classification follows the definitions):

1 undeveloped: Y_C < Y_Cmax and C_Y < 0.1 C_max;

2 developing: Y_C < Y_Cmax and 0.1 C_max < C_Y < 0.5 C_max;

3 fully exploited: C_Y > 0.5 C_max;

4 overexploited: Y_C > Y_Cmax and 0.1 C_max < C_Y < 0.5 C_max;

5 collapsed: Y_C > Y_Cmax and C_Y < 0.1 C_max.
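
To make the scheme concrete, here is a minimal Python sketch of the classification (our own illustration, not code from the article). It takes the maximum over the full series, which is one reading of the definitions above; the function name, example landings, and tie-breaking conventions are ours:

```python
# Minimal sketch of the five-class, catch-based status algorithm described above.
# Function name, example data, and tie-breaking conventions are illustrative only.

def classify_status(catches, year_index):
    """Return the status class of a stock in one year of its catch series.

    `catches` is the full time-series of reported landings; `year_index`
    is the (0-based) position of the year being classified. The maximum
    is taken over the whole series, and a year counts as 'before' or 'after'
    depending on its position relative to the (first) year of that maximum.
    """
    c_y = catches[year_index]
    c_max = max(catches)
    y_cmax = catches.index(c_max)          # first year in which the maximum was taken

    if c_y >= 0.5 * c_max:                 # class 3 applies regardless of the year
        return "fully exploited"
    if year_index < y_cmax:                # before the year of the maximum catch
        return "undeveloped" if c_y < 0.1 * c_max else "developing"
    return "collapsed" if c_y < 0.1 * c_max else "overexploited"


# Hypothetical landings series showing all five classes in sequence:
landings = [5, 20, 60, 100, 55, 30, 8]
print([classify_status(landings, t) for t in range(len(landings))])
# ['undeveloped', 'developing', 'fully exploited', 'fully exploited',
#  'fully exploited', 'overexploited', 'collapsed']
```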

In a comment on the prediction by Worm et al. (2006), Wilberg and Miller (2007) applied the definition for collapsed stocks (status class 5) to series of simulated catch numbers fluctuating randomly around a stationary mean of a lognormal distribution with varying degrees of autocorrelation. They showed that the proportion of stochastic series classified as collapsed necessarily increased over time and depended on the coefficient of variation (CV) of the random errors. We extend their approach to evaluate the entire algorithm using series of random numbers (in the example running over 50 “years” for 100 “stocks”). As our approach is intended to be illustrative, we made what we consider the simplest plausible assumptions about the nature of the distribution, degree of autocorrelation, and CV, simulating a base case where the numbers varied randomly using a uniform distribution between 0 and 1. Many factors affect how reported catches are distributed, because trends in actual catches depend on the specific investment history within each fishery as well as the response of each stock. Moreover, reported catches may depend on management measures such as input or output controls, as well as on compliance. Therefore, we argue that the time-series of reported catches from fisheries, whether or not they have gone through the entire cycle from undeveloped to collapsed, cannot be characterized by a single arbitrarily chosen statistical distribution, level of autocorrelation, or CV.
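
The snippet below is a rough illustration of this kind of simulation (our own sketch, with an arbitrary seed and sample years, not the authors' code): 100 uniform-random "stocks" of 50 "years" are classified with the same rules, and the fraction in each class is printed for a few years.

```python
import random
from collections import Counter

N_STOCKS, N_YEARS = 100, 50
CLASSES = ("undeveloped", "developing", "fully exploited", "overexploited", "collapsed")

def classify_status(catches, year_index):
    # Same logic as the classification sketch given above.
    c_y, c_max = catches[year_index], max(catches)
    if c_y >= 0.5 * c_max:
        return "fully exploited"
    before_max = year_index < catches.index(c_max)
    if c_y < 0.1 * c_max:
        return "undeveloped" if before_max else "collapsed"
    return "developing" if before_max else "overexploited"

random.seed(1)  # arbitrary seed, for repeatability
stocks = [[random.random() for _ in range(N_YEARS)] for _ in range(N_STOCKS)]

for year in (0, 12, 24, 37, 49):  # a few sample "years" (0-based)
    counts = Counter(classify_status(series, year) for series in stocks)
    fractions = {c: counts[c] / N_STOCKS for c in CLASSES}
    print(year + 1, fractions)
```

With pure noise as input, the undeveloped and developing fractions shrink towards zero, the overexploited and collapsed fractions grow correspondingly, and the fully exploited fraction stays near one half - the built-in pattern described in the next paragraph.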

Figure 1 (top panel) shows that under the conditions of a uniform distribution of random numbers, the algorithm inherently leads to linear temporal trends in the fraction contributed by the five status classes. At each point in the time-series, approximately half the series have a value >50% of the greatest value up to that point and would be classified as fully exploited. By definition, overexploited and collapsed stocks cannot exist in the first year, and undeveloped and developing stocks cannot exist in the last year. These built-in trends have important consequences, because they indicate that the statistics commonly used to determine the significance of trends are invalid: statistical significance is supposed to indicate that a finding is unlikely to be the result of the null hypothesis of random cause, yet the trends seen in the top panel of Figure 1 are just that!

Figure 1

Results of applying the algorithm for defining status classes: (top) to simulated random numbers in 50-“year” time-series for 100 “stocks”; and (bottom) to FAO catch statistics for various LMEs, 1950–2004 (reproduced from Pauly, 2008, with permission from the Journal of Biological Research-Thessaloniki).

For comparison, the results based on FAO catch statistics for selected species from various large marine ecosystems (Pauly, 2008) are also given in Figure 1 (bottom panel; apparently, undeveloped and developing stocks have been combined). The similarity between the two plots is striking. This is not to suggest that FAO catch statistics represent random numbers, though they may indeed vary over time for many different reasons (e.g. exploitation-related, environmental, and political; Branch, 2008). Rather, it is to say that deviations should be evaluated against the built-in patterns caused by the algorithm applied, and not against the null hypothesis of no trend (Wilberg and Miller, 2007). Therefore, analysis of the FAO statistics can only conclude that the fraction of collapsed stocks increases faster, and the fraction of undeveloped and developing stocks is initially higher, than predicted by time-series of random numbers. However, finding an appropriate statistical test for the significance of this observation would be difficult, because the underlying distributions are not known a priori.

Conceptual flaws

Using catch (the weight of fish taken out of the sea) as a proxy for stock biomass (the weight of fish in the sea) is a major conceptual flaw. Put simply, catch is the product of biomass and a variable harvest rate, so a trend in either one affects the catch; no harvest means no catch, regardless of the state of the biomass.
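
To illustrate (with hypothetical numbers of our own, not data from the article): two stocks can report identical catch series even though one biomass is untouched and the other is being fished down.

```python
# Catch as the product of a harvest rate u and biomass B: C = u * B.
# All numbers below are invented for illustration.

# Stock A: biomass stays at 1000 t while managers cut the harvest rate back.
biomass_a = [1000, 1000, 1000, 1000, 1000]
harvest_a = [0.20, 0.15, 0.10, 0.05, 0.015]

# Stock B: the harvest rate stays at 0.20 while the biomass is fished down.
biomass_b = [1000, 750, 500, 250, 75]
harvest_b = [0.20, 0.20, 0.20, 0.20, 0.20]

catch_a = [round(u * b, 1) for u, b in zip(harvest_a, biomass_a)]
catch_b = [round(u * b, 1) for u, b in zip(harvest_b, biomass_b)]

print(catch_a)  # [200.0, 150.0, 100.0, 50.0, 15.0]
print(catch_b)  # [200.0, 150.0, 100.0, 50.0, 15.0]
# Both series end below 10% of their maximum, so a catch-only rule would label
# both stocks "collapsed", even though stock A's biomass never declined at all.
```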

Moreover, no rationale for the specific 10 and 50% criteria (relative to the maximum historical catch) used to define the stock-status classes has been provided by Worm et al. (2006) or Zeller et al. (2008), nor in the original document in which the algorithm was first presented (Froese and Kesner-Reyes, 2002). As noted by Wilberg and Miller (2007), Worm et al. (2006) “seem to suggest that maximum historic catch represents an achievable and sustainable target for fisheries management”. That assumption has rightly been questioned, because for many stocks the maximum historical catch has proven to be unsustainable. More commonly, these maximum catches coincide with a period of rapid development of the fishery, producing a temporary bonanza while the biomass is being depleted. They therefore represent a period of overexploitation of a stock rather than one of full exploitation.

Flawed prophecies

If the catch-based evaluation of the current status of global fish stocks is flawed, what can we expect of extrapolations far beyond the time horizon of the time-series? As pointed out by Hölker et al. (2007), a causal relationship between stock collapses and time itself has not been demonstrated. For uniformly distributed random numbers, the algorithm predicts that 50% of the “stocks” will be fully exploited and 50% will be overexploited or collapsed at the end of any time-series, irrespective of its length. These figures obviously depend on the level of autocorrelation and the CV of the simulated data, but it is important to note that the rate of increase in the number of “stocks” classified as overexploited or collapsed will depend on the length of the time-series (cf. Branch et al., in press). Therefore, the trend observed when the algorithm is applied to real data should not be extrapolated without first accounting for the change in steepness caused by random variation in the data over the entire period, relative to the historical period."
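
That last prediction is easy to check with a quick simulation (again a sketch of our own, with arbitrary settings, not the authors' code): for uniform random series, the final-year split is indeed close to half fully exploited and half overexploited or collapsed, whatever the length of the series.

```python
import random

def final_year_status(series):
    """Status of the last year, judged against the series maximum (same logic
    as the earlier sketch; undeveloped/developing cannot occur in the final
    year, because the maximum cannot lie in the future)."""
    c_last, c_max = series[-1], max(series)
    if c_last >= 0.5 * c_max:
        return "fully exploited"
    return "collapsed" if c_last < 0.1 * c_max else "overexploited"

random.seed(2)  # arbitrary seed
for n_years in (10, 25, 50, 100):
    sims = [[random.random() for _ in range(n_years)] for _ in range(10_000)]
    fully = sum(final_year_status(s) == "fully exploited" for s in sims) / len(sims)
    print(f"{n_years:>3} years: {fully:.2f} fully exploited, "
          f"{1 - fully:.2f} overexploited or collapsed")
```

The split edges closer to an even 50/50 as the series lengthens; with short series, slightly more than half the "stocks" end up classified as fully exploited in the final year.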

See the full report page here: