Badger culls, fishing quotas, global warming and freakish weather: all have been in the news recently, and all rely on computer models to make the predictions that inform decision making.
If we take the badger cull, there has been huge criticism over whether it will work. Targets indicated that Gloucestershire's badger population needed to be cut by 70%; otherwise the cull could end up spreading TB more widely than no cull at all. Apparently only 708 badgers were killed, an estimated 30% of the population (does this mean there are only 2360 badgers in Gloucestershire?).
Is it practically possible to provide more certainty about population numbers, effective cull percentages and even whether badger culls will work?
Perhaps modern computational methods can help.
For example, fishing quotas (or at least the scientific targets, before the politicians become involved) largely rely on population estimates derived from accounting methods developed many decades ago. If a fish species typically lives five years, and you estimate the number of each age group caught each year, then you can work backwards to estimate the population size five years ago. Most of these methods, however, ignore biological effects such as competition and predation from other fish, because the parameters involved are hard to define (a few methods do account for predation, relying on repeated sampling of fish stomach contents to find out what they eat).
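To make the back-calculation idea concrete, here is a toy sketch in Python. It deliberately ignores natural mortality and assumes every fish in a cohort is eventually caught; real stock-assessment methods (such as Virtual Population Analysis) add a mortality term. The catch numbers are invented for illustration.

```python
def cohort_sizes(catch_at_age):
    """Reconstruct the number of fish alive at each age from the
    catches taken from one cohort at ages 0..n.

    Working backwards: fish alive at age a = fish alive at age a+1
    plus those caught at age a. (Toy version: no natural mortality.)
    """
    alive_after = 0  # assume none survive past the oldest age observed
    sizes = []
    for caught in reversed(catch_at_age):
        alive_after += caught
        sizes.append(alive_after)
    return list(reversed(sizes))

# Hypothetical catches from one cohort at ages 0-4 (thousands of fish)
catches = [10, 40, 30, 15, 5]
print(cohort_sizes(catches))  # [100, 90, 50, 20, 5]
```

So from five years of catch records alone, the method infers that the cohort started at around 100 thousand fish; the weakness, as noted above, is that none of those numbers account for fish lost to predation or competition rather than to nets.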
However, given that most fish eat things which fit into their mouths, a computational approach may give a good indication of predation rates: perhaps an agent-based model in which fish move around at random, interacting with other fish and consuming smaller ones.
Of course, scaling an individual-based model up to predict species interactions across the North Sea fish stocks is a massive task involving huge amounts of computing time, and even then, without accurate knowledge of how fish behave, it may prove much less accurate than the simple accounting methods currently used.
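A minimal sketch of such an agent-based model might look like the following. Every parameter here (grid size, number of fish, the rule that a fish eats anything under half its own size) is an illustrative assumption, not calibrated to any real fishery.

```python
import random

random.seed(42)

GRID = 10     # fish move on a GRID x GRID wrap-around grid
N_FISH = 200
STEPS = 50

# each fish is [x, y, size]
fish = [[random.randrange(GRID), random.randrange(GRID), random.random()]
        for _ in range(N_FISH)]

predation_events = 0
for _ in range(STEPS):
    # random movement: each fish shifts by -1, 0 or +1 in each direction
    for f in fish:
        f[0] = (f[0] + random.choice((-1, 0, 1))) % GRID
        f[1] = (f[1] + random.choice((-1, 0, 1))) % GRID

    # group fish by cell; the largest fish in each cell eats any
    # fish less than half its size ("fits in its mouth" rule)
    cells = {}
    for f in fish:
        cells.setdefault((f[0], f[1]), []).append(f)
    eaten = set()
    for group in cells.values():
        biggest = max(group, key=lambda f: f[2])
        for f in group:
            if f is not biggest and f[2] < biggest[2] / 2:
                eaten.add(id(f))
                predation_events += 1
    fish = [f for f in fish if id(f) not in eaten]

print(f"{predation_events} predation events; {len(fish)} fish remain")
```

Even a toy like this yields a predation rate as an emergent property of movement and size rules, rather than as a parameter someone had to measure from stomach contents; the hard part, as the text notes, is whether those rules bear any resemblance to real fish behaviour.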
The key point is that different predictive models do different things. They may provide parameter estimates but not predict population change; they may give good short-term predictions but not long-term ones; or they may make excellent temporal predictions with no spatial information.
So how do you know what to choose?
That question is the focus of a workshop to take place at Microsoft Research (Cambridge) on Wednesday 12th March. We want scientists involved in predictive ecology to attend, and to give an honest critique of their methods. What are the advantages? What are the disadvantages and limitations of these approaches? We'd also like the end users to attend, whether you are policy makers, conservationists or industry-based. What do you need from predictive ecology? The aim of the workshop is to develop an honest 'field guide' to predictive methods in ecology. This will make it easier for people to identify suitable methods for particular problems, to understand the benefits and pitfalls of each method, and to judge how the resulting predictions stack up against end user requirements.
For more information, or to register interest in attending, please contact email@example.com.