There was a time when considerable effort was spent trying to convince independent advisers and researchers that low doses cause pesticide resistance. There wasn’t much scientific rigour in the argument; apparently target organisms ‘got used to’ low doses and that was the start of resistance development.
That argument never struck a chord with me. My first experience of resistance was powdery mildew developing resistance to Bayleton (triadimefon) in the late 1970s/early 1980s, after only a few years of use. We knew from trials that a quarter dose initially controlled the disease in the field, and most growers at the time were using doses well above this. So resistance wasn’t a result of incomplete control.
The science on the causes of resistance has been transformed by biotechnology. Typically, target site resistance is caused by genetic mutations that block the activity of pesticides at their single site of action. Enhanced metabolism resistance is often caused by over-expression (hyperactivity) of genes that, in susceptible target organisms, result only in slow breakdown of the pesticide.
This suggests that typically it is high selection pressure (high doses, particularly if regularly repeated) that leads to the most rapid increase in resistance, as only the most resistant target organisms will survive a high dose. Provided these survivors are roughly as fit as the susceptible organisms to compete and multiply, they will gradually come to dominate the population, following the principles of evolution first described by Charles Darwin.
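That reasoning can be sketched as a toy calculation. This is a minimal sketch of my own, not anyone’s published model: it assumes a single resistance trait, survival rates that are pure invention for illustration, and a population that regrows fully between treatments.

# A minimal sketch, in Python, of Darwinian selection for resistance.
# All figures are assumptions for illustration only.

def resistant_fraction_over_time(kill_susceptible, kill_resistant=0.05,
                                 start_fraction=0.001, generations=8):
    """Track the share of resistant individuals after repeated treatments."""
    fraction = start_fraction
    history = [fraction]
    for _ in range(generations):
        resistant = fraction * (1 - kill_resistant)            # resistant survivors
        susceptible = (1 - fraction) * (1 - kill_susceptible)  # susceptible survivors
        fraction = resistant / (resistant + susceptible)       # regrowth keeps the new proportions
        history.append(fraction)
    return history

# High dose kills 99% of susceptibles; low dose kills 70%.
high = resistant_fraction_over_time(kill_susceptible=0.99)
low = resistant_fraction_over_time(kill_susceptible=0.70)
print("High dose:", [round(f, 3) for f in high])
print("Low dose: ", [round(f, 3) for f in low])

On these made-up numbers the resistant share dominates within two or three treatments under the high dose, but takes several more under the low dose: the stronger the selection pressure, the faster resistance takes over.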
[You will have noticed that I’ve extensively used the word ‘typically’ so far in this blog. This is because over the years I have learnt that whilst simple principles apply in the vast majority of occurrences of resistance, there may be some exceptions...]
So far, so good.
Then recently I read some scientific papers suggesting that low, rather than high, doses were causing herbicide resistance in weeds in Australia. My first thought was ‘how dare these colonials challenge our cherished Darwin’, so when I visited earlier this year I disputed their conclusions.
It was easily resolved: the argument was that low doses left higher numbers of weeds carrying some level of resistance. However, there was an acceptance that high doses may result in a more rapid increase in resistance, so it became an argument about population size versus the rate of resistance development.
In the UK we have had similar statements that less effective treatment caused more resistance. It is said that spring applications of ‘fops’ and ‘dims’ caused more resistance in black-grass than autumn applications.
Autumn applications typically (that word again) gave more effective control and so only the most resistant black-grass survived. This would, as a consequence, lead to a more rapid increase in resistance. Despite resistance increasing more slowly as a result of spring applications, after several years it may still have increased to the same maximum level as that from autumn applications. However, being the less effective timing also meant that there would be a lot more black-grass plants present, hence the assertion that the spring timing caused more resistance than the autumn timing.
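The same argument can be sketched by tracking both the resistant fraction and the number of surviving plants each season. Again, this is my own illustration, not data from any black-grass trial: the kill rates, seed return and carrying capacity are all invented numbers.

# A toy model of the black-grass argument: rate of resistance versus
# weed numbers. Every figure below is an assumption for illustration.

def season_series(kill_susceptible, kill_resistant=0.10, plants=1000.0,
                  resistant_fraction=0.001, seed_return=20,
                  carrying_capacity=50000, seasons=8):
    """Return (surviving plants, resistant fraction) for each season."""
    results = []
    for _ in range(seasons):
        resistant = plants * resistant_fraction * (1 - kill_resistant)
        susceptible = plants * (1 - resistant_fraction) * (1 - kill_susceptible)
        survivors = resistant + susceptible
        resistant_fraction = resistant / survivors
        results.append((round(survivors), round(resistant_fraction, 3)))
        # Seed return from the survivors, capped at what the field can carry.
        plants = min(survivors * seed_return, carrying_capacity)
    return results

autumn = season_series(kill_susceptible=0.97)   # more effective timing
spring = season_series(kill_susceptible=0.70)   # less effective timing
print("Autumn:", autumn)
print("Spring:", spring)

On these made-up numbers the resistant fraction reaches almost 100% within four seasons under the effective autumn timing, after which control collapses anyway; under the spring timing resistance climbs more slowly towards the same end point, but the poorer control leaves far more black-grass plants in the field through those early seasons, which is essentially the point being argued.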
In practical agriculture, the argument over the rate of resistance development versus population size is perhaps rather pedantic. Nobody intentionally uses doses that will provide inadequate control. Where lower than recommended doses are used, which is rare for black-grass, they should still be high enough to provide sufficient control in both the current and future crops.
Hence, Darwin still rules OK! However, there are apparently many in the US Republican Party who still disagree!