We present links to recent scientific pieces on doping in sports. These articles take some time to digest, so no analysis from us right now; frankly, we aren't smart enough.
1. The Canadian Medical Association Journal editorial.
It is almost 20 years since the Ben Johnson scandal at the Olympics in Seoul, Korea, drew attention to the issue of doping. Have we made progress since then in addressing this important sport and public health concern? Will the Olympic ideal survive, or will it be lost in a sea of hormone, steroid and stimulant abuse...
Athletes are now competing on an Olympic stage. Some performances may be suspect, and some may be proven illegitimate. But the overwhelming majority of athletes will compete fairly and cleanly. Sadly, their accomplishments are sometimes overshadowed by the debased conduct of a minority. Athletes from Canada and from other countries are the product of sport systems that work tirelessly to address the doping scourge.
Antidoping programs in an Olympic setting ensure fair play and maintain public trust. A concern for drug use in community sport is equally important; the implications of such behaviour compel thoughtful, strategic interventions to ensure sporting integrity and public health.
2. Donald Berry in Nature says that the anti-doping labs are going down the wrong path.
In my opinion, close scrutiny of the quantitative evidence used in Landis's case shows it to be non-informative. This says nothing about Landis's guilt or innocence. It rather reveals that the evidence and inferential procedures used to judge guilt in such cases don't address the question correctly. The situation in drug-testing labs worldwide must be remedied. Cheaters evade detection, innocents are falsely accused and sport is ultimately suffering.
Nature believes that accepting 'legal limits' of specific metabolites without such rigorous verification goes against the foundational standards of modern science, and results in an arbitrary test for which the rate of false positives and false negatives can never be known. By leaving these rates unknown, and by not publishing and opening to broader scientific scrutiny the methods by which testing labs engage in study, it is Nature's view that the anti-doping authorities have fostered a sporting culture of suspicion, secrecy and fear.
Detecting cheats is meant to promote fairness, but drug testing should not be exempt from the scientific principles and standards that apply to other biomedical sciences, such as disease diagnostics. The alternative could see the innocent being punished while the guilty escape on the grounds of reasonable doubt.
4. However, The Questionable Authority begs to differ about exposing all the pimples and warts...the charlatans will develop a blueprint to beat the testers.
From Berry's article:
Whether a substance can be measured directly or not, sports doping laboratories must prospectively define and publicize a standard testing procedure, including unambiguous criteria for concluding positivity, and they must validate that procedure in blinded experiments. Moreover, these experiments should address factors such as substance used (banned and not), dose of the substance, methods of delivery, timing of use relative to testing, and heterogeneity of metabolism among individuals.
In an ideal world, this is exactly the way things should work. Unfortunately, we don't live in an ideal world. There's a very real problem that will arise if the exact methods and criteria are publicized. As the folks at Nature point out in their editorial, there is an intense ongoing arms race between the people who make the drugs and the people who design tests. If the exact testing criteria are publicized, the drug makers will know exactly what they need to do to beat the tests.
That's a problem, and it's not an insignificant one.
If you provide all the testing details, it will stimulate the development of new methods for evading the testing, and make it much more difficult for the testing to achieve its goal. If you don't provide assurances that the testing methods are objective and reliable, it will continue to inject elements of distrust and paranoia into athletics. It's a delicate problem.
5. In another post, Mike Dunford takes on the Berry article on practical validity.
To put it another way, if the A and B samples both test positive for the same substance, there's very, very little chance that it's the result of anything other than something that is actually in the sample. At this point, the question becomes somewhat different: are the markers that the test looks for conclusive proof that a banned substance has been used? If they're not, they shouldn't be used in tests that can break someone's career.
That last is a harder question, and it's one where there really is the need for much more scientific examination of the testing procedures, as well as much more openness on the part of the testing authorities. That concern is very valid, and should be addressed. But on the whole, things are not as grim for athletes as Berry's article implies.
6. A comment on The Questionable Authority concludes this:
After reading your comments, it seems that Berry's analysis is not merely misleading; it's downright wrong.
My understanding of Landis's numbers is: T/E ratios in normal folks are about 1:1. It takes a ratio of 4:1 to be guilty (so figure that threshold includes about 99.5% of folks). Landis was 11:1.
If 4:1 is 3 standard deviations, 11:1 is likely in the neighborhood of 12 standard deviations. To get this far away from normal without cheating would be a medical 'miracle'.
I'm not saying that it couldn't happen but you would think that something so extreme would have shown up in at least one of the numerous other samples he's given over the years.
Posted by: David C. Brayton | August 11, 2008 8:26 PM
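The back-of-the-envelope arithmetic in that comment can be checked directly. Here is a minimal sketch, assuming the commenter's simple linear model (a population mean T/E of 1:1, with the 4:1 threshold sitting roughly 3 standard deviations above it); these are the comment's assumptions, not a validated model of real T/E distributions:

```python
# Rough z-score check for the T/E ratio comment, under an assumed
# simple linear model (not a validated model of T/E distributions).
mean_te = 1.0        # assumed population mean T/E ratio (1:1)
threshold = 4.0      # positivity threshold cited in the comment (4:1)
threshold_z = 3.0    # the comment's estimate: threshold is ~3 SD above the mean

sd = (threshold - mean_te) / threshold_z   # implied SD = 1.0

landis_te = 11.0
z = (landis_te - mean_te) / sd
print(f"implied SD: {sd:.2f}, Landis z-score: {z:.1f}")  # about 10 SD
```

Under these assumptions the arithmetic gives about 10 standard deviations rather than the comment's 12, but the point stands either way: under any roughly normal model, an 11:1 ratio arising by chance would be extraordinarily unlikely.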
A note on the problem with multiples by Berry in Nature:
The problem with multiples
Landis seemed to have an unusual test result. Because he was among the leaders he provided 8 pairs of urine samples (of the total of approximately 126 sample-pairs in the 2006 Tour de France). So there were 8 opportunities for a true positive — and 8 opportunities for a false positive. If he never doped and assuming a specificity of 95%, the probability of all 8 samples being labelled 'negative' is the eighth power of 0.95, or 0.66. Therefore, Landis's false-positive rate for the race as a whole would be about 34%. Even a very high specificity of 99% would mean a false-positive rate of about 8%. The single-test specificity would have to be increased to much greater than 99% to have an acceptable false-positive rate. But we don't know the single-test specificity because the appropriate studies have not been performed or published.
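Berry's multiple-comparisons arithmetic is easy to reproduce. A minimal sketch, using his hypothetical per-test specificities (since, as he notes, the real single-test specificity has never been published):

```python
# Probability that at least one of n independent tests comes back a
# false positive for a clean athlete, given per-test specificity.
def race_false_positive_rate(specificity: float, n_tests: int) -> float:
    return 1.0 - specificity ** n_tests

for spec in (0.95, 0.99):
    fp = race_false_positive_rate(spec, 8)   # Landis gave 8 sample-pairs
    print(f"specificity {spec:.0%}: race-wide false-positive rate {fp:.0%}")
```

This reproduces Berry's figures: roughly a 34% race-wide false-positive rate at 95% specificity, and roughly 8% even at 99%.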
However, those p-values at the 0.95 level are determined from a population sample. Say a scientist compares blood pressure between two experimental groups: Drug A mean = 100/70 and Drug B = 130/90 (OK, no SDs, shoot me). If the scientist drew 100 such samples from a single population, could this difference be due to chance alone? The statistics give you a clue: significance at the 5% level says that in 5 of 100 such experiments, a difference between the means this large would arise from chance sampling alone.
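That meaning of "significant at the 5% level" can be demonstrated with a quick simulation. A minimal sketch (the blood-pressure mean and spread are illustrative assumptions, not data from any study): when both groups really are drawn from the same population, about 5 of every 100 experiments still cross the p < 0.05 line by chance.

```python
# Monte Carlo illustration: two groups drawn from the SAME population
# still look "significantly different" in about 5% of experiments.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_experiments = 2000
false_alarms = 0
for _ in range(n_experiments):
    a = rng.normal(loc=120, scale=15, size=30)  # assumed systolic BP values
    b = rng.normal(loc=120, scale=15, size=30)  # same population as group a
    _, p = ttest_ind(a, b)
    false_alarms += p < 0.05

print(f"chance 'significant' results: {false_alarms / n_experiments:.1%}")
```

The printed fraction hovers around 5%, which is exactly what the significance level promises.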
However, Landis's samples were not drawn from a population; they were drawn from one man's blood. Thus the statistical thinking may not be valid here. If we as physicians see a patient bleeding and measure a HCT of 45 one minute and a HCT of 26 the next minute, we don't run around thinking this could be sampling variation... hell, the patient will die if we treat it as a false positive.
Moral: statistics are a tool to be used, not to use you as a tool. When you see an impossible athletic feat, followed by a high T:E ratio, and synthetic carbons in a steroid, common sense leads you to a conclusion.