When to Switch Depends on What You Can Switch to
There is an open question about how to monitor antiretroviral (ARV) therapy in the developing world. In rich countries, HIV RNA and CD4+ cell counts are routine. HIV RNA measures the amount of virus detectable in the blood and gives a rapid readout on the success of therapy. The CD4 count reflects the health of the immune system and usually rises after a period of successful ARV treatment. These tests -- and others -- are routinely performed before starting therapy and every few months thereafter. They are run in central laboratories and cost hundreds or thousands of dollars per year.
But for resource-limited settings, where spending even $100 per year on drugs is a burden, such complete monitoring of therapy is not affordable. Efforts are being made to reduce the cost of diagnostic tests and make them more practical, but for the near future, most agree that only limited monitoring is feasible. The question for the authors of simplified treatment guidelines is: what should be monitored when you can only monitor one parameter?
Theoretically, any amount of viral replication carries the risk that a random drug resistance mutation will occur, which in turn permits more replication, increasing the chances that resistance mutations accumulate and ultimately leading to complete loss of suppression by the initial drug regimen. If the rising viral load can be detected soon enough, then one or more drugs in the regimen can be switched and the virus resuppressed. The most sensitive viral load assay in common use detects HIV RNA starting at 50 copies per mL of blood. Because of the phenomenon of transitory but harmless blips in viral load of up to 1,000 copies, most clinicians who take this aggressive approach would want to confirm a viral breakout above several hundred copies with two HIV RNA determinations a few weeks apart. At that point they might perform a genotypic or phenotypic resistance assay and either intensify the regimen or switch one or more drugs. More conservative physicians may prefer to intensify adherence education and wait until they see a sustained trend of detectable virus approaching or passing 1,000 copies before making a change. With over 20 approved ARV agents in the US, there are options to explore.
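The confirmation step described above -- acting only when a second determination a few weeks later also exceeds the trigger -- can be sketched as a simple rule. This is an illustrative sketch only: the threshold value and the measurement series below are invented for the example, not clinical recommendations.

```python
# A minimal sketch of the confirmation rule described above: treat a
# rise in viral load as real only when two consecutive measurements,
# taken a few weeks apart, both exceed the chosen trigger. The
# 1,000 copies/mL default and the sample series are illustrative.

def confirmed_failure(viral_loads, threshold=1000):
    """Return True if two consecutive HIV RNA results (copies/mL)
    both exceed the threshold, i.e. the rise is not a one-off blip."""
    for earlier, later in zip(viral_loads, viral_loads[1:]):
        if earlier > threshold and later > threshold:
            return True
    return False

# A transient blip to 800 copies, resuppressed on retest: no switch.
print(confirmed_failure([50, 800, 50, 50]))        # False
# A sustained breakout confirmed on a second test: consider a switch.
print(confirmed_failure([50, 1200, 2500, 4000]))   # True
```

The point of requiring two determinations is exactly what the rule encodes: a single elevated result is ambiguous, while two in a row are far less likely to be a harmless blip.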
But treatment options in the developing world are limited, and the question of "when to switch" may ultimately be determined by what is available to switch to. There may be one first-line regimen that is routinely prescribed because it is generally safe and effective, easy to dispense, and affordable to use in mass treatment programs. A second-line therapy, if one is available, will likely be much more expensive, and may be withdrawn due to futility once it has failed. For patients in these settings, a second chance could be their last chance.
Diagnostic assays are likely to be limited also. An ideal product for the developing world might be a disposable, point-of-care dipstick that only returns a semi-quantitative result, say green if above 200 CD4+ cells per mm3 and red if below. The assay would be used by workers in the field to make treatment decisions according to guidelines.
So what are the likely long-term outcomes of using various decision points for switching under these circumstances? Using a highly sensitive HIV RNA assay will force the earliest regimen switch. While this may avert HIV resistance, the patient will also consume additional resources. Viral load determinations must be performed fairly frequently to catch early failures, and if the expensive second-line therapy also fails, therapy may be withdrawn and clinical deterioration will progress to death.
Using a less sensitive cut-off point for viral load (say 1,000 or 5,000) will require fewer determinations, delay triggering an initial switch, extend the time -- and the number of patients -- on their initial regimen, and preserve resources. Since disease progression rarely tracks viral replication closely, overall survival may be significantly extended compared to using an earlier switch point under these resource-limited circumstances.
Using CD4 count to guide regimen switching is yet less sensitive to regimen failure than using HIV RNA, but some would argue that it is a more direct marker of the patient's health. CD4 counting may also be cheaper and more feasible to deploy than RNA tests. Switching is delayed until immune deterioration is evident, although at a cost of the likely accumulation of multiple resistance mutations. But if the second regimen can suppress the virus for a second round of immune recovery, then the overall time to clinical failure may be much longer than that obtained by aggressively switching based on viral load.
The least sensitive method for determining when to switch would depend solely on diagnosing clinical symptoms. In some settings, and on a population basis, this might be effective, although it appears likely that immune recovery is often less successful after symptoms have appeared. Since clinical trials are unlikely, perhaps modeling these scenarios with an eye to maximizing long-term survival can offer some guidance.
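The kind of modeling suggested above can be illustrated with a deliberately simple sketch. Every number below is an invented assumption chosen only to show the shape of the trade-off -- earlier switching preserves the second regimen's potency but spends the first one sooner, while later switching stretches the first regimen at the cost of accumulated resistance. None of it is clinical data.

```python
# A toy model of the switch-point trade-off discussed above. All
# parameters (24 months, 48-month cap, 0.9 baseline success, decay
# rates) are hypothetical assumptions for illustration only.

def total_time_on_therapy(switch_threshold):
    """Return hypothetical months of effective therapy for a given
    viral-load switch trigger (copies/mL)."""
    # Assumption: a laxer trigger delays the switch and stretches the
    # first regimen, but the gain saturates (capped at 48 months).
    months_first = min(48.0, 24.0 + 6.0 * (switch_threshold / 1000))
    # Assumption: resistance mutations accumulate during the delay,
    # eroding the chance that the second regimen suppresses the virus.
    second_success = max(0.0, 0.9 - 0.15 * (switch_threshold / 1000))
    months_second = 36.0 * second_success
    return months_first + months_second

# Under these made-up parameters the total peaks at an intermediate
# trigger rather than at the most or least aggressive extreme.
for threshold in (50, 1000, 5000, 10000):
    print(threshold, round(total_time_on_therapy(threshold), 1))
```

Even a toy like this makes the article's point concrete: the optimal switch point is not necessarily the most sensitive one, and where the peak falls depends entirely on parameters that would have to be estimated for each setting.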
Finally, if monitoring and switching rules tailored to available resources are able to produce better long-term outcomes than those based on virologic abstractions, then universal treatment guidelines for resource-limited settings may not be advisable. "When to switch" may be best decided by program and national policy makers based on what is available, what is affordable, and what produces the best outcome for the greatest number of people under those specific conditions.
This article was provided by Gay Men's Health Crisis. It is a part of the publication GMHC Treatment Issues. Visit GMHC's website to find out more about their activities, publications and services.