The First Era of HIV treatment employed a single agent that, in early trials, prolonged the lives of those who received it compared with those who received placebo. Subsequently, it was shown that single-agent therapy could delay the onset of AIDS symptoms and temporarily arrest the decline of immune surrogate markers, but it did not extend the survival of those who started treatment earlier compared with those who started later. Finally, at the end of the First Era, it was shown that combining two agents was more effective than using one agent alone.
The Second Era of HIV treatment extended the strategy of combination therapy by adding a new generation of drugs that inhibited replication of HIV in distinctly different ways. This has produced a dramatic reduction in AIDS-related death and disease wherever it has been applied. However, these treatments are not a cure and life-long therapy currently appears necessary.
As the number of people surviving HIV rises and their time on treatment lengthens, a new syndrome of symptoms and disease has begun to appear. Lipid abnormalities, body shape changes, cardiac disease, myopathy, liver and pancreas disease, insulin resistance and metabolic abnormalities, along with treatment fatigue, drug interactions and viral resistance, have emerged as intractable problems whose consequences may be increasingly fatal.
It is this background of unavoidable long-term treatment, with its unknown consequences, that has led many to consider using existing drugs in novel strategies designed to reduce toxicity and improve quality of life while continuing to inhibit disease progression. At the same time, many are beginning to ask what would constitute a truly effective long-term therapy.
Effectiveness sounds a lot like efficacy, and, in common usage, the efficacy of HIV drugs is measured by their ability to lower plasma viral load and raise CD4+ cell counts, the two most important surrogate markers of disease progression. Surrogate markers can be influenced quickly and directly, enabling trials of new drugs to find results within twenty-four weeks. Even long-term trials out to two years continue to emphasize sustaining viral load below levels of quantification.
Other, more subjective measures of treatment effectiveness should be discussed: the incidence of new AIDS-defining illness, quality-of-life (QOL) measures, the incidence of drug-related adverse events, and economic costs among them. But the significance of these measures is difficult to agree upon, and the data are difficult to collect.
Unfortunately, death is the significant endpoint. Death is why most people care about AIDS. It is a singular event and an objective fact that is simple to ascertain. A few of the first drugs of the Second Era were proven in trials that measured the number of deaths in each arm. When the ability of these drugs to suppress viral load was seen, it was accepted that suppressed virus correlated with survival, and "death trials" were seen as unethical and unnecessary. But a "death trial" is a survival study, and after the first flush of success with HIV, long-term survival is still unknown.
This probably sounds confusing. Longer-term survival is clearly accomplished with treatment. The death rate has plummeted in countries with access to new drugs. No one would want to go back to the days before effective combination therapy. But as the survival horizon has extended, serious questions must be asked about sustaining survival benefit for as long as possible. It is possible that new drugs with greater potency and less toxicity will come along to inaugurate a Third Era in HIV treatment, but nothing is guaranteed. Until then, the prudent course is to presume that people will still be taking AZT and the others well into the new century and to continue learning how best to maximize their effectiveness.
We are possibly in the early stages of an epidemic of treatment-related disease. It is important that we respond to this threat before the survival gains made in the nineties are erased by a new wave of mortality due to avoidable toxicity. Whether this can be done with a trial of "when-to-begin-therapy" should be investigated.
One controversial treatment decision concerns the best time to begin taking combination drug therapy: Should one begin treating as soon as possible to prevent the risk of progression to AIDS? Or can one improve his or her ultimate survival chances by delaying treatment until immune deterioration is imminent, thus minimizing extended exposure to the harmful effects of drug therapy?
In the First Era of HIV treatment, several large randomized clinical trials investigated the question of when to begin treatment. In these trials of AZT given immediately or deferred, despite a reduction of the number and severity of opportunistic infections in those who started treatment early, no survival benefit was seen.
Recently, treatment activists have proposed organizing a study to investigate the long-term effectiveness of Second Era therapy, whether started immediately or later. The call for this research addresses concerns people have about the feasibility of taking HIV drugs for perhaps decades, and a fear that, in the long run, the drugs may be as deadly as the disease.
Designing a trial: "The Effect of Immediate versus Deferred Antiretroviral Therapy on Long-Term Survival in HIV Infection"
Sometimes the question of when to start treatment has been posed as: "Early versus Late." But when exactly is "early"? Is it time after infection? Few know precisely when they were infected, and people progress at different rates. So, although there are trials of treatment during primary infection which have their own rationale, it might be better to wait until later in the progression of the infection to establish a practical time for early treatment.
Is "early" some time after testing positive? This is easier to determine and it might be a very common real-world point to start therapy, but it doesn't respect the correlation between CD4+ count and the risk of AIDS.
Is "early" some stage of disease as stratified by surrogate markers? To be truly early in the disease, perhaps only people with CD4+ counts over 500 cells should be admitted. But as the early AZT trials found, if everyone in the trial is early in disease (> 500 cells) at the outset, the trial will take a long time to produce endpoints.
If you want to compare the difference between treating at high or low CD4+ counts, you need subjects in a range of CD4+ cells where there is some controversy about whether treatment is a good idea, yet also a range where epidemiology suggests disease progression can be expected. This begins to restrict the entry criteria. Ethically, you can only admit people with more than 200 CD4+ cells, because there is good evidence that people with fewer than 200 should be treated right away, before they enter a zone of risk for opportunistic infections. Many questions complicate the issue: Should the lower limit for entry actually be 300 cells, to allow a cushion for avoiding the 200 region? Do you need to consider the rate of change of CD4+, or average the last two or three T-cell counts to establish a baseline? How about percentage? Does viral load have a role?
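The baseline and entry-window questions above can be sketched as a simple rule. The three-count average and the 300-500 cell window below are illustrative assumptions, not proposed protocol values:

```python
# Illustrative sketch only: a baseline taken as the average of the last
# three CD4+ counts, checked against a hypothetical 300-500 cell entry
# window. The averaging depth and cutoffs are assumptions for illustration.

def baseline_cd4(recent_counts, n=3):
    """Average the last n CD4+ counts to smooth single-test noise."""
    window = recent_counts[-n:]
    return sum(window) / len(window)

def eligible(recent_counts, low=300, high=500):
    """Entry check against the hypothetical 300-500 cell window."""
    b = baseline_cd4(recent_counts)
    return low <= b <= high

print(baseline_cd4([520, 480, 440]))  # -> 480.0
print(eligible([520, 480, 440]))      # -> True
print(eligible([650, 620, 600]))      # -> False (too early in disease)
```

Averaging a few recent counts is one way a protocol might damp the considerable test-to-test variability of a single CD4+ measurement; rate-of-change and percentage criteria would complicate this rule further.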
But "Early versus Late" probably describes a scientific question: "When in the disease (as measured by CD4+ count) is it best to start treatment in order to suppress the virus?" This concept concerns an idealized virus and an idealized disease and may be an artifact of a strategy that considered eradication possible. It doesn't reflect the real-world problem individuals face when they ask "When should I start treatment?"
Immediate means starting treatment right away. In a trial, this means as soon as randomized. This is a simple concept and reflects a real-world practice and recommendation that probably won't change soon. Many accept this as the standard of care in the U.S. for those who are treated. It allows inclusive criteria that could be appropriate for someone at any stage of disease. It is also open as to the kind or quality of treatment, Hard or Soft, etc.
Deferred means not starting right away. This is also a simple concept and reflects a widely practiced approach that is attractive because the transition from person to patient is postponed. But, again, it introduces restrictions on who can enter the trial. People with CD4+ counts below 200 cells can't ethically enter because they shouldn't wait to start treatment. People around 300 are likely to be in mid-trend towards 200 and thinking about starting. The lowest number many people may feel comfortable with is 350, because it leaves a safety margin.
What about people over 500 CD4+ cells? They can go for years without progression. What is standard practice now? If it is to start treating people when they are over 500 cells, then they may be appropriate for inclusion in the trial. But this may make the trial slow to give answers, because there will be less disease and fewer endpoints among healthy people. The practical and ethical entry criterion is probably a CD4+ count between 300-350 and 500-600. Other questions must be answered to design this trial, its sample size and its duration:
Since the question is when to start and starting can only happen once, no one in the study should have used antiretroviral treatment in the past. This is because starting treatment incurs the possibility of resistance, the possibility of immune activation, the possibility of a drug sensitivity or hidden toxicity. But mostly, starting treatment before entering the trial introduces the possibility of the unknown. And it is the unknown that we are trying to tease out. This is done by proposing a hypothesis for what is happening in people who stay on long-term therapy and then testing it.
From the literature we can probably establish that:
But Early Starters may also:
We can also probably expect that Late Starters:
But they also:
But which ones live longer? We will need a clinical trial to find out. A survival trial for this question can be thought of as a structured observational trial that will not be blinded and will have no prescribed regimen. Patients will be randomized to one of two prescriptions: Start Now or Start Later. Actually, patients are free to do whatever they want to do after they are assigned a strategy. It's really a trial of a randomized prescription that tries to model the risks faced by two differently treated populations so that individuals can one day be guided by the results. A trial is like a Frog Jumping Race. Once the race starts the frogs can jump any way they want to, but what makes it a race is the fact of a starting line. That's the important thing about randomized trials, everyone starts at the same place. If you don't have a starting line, then the study is simply an observational exercise, akin to a weather report without predictive value.
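The "starting line" idea above — everyone assigned by chance to Start Now or Start Later — can be sketched as a permuted-block randomization. The block size of 4 and the arm labels are illustrative choices, not a trial protocol:

```python
import random

# Sketch of permuted-block randomization to two strategy arms. Within each
# block of 4, exactly two subjects get each prescription, so the arms stay
# balanced as enrollment proceeds. Block size and labels are illustrative.

def randomize(n_subjects, block_size=4, seed=0):
    rng = random.Random(seed)
    arms = []
    while len(arms) < n_subjects:
        block = ["Start Now", "Start Later"] * (block_size // 2)
        rng.shuffle(block)  # chance alone decides order within each block
        arms.extend(block)
    return arms[:n_subjects]

assignments = randomize(12)
print(assignments.count("Start Now"), assignments.count("Start Later"))  # -> 6 6
```

After assignment, subjects may do as they please — as the frog-race analogy says, what makes it a race is only that everyone crossed the same starting line by chance.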
People in a trial like this will have some similar characteristics. They should be:
These criteria should be as simple and as inclusive as possible.
We'd also like the study to enroll quickly so that everyone starts around the same time (~1 year). The sooner full enrollment is reached, the sooner answers arrive. If conclusions are delayed, the relevance of the answers is put at risk. There may be new treatments, breakthroughs in toxicity management, and the virus itself may evolve to become more or less dangerous. Because treatments and standards of care continue to change, the results will be more meaningful if the participants belong to the same generation of care at baseline. When new treatments become available, they are available to everyone in the trial, and the study retains relevance.
Many practical questions have to be considered: How long will it take to find out if people are hurt or helped by immediate treatment? Is there a danger of stopping the trial too soon? Will the answer be relevant if there is a new generation of treatments or a new understanding of how to treat? How many people will need to be randomized? How long will it take to get full enrollment?
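One standard way trialists approach the "how many people" question is to first compute how many deaths (events) the trial must observe. A common approximation for a two-arm survival comparison with equal allocation, Schoenfeld's formula, is sketched below; the hazard ratio, significance level, and power are arbitrary illustrative inputs, not proposals for this trial:

```python
from math import log, ceil

# Schoenfeld's approximation for the number of events (deaths) needed by a
# two-arm log-rank survival trial with 1:1 allocation. Inputs here are
# illustrative: two-sided 5% significance (z = 1.96), 90% power (z = 1.2816).

def events_needed(hazard_ratio, z_alpha=1.96, z_beta=1.2816):
    return ceil(4 * (z_alpha + z_beta) ** 2 / log(hazard_ratio) ** 2)

# e.g. to detect a 30% reduction in the death rate (hazard ratio 0.7):
print(events_needed(0.7))  # -> 331
```

Because events, not enrollment, drive the answer, a relatively healthy cohort with few deaths forces either a very large sample or a very long trial — exactly the tension the questions above describe.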
These and others are questions that must be considered by trialists, statisticians, clinicians, potential subjects, people who have been on treatment and people who are on their way. If a large trial like this is not feasible, work should continue on evaluating alternate strategies until new agents make a Third Era of less problematic HIV treatment possible.
Reprinted by permission of the amfAR HIV/AIDS Treatment Directory (www.amfar.org/td).