Many already do and if you don't now, you will. Testing for drug-resistant HIV among treatment-naive patients, once thought to be an exercise in futility, is becoming commonplace. This follows several studies from the United States and Europe demonstrating that a significant proportion of such patients harbors detectable levels of drug-resistant virus.1-3
The transmission of HIV that has reduced susceptibility to antiretrovirals has been well described, and resistance testing of patients with acute HIV infection has become standard practice.4-11 Until recently, the dominant paradigm held that following primary HIV infection, in the absence of drug selection pressure, the relatively fitter "wild-type" virus would overgrow resistant variants and only wild-type virus would be sampled during subsequent resistance testing. However, several recent studies of cohorts of chronically infected individuals have challenged the conventional thinking regarding the detection of drug resistance and have revealed that certain resistance mutations can persist for much longer than previously believed.1,12
The results of 2 studies focusing on the prevalence of drug resistance among HIV-infected persons, previously presented during conferences, were published this month and are described below. Although they examine different but overlapping patient populations, both studies add to an emerging picture of resistance rates among patients in clinical care.
Richman DD, Morton SC, Wrin T, et al. The prevalence of antiretroviral drug resistance in the United States. AIDS. July 2, 2004;18(10):1393-1401.
To estimate the prevalence of drug resistance among persons with HIV in the United States during the years following the availability of highly active antiretroviral therapy (HAART), investigators of the HIV Cost and Service Utilization Study (HCSUS) examined phenotypic resistance patterns in study participants who had received care in 1996 and survived at least until 1998 when their blood was drawn during the study.
HCSUS is a large cohort study comprising HIV-infected persons under care in urban clinics nationwide. Its participants are considered to be representative of persons with HIV infection in the United States. Therefore, HCSUS investigators frequently extend the findings from their sample to the population of HIV-infected persons at large. The 2,864 study participants thus represent 231,000 individuals under HIV care in the contiguous United States, which makes for some impressive-sounding results, although it should remain clear that 231,000 people did not actually participate in the study.
Among the cohort, 1,797 of the original 2,864 participants had survived from 1996 to 1998; of these, 1,099 had a viral load >500 copies/mL and phenotypic drug resistance testing results available. Among the 1,099 subjects, 76% had evidence of resistance by phenotype to 1 or more antiretrovirals.
Broken down by drug class, the rates of resistance were 71% for nucleotide/nucleoside reverse transcriptase inhibitors (NRTIs), 41% for protease inhibitors (PIs) and 25% for non-nucleoside reverse transcriptase inhibitors (NNRTIs). Lamivudine (3TC, Epivir) was the agent for which there was the highest rate of resistance (68%), which is not unexpected given the low resistance barrier of this ubiquitous antiretroviral. Almost half (48%) of the viremic subjects had multiple-class resistance, with 13% having resistance to all 3 drug classes. Extrapolating to the larger population, the investigators estimated that over 100,000 HIV-infected persons with a viral load >500 copies/mL have detectable drug resistance.
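The extrapolation behind that headline number is simple arithmetic, and a quick sketch makes it easy to check (the 231,000 denominator and the percentages are taken from the study as reported above; the actual HCSUS survey-weighting machinery is simplified away here):

```python
# Back-of-the-envelope check of the HCSUS extrapolation. The study used
# survey weights; this sketch simply multiplies the reported percentages.
represented_population = 231_000   # persons in HIV care represented by the cohort
frac_viremic = 0.63                # fraction with a viral load >500 copies/mL
frac_resistant_if_viremic = 0.76   # phenotypic resistance to >=1 antiretroviral

estimated_with_resistance = (
    represented_population * frac_viremic * frac_resistant_if_viremic
)
print(f"{estimated_with_resistance:,.0f}")  # on the order of 110,000, consistent
                                            # with the reported "over 100,000"
```

Even this crude product lands near the published estimate, which is reassuring given that the real analysis weighted each participant individually.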
[Table: Prevalence of Estimated HIV Drug Resistance in the Represented Populations. * Represents 63% of total study population. PI, protease inhibitor; NRTI, nucleoside reverse transcriptase inhibitor; NNRTI, non-nucleoside reverse transcriptase inhibitor.]
Several factors were identified as being associated with resistance, including, as can be expected, the use of HIV therapy and NRTIs in 1996, when the HAART era began. Not surprisingly, current use of antiretroviral therapy was associated with drug resistance among the patients with a detectable viral load, as were a higher current viral load, a lower self-reported CD4+ cell count nadir and more advanced disease stage.
Germane to the argument regarding resistance testing among patients not on HIV therapy, 30% of the participants who were not receiving antiretrovirals had evidence of drug resistance (22% for NRTIs, 11% for NNRTIs and 11% for PIs) and 11% were resistant to agents in 2 drug classes. Of the patients who had never been on an antiretroviral, 10.9% had drug resistance and almost all of these cases were due to NNRTI resistance (9.3%).
These data are interesting in a number of ways. Foremost, the characterization of the HCSUS cohort permits a quantitative reflection of how people with HIV currently in treatment, in general, are doing. A majority of the cohort (63%) had detectable viremia and, of these, 80% were receiving antiretrovirals. The finding that drug resistance was prevalent among those with viremia comes as little surprise to those of us who see patients, because patients with detectable viral loads who are on therapy are generally expected to have drug resistance. The high rates, however, are sobering and point to the need for thoughtful use of HIV therapy, including the need to properly balance potency, tolerability, salvage options and convenience, as well as the need for new agents that are effective against resistant variants. Fortunately, since the study period (1996-1998), HIV therapy has evolved and is arguably more potent and convenient.
In fact, data from on-going studies indicate that resistance rates among recently infected persons are actually starting to decline, presumably since improved therapeutics have led to a decrease in viremia and, therefore, reduced rates of transmission of resistant HIV by those on therapy.13,14
That said, the potential for viremic patients with drug-resistant variants to transmit drug-resistant virus to others remains a concern and, although not a focus of this particular report, these data do illustrate the degree to which drug resistance can be detected even among patients naive to HIV therapy -- findings that are consistent with other studies.1,15-17
The resistance seen in treatment-naive patients is likely to be the tip of the proverbial iceberg, as NNRTI-associated mutations have less impact on viral fitness and, therefore, are relatively more stable than mutations associated with other drug classes. These other mutations may not persist in sufficient quantities following infection to be detected by resistance testing, even though the resistant strains remain archived in some infected cells and can become dominant under treatment selection pressure. Also, this study was not able to assess the presence of resistant virus among patients with viral loads <500 copies/mL who may have acquired resistant virus at the time of infection but had too little circulating virus for resistance testing to be performed.
These results support resistance testing among treatment-naive patients -- particularly given the current reliance on NNRTIs as initial HIV therapy. The detection of NNRTI resistance in an individual naive to treatment and about to start an antiretroviral regimen would steer most clinicians away from prescribing an NNRTI. But, even if therapy is not being considered at the time of presentation to clinic, resistance testing can be useful to document any resistance that is present and may wane with time. Therefore, it is this author's opinion that this study, the study below and those referenced, support resistance testing of persons naive to HIV therapy, regardless of duration of HIV infection.
Weinstock HS, Zaidi I, Heneine W, et al. The epidemiology of antiretroviral drug resistance among drug-naive HIV-1-infected persons in 10 US cities. J Infect Dis. June 15, 2004;189(12):2174-2180.
Similar to the HCSUS study, this Centers for Disease Control and Prevention (CDC) investigation aimed to characterize antiretroviral resistance in urban populations in the United States. However, unlike HCSUS, the CDC study enrolled only patients naive to therapy. The study was conducted between 1997 and 2001 in 39 clinics in 10 cities: San Francisco, San Diego, Denver, Detroit, Grand Rapids, Houston, New York, Newark, New Orleans and Miami. A total of 1,104 adult patients were enrolled, of whom 1,082 had genotypic resistance assays successfully performed. Three quarters of the participants were male, 73% were non-white and 60% were men who have sex with men. Approximately 19% were determined to be recently HIV infected (within the prior 4-6 months) using a de-tuned antibody assay.
Not dissimilar to the HCSUS results, 8.3% of the cohort (90 patients) harbored mutations detected by genotypic resistance testing. The breakdown of the prevalence of resistance mutations by antiretroviral class was 6.4% for NRTIs, 1.9% for PIs and 1.7% for NNRTIs. Only 1.3% had evidence of drug resistance to 2 or more antiretrovirals. Men who have sex with men were more likely to have resistance mutations than women and heterosexual men (12% versus 6.1% and 4.7%, respectively). Likewise, whites had higher rates of resistance (13%) compared with African-Americans (5.4%) and Hispanics (7.9%). These demographic factors are likely to be indicators of access to HIV care, as groups with relatively less exposure to antiretrovirals have fewer opportunities to develop resistance to transmit to others.
The most commonly observed NRTI-associated mutations were M41L (19 patients), K70R (9 patients), M184V (9 patients) and D67N (7 patients). However, 27 patients had one of the T215D/S/C/E/I mutations; these are variants at codon 215 that have back-mutated from the thymidine analogue mutation T215Y and can revert to T215Y under treatment pressure. K103N was the predominant NNRTI mutation and L90M the most common PI mutation.
Among patients with recent HIV infection, the prevalence of genotypic resistance during the study period ranged from 7.1% in 1998 to 14% in 1999 to 8.9% in 2000 -- differences that were not statistically significant. However, resistance in patients not recently infected increased from 3.2% in 1998 to 9.0% in 1999 to 12% in 2000 (p = 0.004). Notably, resistance was detected more often among patients who reported having sexual partners who were themselves receiving antiretroviral therapy, suggesting a potential route for the acquisition of drug resistance.
This study complements the HCSUS study and, with previous reports, indicates that drug resistance is present in about 1 out of 10 treatment-naive patients. While the impact of pre-treatment resistance on response to therapy is being studied, many clinicians are deeming it prudent to test patients at baseline. Continued monitoring of resistance patterns in the treatment-naive will be essential to understanding the need for on-going pre-therapy testing and gauging whether drug resistance patterns are evolving with treatment trends.
Resistance to HIV therapies can develop for a number of reasons, including nonadherence, sub-optimal plasma concentrations of antiretrovirals due to malabsorption or drug interactions, and, as is evident from the above discussion, infection with drug-resistant virus. Most clinicians blame nonadherence for the lion's share of resistance seen. However, how much nonadherence it takes to produce drug resistance is still unclear. Very low adherence to therapy may actually carry little risk of resistance, as there is insufficient pressure placed on viral strains to select for resistant mutants. Likewise, high rates of adherence may shut down viral replication and, subsequently, cut back opportunities for mutants to emerge. These considerations have led to the concept that the relationship between adherence and antiretroviral resistance is bell-shaped, with the risk of resistance low at low levels of adherence, peaking with intermediate adherence (not too little, not too much) and falling with high-level compliance.
Many researchers suspect, however, that the relationship between adherence and resistance is more complex and that the risk of resistance may actually increase well beyond the midpoint of adherence, to include rates that are clinically common and actually considered fairly good. This is most evident when a patient's viral load remains detectable despite taking combination antiretroviral therapy and when HIV replication persists at a level sufficient to produce resistance mutations. In such a situation, it is not clear precisely where the tipping point lies beyond which more adherence will risk more resistance. Certainly, more adherence may increase the rate of achieving an undetectable viral load, but for patients falling short of this goal, more adherence, although laudable, means more drug exposure in the face of on-going replication and, as a consequence, drug resistance. This is not to say that clinicians should not be egging their patients on to better levels of adherence. Rather, in certain circumstances in which viral replication continues, more adherence is a double-edged sword in that although it may help shut down the replicative machinery, it may also make resistance more likely. The study below demonstrates these points nicely, taking advantage of the unique opportunity to study cohorts with well-characterized adherence and virological data.
Bangsberg DR, Porco TC, Kagay C, et al. Modeling the HIV protease inhibitor adherence-resistance curve by use of empirically derived estimates. J Infect Dis. July 1, 2004;190(1):162-165.
Can the risk of developing PI resistance at varying levels of adherence be predicted? That's what investigators from California, with expertise in studies of antiretroviral adherence, tried to do by creating a model based on data from their clinical research. The data were generated from 2 cohorts of patients receiving antiretroviral therapy who had viral load determinations and unannounced pill counts performed regularly. One of the cohorts also had genotypic resistance testing performed. Importantly, both of the populations who were used to create the model were heavily treatment experienced.
Based on their study of these cohorts, the authors calculated that the probability of viral suppression below 50 copies/mL increased with greater adherence. This meant, for example, that at 100% adherence, close to half of the treatment-experienced patients would have undetectable viral loads. Among viremic patients, the risk of drug resistance actually increases with adherence. That is, for patients with detectable virus, the more drug they took, the greater their risk of resistance -- with the greatest risk being at their highest level of adherence. When the model considered all patients -- viremic and aviremic -- the maximal rate of new PI mutation acquisition occurred at a level of adherence of 87%, with the risk of resistance fading with additional adherence. The graph of the adherence-resistance relationship was therefore slanted considerably to the right and looked more like a plot of the Dow Jones Industrial Average from 1970 to 2003 than a bell.
The data used to generate the model were derived from patients who were receiving unboosted PIs, which are clearly less potent than boosted PIs. However, when the model also incorporated each regimen's effectiveness (the percentage of patients on that regimen with undetectable viral loads at 100% adherence), the peak of the adherence-resistance curve shifted back to the left and down, with the most potent regimens yielding resistance less often and maximally at adherence rates of around 50%. More potent regimens therefore reduce the overall prevalence of resistance mutations and are more likely to suppress viral replication at lower levels of adherence than less potent regimens.
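To make the shape of that argument concrete, here is a purely illustrative toy model: resistance risk is treated as the product of the probability that replication continues and the drug pressure applied, and raising potency shifts the curve's peak left and down. The functional forms and the two potency values below are invented for this sketch; they are not the authors' empirically derived estimates.

```python
# Toy adherence-resistance curve. ASSUMED shapes, for illustration only:
# this is NOT the published model, just the qualitative idea behind it.

def suppression_prob(adherence: float, potency: float) -> float:
    """Chance of an undetectable viral load at a given adherence level.
    `potency` is the suppression probability at 100% adherence (assumed)."""
    return potency * adherence ** 4  # rises steeply near full adherence

def resistance_risk(adherence: float, potency: float) -> float:
    """New mutations require ongoing replication (1 - suppression)
    AND drug selection pressure (proportional to adherence)."""
    return (1.0 - suppression_prob(adherence, potency)) * adherence

for potency in (0.5, 0.9):  # e.g., a weaker vs a more potent regimen (illustrative)
    risks = [(a / 100, resistance_risk(a / 100, potency)) for a in range(101)]
    peak_adherence, peak_risk = max(risks, key=lambda t: t[1])
    print(f"potency={potency}: peak risk {peak_risk:.2f} "
          f"at {peak_adherence:.0%} adherence")
```

Running the loop shows the more potent regimen peaking at a lower adherence level with a lower maximum risk, mirroring the qualitative behavior described above; the exact peak locations depend entirely on the assumed curve shapes.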
This model is useful in several ways because the adherence rates of most clinical populations studied to date fall in the range of 80% to 90%, which has been found to be most associated with PI resistance.18 That the levels of adherence calculated in this study to carry the greatest risk of developing resistance in treatment-experienced patients are quite similar to those seen clinically provides a potential explanation for the high prevalence of resistance seen in the HCSUS cohort.
The results are also provocative because they suggest that increasing adherence in treatment-experienced patients on regimens may paradoxically increase the risk of resistance if viral suppression is not achieved (i.e., according to the model, improving a patient's adherence from 70 to 87% would appreciably increase the patient's risk of cultivating resistance mutations to his or her PI).
These results have several implications for clinical practice. First, they make plain that greater adherence, if it is unable to drive down a patient's viral load to very low levels, is not necessarily always better. Second, clinicians need to remain vigilant in monitoring for resistance even when a patient's viral load has fallen but remains above the limits of detection despite the patient's high-level swear-on-a-stack-of-bibles adherence. Lastly, to avoid further resistance, a low threshold for changing therapy (when possible) is advisable in cases where viremia persists.
Certainly, the problem with such models is simply that they are models. Further, this particular model is based on data from patients on therapies that are less potent than those commonly employed today. It also assumes virologic failure to be a gradual process in which resistance mutations accumulate, which is not always the case. Despite these limitations, the model is thought-provoking and challenges our assumption that greater adherence is always better in patients who are heavily treatment experienced. As the potency of regimens increases, the odds shift and adherence pays off, but high levels of adherence are still required to suppress viremia and prevent the cultivation of resistant virus.
Exactly what do patients desire in an antiretroviral regimen? What keeps them from the kinds of adherence that makes treatment success likely? Most of us think we know the answers. Some of us consider what we would want if we were faced with having to start an antiretroviral regimen. Others base their opinions on what their patients tell them. However, specifically what patients themselves desire in a long-term regimen may surprise those on the other end of the prescribing pen.
Stone VE, Jordan J, Tolson J, Miller R, Pilon T. Perspectives on adherence and simplicity for HIV-infected patients on antiretroviral therapy: self-report of the relative importance of multiple attributes of highly active antiretroviral therapy (HAART) regimens in predicting adherence. J Acquir Immune Defic Syndr. July 1, 2004;36(3):808-816.
In a rather straightforward study, conducted in 6 cities across the United States and sponsored by GlaxoSmithKline (the manufacturer of popular twice-a-day regimens), 299 patients who were receiving a minimum of 3 antiretrovirals were surveyed regarding their HIV treatment preferences.
Participants were asked to evaluate 10 therapy attributes and predict their adherence to each of 7 actual regimens. The 10 treatment attributes were: total pills per day, pill size, side effects, dietary restrictions, dosing frequency, number of prescriptions, number of refills, number of copayments, number of medication bottles and whether bedtime dosing was required. The 7 regimens all contained 3 active antiretrovirals and were dosed either once a day (QD) or twice a day (BID). Participants were told to assume the potency of the regimens was equal but the actual names of the components were not provided.
The 3 QD regimens, although never disclosed, are presumed to have contained efavirenz (EFV, Sustiva, Stocrin) since 1 agent was always required to be taken at bedtime. Only 1 QD regimen had all 3 agents taken simultaneously and this sounded a lot like efavirenz + didanosine QD (ddI-EC, Videx EC) + lamivudine. None of the QD regimens were described as 3 pills before bed with little or no food (i.e., tenofovir [TDF, Viread] + lamivudine [or emtricitabine (FTC, Emtriva)] + efavirenz). The BID regimens ranged from a zidovudine/lamivudine/abacavir (ZDV/3TC/ABC, Trizivir)-like regimen, to a nelfinavir (NFV, Viracept) + tenofovir + lamivudine combo weighing in at 13 pills with food requirements.
The subjects were mostly male (76%), African-American (45%) and men who have sex with men (57%). Two thirds were on their third or more antiretroviral regimen. Only 26% said they had missed no doses of their HIV therapy during the preceding 3 months.
All 10 attributes were deemed to negatively affect adherence, with mean number of pills per day having the greatest impact, followed by dosing frequency, adverse events and dietary restrictions. Bedtime dosing was rated by patients as having the least impact on adherence. When evaluating actual regimens, respondents rated the zidovudine/lamivudine/abacavir regimen the most likely to be adhered to and the most convenient. The runners-up included 4 regimens that had received similar ratings, of which 3 were the QD regimens. As can be expected, the regimens with the most pills fared the worst.
[Table: Relative Impact of Attributes on Predicted Adherence. Mean scores for relative impact of attribute features on adherence.]
Women rated dosing frequency and side effects as less of an issue than did men; food restrictions were more of a concern among whites compared with non-whites; and African-Americans reported less of an impact from side effects than did whites or Hispanics.
This study, despite some limitations described below, provides an interesting perspective on what matters most to patients who are confronted with the need to take daily life-long therapy. Pill count, frequency and side effects were considered the most troublesome aspects of therapy and the greatest threat to adherence. Bedtime dosing, interestingly, despite the potential inconvenience of having to take a medication at a specific time of day, was not considered a relatively significant treatment liability.
The flaws? For one thing, the comparison of regimens was, unfortunately, somewhat stacked in favor of the 1-pill, BID option (the pill count and frequency of Trizivir, manufactured by the sponsor of this study). The QD regimens described had more negative attributes than can be found in the currently popular QD combination of tenofovir + lamivudine + efavirenz. Additionally, the soft-pedal warning regarding the "very slight chance your body will react to this medicine, which would require you to stop this medicine" -- otherwise known as the abacavir (ABC, Ziagen) hypersensitivity reaction -- which was included in the 1-pill, BID regimen did not provide participants with a completely balanced picture of this option. Lastly, the assumption of equal potency, which was understandably made for the purposes of this study, nevertheless should not extend to the interpretation of the results when crafting an antiretroviral regimen. When the chips are down, potency counts and I suspect many patients would be willing, to a point, to trade some convenience for efficacy, but fewer would agree to the reverse. Obviously, when potency and convenience are combined, everyone wins and, as mentioned above, powerful once-a-day therapies with low pill burdens are available today.
The range of responses in this study also demonstrates how individualized treatment preferences can be. For some, pill count was paramount over dosing frequency; for others, food restrictions were the more important attribute. These results serve to remind clinicians not to assume we know exactly what regimen our patients would desire. Despite our understandable embrace of once-a-day regimens, a potential treatment regimen is not only about frequency. A frank discussion of the pros and cons of the treatment options is essential prior to handing over those prescriptions.