Bacchetti P, Gripshover B, Grunfeld C, et al. Fat distribution in men with HIV infection. J Acquir Immune Defic Syndr. October 1, 2005;40(2):121-131.
While no one disputes the fact that highly active antiretroviral therapy (HAART) has greatly improved survival and quality of life for persons living with HIV, this success has not come without costs. For many HIV-infected patients, the dearest of these costs are disfiguring body morphologic changes. Patients may feel great and be thrilled to be well enough to go back to (or continue to) work, but in the minds of many patients, lipodystrophic changes impair self-confidence by advertising their disease to the world.
Despite all the literature published on lipodystrophy to date, clinicians still lack a clear understanding of which body shape changes actually occur because of HAART.
Carl Grunfeld and the Fat Redistribution and Metabolic Change in HIV Infection (FRAM) study group headline the Oct. 1, 2005 issue of JAIDS with a well-designed study that tackles the issue of body morphologic changes in a scientifically rigorous and unbiased manner.1 Grunfeld's study goes a long way towards addressing the issue of whether antiretrovirals cause body shape changes. In the process, it also debunks the myth that peripheral lipoatrophy is associated with central lipohypertrophy.
This study was started in June of 2000, soon after the initial reports of body morphologic changes appeared in the medical literature, and completed in January of 2002. A total of 452 HIV-infected patients were selected from 16 HIV or infectious disease clinics across the United States and matched to 152 control participants from the Coronary Artery Risk Development in Young Adults (CARDIA) study. (CARDIA started in 1986 and was designed to longitudinally examine cardiovascular risk factors in 18- to 33-year-old Caucasians and African Americans. HIV seroprevalence among the study participants is believed to be less than 1%.)
Both groups completed identical standardized questionnaires and underwent exams performed by uniformly trained research associates. Trial participants were asked to report whether they had experienced any changes in peripheral or central body fat over the past five years. "Yes" responses were qualified and graded; equivocal responses were pooled with participants who responded "no change." In addition to self-report, research associates assessed body fat changes in all study participants using HIV Outpatient Study criteria: "mild" (only seen if looked for), "moderate" (easily seen) or "severe" (immediately obvious). Clinical lipoatrophy or lipohypertrophy was reported if there was concordance between the self-report and the examiner's report. Magnetic resonance imaging (MRI) scans were also conducted for the HIV-infected and control groups to calculate tissue volume in several body sites.
Multivariate analysis was performed to determine whether factors unrelated to HIV and its treatment could account for the observed differences in MRI measurements among the control patients, the HIV-infected patients with clinical peripheral lipoatrophy and the HIV-infected patients without clinical lipoatrophy. Separate analyses were performed for five body sites: arms, legs, visceral adipose tissue (VAT), upper trunk and lower trunk. To control for body size, logarithm of total lean body mass by MRI was used as a predictor in all models.
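The body-size adjustment described above, entering the logarithm of lean body mass as a covariate in a log-scale regression, can be sketched as follows. This is a minimal illustration on simulated data, not the FRAM dataset or its actual model; every variable name and coefficient below is invented.

```python
import numpy as np

# Hypothetical FRAM-style adjustment: regress log(adipose tissue volume)
# on a group indicator plus log(lean body mass), so body size enters the
# model multiplicatively. All values are simulated for illustration.
rng = np.random.default_rng(0)
n = 200
lean_mass = rng.uniform(40, 80, n)       # kg, hypothetical
lipoatrophy = rng.integers(0, 2, n)      # 1 = clinical lipoatrophy
# Simulated outcome: leg SAT scales with lean mass and shrinks with
# lipoatrophy by a factor of exp(-0.5), i.e., roughly 39%.
leg_sat = np.exp(0.5 + 1.2 * np.log(lean_mass)
                 - 0.5 * lipoatrophy + rng.normal(0, 0.2, n))

# Ordinary least squares on the log scale
X = np.column_stack([np.ones(n), np.log(lean_mass), lipoatrophy])
beta, *_ = np.linalg.lstsq(X, np.log(leg_sat), rcond=None)

# beta[2] is the log-scale group effect; exponentiate to express it as a
# lean-mass-adjusted percentage difference, as the paper reports.
pct_change = (np.exp(beta[2]) - 1) * 100
print(f"adjusted leg SAT difference: {pct_change:.1f}%")
```

Reporting effects as "% per year" or "% per decade", as the study does, corresponds to exponentiating coefficients from exactly this kind of log-scale model.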
The variables controlled for in the models included age, ethnicity, level of physical activity, tobacco use, alcohol use, illicit drug use and adequacy of food intake. Separate analyses were performed to determine which factors related and unrelated to HIV infection were predictive of lipoatrophy, as assessed by leg subcutaneous adipose tissue (SAT), or of lipohypertrophy, as assessed by VAT. Total duration of use of each individual antiretroviral and antiretroviral class were also examined.
Notable baseline data included the following:
Self-reported lipoatrophy. Significantly more HIV-infected men than controls self-reported fat loss in peripheral sites: cheeks, face, arms, buttocks and legs (all P < .001). Fat loss was also reported significantly more often in the waist (P = .025) and chest (P = .004). Nonsignificant decreases were reported in upper back fat (P = .053), neck fat (P = .058) and abdominal fat (P = .22). Exam-based findings were similar to those obtained by self-report. Clinical lipoatrophy was more common in all peripheral sites and in the abdomen, but the prevalence of central lipoatrophy was much lower than that of peripheral lipoatrophy.
Self-reported lipohypertrophy. While some HIV-infected men self-reported fat gain, the prevalence was significantly lower than in control patients. In fact, fat gain was significantly more likely among control participants for most body sites, including the arms (P = .003), buttocks (P < .001), cheeks (P = .011), chest (P = .048), face (P < .001), neck (P = .001) and waist (P = .003). In only one body site were HIV-infected men more likely to report lipohypertrophy: the upper back, at approximately 10%. However, this was not a statistically significant difference (P = .40).
Site prevalence. Separate assessment of the prevalence of lipoatrophy and lipohypertrophy at any peripheral or central site showed that HIV-infected study participants were significantly more likely than controls to have lipoatrophy in at least one site (38% versus 5%, P < .001). Peripheral lipoatrophy in the male HIV-infected patients usually occurred at multiple sites. Clinical central lipoatrophy occurred in 8% of HIV-infected participants versus 3% of controls (P = .03), although the finding was less common than peripheral lipoatrophy. Controls were more likely to have lipohypertrophy at one or more sites, be they central or peripheral.
SAT and VAT. HIV-infected men with clinical peripheral lipoatrophy had less SAT in peripheral arm and leg depots, as imaged by MRI, than HIV-infected men without peripheral lipoatrophy. They also had less, not more, SAT in central depots of the lower trunk/abdomen and upper trunk/chest. These differences persisted even after adjustment for lean body mass, age, race, physical activity and substance abuse. These findings suggest that, even if an HIV-infected study participant felt that he had lipoatrophy in only one body compartment (or none at all), there was often lipoatrophy in additional sites, even if it could not be detected by the patient or the examiner.
HIV-infected men who did not have lipoatrophy also had significantly less adipose tissue in the lower trunk and legs, and possibly less VAT, than controls. There was no evidence for a reciprocal increase in VAT in HIV-infected men with peripheral lipoatrophy.
Lipoatrophy/lipohypertrophy association. Clinical peripheral lipoatrophy was not associated with central lipohypertrophy, regardless of whether these complications were assessed by MRI, self-report or clinical examination. Loss of fat peripherally tended to be associated with loss of fat centrally. Among HIV-infected men, the clinical syndromes of central lipohypertrophy and peripheral lipoatrophy had no positive association with each other (OR: 0.71). In contrast, the presence of central lipoatrophy was strongly associated with peripheral lipoatrophy (OR: 18.9).
Association of leg SAT and VAT with various factors. The authors examined factors associated with leg SAT (the most affected subcutaneous depot in this study) and VAT (an important central depot associated with metabolic changes). Age was associated with less leg SAT (-5.2% per decade; P = .012) but more VAT (+32.8% per decade; P < .0001) in HIV-infected men. Higher current HIV-RNA levels were associated with more leg SAT (+6.3%; P = .010) but not VAT, probably reflecting a treatment effect on leg SAT -- i.e., HAART controls viral load as it decreases leg SAT. African-American participants had significantly more leg SAT than Caucasian participants (+16.9%; P = .001), but significantly less VAT (-55.9%; P < .0001). No significant difference was found in leg SAT or VAT between Caucasians and other ethnic groups.
Impact of antiretroviral use on leg SAT and VAT. No entire class of antiretrovirals was found to have a significant effect on leg SAT or VAT (and this study did not analyze the impact of multi-class HAART regimens), but three specific drugs did have an effect: indinavir (IDV, Crixivan), nevirapine (NVP, Viramune) and stavudine (d4T, Zerit). When the impact of specific antiretrovirals on SAT/VAT was compared to the SAT/VAT of controls, duration of stavudine use was strongly associated with less leg SAT (-6.0% per year; P < .0001), as was indinavir use (-3.9% per year; P = .062). A trend toward an association existed for didanosine (ddI, Videx) as well, but it was not statistically significant (-3.2% per year; P = .062). Meanwhile, duration of nevirapine usage was associated with less VAT (-13.2% per year; P = .012). No other antiretrovirals had a statistically significant impact on leg SAT or VAT, although it should be noted that drugs approved since late 2001, such as atazanavir (ATV, Reyataz) and fosamprenavir (FPV, 908, Lexiva, Telzir), were not included in this study.
Study Strengths and Weaknesses
This study's ability to combine subjective reports, physical examination and MRI results, and to consider these findings alongside an analysis of other variables known to affect fat distribution, makes it a superior research work. Other aspects of the study, such as its well-matched control group and its use of uniformly trained examiners, also testify to its reliability.
Despite the extensive positive qualities of this study, it does have several limitations. For one thing, it focused only on men. In addition, the study was cross-sectional, so it can tell us only about fat distribution in men at one specific point in time. (In defense of this fact, though, that time point was years after the introduction of HAART, so it should have been an adequate amount of time in which to see the emergence of associated syndromes.) And finally, newer drugs such as atazanavir were not examined here, as the study concluded in 2002.
The Bottom Line
The authors of an accompanying editorial,2 Milan Khara and Brian Conway, point out that body shape changes have received the most press in the lay literature, often leading patients to make unwise decisions, such as avoiding treatment initiation or not continuing a regimen containing protease inhibitors (PIs) solely because they feared developing "Crix belly." While this phenomenon of fat loss and accumulation has yet to be totally elucidated, Grunfeld's paper adds tremendously to our knowledge on this subject and will help us better educate our concerned patients.
The key finding was that a significantly greater proportion of HIV-infected men lost peripheral and central fat compared to controls. This lipoatrophy was mild in over half of the cases. No reciprocal fat accumulation was seen in the VAT, and fat accumulation was actually more prevalent in the control group. This debunks a common misconception among many providers and patients that, in patients on HAART, peripheral fat loss is accompanied by central fat gain in the form of VAT. In reality, lipoatrophy is the key issue; disproportionate fat loss at different sites probably leads to some misperceived fat gain.
Fat loss was associated with two agents that are used less and less commonly: indinavir and stavudine. By contrast, nevirapine may exert a protective effect, a finding that is consistent with known lipid data regarding the drug.3 While these data are intriguing, larger prospective studies involving nevirapine and newer antiretroviral agents are needed before specific treatment recommendations can be made.
In sum, this methodologically excellent study shows that there is no single morphologic syndrome in which an HIV-infected individual develops peripheral fat loss and central fat accumulation. While the two complications may still coexist, lipoatrophy appears to be far more common -- at least among this population of primarily Caucasian men who have sex with men (MSM). While these results are most directly applicable to MSM who received first- and second-generation HAART regimens, they still provide an excellent jumping-off point for future research efforts involving women, ethnic minorities and the impact of newer antiretroviral agents. (The authors report that they have collected data on women, and will report it in the future.)
In addition, now that we know what to look for and how best to measure it, clinicians should be better able to choose and monitor antiretroviral regimens for HIV-infected patients while accounting for concerns about morphologic changes.
Hopefully, the knowledge gained through this study will also help as we work to solve the second piece of the lipodystrophy puzzle: What are the biochemical mechanisms of this disease? As the epidemic unfolds before us and we commit patients to years of continuously evolving treatments, the basic definition and study framework for lipodystrophy provided by Grunfeld et al should provide a sound foundation on which to build.
NNRTI-Resistant Virus Commonly Shed in Breast Milk of HIV Subtype C-Infected Mothers Who Received Single-Dose Nevirapine
Lee EJ, Kantor R, Zijenah L, et al, and the HIVNET 023 Study Team. Breast-milk shedding of drug-resistant HIV-1 subtype C in women exposed to single-dose nevirapine. J Infect Dis. October 1, 2005;192(7):1260-1264.
Many HIV-1 subtype C-infected women develop resistance to non-nucleoside reverse transcriptase inhibitors (NNRTIs) following administration of single-dose nevirapine, and may be likely to pass this resistant virus on to their baby during breast-feeding, according to the results of a small study published in the Oct. 1, 2005 issue of the Journal of Infectious Diseases.4 The findings present a new warning regarding the use of single-dose nevirapine in sub-Saharan Africa and nations where HIV-1 subtype C is prevalent, such as China and India.
In 2003 alone, more than 600,000 infants in sub-Saharan Africa were infected with HIV via mother-to-child transmission (MTCT). In this resource-poor region, most HIV-infected women breast-feed, which directly leads to as much as one third of all MTCT cases. Mastitis, or breast tissue inflammation, which is associated with elevated sodium in breast milk, has been linked to increased HIV viral levels in breast milk, thus increasing transmission risk.
While single-dose nevirapine is a low-cost intervention and clearly reduces MTCT of HIV, the selection of NNRTI-resistant mutations via this transient, non-suppressive therapy is a common problem with far-reaching ramifications.5 Not only can the mother develop drug-resistant virus that may forever limit her treatment options, she may then transmit this virus to her infant via breast-feeding. While the effects of single-dose nevirapine on plasma HIV-1 RNA levels in the mother have been described, this is the first publication to date to characterize HIV-1 RNA levels in breast milk after the administration of single-dose nevirapine.
This study is part of HIVNET 023, a randomized, open-label, phase 1/2 trial conducted to examine the safety and trough concentrations of nevirapine in breast-feeding infants in Chitungwiza, Zimbabwe. In this study, single-dose nevirapine tablets (200 mg) were dispensed to 36 HIV-1 subtype C-infected women between April 2000 and October 2001 so the drug could be taken at the onset of labor. Levels of HIV-1 RNA in plasma and breast milk were then quantified and correlated with breast milk sodium levels and HIV viral shedding. NNRTI-resistant mutations in the plasma and breast milk compartments were identified at eight weeks postpartum.
Of the 36 women, only 32 were actually enrolled in the study, as four were excluded because they received zidovudine (AZT, Retrovir) in addition to nevirapine. Two of the 32 women received two nevirapine doses because of false labor; they were still included in the study.
Viral load. Eight-week postpartum plasma HIV-1 RNA levels were 400 copies/mL or higher in 91% of the women, with a median viral load of 4.57 log copies/mL. All women had at least one breast milk sample available; a total of 62 samples were analyzed, 32 from the right breast and 30 from the left. Virus was detected at 50 or more copies/mL in the breast milk of 72% of the women, with a median viral load of 2.2 log copies/mL for all 62 samples. A correlation between plasma and breast milk viral load was observed (r = 0.44; P = .036).
Elevated sodium levels consistent with mastitis were identified in 43% of the 56 breast milk samples tested (the test was performed for 17 of the 30 women with samples available from both breasts). Breast milk samples with elevated sodium were 5.4-fold more likely to have HIV-1 RNA levels above the study median.
NNRTI resistance. At the time of study entry, no NNRTI-resistance mutations were found in any patient's virus. However, when plasma virus from all 32 women was amplified and analyzed at eight weeks postpartum, 34% of the women were found to have at least one NNRTI-resistance mutation. Most of the time (82%) this was the K103N mutation, the primary mutation associated with NNRTI resistance. Women with at least one NNRTI mutation had a baseline CD4+ cell count that was significantly lower than that of the women without NNRTI-resistance mutations (323.5 cells/mm3 versus 408 cells/mm3). No other associations were found between resistance mutations and baseline enrollment characteristics.
Sixty-three percent of the women had virus amplifiable from breast milk, yielding 20 paired samples of amplified virus in plasma and breast milk. Within this group, NNRTI resistance, usually the K103N mutation, was found in 50% of plasma samples and 65% of breast milk samples. A comparison of drug-resistant mutation patterns between plasma and breast milk showed that 50% had a different pattern with respect to NNRTI resistance, including one quarter of the paired samples in which an NNRTI-resistance mutation was found in only one compartment (plasma or breast milk). The odds of a detectable NNRTI-resistance mutation in either the left or right breast milk sequence were estimated to be 13.5-fold higher in women with at least one plasma NNRTI-resistance mutation than in women with no plasma mutations. Of the 14 available paired left and right breast milk samples, 36% had a different NNRTI-resistance mutation found in the right versus the left breast milk sample. The researchers also noted that five women had developed nucleoside/nucleotide reverse transcriptase inhibitor-resistance mutations.
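For readers less familiar with odds ratios, the 13.5-fold figure above is the ratio of the odds of detecting a breast milk mutation in women with versus without a plasma mutation. A minimal sketch, using an invented 2x2 table (not the study's actual counts) that happens to yield an OR of 13.5:

```python
# Invented 2x2 table for illustration only (not the study's data):
# rows = plasma NNRTI-resistance mutation present / absent,
# columns = breast milk NNRTI-resistance mutation detected / not.
a, b = 9, 1   # plasma mutation present: milk mutation yes / no
c, d = 4, 6   # plasma mutation absent:  milk mutation yes / no

odds_with_plasma_mutation = a / b      # 9 / 1 = 9.0
odds_without_plasma_mutation = c / d   # 4 / 6 = 0.667
odds_ratio = odds_with_plasma_mutation / odds_without_plasma_mutation

print(f"OR = {odds_ratio:.1f}")  # OR = 13.5 with these invented counts
```

Note that with outcomes this common, an odds ratio overstates the corresponding risk ratio, so "13.5-fold higher odds" should not be read as "13.5 times as likely."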
MTCT. Four infants received a diagnosis of HIV-1 infection (defined as two or more HIV-1 RNA-positive PCR assays). Two were positive at birth. The third was negative at two weeks but positive at eight weeks (intrapartum/early breast milk infection). The fourth infant was negative at 24 weeks but positive at 32 weeks, suggesting the infant was infected by breast milk -- although, interestingly, the virus in the mother's plasma and breast milk was entirely wild type at eight weeks postpartum.
The Bottom Line
This study casts a new shadow on the use of single-dose nevirapine in regions where HIV-1 subtype C is prevalent. The majority of the HIV-1 subtype C-infected women in this study who were breast-feeding their infants had detectable virus in one or more breast milk samples at eight weeks postpartum. These women with detectable virus often had mastitis, which is associated with a high breast milk viral load.
The 34% prevalence of NNRTI resistance in these women is consistent with recent research showing that HIV-1 subtype C may be associated with higher resistance rates.6 This may make single-dose nevirapine a less-viable option in the countries and regions where it could do the most good: China, India and much of sub-Saharan Africa, where subtype C is the most common strain of HIV.
In addition, this study found that plasma NNRTI resistance was associated with a lower maternal CD4+ cell count, making it riskier to use single-dose nevirapine to prevent MTCT in the very people who are most likely to transmit HIV to their babies: mothers with advanced HIV and a high viral load. Given that breast milk is the likely mode of transmission in one third of MTCT cases in this study, the higher resistance rate in breast milk as compared to plasma makes transmission of resistant virus all the more likely.
Also worth noting is that, in one quarter of the cases, NNRTI-resistant virus was found in only one compartment, rather than in both plasma and breast milk. Since baseline plasma sequences did not have NNRTI resistance, these intercompartmental differences were likely due to differences in viral replication and selective pressure between compartments, as slowly declining nevirapine levels may persist for weeks after dosing.
The presence of NNRTI-resistant virus in a high proportion of breast-feeding mothers in sub-Saharan Africa who received single-dose nevirapine is of great concern. Specters loom from all sides. How will NNRTI resistance affect future treatment options for both mother and child? What will NNRTI resistance mean for future unborn children, if the mother conceives a second or third time and again receives single-dose nevirapine? What if the mother transmits resistant virus via breast milk to a child who otherwise made it through the birth process unscathed? How might viral load level and other host factors impact a mother's risk of developing resistance? We need to understand all of the implications of single-dose nevirapine if we are to use it most wisely.
If there is any positive information to provide on this subject, it is that there are methods -- both available and under study -- that can reduce the risk that a mother will transmit NNRTI resistance to her child after the use of single-dose nevirapine. Interventions are ongoing to decrease mastitis, sterilize breast milk and use alternative antiretroviral regimens, all of which can help prevent MTCT. A study involving the administration of daily nevirapine to infants in order to prevent MTCT via breast milk is also underway and should provide enlightening data.
Richardson JL, Nowicki M, Danley K, et al. Neuropsychological functioning in a cohort of HIV- and hepatitis C virus-infected women. AIDS. October 14, 2005;19(15):1659-1667.
HIV/hepatitis C-coinfected women appear more likely to have neurocognitive impairment than women infected with either virus alone, according to an intriguing study by U.S. researchers published in the Oct. 14, 2005 issue of AIDS.7 The study is the first to focus on neuropsychiatric functioning in a large sample of women infected with HIV alone, hepatitis C (HCV) alone, both viruses or neither virus.
HCV infects 1.8% of the U.S. population, and anywhere between 16% and 90% of U.S. HIV-infected patients, depending on transmission risk category.8 Although HCV is best known for its adverse effect on liver function, research also suggests that the virus has neurocognitive effects. Given HCV's high prevalence, particularly among HIV-infected patients, it is crucial that these neurocognitive effects are further investigated.
Neuropsychiatric research has shown that patients infected with HIV experience a higher degree of psychological distress and often exhibit neurocognitive impairment, especially if their disease is at an advanced stage and is not being treated with antiretrovirals. Past research has also shown that patients with HCV experience neurocognitive impairment similar to that seen in HIV-infected patients; HCV-infected patients also experience more fatigue, depression and anxiety, and poorer quality of life, than those who are HCV uninfected.9 To date, there has been only a small body of literature regarding neurocognitive function in HIV/HCV-coinfected patients. This research has shown neurocognitive function to be more impaired in coinfected patients than in patients with either HIV or HCV alone, possibly due to the additive effects of both viruses on the same frontal-subcortical brain areas.10 This study, conducted by Jean Richardson and colleagues at the Keck School of Medicine, succeeds in building our knowledge on this topic.
Study Construction and Patient Characteristics
Patients in this study were part of the Women's Interagency HIV Study (WIHS), a large, ongoing U.S. National Institutes of Health-funded study that has been following approximately 2,000 HIV-infected women and 550 HIV-uninfected women in multiple U.S. cities. Between April 1995 and April 1997, 231 English-speaking women in Chicago and Los Angeles who were enrolled in WIHS were screened for Richardson et al's study. Of those screened, 220 had complete data and met the following criteria: no history of AIDS-defining neurological conditions, including clinical dementia; no history of schizophrenia, bipolar disorder or epilepsy; and no evidence of alcohol intoxication at testing. All 220 women were ambulatory and without acute illness. An index of lifetime drug and heavy alcohol use was constructed, with patients graded as low (0 to 9 years), medium (10 to 30 years) or high (more than 30 years). Patients were stratified by HIV status, HCV status, CD4+ cell count (above or below 200 cells/mm3) and HIV viral load by NASBA technique (less than 4,000, 4,000 to 35,000, and over 35,000 copies/mL).
Other baseline patient data, stratified by HIV and HCV status, appear in the chart below.
Of those with HIV, 52.7% were on antiretroviral treatment. None of the patients were receiving any type of interferon treatment (whether with or without ribavirin [Copegus, Rebetol]) for their HCV.
How Neurocognitive Impairment Was Assessed
A panel of neuropsychiatric tests was administered by a qualified psychologist or supervised psychometrician. Neuropsychiatric tests were selected based on sensitivity to HIV-related cognitive/motor impairment, brevity of administration and, wherever feasible, minimal dependence on literacy or language ability. The tests utilized included the Color Trails, Grooved Pegboard and Symbol Digit Modalities tests, among others.
All neuropsychiatric tests were administered and scored according to standard protocol. The raw scores were then z-transformed using means and standard deviations from the HIV-uninfected control group.
A participant's score was classified as "impaired" (defined as an inability to perform an assigned task) if it fell at least one standard deviation below the mean of the control group; a score was classified as "abnormal" if two or more unique tests fell at least one standard deviation below the mean of the control group.
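The z-transformation and "abnormal" classification described above can be sketched in code. This is a hypothetical illustration, not the study's actual scoring procedure; it assumes scores have already been oriented so that lower values mean worse performance, and the participant's z-scores below are invented.

```python
import statistics

def z_transform(raw_scores, control_scores):
    """z-score raw test results against the uninfected control group."""
    mean = statistics.mean(control_scores)
    sd = statistics.stdev(control_scores)
    return [(score - mean) / sd for score in raw_scores]

def classify(z_by_test):
    """'Abnormal' if two or more unique tests fall at least one
    standard deviation below the control mean (z <= -1)."""
    impaired = [test for test, z in z_by_test.items() if z <= -1]
    return ("abnormal" if len(impaired) >= 2 else "normal"), impaired

# Hypothetical participant: z-scores on three of the tests
status, impaired_tests = classify({
    "Color Trails 1": -1.4,
    "Grooved Pegboard (dominant)": -1.1,
    "Symbol Digit Modalities": 0.3,
})
print(status, impaired_tests)
```

With two tests at or below z = -1, this hypothetical participant would be classified as abnormal under the study's definition.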
The researchers controlled for multiple variables: education, age, ethnicity, history of head injuries, use of potentially sedating medications in the past 24 hours, lifetime drug use, psychological distress using CES-D, estimated verbal IQ using the Quick Test, and testing site.
In all, 39.1% of participants were classified as "abnormal," with significantly more HCV-infected women (48.5%) than HCV-uninfected women (31.7%) meeting these criteria (P < .01). HCV-infected participants had scores in the impaired range in five of the 10 individual tests performed: Color Trails 1 and 2, Grooved Pegboard on the dominant and non-dominant hand, and the Symbol Digit Modalities Test.
In univariate analysis, using the uninfected group as the control, several factors, including HIV and HCV infection status, predicted abnormal neuropsychological results.
Antiretroviral treatment appeared to have a protective effect on abnormal neuropsychological results: Among HIV-monoinfected patients, those on antiretroviral therapy had an OR of 1.87, compared to an OR of 2.19 for those not on therapy. Among HIV/HCV-coinfected patients, the difference was even more pronounced: Those on antiretroviral therapy had an OR of 2.14, compared to an OR of 7.03 for those not on therapy.
In bivariate analysis, the odds of impairment associated with HCV remained significant after controlling for HIV status, treatment status, clinic site, age, education, IQ, CES-D, sedating medication history, head injury history, ethnicity and lifetime substance abuse. In multivariate analysis controlling for all of these variables except age, neuropsychiatric impairment was about three times more likely in coinfected patients than in patients infected with neither HIV nor HCV (OR: 3.07). Adding age to the model dropped the OR to 1.97. The interaction term between age and HCV/HIV status was of borderline significance, suggesting the need for more detailed study.
To further examine the combined effect of HIV/HCV coinfection and age, a computation of OR and confidence intervals across age and infection status ranges was done, controlling for education, ethnicity, IQ and CES-D. The results indicated abnormal neuropsychiatric performance was related both to age and infection status. Age stratification was then performed to more clearly understand the interaction. Among patients under age 40, infection with HIV and/or HCV was associated with an increased risk of neuropsychiatric impairment compared with infection with neither virus.
No significant differences were found for patients over age 40, but sample sizes were small.
Evidence of psychological distress by CES-D scoring was not significantly different between HCV-infected and HCV-uninfected women. Psychological distress was associated with an increased risk of neuropsychiatric impairment, which persisted after controlling for HCV. Likewise, HCV status remained significant after controlling for psychological distress, emphasizing the independent contribution of each factor to neuropsychiatric functioning.
It is important to note that none of the women in this study were treated for their HCV. This takes away the possible interference of medication side effects from HCV therapy, which is known to cause psychiatric problems.
The Bottom Line
Past research has shown poor neuropsychiatric functioning in patients infected with HIV alone and HCV alone, but HIV/HCV coinfection had not been adequately addressed. This is the first study to investigate neurocognitive impairment in women infected with one, both or neither of these viruses.
Due to the cross-sectional design of this study, conclusions must be limited to the observation of a strong statistical association between neuropsychiatric functioning and HCV, and an additive effect between HIV and HCV. Specifically, this study found a substantially increased risk of neuropsychiatric impairment in psychomotor speed, fine motor control, set shifting, working memory, attention and concentration, regardless of a woman's CD4+ cell count. This additive effect may be related to HIV and HCV crossing the blood-brain barrier in a similar manner, possibly in a "Trojan horse" fashion in mononuclear cells, and then going on to affect the same frontal-subcortical regions of the brain.11
It is worth noting some potentially limiting factors: For one thing, the control group was better educated. This could have affected literacy levels and led to a difference in performance on neuropsychiatric tests, despite researchers' attempts to select tests with minimal reliance on literacy. In addition, the study was limited by a lack of HIV and HCV cerebrospinal fluid data. Past studies have shown a relationship between HIV in the cerebrospinal fluid and advancing dementia,12 but whether a similar relationship exists in the setting of HCV infection has yet to be proven. Lastly, patients' degree of hepatic damage was not assessed. Given that advancing fibrosis has been associated with neuropsychiatric impairment in HCV-related liver disease, data regarding hepatic damage would have been helpful.
Nonetheless, this is an important study with clinically significant findings. The authors noted, for instance, that the limitations found in patients' ability to rapidly process multiple sets of information may have great implications for both HIV and HCV treatment adherence, as well as patients' ability to perform their daily activities.
As HCV coinfection moves to the forefront as a cause of morbidity and mortality in HIV-infected patients, and as complex HCV treatment regimens continue to evolve, our need to understand neuropsychiatric impairment in the HIV/HCV-coinfected population will grow dramatically. Hopefully we will see many more studies designed to elucidate questions raised by these researchers' landmark effort. In the meantime, clinicians will need to keep in mind that neuropsychiatric impairment in HIV/HCV-coinfected women is a significant problem that may affect adherence and quality of life. Screening for these impairments, and treating HIV and HCV when appropriate, may have great implications for successful, long-term patient care.
Hulgan T, Raffanti S, Kheshti A, et al. CD4 lymphocyte percentage predicts disease progression in HIV-infected patients initiating highly active antiretroviral therapy with CD4 lymphocyte counts >350 lymphocytes/mm3. J Infect Dis. September 15, 2005;192(6):950-957.
HIV-infected, treatment-naive patients with a CD4+ cell count over 350 may benefit from initiation of HAART if their CD4+ cell percentage is below 17%, researchers suggest in the Sept. 15, 2005 issue of the Journal of Infectious Diseases.13 Should the findings hold up to further study, they may add support for a return to the "hit hard, hit early" approach to HIV treatment -- and may secure a role for the use of CD4+ cell percentage in first-line treatment decisions.
While HAART has dramatically decreased morbidity and mortality in HIV-infected patients over the past 10 years, the debate over proper timing for treatment initiation rages on. How do we balance cost, toxicity and the risks of drug resistance with efficacy, immune reconstitution and the prevention of progression to AIDS? U.S. treatment guidelines generally say that, for asymptomatic patients, we should base this decision primarily on the patient's absolute CD4+ cell count. However, this is not the most reliable measure of a patient's immune health: Absolute CD4+ cell counts vary based on a patient's total absolute lymphocyte count. By contrast, a patient's CD4+ cell percentage is a far more stable measure.
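The arithmetic behind this point is straightforward: the absolute CD4+ cell count is the product of the CD4+ cell percentage and the total lymphocyte count, so a transient swing in total lymphocytes (with an intercurrent infection, for example) shifts the absolute count even when the percentage, and presumably the patient's underlying immune status, is unchanged. A minimal sketch (the lymphocyte values below are hypothetical, chosen only to illustrate the effect):

```python
def absolute_cd4(cd4_percent: float, total_lymphocytes: int) -> int:
    """Absolute CD4+ count (cells/mm3) = CD4+ percentage x total lymphocyte count."""
    return round(cd4_percent / 100 * total_lymphocytes)

# Same patient, same 17% CD4+ percentage, but the total lymphocyte
# count fluctuates between two visits:
print(absolute_cd4(17, 2400))  # 408 cells/mm3 -- above the 350 threshold
print(absolute_cd4(17, 1800))  # 306 cells/mm3 -- below the 350 threshold
```

On an absolute-count criterion alone, these two measurements would place the same patient on opposite sides of the 350 cells/mm3 threshold, while the percentage tells a consistent story.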
A pair of studies conducted during the pre-HAART era (one by J Burcham and colleagues,14 the other by JM Taylor and colleagues15) found that CD4+ cell percentage had greater prognostic significance than absolute CD4+ cell count. More recent research by Kelly Gebo and colleagues, however, seems to support the view that absolute CD4+ cell count is a more accurate guidepost than CD4+ cell percentage in determining whether HIV treatment should be initiated.16 U.S. treatment guidelines would seem to agree. They recommend treatment for patients with a CD4+ cell count below 200, and recommend deferral of treatment for asymptomatic patients with a CD4+ count above 350 and a viral load below 100,000 copies/mL. Nowhere in the guidelines is CD4+ cell percentage mentioned as a criterion for treatment initiation.
In the current study, Todd Hulgan and colleagues from the Vanderbilt University School of Medicine attempted to reexamine the issue of CD4+ cell percentage, in hopes of gauging whether its use could enable clinicians to more accurately determine which of their treatment-naive patients are in need of HAART.
A total of 788 patients from the Comprehensive Care Center in Nashville, Tenn., were included in this study. All patients were at least 16 years of age and initiated HAART between Jan. 1, 1998 and Jan. 31, 2003. All patients remained on their regimen for at least 30 days. One quarter of the enrollees were female, 41% were African American, and the mean age was 37 years.
Baseline CD4+ cell data were stratified by total count (less than 200, 200 to 350, 351 to 500, and more than 500 cells/mm3) and by percentage (less than 14%, 14% to 20%, 21% to 28%, and more than 28%). Also noted was whether the patient initiated therapy with an NNRTI-based regimen (23%), a PI-based regimen (53%) or the triple-nucleoside regimen of zidovudine/lamivudine/abacavir (AZT/3TC/ABC, Trizivir; 20%). Median CD4+ cell count at HAART initiation was 225 cells/mm3, median CD4+ cell percentage was 17%, and baseline viral load was 4.9 log10 copies/mL.
To gauge HIV disease progression, the occurrence of study events was measured. Based on the 1993 U.S. Centers for Disease Control and Prevention guidelines, a study event was considered to be any new opportunistic infection, AIDS-defining illness or death. All events were verified by manual chart review. The median length of follow-up was 103 weeks.
In total, 140 participants (18%) developed an AIDS-defining illness or died during the 103 weeks of follow-up. There were no differences in outcome based on the type of HAART regimen. As expected, progression to AIDS-defining illness or death was significantly greater if the patient's baseline absolute CD4+ cell count was less than 200 cells/mm3 (P < .0001). Progression was also more likely among patients with a baseline CD4+ cell percentage below 17% (P < .0001).
Participants who initiated HAART with an absolute CD4+ cell count greater than 200 cells/mm3 had similar times to a study event, regardless of how far above 200 cells/mm3 their CD4+ cell count was. However, there was a trend toward an increased event rate among participants with a CD4+ cell percentage below 17% (P = .08). In particular, among patients with an absolute CD4+ cell count greater than 350 cells/mm3, those with a CD4+ cell percentage below 17% had a faster progression to AIDS-defining illness or death than those with a percentage of 17% or higher (P = .03).
Cox proportional hazards models were used to assess the effects of baseline characteristics (age, race, baseline viral load, prior antiretroviral therapy and CD4+ lymphocyte percentage) on subsequent progression to AIDS-defining illness or death. Among all study participants, regardless of baseline absolute CD4+ cell count, it was found that CD4+ lymphocyte percentage, HIV-1 RNA level and non-white race were independent predictors of subsequent disease progression or death. Among participants with a baseline absolute CD4+ cell count greater than 350, a CD4+ cell percentage below 17% predicted disease progression (hazard ratio [HR] 3.57).
The Bottom Line
U.S. HIV treatment guidelines are somewhat ambiguous about whether to initiate HAART for patients with an absolute CD4+ cell count above 200. This study showed that a CD4+ cell percentage cutoff may be able to help a clinician assess the need for treatment initiation, not only for the grey area between a CD4+ cell count of 200 and 350, but for a CD4+ cell count above 350 as well -- an area in which U.S. guidelines currently do not indicate a need for treatment, unless a patient's viral load is above 100,000 copies/mL.
In this study, patients with an absolute CD4+ cell count above 350 had a greater risk of disease progression and death if their CD4+ cell percentage was below 17%, rather than above 17%, with a significant HR of 3.57. By contrast, CD4+ cell percentage added no benefit to the decision-making process if a patient's total CD4+ cell count was less than 200, a finding that agrees with previous studies.
Although earlier studies have investigated the utility of CD4+ cell percentage, the present study is both more recent and more expansive. Taylor and Burcham's pre-HAART era studies showed a benefit to using CD4+ cell percentage, probably because they were both done with patients who had a relatively high CD4+ cell count (much like the population of the current study). However, they did not take into account the impact of more modern therapies. The Gebo study did take place during the HAART era, but only looked at persons with a CD4+ cell count below 350, which would explain why it found no benefit to the use of CD4+ cell percentage.
Despite its notable findings, this study was limited by a lack of data on adherence and comorbid conditions. Larger studies are needed to verify its findings regarding the usefulness of CD4+ cell percentage in treatment-initiation algorithms for patients with an absolute CD4+ cell count above 350.
In the meantime, this study may offer support for a slight swing of the pendulum back to the "hit early and hard" approach to HIV treatment, especially if new regimens prove increasingly less toxic than their forebears. For patients with an absolute CD4+ cell count over 350 and a CD4+ cell percentage below 17%, earlier initiation of HAART may be a key to reduced disease progression and improved survival.
This article was provided by TheBodyPRO. It is a part of the publication HIV JournalView.