By Kristen Jill Kresge
Results from two recently conducted clinical trials of different prime-boost AIDS vaccine regimens were presented at the 15th Conference on Retroviruses and Opportunistic Infections (CROI), held February 3-6 in Boston.
The first trial, conducted by the HIV Vaccine Trials Network (HVTN) at multiple sites in the US, tested an immunization regimen consisting of two injections of a DNA candidate followed by two injections of a modified vaccinia Ankara (MVA) vector-based candidate, both developed at the Emory Vaccine Center and now licensed to the biotechnology company GeoVax. Both candidates contain fragments of HIV to stimulate an immune response against the virus, but neither can cause an HIV infection. Harriet Robinson, who recently left Emory to join GeoVax, presented results from this trial, known as HVTN 065.
Researchers evaluated the safety and immunogenicity of two different doses of the DNA and MVA-based candidates, each in 30 volunteers (see VAX August 2007 Primer on Understanding Immunogenicity). Researchers assessed the immune responses induced by the candidates two weeks after each injection of the MVA candidate. Based on these results, Robinson said the higher dose of the prime-boost combination will be tested further. In a second phase of this study, two groups of 30 volunteers will receive either a single injection of the DNA candidate followed by two injections of the MVA-based candidate, or three injections of the MVA-based vaccine candidate.
Researchers at CROI also presented data from another Phase I/II trial, conducted in Mbeya, Tanzania. This trial tested the safety and immunogenicity of the DNA and adenovirus serotype 5 (Ad5)-based candidates developed by the Vaccine Research Center (VRC), part of the US National Institute of Allergy and Infectious Diseases. The trial was conducted by the United States Military HIV Research Program and was one of a series of Phase I and II studies with the VRC’s candidates in preparation for the originally planned Phase IIb test-of-concept trial known as PAVE 100. The start of PAVE 100, however, was placed on hold after the results of the STEP trial were released (see VAX October-November 2007 Spotlight article, A STEP back?).
The majority of participants in this trial had high levels of anti-Ad5 antibody at the start of the trial, resulting from prior exposure to the naturally circulating Ad5 virus. Yet all individuals mounted some level of HIV-specific immune response following receipt of the Ad5-based candidate, indicating that pre-existing immunity to Ad5 did not completely block the immune response to the vaccine candidate.
How are statisticians analyzing the data from the STEP trial?
AIDS vaccine candidates are tested in randomized, controlled, double-blind clinical trials to evaluate their safety and to determine whether or not a specific candidate induces immune responses against HIV (see VAX October-November 2007 Primer on Understanding Randomized, Controlled Clinical Trials). Late-stage clinical evaluation—including both Phase IIb test-of-concept and Phase III trials—looks specifically at the efficacy of a vaccine candidate based on its ability to protect an individual against HIV infection or provide some degree of partial efficacy (see VAX May 2007 Primer on Understanding Partially Effective AIDS Vaccines).
All of these trials are carefully planned by biostatisticians using mathematical formulas to determine key factors related to the design of the trial, such as the total number of volunteers that must be enrolled. Before a trial begins, biostatisticians also set an analysis plan detailing the types of statistical calculations that will be performed on the data. This is critical to the interpretation of the final results.
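The kind of sample-size calculation mentioned above can be sketched in Python. The formula below is the standard normal-approximation formula for comparing two proportions; the incidence figures and the choice of 80% power are illustrative assumptions, not taken from any actual trial.

```python
from math import sqrt, ceil

def sample_size_per_group(p_placebo, p_vaccine, z_alpha=1.96, z_beta=0.84):
    """Approximate number of volunteers needed per group to detect a
    drop in HIV incidence from p_placebo to p_vaccine, using the
    normal-approximation formula for comparing two proportions
    (defaults: 5% two-sided significance, 80% power)."""
    p_bar = (p_placebo + p_vaccine) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_placebo * (1 - p_placebo)
                                 + p_vaccine * (1 - p_vaccine))) ** 2
    return ceil(numerator / (p_placebo - p_vaccine) ** 2)

# Hypothetical scenario: 3% incidence in the placebo group, and a
# vaccine that cuts incidence in half (to 1.5%)
n = sample_size_per_group(0.03, 0.015)
```

Note how quickly the required enrollment grows as the effect being sought gets smaller; this is one reason efficacy trials must enroll thousands of volunteers.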
Once a trial is complete, researchers can compare the group of individuals who received the vaccine candidate to those who received an inactive placebo and see what effect, if any, the candidate had on either incidence of HIV infection or on certain markers of disease progression—such as the amount of virus in the blood, or viral load—in those individuals who were infected with HIV during the trial. If there is a difference between the two groups, statisticians can conduct a series of calculations to determine whether the difference was due to the vaccine candidate, or if it was merely the result of chance. This is referred to as determining the statistical significance of a result. A test of statistical significance provides a measure of credibility to the results. If the trial was designed and conducted properly, a statistically significant difference between the vaccine and placebo groups means the results were unlikely to have occurred by coincidence.
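One standard test of statistical significance for comparing infection counts in two groups is Fisher's exact test. The sketch below implements a one-sided version from scratch using only the Python standard library; the infection counts are hypothetical and chosen only to show that a small excess of infections can easily arise by chance.

```python
from math import comb

def fisher_exact_one_sided(vax_inf, vax_ok, pla_inf, pla_ok):
    """One-sided Fisher's exact test on a 2x2 table of trial outcomes:
    the probability, assuming the vaccine has no effect, of seeing at
    least vax_inf infections in the vaccine group by chance alone."""
    n = vax_inf + vax_ok + pla_inf + pla_ok   # total volunteers
    vax_n = vax_inf + vax_ok                  # size of vaccine group
    total_inf = vax_inf + pla_inf             # infections in both groups
    def prob(k):
        # hypergeometric probability that exactly k of the infections
        # fall in the vaccine group under the null hypothesis
        return comb(total_inf, k) * comb(n - total_inf, vax_n - k) / comb(n, vax_n)
    return sum(prob(k) for k in range(vax_inf, min(vax_n, total_inf) + 1))

# Hypothetical counts: 24 infections among 900 vaccine recipients vs.
# 21 among 900 placebo recipients -- a difference this small has a
# high probability of occurring by coincidence
p = fisher_exact_one_sided(24, 876, 21, 879)
```

A p-value well above the conventional 0.05 threshold, as here, means the observed difference would not be considered statistically significant.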
The STEP trial, which tested Merck’s AIDS vaccine candidate known as MRKAd5 in a Phase IIb test-of-concept trial involving 3,000 volunteers, is an example of a clinical trial in which further statistical analysis is required. In November 2007 researchers reported that this vaccine candidate offered no benefit. Data analysis indicated there was no statistically significant difference in either the number of HIV infections or viral load levels between the vaccine and placebo groups. In addition, the data actually showed a trend toward more HIV infections occurring in individuals who received the vaccine candidate. This was an unexpected result. The initial statistical analysis plan for the trial was not designed to measure this effect, and therefore statisticians could not rely on typical tests of statistical significance to determine whether the vaccine enhanced the risk of HIV infection or whether the difference occurred merely by chance. This makes interpretation of the observed trend very complicated.
Volunteers in AIDS vaccine trials are usually randomly assigned to either the vaccine or placebo group (see VAX October-November 2007 Primer on Understanding Randomized, Controlled Clinical Trials). This reduces the chance that variables such as age, ethnicity, gender, or other baseline characteristics of the volunteers will skew the final results of the trial. After a trial is complete, researchers can look at the background characteristics of the volunteers to determine how well the trial was actually randomized.
Statisticians can also design a trial by randomizing volunteers based on a specific variable that they think may confound the results. In this process, known as stratification, a pre-specified number of volunteers with a previously-identified characteristic are randomly placed into the vaccine and placebo groups. In the STEP trial, volunteers were stratified based on their level of pre-existing immunity to the naturally circulating cold virus (adenovirus serotype 5, or Ad5), which was used in a disabled form as the vector in this vaccine candidate (see VAX September 2004 Primer on Understanding Viral Vectors). Initial analyses showed that the trend toward a higher number of HIV infections in vaccine recipients was apparent in the sub-groups of volunteers who had pre-existing Ad5 immunity.
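The stratification step described above can be illustrated with a short sketch: volunteers are first grouped by the stratifying characteristic, then randomized separately within each group. This is a simplified illustration, not the STEP trial's actual randomization procedure, and the volunteer identifiers and Ad5 categories are invented for the example.

```python
import random

def stratified_randomize(volunteers, stratum_of, seed=0):
    """Randomly assign volunteers to 'vaccine' or 'placebo' separately
    within each stratum, so the two arms stay balanced on the
    stratifying characteristic (e.g. pre-existing Ad5 immunity)."""
    rng = random.Random(seed)
    strata = {}
    for v in volunteers:
        strata.setdefault(stratum_of(v), []).append(v)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)           # randomize order within the stratum
        half = len(members) // 2
        for v in members[:half]:
            assignment[v] = "vaccine"
        for v in members[half:]:
            assignment[v] = "placebo"
    return assignment

# Hypothetical example: 40 volunteers, the first 24 with pre-existing
# Ad5 antibodies and the rest without
volunteers = [f"v{i:02d}" for i in range(40)]
ad5_status = lambda v: "Ad5-positive" if int(v[1:]) < 24 else "Ad5-negative"
assignment = stratified_randomize(volunteers, ad5_status, seed=42)
```

Because each stratum is split evenly, the vaccine and placebo arms end up with the same mix of Ad5-positive and Ad5-negative volunteers, which is exactly what stratification is meant to guarantee.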
More complex analyses were then conducted to see how other factors, in addition to pre-existing Ad5 immunity, influenced the observed results. These so-called multivariate analyses allow statisticians to analyze several variables simultaneously. The most relevant risk factor identified so far for the STEP trial was male circumcision status. Volunteers who received the vaccine candidate were four times more likely than placebo recipients to become HIV infected if they were both uncircumcised and had some degree of pre-existing Ad5 immunity.
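The "four times more likely" figure is a risk ratio: the incidence of infection in one arm divided by the incidence in the other, computed within a subgroup. The subgroup counts below are hypothetical and chosen only to show the arithmetic.

```python
def risk_ratio(inf_vax, n_vax, inf_placebo, n_placebo):
    """Ratio of HIV incidence in the vaccine arm to incidence in the
    placebo arm. A value near 1 means no apparent effect; a value
    above 1 suggests more infections among vaccine recipients."""
    return (inf_vax / n_vax) / (inf_placebo / n_placebo)

# Hypothetical subgroup (uncircumcised, Ad5-seropositive volunteers):
# 20 infections among 500 vaccine recipients vs. 5 among 500 placebo
# recipients would yield a risk ratio of about 4
rr = risk_ratio(20, 500, 5, 500)
```

A multivariate analysis goes further than this single ratio, estimating the contribution of each variable (circumcision status, Ad5 immunity, and others) while holding the rest constant.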
According to the investigators of the STEP trial, the trend toward an association between circumcision status and the risk of HIV infection appeared to be at least as strong as, if not stronger than, the trend toward an association between HIV infection and pre-existing immunity to Ad5. However, these results must be interpreted with caution since the multivariate analyses were not part of the original statistical analysis plan for this trial and were only performed because of the unexpected results. This is called a ‘post-hoc’ analysis, or one done after the fact. Post-hoc analyses provide much less reliable information.
Investigators are now in the process of analyzing the STEP data based on other variables as well. Information collected from these analyses may help researchers develop hypotheses that can then be investigated further.