Was The Hype Justified?
May 25, 2018
Just over 10 years ago a new approach, Adaptive Trial Design, was going to revolutionise drug development – but has it? Here are my reflections.
Often, when we move new agents into pivotal trials, we have some inkling about which patients will show the greatest efficacy. We may also worry that the efficacy in the remainder might dilute the effect enough to result in a negative trial. In this situation we really can have our cake and eat it – well, nearly!
There are designs where we can decide at an interim whether to:
Continue only in a sub-group
Blame the whole stupid subgroup idea on your predecessor, and only test the full population
Hedge your bets and split alpha between the full and subgroup population
Call the whole thing off and stop for futility
The design is published in Jenkins, Stone & Jennison, Pharmaceutical Statistics 2011; 10(4): 347-356, and can be adapted for multiple hypotheses.
There are also versions, such as the Adaptive Signature Design (Clin Cancer Res 2005; 11: 7872-8), where in the first stage you can go fishing for a subgroup and then test that subgroup in the second stage, with the full population tested across all patients recruited. This is also a neat idea, but I worry how likely it is that an enhanced subgroup effect found in this way would be replicated in further patients.
Another potentially attractive idea is to resize a trial if interim results are promising but not quite as good as hoped. Importantly, the statistical considerations apply only when decisions are made on unblinded data.
This, though, is an option for when you want to enquire what's going on inside the statistical black box. There are two ways of analysing the data at the end.
Firstly, you can combine the p-values from data recorded before and after the potential sample-size adjustment, which means there is no p-value penalty at the end. The big drawback, however, is that if you increase the sample size, patients in the second stage carry less weight in the analysis. This is undesirable – why should later patients be less relevant? – and for sure you'll be asked to do an analysis giving all patients equal weight. What if that's not positive?
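As a minimal sketch of how that fixed-weight combination works (the sample sizes and p-values below are hypothetical, and the weights follow the common inverse-normal combination rule):

```python
# Sketch (hypothetical numbers) of the fixed-weight inverse-normal
# combination test: stage-wise p-values are combined with weights fixed by
# the ORIGINALLY planned stage sizes, so no end-of-trial penalty is needed,
# but extra stage-2 patients do not increase stage 2's weight.
from math import sqrt
from statistics import NormalDist

nd = NormalDist()

n1_planned, n2_planned = 100, 100              # planned per-stage sample sizes
w1 = sqrt(n1_planned / (n1_planned + n2_planned))
w2 = sqrt(n2_planned / (n1_planned + n2_planned))

p1, p2 = 0.04, 0.03                            # stage-wise one-sided p-values
z = w1 * nd.inv_cdf(1 - p1) + w2 * nd.inv_cdf(1 - p2)
p_combined = 1 - nd.cdf(z)                     # compare with the one-sided alpha

# Even if stage 2 were enlarged to 200 patients after the interim, w2 stays
# at its pre-planned value, so each later patient carries less weight per head.
```

This is the drawback in miniature: the weights are locked in at the design stage, whatever the trial actually recruits.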
The second option gives all patients equal weight, but the p-value at the end may need adjustment – and that's only necessary if you decided to continue when the chance of success was low (below roughly 30–40%). I would always recommend this option.
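One common way to quantify that "chance of success" at an interim is conditional power. A sketch, assuming a one-sided final test at z = 1.96 and the widely used Brownian-motion (B-value) formulation, with the "current trend" as the default assumption about the remaining data:

```python
# Conditional power: probability of rejecting at the final analysis, given
# the interim z-statistic and an assumed drift for the remaining data.
# Uses the standard B-value formulation; numbers below are hypothetical.
from math import sqrt
from statistics import NormalDist

nd = NormalDist()

def conditional_power(z_interim, info_frac, z_crit=1.96, theta=None):
    """Chance of final rejection given the interim result.

    info_frac is the fraction of total information at the interim;
    theta defaults to the 'current trend' drift estimate."""
    b = z_interim * sqrt(info_frac)        # B-value at the interim
    if theta is None:
        theta = b / info_frac              # current-trend drift
    mean_incr = theta * (1 - info_frac)    # expected remaining B-increment
    sd_incr = sqrt(1 - info_frac)
    return 1 - nd.cdf((z_crit - b - mean_incr) / sd_incr)

# A mid-trial z of 1.0 at half the information lands in the 'low chance
# of success' zone discussed above (roughly a fifth), so continuing from
# here is the situation where the end-of-trial p-value needs adjusting.
cp = conditional_power(z_interim=1.0, info_frac=0.5)
```

A stronger interim result naturally pushes the conditional power up, out of the adjustment zone.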
Having said all of that, a regular group sequential design (GSD) – the one you're used to, with interim analyses and adjustment of significance levels – will almost always be more powerful: for the same number of patients it will have a higher chance of success. For financial reasons, though, a GSD may not always be optimal, if investors cannot be persuaded to part with all of their cash without some more encouragement.
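To see why a GSD has to adjust its significance levels at all, a quick simulation under the null (hypothetical settings: two equally spaced looks, each naively tested at an unadjusted one-sided 2.5%) shows the overall false-positive rate inflating:

```python
# Simulate a two-look trial under the null hypothesis (no treatment effect),
# testing each look at an UNADJUSTED one-sided 2.5% level.
# Hypothetical settings: interim at 50% information, 200,000 simulated trials.
import random
from math import sqrt

random.seed(1)
n_sims = 200_000
z_crit = 1.96          # unadjusted one-sided 2.5% threshold at each look
rejections = 0
for _ in range(n_sims):
    z1 = random.gauss(0.0, 1.0)            # interim z-statistic
    # Final z is correlated with the interim because it reuses the first
    # half of the data.
    z2 = sqrt(0.5) * z1 + sqrt(0.5) * random.gauss(0.0, 1.0)
    if z1 > z_crit or z2 > z_crit:
        rejections += 1

inflated_alpha = rejections / n_sims       # well above the nominal 0.025
```

This inflation is exactly what GSD boundaries (O'Brien-Fleming, Pocock and friends) remove, by lowering the per-look thresholds so the overall error stays at the nominal level.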
Another area where adaptive designs can be helpful – probably the one that has received most attention – is dose selection.
Let’s first consider dose selection within a pivotal trial. Below is an example I was closely involved with. This saved a huge amount of development time, but in hindsight, and for reasons specific to the agent, I wonder if we’d have been better off still performing a seamless design but becoming unblinded to the Phase II data. The Phase II data would then need replacing, but the treatment regimen could have been adapted (Pharmaceutical Statistics 2014; 13: 229-237).
The focus so far has been on pivotal trials, but adaptive designs may also play a key role in dose selection in earlier-phase trials, where, based on accumulating data, randomisation is adjusted to move more patients to the most ‘desirable’ doses and interims are introduced to allow the trial to stop early. Another nice development, but personally I would be tempted to initially randomise patients between the dose with the best chance of efficacy and placebo, to see whether you have any activity at all before trying to optimise benefit/risk by exploring multiple doses/schedules.
That’s all great but…
Whilst the statistics behind many of these designs are complicated, most of that has now been worked out; the most difficult part is deciding who gets to see what, and when – to protect what’s called trial integrity. There are no black-and-white answers here, and different models have been proposed and tried. Trial integrity is all about ensuring that access to interim unblinded data does not bias the benefit and risk estimated from the trial.
An example where the desire for flexibility might backfire is when the sample size is increased because the data are promising but not as good as initially assumed. If that happens, will investigators and patients conclude the drug is clearly not as good as hoped and change their behaviour? Might recruitment suffer? Might poorer-prognosis patients be recruited who are more difficult to treat? In the face of side effects or presumed lack of response, might patients discontinue therapy more quickly?
So in conclusion…
There was a huge amount of hype around 10 years ago, and some of it was just that. Adaptive designs can be operationally complicated, and some of the time saved will be lost to extra planning. However, there are a few opportunities that we tend to miss, and I think Adaptive Designs are here to stay as a useful addition to the design toolbox.