
#432 - Are Adaptive Platform Trials the Future of Neonatal Research? (ft Dr. Brett Manley)


Hello friends 👋

In this interview episode, Ben and Daphna sit down with Professor Brett Manley to discuss a paradigm shift in neonatal research: adaptive platform trials. Frustrated by the inefficiencies and underpowered results of traditional RCTs, Dr. Manley outlines the ambitious PLATIPUS adaptive platform trial launching in Australia and New Zealand. They dive into how shared primary outcomes, novel consent models, and massive cross-center collaboration can answer pressing clinical questions—like optimal PPROM antibiotics and caffeine dosing—simultaneously. Tune in for a fascinating conversation on moving beyond medical dogma, embracing humility, and keeping families at the center of NICU research!


Learn more about the PLATIPUS trial here: https://www.platipustrial.org/


Link to the episode on YouTube: https://youtu.be/jNPeOQRAgno


----


Short Bio: Professor Brett Manley is a consultant neonatologist and inaugural Professor/Director of Newborn Research at the Mercy Hospital for Women in Melbourne, Australia, and a Professor in the Department of Obstetrics, Gynaecology and Newborn Health at The University of Melbourne. Brett leads and collaborates on large trials to improve the health of preterm and sick newborn infants. He currently co-leads the implementation of the PLATIPUS adaptive platform trial in Australia and New Zealand, which aims to change the way we do perinatal clinical trials.


----


The transcript of today's episode can be found below 👇


Ben Courchia MD (00:01.026) Hello everybody, welcome back to the Incubator Podcast. We're back today for an interview with a very special guest. Daphna, you are here in the studio with me today. Good morning. How are you doing?


Daphna Yasova Barbeau MD (00:11.206) Good morning. We've been looking forward to this. We always look forward to interviews, but we have Dr. Manley in with us and we always learn so much when we have the opportunity to hear from him.


Ben Courchia MD (00:26.42) Correct. We have Professor Brett Manley in the studio. Brett, welcome back to the podcast. You've been on the podcast intermittently, but this is our first full-length interview with you. Welcome to The Incubator.


Dr. Brett Manley (00:39.468) Hi Ben, hi Daphna and hi everyone. Always a pleasure.


Ben Courchia MD (00:43.116) You are a consultant neonatologist, inaugural professor, and director of newborn research at the Mercy Hospital for Women in Melbourne, Australia, as well as a professor in the Department of Obstetrics, Gynaecology and Newborn Health at the University of Melbourne. Brett, you lead and collaborate on large trials to improve the health of preterm and sick newborn infants. You currently co-lead the implementation of the PLATIPUS adaptive platform trial in Australia and New Zealand, aiming to change the way we do perinatal clinical trials. That topic is really what led us to chat with you today. The other incentive was that we were looking forward to your talk at the Delphi conference that just passed. Obviously, the weather played some nasty tricks on all of us, but on you specifically, and we didn't get to hear your presentation. This episode is our chance to fill that gap and discuss clinical trials, a topic that definitely needs discussion.


Daphna Yasova Barbeau MD (01:59.654) I think we have to highlight how much Brett tried to get to us. More than anybody could have imagined. Brett kept taking delayed flights, making it all the way to Texas, but not quite to Florida.


Ben Courchia MD (02:04.376) Yes, I'm almost embarrassed.


Dr. Brett Manley (02:16.482) If anyone out there would like some travel tips about how to spend your time at Dallas Airport, if you've got a few days in Dallas Airport, let me know. I've got some great tips.


Daphna Yasova Barbeau MD (02:20.9) Mmm!


Ben Courchia MD (02:26.358) Haha.


Daphna Yasova Barbeau MD (02:29.158) Gosh, Brett, well, we really appreciate that, and we're looking forward to hearing all the details of the talk you were going to give.


Ben Courchia MD (02:30.167) Yeah.


Dr. Brett Manley (02:37.261) Thank you.


Ben Courchia MD (02:39.552) The logical place to start for me is to ask you a little bit about what interests you in how we design trials. We have people who came before us and outlined a path that maybe we could follow. But what got you interested in the way in which we do clinical trials? Can you tell us where that's coming from?


Dr. Brett Manley (03:08.588) Thanks, Ben. The passion that drives all of this is the same as everybody, which is finding ways to improve the health of the patients we look after—the preterm babies, their families, and the sick term babies as well. That's what totally drives me. I feel so frustrated, like a lot of us do, that we don't seem to be making large gains in the 2020s like we have in the past. Let's be honest, the next surfactant or antenatal steroid is not obvious at the moment, though maybe it will come. I'm frustrated just as much by the fact that most of our trials end with null or negative results, or are underpowered, not finished, or underfunded. It just seems like a cycle that goes round and round. Like everybody, I started small. I've done small projects, small bits of research, and small trials, and they've gradually gotten bigger. Now I find myself involved regularly with very large trials, which I think is the starting point: powering trials and designing them well enough that they are actually able to answer an important question. But still, I'm so frustrated by how inefficient the system is and how poorly we collaborate as a community to answer the big questions. The fact that I'm sitting in Melbourne, Australia, and you're both in Florida, just shows how small the world is now. We know each other, we meet each other, and we have to work better together, either regionally or internationally, to answer some of these questions. I've done trials that have taken a long time to finish. Then you've got the long-term follow-up of the patients. It costs a lot of money, and it's a very inefficient system. I've become obsessed with finding ways to try and improve that system.


Ben Courchia MD (05:21.686) It does feel like the age of big trials like the CAP trial—trials that come out and answer a question in a very significant manner—has gone a little bit by the wayside. A lot of trials take a long time to plan and conduct, and when the results come out, we're eagerly waiting for them, only to be disappointed to find out there was not enough enrollment or that funding got cut. We don't get the answer we were hoping for. I am wondering if this is a function of time. Do you think 20 years ago, our field was still in its infancy and performing more traditional trials was feasible, but now we've moved into the adolescent phase of our field and this is no longer possible? What do you think is the reason why traditional trials are not what they used to be?


Dr. Brett Manley (06:34.328) I think a lot of it is the lack of collaboration and the power of the trials. Any differences we are going to find in trials now are likely to be incremental, unlikely to be absolutely world-changing. But you never know. People are looking at amazing new things—the artificial womb, stem cells—things on the horizon that may make a big difference. If we look at BPD, for example, it's unlikely we're going to halve BPD with one intervention or one drug. I could give the example of intratracheal budesonide. We've done the big trials now—the PLUSS trial led from Australia and the big trial in the States—showing absolutely nothing going on in reducing death or BPD. It's frustrating. I think we're also learning from others. Fields like oncology, infectious diseases, and stroke have really led the way with looking at innovative trial designs to answer important questions. But those fields have an advantage that we don't: numbers. It's hard to do our research. It's hard to get consent during a stressful time when the patient can't consent for themselves. If we think about the smallest, sickest babies we're looking after now—the 22- and 23-weekers—there simply are not enough of them to do a trial in a single center or even a region to answer these questions. We're going to have to come together and prioritize big questions. A lot of that will involve putting ego aside and deciding together that we need to answer these questions and this is how we're going to do it. There's still plenty of room for other randomized trials and research going alongside a bigger collaboration that can answer things we haven't been successful in answering until now.


Daphna Yasova Barbeau MD (08:52.102) I love that. We're definitely going to get into some of these up-and-coming types of research design, but something we've talked about in the past underscores what you said about ego. There are a lot of things in neonatology that are dogma that we don't have great evidence for. And there are lots of things we have some evidence for, but we aren't utilizing enough in the unit. I wonder if you can highlight some of those things where you feel we've been overconfident in our evidence or are underutilizing things we already know.


Dr. Brett Manley (09:35.502) I'm not quite sure where to start with that one, Daphna. There's so much there. We just had a meeting here in Melbourne where we brought together neonatologists from around Australia and New Zealand to discuss systemic postnatal corticosteroids. One of my close friends and mentors, Lex Doyle, who you've had on the podcast, led the DART trial more than 20 years ago. That established one way of giving low-dose dexamethasone to get babies off a ventilator, and that has become a standard of treatment. But I'm sure you know—and many of your listeners may not—that it was a trial that stopped early. It was unable to recruit effectively because of deeply held opinions and ended up only enrolling 70 babies. Would we base an entire way of giving postnatal steroids on 70 patients now? No, we would not.


Daphna Yasova Barbeau MD (10:19.312) Mm-hmm.


Dr. Brett Manley (10:30.422) That's nothing against the trial; it was an incredibly difficult trial to conduct at that time and remains some of the best evidence we have. But it's an example of how we continue to do things without really testing them and asking the question. Think about the things we worry about in the NICU. We worry about systemic steroids. We worry about giving too much to the wrong babies. Should we give them or not? Do we consent the families? What do we tell them? What are we really basing that on?


Daphna Yasova Barbeau MD (10:50.672) Mm-hmm.


Dr. Brett Manley (11:00.462) So few babies enrolled in a trial. And whilst we're on steroids, antenatal corticosteroids are the great success story of perinatology. Yet 50 years later, we still are unsure of the drug, the dose, the duration, and the frequency. That's another example of how, maybe if we had our time again, we could collaborate better. There are currently at least half a dozen large traditionally designed randomized controlled trials going on around the world to answer one or maybe two of those questions at a time. To me, that doesn't make a lot of sense when we think about limited resources. Could we not have somehow come together and tried to answer more than one question at a time in one research platform?


Daphna Yasova Barbeau MD (11:53.137) I love your highlighting of collaboration. That underscores almost every lecture you give, and we believe you're absolutely right about that. You're an expert in adaptive platform design, and some people are still a little bit confused about that. Maybe you can explain the utility of adaptive platform design.


Dr. Brett Manley (12:20.364) Thanks for calling me an expert. I'm as expert as you can be as a neonatologist who's been thinking about this for four or five years now. It's been quite the journey, and I'm still very much learning as we go along. Adaptive platform trials are one potential way to answer big questions and be more efficient while we do it. We can point to examples from other specialties, like the RECOVERY trial in the UK, which was a COVID trial that was able to enroll adult patients at a rate never before seen in a clinical trial in a Western country. They had 10,000 patients enrolled in a matter of months and were getting answers to whether therapies were effective within three or four months of their first enrollment. They were then able to continue in a perpetual way to answer multiple questions about therapies for COVID. That's an example of an adaptive platform trial at its most efficient: an entire region comes together, it's well-funded, there's an emergency, there's a simple primary outcome—in their case, mortality—and you've got a huge amount of data accruing to answer questions quickly. That really spiked my interest. We all sat around and wondered, why couldn't we do something like this in perinatology? That's where the idea for the PLATIPUS trial came from. PLATIPUS is an adaptive platform trial designed and being implemented in Australia and New Zealand, and also with some UK colleagues, possibly spreading around the world. We're trying to keep it a little bit under control at the moment while we actually get started and make sure everything works how we want to.


Ben Courchia MD (14:20.842) Before we get into the PLATIPUS trial, I want to dive a little deeper into adaptive trial methodology. For many listeners, either you've never heard of it, or the term has been dropped here and there and it's still not exactly clear how it works. COVID created an environment where we needed answers fast. But when we talk about adaptive trials, there's flexibility in the trial design. How does that allow us to look at patients and outcomes, and pivot without having to wait for the publication of the initial trial, to keep looking at a question in a variety of ways? Can you tell us more about the science behind that?


Dr. Brett Manley (15:50.86) I think the key there is the "P" in APT, the adaptive platform trial: the platform. It's exactly what it sounds like. It's a collaboration, a network, a structure that underlies the ability to run multiple clinical trials at the same time using the same platform. That requires the syncing of governance, statistical support, data monitoring, safety monitoring, consent, ethics, and standardizing outcomes. That's where the collaboration comes in. We're getting all these centers in our region together to be part of this platform. Then there are specific design features required to undertake adaptations. The way the platform works is that every single trial, or "domain" as we call them, within the platform has the same primary outcome. That might be a bit weird to hear because people automatically think they have specific outcomes they want for their individual trials. But that's part of being in the platform. You have to have one outcome. In RECOVERY, their outcome was 28-day mortality. We can only dream of having such a simple primary outcome for trials in preterm babies in tertiary NICUs because we simply don't have enough babies to use such a simple outcome. That's why neonatal and perinatal trials always have complex or composite outcomes; they need more power. That requires coming up with one outcome for a whole lot of trials. In our case, across the spectrum of gestational ages and both antenatal and neonatal interventions, we had to come up with something unique that could fit everything.


Ben Courchia MD (18:08.588) Truthfully, we kind of are all looking at the same thing anyway. We might look at mortality, but ultimately every trial is judged on neurodevelopmental outcomes. It sounds difficult when you say we all have to agree on an outcome, but at the end of the day, we're always measuring interventions against how babies do at two years of age or at school age, tested on the same sort of neurodevelopmental tool. It is a constraint, but it doesn't seem overwhelming for our field since we share the goal of making sure babies do well down the line. When you talk about a platform, does that mean we can stack multiple questions and gather data in parallel to try to answer them simultaneously?


Dr. Brett Manley (19:43.779) That's correct. We've spent the last four years designing how this platform will work, not just thinking about the questions. We went into this with a pretty big task: could we design a platform that answers questions for all gestational ages, both antenatal and postnatal? We've got the mother, the offspring, and antenatal interventions given to the mother that affect both. We've got neonatal interventions given to the baby, who may or may not have already had their mother randomized in an antenatal domain. It emphasizes how difficult and complex it is to think about these questions all ending with one primary outcome and going into the same dataset.


Ben Courchia MD (20:52.844) In traditional research methodology, we study a single question through a trial, get an answer, and then do secondary analyses. With a platform, questions can come and go as they are being answered, and we can continuously use this data instead of being stuck waiting for a protocol to end in three years. Is doing everything in parallel instead of in sequence one of the main advantages?


Dr. Brett Manley (21:38.009) You put it very nicely. It remains to be seen how well this works, so we don't want to get ahead of ourselves, but that's a lot of the thinking behind it. Think about a drug dose: we're used to comparing one drug dose to a placebo or to another drug dose. Why not compare three or four drug doses at the same time? As it becomes clear that one is better or worse, you move forward with the better doses until you find the optimal dose. That's the thinking that goes into the adaptation part of adaptive platform trials. The ability is there to have multiple trials running, answering completely different questions, and within those domains, the ability to have three or more interventions tested against each other. One of the major limitations of standard traditional RCTs is that when you reach your sample size, that's the end. With an adaptive trial, statisticians look periodically at the data and can tell you when to stop. You keep going until you get the answer, theoretically.


Daphna Yasova Barbeau MD (22:55.373) I have a question about the logistics of the adaptive trials design. To play devil's advocate, some people have concerns about adaptive trial design. I wonder if you can speak a little bit about those concerns. Obviously, it's a lot of data, there's a lot of complexity, and they worry about statistical error rates.


Ben Courchia MD (23:39.01) The argument against adaptive trial design is: are you really going to be able to answer the question as well as if you just focused on a single question? It's a trade-off. I'm curious if you have an answer to those concerns.


Dr. Brett Manley (24:21.903) We certainly didn't go into designing an adaptive platform trial thinking it would be easy. They're expensive. We've required a lot of funding to get to the point where we're about to randomize a mother or a baby. There's a huge amount of statistical expertise required. Getting statisticians who understand adaptive trials, the need for Bayesian analysis, and the complexities of the primary outcome we've chosen has been huge. The governance around it is immense. HRECs and IRBs need to understand what this is. We've got to do the same thing for consumers who have never seen this before in our field. How do you approve a platform and then multiple trials happening within it? How do you approach families for consent when there might be two, three, or ten different trials their baby is eligible for? Another concern is that people might think we're trying to take all the credit for the research. We've been very careful to make sure that isn't the perception. People will bring their own ideas, their own funding, and they get the authorship. We simply provide the platform for them to run their trial in and hopefully improve efficiency.


Ben Courchia MD (26:42.656) The money upfront has to be mentioned because it offsets costs down the road. You're putting the cost upfront, but potentially there's a possibility of cost savings.


Dr. Brett Manley (26:55.791) That's what we've worked on, but the truth is we've not done this before as a community. We've brought in a specialist health economics team to look at this, to demonstrate whether it is actually more efficient and cost-effective. Ultimately, what we want is improved health outcomes, but there are a lot of uncertainties.


Ben Courchia MD (27:31.702) What is the difference between this and just building a massive database? Can you walk us through how the data is acquired?


Dr. Brett Manley (27:55.267) We've spent three years building a bespoke database. Your standard REDCap is not going to cope well with something like this. The key is to have data coming in for your primary outcome that really educates the adaptations. In our field, we know we need hundreds or thousands of babies to get answers. Having a shared protocol, a shared primary outcome, and a shared network of expertise—statisticians, clinicians, health economists, consent, and ethics—makes it a great place to bring your trial. It's not going to change the fact that we need other trials. This is just one way to improve efficiency. And if your hospital is participating and we test drug dose A against drug dose B, and A is better, it can immediately become your standard of care. Then you can bring in another dose or an adjunctive drug on top of that.


Ben Courchia MD (30:02.196) A great trial we recently reviewed was the TORPIDO trial in JAMA. They were testing two different oxygen targets. In the discussion, they wondered if they had tried different numbers, maybe they would have found something different. But doing another trial is so much work.


Dr. Brett Manley (30:22.383) I know Ju Lee Oei well, and some of our centers were part of TORPIDO. That's exactly the right example. Or intratracheal budesonide. We've done the PLUSS trial, and then someone at PAS asks, "Did you think about doubling the dose?" Of course we thought about it, but it's another trial! In an adaptive trial, we could have potentially said that dose doesn't work and moved on to double the dose. Questions like caffeine dosing or steroid dosing fit very nicely in an adaptive trial.


Ben Courchia MD (31:08.812) For the PLATIPUS trial, do you already have some clinical questions you'll be looking at?


Dr. Brett Manley (31:39.347) We wanted to start with something antenatal and something neonatal to ensure efficiency across that spectrum from mother to baby. We picked relatively low-hanging fruit that we knew was controversial. For the antenatal domain, we've gone with antibiotics for preterm prolonged rupture of membranes (PPROM). When we surveyed obstetric clinicians, there were more than 90 different antibiotic regimens being used for the same condition. Before we get smug as neonatologists, we're worse. For the neonatal domain, we went with caffeine dosing. A question people are interested in is higher doses of caffeine than were used in the CAP trial. We got those funded and they're about to start.


Ben Courchia MD (33:20.672) Your team is quite good at coming up with names. PROMOTE to look at optimal antibiotic treatment for PPROM, and the BabyChino trial looking at caffeine citrate to improve neonatal outcomes. Love those names.


Dr. Brett Manley (33:44.239) That's actually what we spent four years doing. We haven't done much else, just picking the names!


Ben Courchia MD (33:49.08) People can find more information on platipustrial.org. What's interesting is that adaptive trials seem to be a solution to complexity, yet designing them the right way requires a robust team and infrastructure. What have you learned so far in putting together the PLATIPUS trial about setting it up in the right manner?


Dr. Brett Manley (35:15.053) Don't go into it lightly. It's getting harder to get these funded because people realize how much work goes into it. We were a little bit lucky that we got in early in the wave. We've learned hard lessons along the way. We went very broad trying to make this a platform for late preterm babies as well as 22-weekers. In retrospect, maybe if we'd concentrated on extremely preterm infants, it would have made our lives easier. There is a huge amount of statistical expertise needed, picking a primary outcome that fits everything, and a lot of validating of our chosen primary outcome—an ordinal ranked scale. It's not made to answer every question; there's still plenty of room for large, well-designed traditional RCTs.


Ben Courchia MD (37:28.79) It's not meant to be a replacement. It's an additional tool in our box. It doesn't eclipse the need for observational data or traditional randomized controlled trials.


Dr. Brett Manley (38:05.099) Exactly right.


Daphna Yasova Barbeau MD (38:06.561) I have a logistics question about the PLATIPUS trial. You have this prenatal set of interventions with PROMOTE and then postnatal with BabyChino. Are you able to use them in tandem with the participants? Because babies have different prenatal exposures, managing both helps better describe the populations we see.


Dr. Brett Manley (40:10.379) The mother and the offspring are both eligible to be in one or multiple trials within this platform. That provides complexity, but also the opportunity to analyze how they interact. Statistically, you can 'borrow' from one group to compare to others. In the case of antibiotics, a fetus might have been exposed to many different regimens. At least we're standardizing that and able to see how they interact.


Daphna Yasova Barbeau MD (42:05.245) You've done a lot of work in ethics and family-centered care. How does adaptive trial design impact families, consent, and transparency?


Dr. Brett Manley (42:22.413) We have a consent and ethics committee, and a Lived Experience Committee of adults born preterm or parents of preterm infants involved right from the top of governance. We've been pushing the envelope with methods of consent. We're looking at e-consent, using animations and videos on an iPad. We're trying to reduce paperwork. It's ridiculous to expect families who've just had a 22-weeker to read a 50-page document. People with lived experience tell us they don't want to be asked about research at difficult times; if the research is comparative effectiveness and relatively low risk, they want us to get on with it and talk to them afterwards. For the BabyChino trial, if it's the middle of the night and parents aren't available, we are approved to give the first dose of caffeine and approach families within the coming days for consent to continue. If we keep doing what we're doing, we're causing harm because we can't get generalizable populations.


Ben Courchia MD (46:56.086) So adaptive trials could be more family-centric.


Dr. Brett Manley (47:01.251) If you design them that way. We've embedded consumers and Indigenous committees—Māori in New Zealand, Indigenous Australians—to address inequity. We're always astounded by how generous families are with research during stressful times. They want to contribute and improve outcomes for future generations.


Daphna Yasova Barbeau MD (48:15.46) How do you balance your responsibility to the patient in front of you, evidence-based practice, and keeping families centered in the treatment plan?


Dr. Brett Manley (48:56.975) Honesty and humility. Being honest about what you don't know. I'm yet to have a family tell me I'm an idiot because I don't know the best dose or why their baby got NEC. It's about admitting you're doing what you think is best based on what we currently know. With colleagues, I talk about evidence-based medicine a lot, but you can't live your clinical life harping on the lack of RCTs for everything. I've never enjoyed it when people have said they definitely know the right thing at all times. I struggle with that concept.


Ben Courchia MD (51:00.546) Brett, this was a pleasure. People can learn more at platipustrial.org and find you on LinkedIn. Thank you very much for your time today.


Dr. Brett Manley (51:43.545) Thanks Ben, thanks Daphna. Lovely to be here again.


Ben Courchia MD (51:46.188) Yeah, same.
