Join us this week for a fascinating discussion about artificial intelligence in neonatology with doctors Kristyn and Andrew Beam.
Dr. Kristyn Beam is an attending neonatologist at Beth Israel Deaconess Medical Center in Boston, MA. She is also an Instructor in the Department of Pediatrics at Harvard Medical School. She recently completed her clinical fellowship in neonatal-perinatal medicine in the Harvard Combined Neonatal-Perinatal Fellowship as well as the Harvard-Wide Pediatric Health Services Research Fellowship, through which she obtained her Master of Public Health with a focus on Quantitative Methods at the Harvard T.H. Chan School of Public Health. Her research focuses on machine learning applications for neonatal data, with a focus on improving our decision-making in the NICU at the point of care and ultimately improving neonatal outcomes.
Dr. Andrew Beam is an assistant professor in the Department of Epidemiology at the Harvard T.H. Chan School of Public Health, with secondary appointments in the Department of Biomedical Informatics at Harvard Medical School and the Department of Newborn Medicine at Brigham and Women’s Hospital. His research develops and applies machine-learning methods to extract meaningful insights from clinical and biological datasets, with a special focus on neonatal medicine. He is the recipient of a Pioneer Award from the Robert Wood Johnson Foundation for his work on medical artificial intelligence. In addition to his academic work, Dr. Beam has been involved with several successful entrepreneurial ventures and has received several patents. He is the founding head of machine learning at Generate Biomedicines, Inc., a venture-backed biotechnology company that uses machine learning to improve our ability to engineer novel therapeutic proteins. To date, Generate has raised over $400 million in venture capital and employs more than 80 people.
You can reach out to Kristyn or Andrew for questions/potential collaboration by email at: beam.andrew@gmail.com, kristyn.beam@gmail.com
If you are interested in responding to Dr. Kristyn Beam's survey regarding clinical decision support tools, please email her (kbeam@bidmc.harvard.edu) or contact her on Twitter (@swanbeams) for a link.
Please find below some of the links to resources discussed on this week's episode.
Conferences that Dr. Andrew Beam was referring to on the show:
https://www.chilconference.org/?ref=the-incubator.org
Beam, K.S., Lee, M., Hirst, K. et al. Specificity of International Classification of Diseases codes for bronchopulmonary dysplasia: an investigation using electronic health record data and a large insurance database. J Perinatol 41, 764–771 (2021). https://doi.org/10.1038/s41372-021-00965-3
Yu, K.H., Beam, A.L. & Kohane, I.S. Artificial intelligence in healthcare. Nat Biomed Eng 2, 719–731 (2018). https://doi.org/10.1038/s41551-018-0305-z
Ghassemi, M., Oakden-Rayner, L. & Beam, A.L. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Health 3(11), e745–e750 (2021). https://doi.org/10.1016/S2589-7500(21)00208-9
The transcript of today's episode can be found below 👇
SUMMARY KEYWORDS
nicu, clinicians, algorithm, ai, data, work, collecting, biases, machine learning, people, baby, good, bit, medicine, hospital, machine, sepsis, patients, outcomes, lots
SPEAKERS
Andrew Beam, Ben, Daphna, Kristyn Beam
Ben 00:47
Hello, everybody, welcome back to the podcast. Dr. Barbeau, how was that call? Exhausted. But we're in a new hospital,
Daphna 00:55
but we're in the hospital. It's so exciting. You know, we're such a good team that it doesn't really matter where we are. Yeah,
Ben 01:04
we're now at the HCA Florida University Hospital, located on the Nova Southeastern University campus. And, yeah, all the stuff, all the tools, all the cool equipment that we purchased and are ready to use for QI and research is finally here. So that's all very exciting. To that note, actually, our physician group here at the University Hospital has social media accounts where we're going to sort of document the progress and some of the cool things we're doing in the unit. So go follow us. We are on Instagram at NovaNeos, N-O-V-A and N-E-O-S. We are on Twitter at NovaNeonatology. And we're also on LinkedIn; Nova Neonatology is our handle. And yeah, I mean, we have a lot of cool stuff planned. We're working with the innovation center of Nova Southeastern, and it's kind of cool when you see the possible applications of technology in the NICU, right?
Daphna 02:04
we are very excited, very excited about some of the things, and just as exciting is our ability to collaborate with all of the other colleges on campus, which, again, for academic centers, is not rocket science, but yet the medical center tends to function in a silo. And so being able to reach out, who have we reached out to? Engineering, psychology, computer science, the language arts people, early childhood development, the art colleges. And so we're really diving in.
Ben 02:42
And it's kind of nice to see that there's excitement about our new presence on campus; people want to work with us. And I think it's a critical time where, when there's momentum, you have to capitalize on it. Otherwise, people get discouraged and move on to something else. So we're hoping to make the most of that, and that actually transitions quite seamlessly into who our guests are today. So without further ado, I'm going to introduce both of our guests, a husband-and-wife couple. And yeah, they have long bios; they're very accomplished people. So let's just get right into it. Dr. Kristyn Beam is an attending neonatologist at Beth Israel Deaconess Medical Center in Boston, Massachusetts. She is also an instructor in the Department of Pediatrics at Harvard Medical School. She recently completed her clinical fellowship in neonatal-perinatal medicine in the Harvard Combined Neonatal-Perinatal Fellowship as well as the Harvard-Wide Pediatric Health Services Research Fellowship, through which she obtained her Master of Public Health with a focus on quantitative methods at the Harvard T.H. Chan School of Public Health. Her research focuses on machine learning applications for neonatal data, with a focus on improving our decision-making in the NICU at the point of care and ultimately improving neonatal outcomes. Dr. Andrew Beam is an assistant professor in the Department of Epidemiology at the Harvard T.H. Chan School of Public Health, with secondary appointments in the Department of Biomedical Informatics at Harvard Medical School and the Department of Newborn Medicine at Brigham and Women's Hospital. His research develops and applies machine learning methods to extract meaningful insights from clinical and biological datasets, with a special focus on neonatal medicine. He is the recipient of a Pioneer Award from the Robert Wood Johnson Foundation for his work on medical artificial intelligence. In addition to his academic work, Dr. Beam has been involved with several successful entrepreneurial ventures and has received several patents. He is the founding head of machine learning at Generate Biomedicines, Inc., a venture-backed biotechnology company that uses machine learning to improve our ability to engineer novel therapeutic proteins. To date, Generate has raised over $400 million in venture capital and employs more than 80 people. Kristyn and Andrew, thank you for being on the show with us today.
Kristyn Beam 04:59
It's so great to be here. This is the first podcast I've ever done, so it's really exciting.
Andrew Beam 05:06
Yeah, I'm excited to be here too. Longtime listener, first-time caller. I've been keeping up with the show. Excited to be on.
Ben 05:13
Thank you, thank you. So for the people who may not be aware, you are husband and wife. And so you are the AI, the artificial intelligence, couple.
Daphna 05:26
The image duo. Yeah,
Ben 05:27
the dynamic duo. So I guess where I wanted to start the interview is from the standpoint of definitions. I think the concept of artificial intelligence has crept into our common vernacular; we hear it on the radio, on TV. But I don't know if everybody really understands what, practically, AI, or even machine learning for that matter, means. So could you help our audience briefly define what artificial intelligence is supposed to be and do, and, if we can talk about machine learning by the same token, that would be, I think, a good place to start.
Kristyn Beam 06:04
Yeah, so as the clinician, I'm gonna defer that answer to Andrew, and let him sort of dive into those definitions for you guys.
Andrew Beam 06:13
Yeah. So I'll give you sort of a brief history of AI and how we got here, and that will help define some of those terms. AI as a field goes back to the 1950s. There was this sort of summer camp for nerds that happened in 1956 at Dartmouth, where lots of computer scientists got together and said, hey, wouldn't it be cool if we could get machines to do things intelligently? So the term artificial intelligence refers to the goal of getting computers to behave in an intelligent way, and what the field has done since then is try to accomplish that goal. In the 70s and 80s, there were these things called expert systems; they were super popular in medicine, and I can give you references of expert AI systems in the 80s that people made. Really, it was computer scientists and clinicians sitting down together, and the computer scientists would try to elicit the reasoning process of the clinicians and then write that down as a program: when you see a person with these conditions, what type of disease do you think they have? It turns out that that expert system approach doesn't scale very well, and it's brittle. You can imagine, as clinicians, trying to articulate the entirety of your reasoning process is something that you probably can't do; sometimes you know it when you see it, and it's hard to formalize those rules. So what has happened over the last 20 years, especially with the rise of bigger and bigger datasets, is that instead of trying to write down what intelligence is, we learn it from data. We have very powerful machine learning algorithms that, if you show them a dataset where those decision rules are implicitly encoded, are able to extract those statistical patterns and learn to do the task. Reading chest X-rays is a great example of this. Spotting pneumonia in a chest X-ray doesn't lend itself to that very rigid expert system approach, but if you gather up enough X-rays of healthy people and people with pneumonia and show them to a machine learning algorithm, it can extract those decision rules automatically. So when we say machine learning, it's really that statistical pattern recognition, big data approach to artificial intelligence. The goal hasn't changed; we still want intelligent computers. But how we accomplish that goal is what has changed over the last 20 years, and that's where machine learning comes in.
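To make the contrast concrete for readers, here is a minimal sketch in Python of the two paradigms described above: a hand-coded expert rule versus a decision rule learned from labeled examples. The features, thresholds, and data are invented purely for illustration.

```python
# A minimal sketch of the two paradigms, using made-up features and data.
from sklearn.linear_model import LogisticRegression

# 1980s-style expert system: a clinician's reasoning, hand-coded as a rule.
def expert_system(temp_c, wbc):
    if temp_c > 38.0 and wbc > 15.0:
        return "suspect infection"
    return "no action"

# Modern machine learning: the rule is *learned* from labeled examples.
X = [[36.8, 9.0], [38.5, 17.0], [37.1, 11.0], [39.0, 21.0]]  # [temp, WBC]
y = [0, 1, 0, 1]  # 0 = no infection, 1 = infection (gold-standard labels)

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[38.2, 16.0]])[0, 1])  # learned probability
```

The hand-written rule has to be right up front; the learned model only needs the examples, and its implicit rule improves as more labeled cases are added.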
Ben 08:30
I think you're touching on a first point there that is so important. For people who are less familiar with what machine learning and artificial intelligence are, I think people think of a dataset, right? They think of an Excel sheet: I go, I plug in the numbers, and this computer will tell me necrotizing enterocolitis. Right? But that's not really all that it is, and you mentioned the reading of X-rays. Conceptually, this may be difficult to understand: what do you mean, the computer is going to read an X-ray? The computer doesn't have eyes. So can you tell our audience what computers can do when it comes to actual visualization of images, ultrasounds, and things like that? Because I think that's new for a lot of people.
Andrew Beam 09:13
Yeah, so just to continue the thread: what has happened over the last 10 years is the rise of a specific kind of machine learning approach called deep learning. And deep learning has given us sort of artificial eyes. It's very good at analyzing images, it's very good at analyzing text. And actually, data as it exists in an Excel spreadsheet form isn't as good a fit for deep learning models; you really need rich data, like imaging and text, where there's lots of information about the patient encoded in the dataset. So I think what has really happened over the last 10 years is that we have had new types of data unlocked for us. Traditionally, it was very hard to analyze images; now, because of deep learning, it's very easy. And one of the things that I always try to get my clinician friends to think about is: what types of things do you have imaging data on? Even if you don't think the imaging data might be relevant for your research question, there could be physiological signal in there that these deep learning algorithms can extract for us.
Kristyn Beam 10:21
And I think on that note, what we as clinicians often don't realize about images is that behind the image, there's just a whole slew of data that makes that image. It's not that the computer is actually looking at the picture and seeing what the picture looks like; it's that the algorithm is looking at the data behind the picture and thinking about relationships between each individual pixel in the picture to help build the algorithm and the prediction model. So I think that's probably some of the confusion that exists when we're talking about using these algorithms for image analysis.
Ben 11:00
And I think that underscores the level of sophistication, because we're thinking of a global image, but the algorithm will actually break it down by pixel, look at the shades of each pixel in relationship with the surrounding pixels, determine patterns from that standpoint, and then come up with an output that either points toward a certain diagnosis or not. And once you start appreciating that level of complexity, you can understand some of the potential abilities of AI and machine learning.
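To illustrate "the data behind the picture," here is a small sketch in Python, using a random stand-in image, showing that a grayscale X-ray is just a grid of pixel intensities, and that a tiny filter relates each pixel to its neighbors. Deep learning models learn stacks of filters like this rather than "looking" at the image the way we do.

```python
import numpy as np
from scipy.signal import convolve2d

# A grayscale image is just a 2D array of pixel intensities (0-255).
# Here we use random values as a stand-in for a real X-ray.
image = np.random.randint(0, 256, size=(512, 512))

# An edge-detecting filter compares each pixel to its eight neighbors;
# a convolutional network learns thousands of filters like this one.
edge_filter = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])
feature_map = convolve2d(image, edge_filter, mode="same")
print(feature_map.shape)  # (512, 512): one response value per pixel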
Kristyn Beam 11:29
Yeah, exactly. And that's where some of my research interests in this area have really come from, because I think there are signals in those images that we as clinicians are unable to appreciate.
Andrew Beam 11:42
Yeah, and I don't want to jump ahead of the questions here, but I think that a lot of the imaging stuff will end up being sort of pan-diagnostic. So not just diagnosing a pneumothorax or something on a chest X-ray, but mortality prediction; there's lots of physiological information encoded in that. I mean, there was a paper that showed you could develop a cardiovascular risk score on the basis of retinal imaging, because vascular health, blood pressure, and smoking status are encoded in people's eyes. So there are all these rich signals that have diagnostic and prognostic utility, and one of the things that I think Kristyn and I are excited about is bringing that to the NICU. If you're going to get a chest X-ray anyway, what else can we use that data for to inform clinical decision making?
Daphna 12:34
My mind is already a little bit blown, I have to tell you, because I am one of those people who did not understand the terms. And I'll tell you, we were looking for a virtual assistant, and I literally thought it was a robot assistant. So this is very earth-shattering to me. I really liked that description, because I didn't even understand how images are processed, right? That will stick with me. Can you maybe bring us back a little bit and explain: how do you even get the computers to learn, to process that data?
Kristyn Beam 13:17
I can take a stab at it, and you can fill in some gaps. So I think what's really important is having the right data; this is what all of these algorithms are built on. To get an algorithm to, quote, learn something, you definitely have to have your inputs, and you have to have your correct outputs. Most of these algorithms are trying to predict some sort of outcome, and the outcomes in the NICU world that we're usually trying to predict are things like BPD, NEC, sepsis, mortality. So we have to have those outcomes in some sort of gold-standard form, and then you have to have data that may support those outcomes. You then basically take some of these outcomes, match them to your data, and say: this is a true positive, this is a true bronchopulmonary dysplasia baby, and these are the factors that led to that. And then you have another one: this is a baby that doesn't have bronchopulmonary dysplasia, and these are the factors associated with that. The algorithm can then learn to distinguish between those two and come up with, basically, a probability of a baby developing bronchopulmonary dysplasia or not. So I think the really important thing is those outcomes and how we label those outcomes in our large datasets.
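A minimal sketch of this workflow in Python: gold-standard outcome labels matched to input data, with the algorithm evaluated on babies it never saw during training. All feature names, values, and labels below are invented for illustration; a real model would need far more data than this.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical dataset: inputs plus a gold-standard BPD label per baby.
df = pd.DataFrame({
    "gest_age_wks": [24, 29, 25, 31, 26, 33, 27, 30],
    "birth_wt_g":   [640, 1180, 710, 1560, 820, 1900, 900, 1400],
    "days_on_vent": [42, 3, 35, 0, 28, 0, 21, 2],
    "bpd":          [1, 0, 1, 0, 1, 0, 1, 0],  # labeled outcome
})
X, y = df.drop(columns="bpd"), df["bpd"]

# Hold out babies the model never sees, to measure real performance.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
print(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```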
Andrew Beam 14:48
And maybe it would also be helpful to give you a mental model for the mechanics of how this works. So this is a tear-free, no-math introduction.
Daphna 14:57
Yeah, let's do it. Thank you.
Andrew Beam 15:01
So what I'd like for you to imagine is, you know a sound engineer's board that has all these dials? You can move certain levels up, you can move them down, they have knobs that you can turn, and it changes how the sound comes out. So imagine that, but you put an image into the soundboard, and instead of the mix coming out the other side, a probability of disease comes out. Okay? You put the image in, a probability of disease comes out, and how you turn the knobs will dictate the level of that probability. All machine learning is, is a way to set the values of all of those knobs such that you get the most accurate set of probabilities for a given dataset. So the way the learning dynamic works is: I will show the machine learning algorithm a single image, it will essentially guess randomly at the beginning what the probability of disease is, and I will tell it the correct answer, that this actually was an ROP case, or this actually was a pneumothorax. And then the math is how we go and change all of those knobs so that the probability the model is giving me matches the correct answer that I've given it. We do that millions and millions and millions of times, and eventually we're left with a setting of the knobs that gives us super accurate predictions. So when we say learning, I think people have this mental model of, like, we have a toddler, and she learned to count to six today, and she's clearly learning to count. But really what is going on is this kind of optimization procedure where we are setting the values; some of these networks have millions or billions of knobs that have to be set.
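For readers who want to see the knob-turning literally, here is a tiny self-contained sketch of that loop in Python: guess a probability, compare it with the correct label, nudge every knob a little in the direction that reduces the error, and repeat. The data are random stand-ins, and the "knobs" here are the three weights of a simple logistic model rather than the millions in a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                            # 100 cases, 3 inputs
y = (X @ np.array([2.0, -1.0, 0.5]) > 0).astype(float)   # correct labels

knobs = np.zeros(3)                        # start with every knob at zero
for _ in range(1000):                      # show the data many times
    p = 1 / (1 + np.exp(-(X @ knobs)))     # current guessed probabilities
    error = p - y                          # how far off each guess is
    knobs -= 0.1 * (X.T @ error) / len(y)  # nudge each knob to shrink error
print(knobs)  # a setting that now produces accurate probabilities
```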
Ben 16:38
I think, for some people, one way the concept of deep learning was explained to me is: it's like you're studying for a test. You do question after question after question, and after a while you say, well, now I've done 17 questions like this, and I know that when they ask me this, this is the answer. When a similar question shows up on the test, you've been tuned to recognize the hints in the question and say, oh, I know the answer, because I've done it before. And the machine does exactly the same thing, except that it will identify, based on probability and statistics, what those factors are, using those knobs: that factor really is critical in the decision-making process. So I think this is a great way of explaining it. Sorry, Daphna, you were gonna go.
Daphna 17:22
No, it sounds not unlike medical training. You have your first presentation, you make a guess, and then somebody tells you, yes, but, and this is why that's not the right answer. And you kind of refine your way. But will the computers do it better than us eventually?
Kristyn Beam 17:42
I think that's kind of the ultimate question. I think that's the question that some people are excited about and some people are afraid of. And I think this is a chance to give you a little bit of background on how we got into this. When I was in medical school, we were out to dinner one night, getting some Mexican
Andrew Beam 18:08
food. We had just started dating.
Kristyn Beam 18:12
And we were out to dinner getting Mexican food, and Andrew tells me, well, yeah, I'm gonna just replace all the doctors one day. And, you know, I'm in medical school, so then I won't have a job. So, if we're gonna, like, do this, do you want me to have a job? And he was like, yeah, but I'm just gonna replace your job.
Andrew Beam 18:36
Which was a bold strategy for a first date. Yeah, very bold.
Kristyn Beam 18:41
So I was like, okay, I understand that you're interested in computer science and whatever this artificial intelligence thing is, this was like 10 years ago. But I was like, I don't think you can approach it that way. You're not gonna get anywhere with clinicians if you just go everywhere and say, I'm gonna replace your job, I'm gonna replace your job. And so, maybe he still thinks that, but I think over time I've pulled him back a bit. I think he
Ben 19:08
was just trying to create dependency at the time, you know.
Andrew Beam 19:13
If you're going to be replaced, you need to hedge and, like, marry someone who's gonna do the replacing.
Ben 19:20
more than you know.
Kristyn Beam 19:23
So I think over time we've sort of discussed it: okay, if that's your ultimate goal, you know that it's going to take a while. But I think what's important, what artificial intelligence, machine learning, deep learning, all of these things are really doing right now, is asking how we as clinicians can do our job better and get better outcomes for our patients. And that's really the perspective that I'm using to come at all of these questions. And I think that you would say you probably agree with that a little bit.
Ben 19:53
Before we create a marital problem, I wanted to orient this question a bit more, because I'm a big fan of Professor Hannah Fry, who wrote this book called Hello World: Being Human in the Age of Algorithms. She talks about the relationship, or at least the differences, between machines and humans from the standpoint of sensitivity and specificity. She makes the argument that machines are very good at one of them and humans are good at the other, and she makes the case that, Kristyn, you're making: that we could work together because we complement each other from that standpoint. And I'm wondering if you could go a little bit into that, in terms of how, when we're looking at it from sensitivity versus specificity, we can see a bit of the difference between machines and humans.
Andrew Beam 20:42
So I think I would just reframe that, and, to answer Daphna's question too, I think that algorithms are as good, if not slightly better, in very narrow tasks. If you train them to do one thing, they're very good at it. They don't get tired, they don't get hungry, and if the properties of the data don't change, they're very good at it. What they're not good at doing is generalizing. If the population that you're using the tool on changes, they sort of just fall on their face; if the lighting conditions change, they really fall on their face. So they're very sensitive to changes that humans are robust to, and I think there's still a really big generalization gap for the way we currently do AI when compared to human counterparts. And also, I feel like I have to clarify: I'm not trying to replace the healthcare workforce. I want to state that clearly. But I still think that, certainly in the near term, there is this deep need for a physician-algorithm partnership, to babysit the algorithms and make sure that they're not doing anything obviously stupid. I always describe it as a superpower for clinicians. There's lots of rote work that clinicians have to do, and there's no reason why we can't hand some of that over to the algorithm, so the clinician can focus on the cases that really require her attention.
Kristyn Beam 22:16
Yeah. And I think in that way, you know, humans are able to be a little more flexible with our decision making versus these algorithms that we develop. So I think that's how we can all work together.
Ben 22:32
I guess, when we're talking about this, I mean, I think this is a discussion that could be several hours on its own. There's a lot of interesting data when it comes to the judicial system about judges having a lot of variability in their sentencing: depending on the lunchtime effect, depending on whether their sports team won or lost the day before, depending on whether the judge has a child that is male or female, and depending on what you're accused of, all these things. But somehow, when we ask people, would you rather be judged by a human versus a machine, there's that element of, oh, maybe you want the compassion of the human and not the rigidity. So it's a very interesting discussion, obviously, and maybe beyond the scope of our interview today. But I guess my question is: we've been talking about algorithms and AI in the NICU and in clinical decision making, and I think for a lot of people, it's difficult to visualize how that enters the NICU. Are you going to roll a new machine into the unit that now is going to start talking to us on rounds? Is it a plugin that we're going to put in our EMR? How does that look in practice? How does AI enter our NICU?
Kristyn Beam 23:52
Yeah, so first, I think AI already exists, in a sense, in our NICUs, and a lot of us probably aren't even aware that that's true. One way we can think about some machine learning, and even some aspect of AI and prediction modeling, is logistic regression, which I think most of us are very familiar with, and those exist in the NICU already. If you think about the early-onset sepsis calculator that a lot of us use, that is a prediction model, a multivariable logistic regression that helps us decide whether a baby needs to be evaluated for early-onset sepsis or not. So we have these calculators that do exist. Another one is the BPD risk estimator. And interestingly, I have a project right now where we are asking neonatologists and neonatal care providers to tell us how they are using some of these calculators that we have in the NICU, how they affect our decision making, and whether they're useful at all. Because I think a lot of these calculators have been developed, even other ones that I'm not talking about right now, but they're just not integrated into our daily workflow. So maybe there's a machine that comes in for certain things for AI, but I think what's going to come first is really these clinical decision support tools that help us at the point of care, every day during rounds, to make a decision about whether we should give steroids for BPD, whether we should draw a blood culture right now because this baby's at really high risk of sepsis, or how many days this baby is really going to be here on a spell countdown before they can go home. So I think those are the ways that AI is going to come into the NICU first, and ways in which it's actually already there; we just don't consider it, quote, AI in the NICU right now.
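Under the hood, calculators like these apply a fitted multivariable logistic regression at the bedside. The sketch below shows the mechanics only; the intercept, coefficients, and feature names are placeholders invented for illustration, not the published values of the Kaiser EOS calculator or any BPD estimator.

```python
import math

# Placeholder coefficients for illustration only; real calculators
# publish their own fitted intercepts and coefficients.
INTERCEPT = -6.0
COEFS = {"gest_age_wks": -0.15, "max_maternal_temp_c": 0.80, "rom_hours": 0.02}

def risk(features: dict) -> float:
    """Logistic model: probability = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = INTERCEPT + sum(COEFS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

# Hypothetical bedside inputs for one infant.
print(risk({"gest_age_wks": 34, "max_maternal_temp_c": 38.6, "rom_hours": 20}))
```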
Andrew Beam 26:02
Yeah, I'd also like to talk about that. I totally agree that an EHR risk calculator is the easiest point of entry, but if your question is about permanent adoption, then you really have to think about more sustainable business models. I'm sure that as clinicians, you all know there are a million calculators that you could use but that no one really uses; Kristyn is running a survey to try and get at that. I think unless there is some type of entity behind a particular tool that is constantly iterating and improving upon it to get it adopted, it's not going to have traction and staying power. So, medical devices in other areas of medicine that are powered by AI have received FDA approval and have commercialization strategies around them. I wouldn't be surprised in the near term if there are devices, like your ultrasound, that all of a sudden are easier to use and give you an interpretation, like the RetCam automatically giving you an interpretation or doing the reads for you. My guess is that the things that are going to be durable are going to have some type of business model like that behind them and are going to be FDA-approved devices. And I think the long-term vision is that you just don't know that you're interacting with AI; it's just in the EHR, pulling in all these signals, and all of a sudden your job is easier to do. The example that I give is smart devices. Think about how I used to access my music collection: I had physical CDs in the visor of my car, right? I'd fumble through, put the CD in, and then try to remember the track I wanted to listen to. But now I just say, Alexa, play the Red Hot Chili Peppers or something, right? (Alexa, be quiet. Sorry about that.) Accessing the information that I want is just significantly easier now, and AI has facilitated that. So that is what I imagine the 10-, 20-year vision for medicine being: you're not conscious that you're interacting with some type of artificial intelligence, it's just much easier to get the information and make the decisions that you want to make.
Ben 28:23
Yeah, we reviewed this paper about the RetCam, right, about the ability of the camera to interpret for ROP. Fascinating. And obviously the applications are staggering. I want to go into a thorny subject. Daphna, anything else that you want to talk about before?
Daphna 28:39
Well, one of my questions is, I think what people worry about, especially with new technologies, is, you know, what if I get information that I wasn't anticipating, or what do I do with it, right? If I've got a baby who's well-appearing, but based on my AI it says that this baby is going to be in trouble shortly, how might we negotiate that moving forward, as we're getting more and more information that we may not understand quite yet?
Ben 29:18
Yeah, my co-host is great, because that's exactly where I wanted to go. But I'm gonna make this a bit more uncomfortable, because what about liability, right? If AI says this baby, like Daphna said, has sepsis, and I say I disagree, and then this baby ends up having sepsis, am I then going to be liable, with somebody saying, well, the machine algorithm told you this is a baby that you should have evaluated, so now you're responsible? I think that idea alone can be very frightening for clinicians when it comes to the adoption of AI. Like, let me make the decision; I don't want to be put on the spot and then have to justify myself against the computer.
Andrew Beam 29:58
So, yes, correct. The best legal thinking is that the buck still stops with the clinician: especially if there are guidelines, and you go off guidelines and defer to the algorithm, then you're the one who's liable in the case of an adverse event. Going back to the medical device situation, some manufacturers offer liability insurance such that if you follow the algorithm and something happens, they're holding the liability bag. I think that in those narrow instances it's currently much more clear-cut. As these things grow in scope and grow in use cases, I think liability reform will have to happen, and I don't know what the future of that looks like. My guess is that there might be some sort of blanket policy that covers clinicians in that case. But where there's an FDA-approved device that you're using, often the device manufacturer will offer liability insurance when you defer to the algorithm; when there's a guideline that you're not following, then you hold the bag yourself.
Ben 31:01
Yeah, I think the fact that you're mentioning that manufacturers are willing to offer liability insurance should underscore how confident they are in the ability of the tools to do their jobs.
Andrew Beam 31:12
Yeah, they've done some type of actuarial calculation that says that all of that is worth it. Yep.
Ben 31:18
Yeah. So then, along the same lines, I'm thinking, can we talk a little bit about the concept of black-box AI, where we're going to ask clinicians to deal with something that they have pretty much no understanding of? And I feel like, especially in the NICU, I don't know, Kristyn, if you agree, we like to know how things work, meaning most of us know how our vents are working, we know the type of ventilation that we're providing, we know how it functions; a lot of us are kind of handymen to begin with. So how does that work? And can you define for us what black-box AI is?
Kristyn Beam 31:57
Yeah, I will. It's interesting, because Andrew just published a paper about AI explainability and interpretability recently, so it's had some exposure on Twitter, and we actually just started discussing it in a Twitter thread last night. One thing I would say, just to add a little controversy, is: do we as clinicians always know exactly why we're making a decision? I would say that we don't always. We want to think we always know why we're making a decision, but there are a lot of times we make a decision and we can talk ourselves into why we got there. I gave this baby this medicine because the sodium was this, and the creatinine was this, and the vent settings were this, so this medication makes sense. We can explain how we got there, but I don't know that we can do it up front; I think we sort of do it post hoc. So that's my one point: we want to think that we understand every decision that we're making, and I don't think we always do. And I think that's okay. We go through a lot of training, we see a lot of examples of things, and we walk into a patient's room and we're like, oh, this does not feel good, I don't like what's going on here. We have this feeling when we go into a room sometimes. So when we talk about the black box, which is a lot of what clinicians are concerned about with these algorithms, that we don't understand why the algorithm is making the decision it's making, I think we should reflect on ourselves and recognize that we don't always know why we're making decisions either. We come into each experience with our own biases, with our own experiences, with our own last patient that we saw with this condition, and all of that is part of the steps we're taking when we go into a room and make a decision as well. And then I'll let you talk a little bit about the paper.
Andrew Beam 34:07
So I think that was an amazing description. I think the other point, too, is that there are black boxes all over the hospital. Most clinicians couldn't tell you exactly how an MRI machine works, how it does what it does, or do the read of the MRI in some cases. Lots of drugs have unknown mechanisms of action, and this is especially true in the NICU, but they've been shown to be safe and effective in clinical trials. So in the paper that Kristyn alluded to, we were trying to make the point that black boxes are everywhere, and the current methods that we have to explain artificial intelligence algorithms aren't so good and can be misleading. So let's instead do thorough validation. If it's a question of, how do I trust this thing, then a trust mechanism would be: we validate this the same way that we validated a drug, and therefore you can feel safe using it on your patients. I feel like the explainability, black-box question is getting at that fundamental trust question, but in this very circuitous kind of way, given what Kristyn said, that a lot of people can't interrogate their own decision-making process to a perfect level, and there are already all these other black boxes that we're perfectly comfortable using in the hospital.
Daphna 35:25
Yes, especially in the NICU, like you said,
Kristyn Beam 35:28
Right, like, I can't completely explain to you why Tylenol could work for PDA closure. With indomethacin and ibuprofen, the mechanism of action seems straightforward to me, but Tylenol is a little bit fuzzier. We don't totally understand that one, but we use it. We have an idea; we think we
Daphna 35:51
know. That's true for a lot of medications, right? Right, exactly. I think you brought up so many good points, but particularly about how we bring our own bias into the work. And I wonder, given how much inequity and inequality there is in healthcare, particularly in the NICU, we know that that happens, and particularly the way we collect data is biased. Can AI help us be less biased? Does it have its own bias? How do we protect against worsening the kind of inequalities that we already see in healthcare?
Kristyn Beam 36:38
Yeah, that's a great question and a great point. I think AI, the systems we develop, and the algorithms we develop are only as good as the data we have to build them. And that brings us to a point: I am really trying, at this early phase of my career, to make these databases better, and to make more accessible databases that have more granular information and better outcome definitions. I think that will help us, in a way, remove some of the bias that comes with the data collection that we have. The more granular the data can be, down to the waveforms we see on the monitor every day, collecting those waveform pieces will really help us get down to a more physiologic definition of what we're seeing happening in the NICU. And for that to happen, I think there needs to be a lot of collaboration between institutions, a lot of coordination between institutions, which we have in some respects: there's the Vermont Oxford Network, there's the Neonatal Research Network, we're trying to collect these things, but I don't think it's down to that granular data point yet. I think we can get there with the right people getting together and working on it, and I think that will help with some of the biases. But there are algorithms out there right now, and this has been really publicized in different ways, that are very biased, and it's because the data that's put into those algorithms is biased. This is especially true in some of the dermatology literature, thinking about detecting different skin cancers or skin diseases. Unfortunately, a lot of the data that goes into those algorithms is on people with lighter skin, so they're missing a large portion of the population with darker skin, and the diagnosis of these skin disorders is missing people with darker skin. So making sure that you're very thoughtful in the way you're collecting your data, and very thoughtful when you're building the algorithm, trying to think through those pieces, is going to be really important moving forward.
Andrew Beam 39:04
Yeah, and just to add a little onto that: the algorithm is going to reflect back to you the biases in the data that you give it. So if the data that you give the algorithm reflects structural or societal biases, and you don't take any proactive measures to correct those, then those will also be reflected in the algorithm's decision-making process. So, as Kristyn was saying, having big, broad, representative datasets, where we have tried to ascertain to the best of our ability what biases went into creating them, is super important. I will say, on a hopeful note, though, there has been work showing that machine learning can actually help reduce disparities. There's this paper in Nature Medicine last year that looked at knee pain scores using X-rays. Historically, Black Americans have reported higher levels of knee pain, but their clinicians have sometimes not believed them. So instead of training the algorithm to predict the clinician's diagnosis on the basis of the X-ray, they trained it to predict the patient's reported pain score, and they actually found regions in the knee in Black Americans that were correlated with that pain and previously unknown to the radiology community. So now they have the ability to explain these pain scores that clinicians were blind to. I think it is a double-edged sword, in that it can exacerbate and operationalize existing biases, but there's also this opportunity to help mitigate some existing structural ones, too.
Ben 40:36
I'm so happy we're talking about data, because even in our institution, when we're starting to try to look at AI and things like that, people have been asking me, let's do it, and I'm like, okay, we need to start gathering good data, and people say, but we're already doing that, like, duh. And, you're shaking your head, that's exactly the problem. When you start telling people, who expect you to bring this robot from Spielberg's movie AI to the NICU, no, I just need better data to begin with, people are like, oh, this is not what we anticipated. In your opinion, what is the state of affairs when it comes to the data that we're casually collecting in the NICU? Does the way we collect data today need to be completely overhauled? Or is it something that can be fixed? Or is it good? I don't know.
Kristyn Beam 41:32
I think it all depends on the question you're trying to ask of that algorithm. But I do think there are some big holes in our data collection. I think in the NICU community we just have such an opportunity to implement these different algorithms, because we have so much data on a daily basis. I mean, we know everything about a patient from the moment they're admitted until they go home, and that is something that doesn't exist in other areas of medicine. Adults come in and out of the healthcare system, so stuff happens at home that we're not aware of; but for an infant in the NICU, everything that happens to that baby is recorded in some way. So I think we're at a good starting point, and I don't think we need to completely overhaul. But I do think there are some holes. Traditionally, waveform data has not been collected and saved; I think we're moving toward that a little bit more now, not universally, but in certain places. And I think we're trying to get those more granular pieces of the data, but we still need more work on that. The other thing is that our definitions of the outcomes need to be a little bit more straightforward. You know, we listened to the Eric Jensen episode you guys did talking about the BPD definition, and that one is near and dear to my heart, because it's something I'm interested in predicting as well. The definitions we have for different diseases need some improvement too. So I think we're at a good starting point, but there's more we can do.
Ben 43:27
So yeah. Go ahead, go ahead.
Andrew Beam 43:29
No, I was just going to say, to add on to that, there's a lot of institutional variability. I have collaborators at different hospitals, and it seems like some hospitals throw away 90% of their data because they don't understand the value in it; they're not saving monitoring data, they're not instrumented for research purposes. Some others, usually standalone children's hospitals, just seem to have this thing nailed: they have an amazing IT infrastructure and can do all these amazing queries. So some hospitals do it better than others. But with the data that is in the NICU, I think we've only unlocked 2% of the potential of what already exists. There are several dozen careers' worth of exciting machine learning and AI to do with the current data generated by the NICU; we just have to systematically collect and store it. I do think there are some things that are still not captured in the NICU now that we can talk about, but there's just so much we could do if we could leverage the data that's already generated.
Ben 44:33
I was thinking, when we were talking about papers, right, in medical school you're taught that generalizability is an important factor, meaning a paper that was published in Japan may not be generalizable to our population, for instance, in South Florida, because people differ based on geographical location. Do you think this is a critical aspect of AI, where local datasets are going to be very important? Meaning, I won't be able to get good outputs from my algorithms if the data the algorithm is being trained on is data from babies, in the case of the NICU, that are geographically in a completely different area with completely different parameters surrounding them. Do you think that's true? And if it is, I guess we're all going to need to start collecting our own local data, no?
Kristyn Beam 45:16
Yeah, I definitely agree with that. So, I'm in Boston, and we have a lot of NICUs up here, and as part of my fellowship I rotated through all of them. And all of them are slightly different; every NICU everywhere practices neonatology slightly differently, one guideline is slightly different from another. So I think when we're developing these algorithms, either you develop it in one institution and validate it in another institution and see what the differences are, or, even better, you'd have more of a general data collection repository that's really representative of different regions of the country, different levels of NICUs, different populations of babies, and then you develop an algorithm on that dataset and you're likely to have a more generalizable algorithm. I think one group that's doing that really well is the MEDNAX group with their BabySteps data collection piece. They have a format where they're collecting more granular data on a daily basis, and it's representative of more of a national scale than some of the single-institution databases.
Andrew Beam 46:40
I'll just say, one solution, though, is that you can do what's called fine-tuning at a given institution. So if someone trains a model at their institution and you want to use it at yours, and you can collect the same type of labeled data, you can then adapt the algorithm to the local characteristics of your institution in a pretty straightforward way. But yeah, we've been doing some work with the MEDNAX group, and the dataset that they have is unlike any academic dataset that I've seen before, you know, hundreds of institutions and things like that. And to me, that is the solution: don't worry about external validation, because everything is internal. If you have a big coalition, a big dataset where you're part of the training data, that, I think, is the actual answer, versus trying to hope that the institution you're at is like an institution that was in the training dataset.
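Here is a sketch of that fine-tuning idea in Python with PyTorch: start from a model pretrained elsewhere, freeze the layers that learned general features, and retrain only the final layer on a small locally labeled dataset. The generic ImageNet weights, random stand-in tensors, and two-class setup below are assumptions for illustration, not a recipe from the episode.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pretrained elsewhere; generic ImageNet weights
# stand in here for a model trained at another institution.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for p in model.parameters():       # freeze the general-purpose layers
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new head: 2 local classes

# Stand-in for a small locally labeled dataset (e.g., your own X-rays).
x = torch.randn(16, 3, 224, 224)
y = torch.randint(0, 2, (16,))

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                 # a few passes adapt the head locally
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

Because only the final layer is retrained, this needs far less local data than training from scratch, which is what makes it practical for a single NICU.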
Kristyn Beam 47:41
because our institutions are so different, we're all treating different populations of patients, different families, different regions of the country. So if we were just able to bring all that data into one central place, I think that would be amazing and ideal for developing these algorithms.
Daphna 47:59
My question was really about, you know, especially when I think about babies and how they function in their families, and when we think about generational impacts and things like that: do you think there will be a time when we're actually incorporating more data from outside the unit, like from families, to understand their whole lives, to understand maternal stressors prenatally, so many things that we aren't even close to capturing in the electronic medical record?
Kristyn Beam 48:37
Yeah, I think there could be a great opportunity there, especially with different follow-up programs, to really capture data in that sense. It is more challenging once the baby leaves the NICU, because you're getting loss to follow-up at that point, and it can be difficult to gather that information back. But I do think there are other people who are interested in a little bit more of that post-NICU-discharge prediction: neurodevelopmental outcomes, school readiness, all these other things. If we could develop a way to collect that data in a really systematic way, then we could incorporate those into
Andrew Beam 49:21
and I think there's another opportunity in the future to send them home with some type of wearable device, depending on what types of data you would need to collect: send them home with a tiny Fitbit or some type of passive sensor. Or, if you're talking about maternal stress being super important, offer mom a Fitbit too, and you can see if she's having heart rate spikes and things like that. I'm a big fan of things that are passive. If mom has to fill out a questionnaire every day or something like that, well, she's had this very stressful event in the NICU, she's finally gotten her baby home, and asking her to do homework every night, maybe it's a useful thing, but I'm skeptical that you'd get a good response rate there. So depending on the types of information you need once they leave, you could think about passively collecting some of it using something like a wearable or some type of in-home sensor. I think that's also an underexplored area right now.
Ben 50:19
We should say, we're gonna have on our show Dr. Ross Summers, who's actually working on exactly that, trying to roll out wearable sensors for babies after discharge from the NICU. So yeah, you're absolutely right about that. Sorry, Kristyn.
Kristyn Beam 50:33
Yeah, I was just gonna say, another thing that's getting a lot more traction is incorporating families and parents into building different prediction models or building guidelines. Incorporating them early into these systems is probably better than building something and then saying, oh, well, do you think this is important? So as we move forward, we should incorporate families and patients into the structure of the algorithm, so that we're actually predicting things that families care about. Because, do families care about a diagnosis of BPD? I don't know. I think some do, and some maybe don't, as long as their baby comes home. Do families care how long their baby is in the hospital? Yeah, that's the number one question that we all get at those prenatal consults. So I think understanding what families actually want predicted is really important, too.
Ben 51:36
We're coming to the end of the episode, we have about 15 minutes left, and I wanted to approach a subject that we're definitely not going to have time to cover fully. I think this episode is going to create three types of people: there are going to be people who say, absolutely not, I'm going to stay as far away from this as possible, and that's fine; we're going to have bystanders; and we're going to have people who are super stoked about the potential and everything. But for people who have gone through regular training, college, med school, residency, we have no training in computer science or data analysis. How do you say, okay, I'm interested, this sounds very cool, what can I do? I don't know how to code, I don't know how to do any of these things. Is it just not going to be for me? Can you give us a little bit of a roadmap for how clinicians who want to embrace AI can actually get involved?
Kristyn Beam 52:31
Yeah, so that's me. I mean, I don't know how to say this: I went through undergrad and never had to take a statistics class. In medical school, we took our epidemiology class, which I think we all sort of just do because we have to, and, you know, you learn sensitivity and specificity every time you have to take a board exam or something. So I got through to the end of residency, and that's where I was: I had no idea how to code, and I had just enough statistical knowledge to read the papers I needed to read for my clinical practice and understand them. So I think it's okay if you don't know how to do all those things and you're still interested in this area; I think it's still a really approachable field to get into. My story? Well, one, I did marry someone who has a PhD in bioinformatics and does artificial intelligence, so, I mean, you could do that. But you can still get involved. During my fellowship, I did an additional research fellowship, a health services research fellowship, and I was able to get my master's in public health in quantitative methods over a two-year time period; that really gave me the skills I need to move forward as a clinician interested in this area. I think clinicians should not be afraid to get into the computer science and AI world, but we also need to say: I'm not ever going to be able to completely code something or completely build something on my own. Partnerships between multiple fields are really important. What clinicians can bring to this is some understanding of prediction models, some basic understanding of coding, which I can talk about in a second, and the questions: understanding the questions and what we want out of them. Clinicians can bring our knowledge of the clinical environment to the people who can actually build these algorithms and carry out that super technical piece. We're just not going to be that technical, and that's okay, but we have a lot of knowledge about what happens day to day. If you do want to get into coding, though, I mean, I am a novice coder, but I can do some coding things, and R for Data Science is a really good book that I think is pretty straightforward for clinicians if you want to learn some R.
Andrew Beam 55:09
Yep, I'll also say that I help organize two conferences where physicians and clinicians are first-class citizens around AI. One is called Machine Learning for Health, and the other is the Conference on Health, Inference, and Learning (CHIL). We always have clinician speakers, and we always have working groups at these conferences that encourage clinicians and computer scientists to intermingle and foster these kinds of relationships. If you're at a research hospital or an academic hospital, I would look up people in your computer science directory who are doing this; computer scientists know that medicine is sort of the frontier of AI. I get cold emails all the time from clinical researchers, and I'm always happy to meet with them and start new collaborations. So if folks email people out there in their local environment, they'll find lots of willing collaborators who will help onboard them to what's going on in AI.
Kristyn Beam 56:11
Yeah. And I would just encourage clinicians not to be shy or scared of getting involved in this field, because I do think we bring so much to it. A computer scientist comes at this from one direction and we're coming at it from another, and both are so important if we actually want to make forward progress.
Ben 56:33
To get even more practical: if you Google "artificial intelligence courses," you'll most likely find courses trying to teach you Python, TensorFlow, or other coding frameworks, and they're very daunting when you first approach them. For people who say, "Okay, like you said, I'm not going to be able to code fully by myself and do everything by myself, but I want enough of the basics," what should they look for online? You mentioned quantitative methods, but what are the kinds of words people should put into their search engine to land in the right places?
Kristyn Beam 57:15
Yeah, there are courses online.
Andrew Beam 57:19
Yeah. So I think it depends; it's hard to answer that in the abstract, because I'd be wary of giving you, like you said, PyTorch and TensorFlow search terms, only for those to be hopelessly too technical. Honestly, I think the sweet spot for this is blog posts and YouTube videos; there are lots at the right conceptual level. I would say "deep learning neural net tutorial" is where I would start, programming-agnostic, just to get the basics. There's an excellent series of YouTube videos from a channel called 3Blue1Brown, where he talks about how neural networks work with these amazing visualizations. Watching those would probably give you enough to know what the next hop you need to take is.
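To give a taste of the mechanics those tutorials cover, here is a minimal sketch in Python of one forward pass through a tiny two-layer neural network. The layer sizes and random weights are arbitrary and purely illustrative.

```python
import numpy as np

# Minimal sketch of one forward pass through a tiny two-layer neural network.
# Sizes and random weights are arbitrary; this is for intuition only.
rng = np.random.default_rng(0)

x = rng.normal(size=4)            # 4 input features (e.g., vital-sign values)
W1 = rng.normal(size=(8, 4))      # weights: 4 inputs -> 8 hidden units
b1 = np.zeros(8)
W2 = rng.normal(size=(1, 8))      # weights: 8 hidden units -> 1 output
b2 = np.zeros(1)

h = np.maximum(0, W1 @ x + b1)    # hidden activations (ReLU nonlinearity)
logit = W2 @ h + b2
p = 1 / (1 + np.exp(-logit))      # sigmoid turns the score into a probability

print(f"predicted probability: {p[0]:.3f}")
```

Training is then a matter of nudging the weights to reduce the gap between predictions like `p` and observed outcomes, which is exactly what the 3Blue1Brown series visualizes.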
Ben 58:11
Perfect. And if you don't mind, maybe we can put these in the show notes so that people can actually have access to them. That's awesome.
Daphna 58:17
Absolutely. Yeah, like you said, I think there are some people for whom this just isn't what they're going to do. But you brought up a point that any of us who do research are engaged, right, because that research will sometimes go into an algorithm. So any tips for people who are starting projects or who are collecting data? How can they refine it, how can they make it more granular, like you said, so that we get better data?
Kristyn Beam 58:49
Yeah, I just think collecting data broadly, if you can, is always going to be better. And if you're collecting data, partner with someone who would be interested in building the algorithm, and understand what pieces of information they would need to build it. For example, one of the projects I'm interested in is using chest X-rays early in an infant's course to predict BPD later in that course, with the hope of intervening earlier with medications and treatments. That project has been about five years in the making, because the data collection is so important, as is understanding how we're collecting that data. So I think partnering with someone to collect data, and always collecting data broadly, is going to be really helpful.
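For the curious, the following is a rough, hypothetical sketch in Python (using PyTorch) of the transfer-learning pattern a project like this might use: fine-tuning a pretrained image network to predict a binary outcome from chest X-rays. It is not the actual pipeline described here; the data below are random stand-ins, and a real project would use curated, IRB-approved images with chart-reviewed outcome labels.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hedged sketch: fine-tuning a pretrained CNN to predict a binary outcome
# (e.g., later BPD diagnosis) from early chest X-rays. Illustrative only;
# not the speakers' actual pipeline. Data loading is a placeholder.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)  # replace head: one logit

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One gradient step on a batch of (X-ray tensor, 0/1 outcome) pairs."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# Random stand-in batch: 8 fake "X-rays" with made-up 0/1 outcome labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
print(f"loss: {train_step(images, labels):.3f}")
```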
Andrew Beam 59:43
Yeah, I mean, there's a trade-off between the burden of collecting the data and its usefulness. But it's hard to know ahead of time what will be useful. So it helps to have a data collection strategy that lets you capture as much of that patient interaction and care as possible, even if some of it is not relevant to the research question you're collecting the data for. There are also IRB trade-offs in what you can and can't collect, and things like that. So there's a sweet spot on that trade-off where, okay, I'm collecting a pretty broad set of variables that may contain some interesting information, but I'm also not having to spend two years arguing with the IRB about why I'm videotaping all of my patients on my iPhone.
Ben 1:00:31
I guess my last question for you guys is this: we've seen a lot of papers that have shown the merits of AI, but I feel like everybody has been scared to put AI versus doc-plus-AI to the test in an RCT-type fashion, saying, well, let's put the human element alone to the test. Do you think that's coming next?
Andrew Beam 1:00:56
So there are papers that show that, and unfortunately, the AI alone does better than doc-plus-AI in some of these comparisons. It's not true across the board; sometimes the AI-plus-MD combination does better. But there are definitely some studies (I can put these in the show notes) where AI plus MD does worse than either the AI on its own or the MD on its own.
Ben 1:01:17
And there's a lot of stuff I had seen from the adult literature, especially in radiology. But I don't know if we have reached that point in the NICU yet.
Kristyn Beam 1:01:29
I think most of the work in that area has been done in ophthalmology and radiology. Yeah.
Ben 1:01:35
Yeah, my brother's fiancée is interested in radiology, and I told her, you'd better read some of that AI stuff.
Kristyn Beam 1:01:46
That's a good point. If you're a med student listening to this, just familiarize yourself with these methods, because I do think this is going to touch a lot of fields of medicine in the next 10 or 15 years, or sooner in some fields, radiology and pathology specifically. If you're already in it, then it won't be as scary as coming in and all of a sudden there's an algorithm that you have to use.
Andrew Beam 1:02:12
I teach a med school class, and every year there's a line of very nervous-looking students, and I just know they're the people who want to go into radiology, because they come up and ask me whether they should still go into radiology. I tell them yes, but you should be aware of the changes that are happening.
Daphna 1:02:31
Well, yeah, we're definitely near the end, and we got so engrossed in the topic that we really didn't even get to learn much about the two of you, just seeing the way you two banter and the way you came onto the call together. I said we missed out on that. But I wonder, at least for listeners who may have a couple at home doing a lot of collaborating: how do you guys manage that? You spend a lot of time together; how do you not work while you're at home, or, yes, work while you're at work? How do you get it all done?
Andrew Beam 1:03:08
90 seconds left, go!
Kristyn Beam 1:03:12
Yeah, you know, I think it's just been so much a part of our relationship from the beginning, that we've worked together and talked about each of our jobs, that it's just become part of what we do together. Our dinner conversations are sometimes very technical, and I'm sure our daughter, when she grows up... even now, she's two, and she'll say, "No, no talking! You're talking!" So I think it's just part of our relationship. We definitely don't work all the time, and we don't talk about it all the time, but it's woven throughout our relationship. You just have to know when to stop talking about it.
Andrew Beam 1:03:53
It is a balancing act between engaging as a spouse versus engaging as a collaborator. And sometimes, if the spouse is not super happy, then it's not wise to engage as a collaborator. And so I do think...
Daphna 1:04:06
A good message all around, right? Right, right.
Andrew Beam 1:04:10
So yeah, I mean, I have to think about this, because I don't have an off switch with this stuff very easily. So I have to be mindful of what's going on in Kristyn's life outside of the projects that I'm very excited about, and make sure that I'm not overwhelming her with emails and Slack messages and things like that.
Kristyn Beam 1:04:33
Because, you know, I still have my clinical stuff to do, too. I'm on call, I have service time. So we have to make it all work in some way.
Andrew Beam 1:04:42
And then sometimes, like this morning, the toddler isn't happy, and she has a meltdown when we're trying to get her out the door to do a podcast together, and then we...
Daphna 1:04:50
...show up anyway. Well,
Ben 1:04:54
we appreciate you making the time, then. Thank you so much; it was a lot of fun. I think there's a lot that our audience is going to learn from this discussion. You guys were amazing, and we'll put all these resources in our show notes so that people can actually start the process of learning about AI. I'm really excited myself about those conferences that you mentioned.
Kristyn Beam 1:05:12
Thank you. And I'm happy to work together; if anyone wants to collaborate, feel free to email me.
Ben 1:05:19
We'll put your emails in the show notes. Yeah, definitely.
Andrew Beam 1:05:21
I mean, part of the reason we were excited is that we think this is going to take a broad, community-based effort to maximally achieve this. So we would love to start building that community and putting those resources in place.
Ben 1:05:33
Amazing. Amazing. Well, thank you so very much.
Daphna 1:05:36
Yeah. Anyway.
Kristyn Beam 1:05:39
Yeah, this was great. All right. Thanks, everyone.
Ben 1:05:44
Thank you for listening to this week's episode of The Incubator. If you liked this episode, please leave us a review on Apple Podcasts or the Apple Podcasts website. You can find other episodes of the show on Apple Podcasts, Spotify, Google Podcasts, or the podcast app of your choice. We would love to hear from you, so feel free to send us questions, comments, or suggestions to our email address, nicupodcast@gmail.com. You can also message the show on Instagram or Twitter, @nicupodcast. Personally, I am on Twitter @drnicu, spelled D-R-N-I-C-U, and Daphna is @DrDaphnaMD. Thanks again for listening, and see you next time. This podcast is intended purely for entertainment and informational purposes and should not be construed as medical advice. If you have any medical concerns, please see your primary care practitioner. Thank you.