Conversations

Season 1 | Episode 39 | Aug 3, 2023

The Promise of AI in Medicine with Eric Topol, MD

Emily speaks with cardiologist Eric Topol about his book Deep Medicine, which explores the potential for AI to enhance medical decision-making, improve patient outcomes, and restore the doctor-patient relationship.

About Our Guest

Eric Topol is a cardiologist, Professor of Molecular Medicine, Editor-in-Chief of Medscape, and Founder and Director of the Scripps Research Translational Institute. He has published over 1,200 peer-reviewed articles and is one of the top 10 most cited researchers in medicine. He has authored three bestselling books on the future of medicine: The Creative Destruction of Medicine, The Patient Will See You Now, and Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. He publishes the Substack newsletter “Ground Truths,” and can be followed on Twitter @erictopol.

About The Show

The Nocturnists is an award-winning medical storytelling podcast, hosted by physician Emily Silverman. We feature personal stories from frontline clinicians and conversations with healthcare-related authors and art-makers. Our mission is to humanize healthcare and foster joy, wonder, and curiosity among clinicians and patients alike.


Credits

The Nocturnists is made possible by the California Medical Association, and people like you who have donated through our website and Patreon page.

Transcript

Note: The Nocturnists is created primarily as a listening experience. The audio contains emotion, emphasis, and soundscapes that are not easily transcribed. We encourage you to listen to the episode if at all possible. Our transcripts are produced using both speech recognition software and human copy editors, and may not be 100% accurate. Thank you for consulting the audio before quoting in print.

Emily Silverman

You're listening to The Nocturnists: Conversations. I'm Emily Silverman.

Artificial Intelligence seems to have found its way into almost every aspect of our lives, and healthcare is no exception. From diagnosis to treatment to communication to clinical documentation, AI is already reshaping the very foundations of how we practice.

Today’s guest, cardiologist Eric Topol, was writing about AI way before it went mainstream. His book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, unravels the mysteries behind AI, delves into its potential to revolutionize diagnosis, prognosis, and more – and it was published in 2019! Clearly this book was way ahead of its time.

Eric is a Professor of Molecular Medicine, Editor-in-Chief of Medscape, and Founder and Director of the Scripps Research Translational Institute. He’s published 1,200 peer-reviewed articles (and is one of the top 10 most cited researchers in medicine) and he’s authored 3 books: The Creative Destruction of Medicine, The Patient Will See You Now, and Deep Medicine, which I mentioned before. Eric also publishes a Substack newsletter “Ground Truths.”

In my conversation with Eric, we discuss what we understand and don’t understand about how AI works, how we grapple with that uncertainty, the ways AI is or can be applied in healthcare, and ultimately, his optimistic thesis that AI can actually help us bring humanity back into medicine.

But first, here’s Eric reading an excerpt from his book Deep Medicine:

Eric Topol

We're still in the earliest days of AI in medicine. The field is long on computer algorithmic validation and promises, but very short on real-world clinical proof of effectiveness. But with the pace we're seeing in just the past few years, with machines outperforming humans on specific, narrow tasks, and likely to accelerate and broaden, it is inevitable that narrow AI will take hold. Workflow will improve for most clinicians, be it by faster and more accurate reading of scans and slides, seeing things that humans would miss, or eliminating keyboards, so that communication and presence during a clinic visit are restored. At the same time, individuals who so desire will eventually gain the capacity to have their medical data seamlessly aggregated, updated, and processed, along with all the medical literature to guide them, whether for an optimal diet or their physical or mental health. All of this is surrounded by the caveats that individuals must own and control their medical data, that doctors actively override administrators who desire to sacrifice enhanced human connection in favor of heightened productivity, and that intensive steps to preserve privacy and security of data are taken.

Emily Silverman

I am sitting here with Dr. Eric Topol. Eric, thank you for being here today.

Eric Topol

It's great to be with you, Emily.

Emily Silverman

So Eric, I've had my eye on this book for quite some time. I hadn't read it until recently. And I want to tell you what sparked me to finally read it. And what it was, was two things. One: ChatGPT came out, and I started playing with it. And then, two: a friend of mine, named Kevin, who's a reporter at The New York Times, published a transcript of a conversation that he had with Bing's chatbot "Sydney". And in that conversation, the chatbot tried to persuade him to leave his wife. And, I've always been interested in trends and technology and just interesting things, but I had never felt so... I don't think panic is necessarily the right word. But, I was like, "I am behind. I need to learn everything that I can about AI, right now." And I started watching these YouTube videos about language processing, and transformers. And of course, I only understood about 42% of it. But I realize now, I needed to catch up and read your book. As you said, the title is Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. So, I appreciate you coming on to the show. Even though this book came out in 2019, it's just so obviously relevant to everything that's happening right now. My first question is, since the book came out... To me, it feels like a lot has changed. Has, in fact, a lot changed? Or is it more just the public catching up with things that were already happening in the background?

Eric Topol

I think it's really the latter, Emily. That the public hadn't had the "wake-up" call. And, really, what happened with you, with ChatGPT, a billion unique users of ChatGPT in 90 days. There's never been anything like that in history. A billion! And, of course, it's still... Who knows how many now, months into it. So that is what woke the world up to AI. And you touched on transformer. It's that transformer architecture that we didn't have when I wrote the book, but you could see it coming. I remember talking to some of the top AI gurus and saying, "We don't have a way to process all this data with the images and the text and the voice and videos. What are we going to do?" And they say, "Oh, we'll have it. It's coming." And, "We're working on it." And, here it is. So. When GPT-4 came along, just in March, that was a biggie, because that was the first chatbot that was multimodal, that could take videos and images and text and speech and put it all together; process it. So, it was incubating four years ago when the book was published, but now it's getting legs, in a big way.

Emily Silverman

In the book, you do such a great job laying out the basics of Artificial Intelligence and machine learning. And I think one of the units, I guess, of technology, that is good to understand, or at least try to understand, is this idea of the DNN (the deep neural network). And, in the book, you talk about how there's these different layers of processing. There's an input layer; there's an output layer, and then there's all these layers in between. And you really say that, this DNN, this neural network, it's really just a tool. It's almost like a steam engine. It's really just a tool, that is fueling what is now a revolution. And we can use it for, as you said, images, words, language, data. But the problem is, those middle layers: We don't quite understand what they're doing or how they work. Is that true? And can you talk about that?

Eric Topol

Well, I love your steam engine metaphor.

Emily Silverman

It's your metaphor. It was in your book.

Eric Topol

It was? I forgot about that. Oh, wow. Okay. I guess I should have re-read it. At any rate... That's funny. So, the layers, the so-called artificial neurons... (It's kind of a funky term.) But basically, you have hundreds of thousands, if not millions, of inputs. Whatever it is. It could be an image; it could be text. It could be anything. That input goes through these layers to differentiate the features, which is what leads the machine to an interpretation (often called a prediction). In fact, the first era of deep neural networks is kind of called Predictive AI. And then, the transformer model is now what we call Generative AI, which we'll talk about a bit more, because it does a lot more than just generate things. But, the number of layers is proportional to the complexity of the input data. The idea is to try to simulate the brain, although it requires a hell of a lot more power, which is another big problem that we're seeing ushered in right now. But, going through all these layers, eventually we get to the output, whereby the processed solution, interpretation, prediction, whatever, is made. And what we don't really understand is the magic of the deep neural network. How does it see these things? We know it can be trained by a gazillion inputs to see things that we can't see. So they have these things called saliency maps, where you visually try to deconstruct those layers to see, "What is it picking up that we can't see?" So we could get smarter, and see the things that the machines can see. And, the saliency maps are only partial. So the black box of these deep neural networks is that we don't really understand what's going on in those hundreds (if not thousands) of layers of, basically, deconstructing that image, or text, or speech. On the other side, the output side, is what is extraordinary, because we get image interpretation that is typically far better than the best human eyes. Up until now, it was all about images: X-rays and CAT scans, PET scans, ECGs and retinal photos, and that kind of... Skin lesions. But, of course, now everything's changed. Because it's not just images. The change from deep neural networks to this transformer model has set up the potential for multimodal, for different forms of data.

Emily Silverman

So we have the inputs, the outputs, the magic happening in between (that we don't understand). You know, you could feed in a gazillion chest X-rays, and then train the model, let's say, to pick up pneumonia in a way that a radiologist maybe couldn't, and things like that. But then these transformers, or this technology that's called a transformer, arrives and changes everything. What is a transformer? What did it change? And why was that so important for the technology, for a lay person?

Eric Topol

Yeah, yeah. Well, I guess the best story about it is, interestingly, it was invented by Google. They had a preprint, five years ago or so, about the transformer model architecture. And what's interesting is, they could have upended their search business. But, kind of like the story of Kodak or Blockbuster, never wanting to challenge their own main technology, they didn't pursue it, and they let OpenAI, which then teamed up with Microsoft, run circles around them. But basically, the transformer model architecture is one that adds another piece to the deep neural network: attention. So that it's able to ingest different modalities of data. That's really what set up the large language models, which are based on that architecture. It isn't just a turbocharged input capability with attention; it's also massive amounts of graphics processing units, computer operations (flops) at levels that we never thought would be possible. So, it's a combination of different building blocks that have gotten us to these large language models. If it was just another alternative way to deal with input data, it wouldn't have been enough. It took all these other pieces to get to where we are today, which is still a work in progress. If GPT-4 is considered the leading chatbot of today, it's going to be seen as obsolete in the months ahead, for sure.

Emily Silverman

The months. Not years, but months.

Eric Topol

Yeah. Well remember, ChatGPT was November 30th. And GPT-4, which is a big-deal jump ahead, was the middle of March. So that's like four months. It's amazing. So, the progress here is inevitable. And that's, of course, why we have some people talking about the existential threat, whereas others are talking about (like we are) its potential: revamping and rebooting a lot of things we do in healthcare and medicine.

Emily Silverman

Well, I want to stay in healthcare a bit, but we must get to the existential piece eventually. Maybe we'll try and save that for the end. But I want to hone in on what you said here about "the black box". So the fact that this magic, we don't understand how it works. And, you had a section of your book that talks about this. And you say, "We already accept black boxes in medicine. For example, electroconvulsive therapy is highly effective for severe depression, but we have no idea how it works. Likewise, there are many drugs that seem to work even though no one can explain how." Should we extend the same tolerance for uncertainty that we have for these interventions to AI? But then you go on, and you say there's something called the AI Now Institute that says "No". And then the European General Data Protection Regulation went into effect, and agreed "No"; they demand an explanation. They say, "We have a right to an explanation." So I'm curious. Where has this gone? And where do you personally fall?

Eric Topol

This is really a critical question you're asking. No surprise, of course, that you would come up with this. But, the issue here is, as you say, we hold machines potentially more accountable than we do traditional medicine. Besides the examples you gave, we don't even know how anesthetics work, but we give them in every operation.

Emily Silverman


Or Tylenol. I think the mechanism of action of Tylenol, we still don't know.

Eric Topol

Don't know; don't know. So, the question is: If you had a large, rigorous trial of 100,000 people, and half got the AI intervention, and half got a sham AI (whatever reasonable control), and it showed you saved people's lives (whatever endpoint you like), would you say, "I'm not using that until I understand it"? That's the debate. Because, in medicine, we accept things that work, with compelling evidence. But are we going to do the same for machines? Now, on the other hand, because of this explainability drive that you alluded to, with the European authorities and... All of us, we'd like to know; we're curious. We're going to use it to treat people or to make diagnoses; we sure want to know. So, the idea of deconstructing neural networks, to get to the explanation, is a work in progress. And there are many who are sanguine that we will know; that we've got to keep working on this. But right now, there's a black box, and we've got to deal with it. The good part is we don't have compelling data yet. So we don't have to make that choice so much. But we will. And we're seeing, of course, in the medical image space, more randomized trials. And, you know, if you're a gastroenterologist and you use machine vision, you'll find more polyps. Whereas if you don't have machine vision during your scoping procedure, you'll miss things. So, those have been established by multiple randomized trials. Do we know exactly how the machine vision picks up the polyps? No. But do we care? I mean, should the patient who goes through all that prep, and has a risk of colon cancer... Should they not be entitled to have this help during their colonoscopy? These are the kinds of questions we have to grapple with, because accepting proof, without the full story, is something that is going to become more and more common.

Emily Silverman

And in healthcare and medicine, who will decide? Because this AI Now Institute you mention, that's not healthcare specific. That's a general topic. NYU, European Union, also general, not healthcare specific. Will these decisions land at the level of the FDA? At the AMA? Like who do you think is going to be in the chair, making these decisions? Or who should be?

Eric Topol

Well, that's a good unknown, because the FDA is having a really hard time dealing with this, because these neural networks and the large language models are autodidactic. They are insatiable for input. And so the more they get, the smarter the output gets. So when the FDA... They can't even figure out how to not freeze an algorithm, much less approve something without its full explainability. The FDA is really struggling with this so far. They've given clearance (usually a sub-approval through the so-called 510(k) mechanism), but some with actual frank approvals, to over 500 algorithms that use deep neural networks. Most of them are image-related, but not all, and they freeze every one of them. And they so far have not demanded explainability. So, at the FDA level, they'll take proprietary datasets that the medical community never gets to see. These companies don't even publish the papers. And the buy-in, or the implementation, has been slow, because it's not transparent.

Emily Silverman

Medicine tends to be pretty slow to adopt technology in general, as is demonstrated by the fact that many of us still use pagers and fax machines. In this case, is that a good thing? Is it good to have the brakes on and to be really intentional about what we take up and what we don't? Or do you think that we're too slow? And that there are exciting benefits to this technology that we're shutting out of the profession, out of fear or out of culture or out of ego, or things like that?

Eric Topol

I think every clinician needs to understand some of the aspects of AI, the nuances. It was really one of the reasons I wrote the book, Deep Medicine. I spent a few years researching it, because I didn't know anything about this topic. I mean, I had to learn. And I thought that if we get everyone grounded to some extent, which isn't the case now, then you'll know better when you can use it; when you can trust it. Because it's invading our lives. It already has. Our music, our movies, the TV we're gonna watch, our navigational systems. I mean, it has truly taken over our lives. A lot of it's invisible, but as people taking care of patients... And patients... This is something that's not going to go away. And so, it's really critical, to make those decisions that you're asking about, that you don't do it without knowledge of the downside, or what I call the pluripotency. It really has got, like, potency we've not seen before. But, it comes at a price, potentially. And that's what we have to be alert to, cognizant of: both its potential and liabilities.

Emily Silverman

The book opens with a personal story from you about your knee surgery. And you talk about the difference between "shallow medicine" and "deep medicine". And we kind of hear about these definitions through the anecdote of your knee. And one thing I love about this book is that it takes an optimistic posture. The subtitle is "how artificial intelligence can make healthcare human again". And so you're really making the case, in this book, that AI can help get us from shallow to deep. Could you say a few words, maybe, about your knee story, and this idea of shallow versus deep, and what you mean when you say that?

Eric Topol

I had a knee replacement, because of a condition I had as a child called osteochondritis dissecans. That was actually the first thing I wrote about in the book: getting roughed up by my orthopedist (the same orthopedist to whom I had referred all my patients for knee and hip replacements). And the first line of the book is from him: "You should have your internist prescribe anti-depression medications." And my wife and I looked at each other bug-eyed, saying, "Are you kidding me?" I was in desperate pain. I had what, to me, looked like a gangrenous leg. I mean, it was all blue and markedly swollen. And this is weeks post-op. I wasn't able to sleep; I was in pain. I was trying every cockamamie remedy known to mankind, with no relief. I was in the worst shape. I was thinking about suicide. So, being roughed up by a doctor, by basically saying, "Well, you have depression, and you should see an internist," in that state. And then not having the diagnosis made. I mean, my wife actually made the diagnosis 'cause she's trying to help me, and she said, "I think you have this condition called arthrofibrosis." And she was right. I never heard of it before. And it was this massive inflammation reaction to the prosthetic device of the knee. And my orthopedist never mentioned it. And there would have been ways to help prevent that, or treat that more effectively. And, rigorous physical therapy, which is what I kept getting prescribed, was making the inflammation worse. And I finally saw a physical therapist who rescued me, who said, "Stop this physical therapy; it's got to be gentle. And you gotta go on high-dose anti-inflammatory ibuprofen." And then I started to feel better. And I never got normalized. But I got rescued from the sense that I can't live like this. And it helped me identify with people with chronic pain. When I talk to a group, and I say, "How many of you, or your loved ones, have been roughed up by a doctor?", everybody raises their hand. Because we don't have time. I think this orthopedist is actually a very good surgeon, and actually a good person. But he had six rooms that he's going to, and every one is like two minutes, and he just can't deal with it. So, what I thought, Emily, is, "How are we going to get out of this terrible rut?" We've got depressed doctors, burnout, rush-to-do-everything, squeezed to the max, can't deliver care. I'm on the receiving end, experiencing that firsthand. Is there a way to get out of this? And I actually think... I hope that we have a path. It isn't a sure thing, for reasons that we'll probably get into. But, I don't know any other way. I don't know of another solution to the mess that we're in right now.

Emily Silverman

I'm wondering about an alternate history. What would a version of this knee story have sounded like that included Artificial Intelligence decision support? How do we go from that shallow medicine, "one size fits all" assembly line to deep medicine? Or what could have been, if we had been using this technology in a humane way?

Eric Topol

Shallow medicine: it's reflexive instead of reflective. It's like System 1 thinking (of Kahneman) instead of System 2, where we actually have more than a few seconds to put thought into what is going on here. So, that is where GPT-4 works really well. If I had put my symptoms into GPT-4, which has up-to-date citations, and asked, "What is going on here?", it wouldn't have taken my wife weeks to research things to help me. I would have had the answer right there. It would have said, "You probably have arthrofibrosis. And you need to stop this physical therapy." Having access to AI that's smart, which we haven't really had before, would have helped me; it would have saved a lot of time. It could have given me an error. Right? And that's why you don't want to only have one conversation. You might want to have it two or three times to make sure that you're getting the same answers. You can audit the chatbot, or AI, by going through the same conversation on a second or third run. But, I think the answer here is, I would have had the right diagnosis, the right treatment, and I wouldn't have suffered. And if we can help people avoid suffering, and we use this as a tool, as clinicians, together, that I think (just in my example) would make a big difference.

Emily Silverman

There are a lot of applications for AI in medicine, and I want to get into some of the other ones in a bit. But let's stay with diagnosis for a moment, because I can tell that that's a really important one for you, given what you went through. And I think a lot of us listening probably have had similar experiences. I know that I have. So when we're thinking about AI and diagnostic support, there's really two applications that I'm aware of. One of them is this idea of putting in a constellation of symptoms and getting a list of diagnoses that pop out. And I know that there's a software called Isabel that has been used in this way. The other application that I've heard about is crowd-sourcing. And it doesn't even have to be physicians receiving the information. There's even versions of this where it's actually lay people who decide to take up some detective work, and help people figure out what it is that they're dealing with. Like in your scenario, it was actually your wife who figured it out, maybe after a few hours of intensive internet research. So, talk a little bit about symptom-checkers and crowd-sourcing. And maybe there are other ways that AI can help us with diagnosis? And, what is the state of that technology right now?

Eric Topol

Yeah, you reviewed where things have been. I think they're going to be largely superseded by the large language models. And the reason for that is they have the medical domain knowledge, which is trained on the entire internet, and Wikipedia, and books, and whatever you can think of as inputs, but now at a point where they can also have add-on medical-specific training. So, there will be the human in the loop, those kinds of people that want to help in doing searches. But pretty quickly now... I mean, if you do a GPT-4 conversation, you'll see that we don't need those services anymore, because you'll get the answer. And the exciting new part is you can put in your scans. Into Med-PaLM 2 and GPT-4, you could actually put a copy of your X-ray, and it will start to interpret that, added to whatever symptoms. So, right now we have the beginning of AI on the patient side. We have smartwatches that help diagnose heart rhythm abnormalities. We have, in many countries now, a kit you can get from the drugstore, which is an AI kit to tell you if you have a urinary tract infection or not. We have the ability to diagnose skin lesions and cancers through a smartphone photo, or children's ear infections through a smartphone attachment. And these are all algorithms. Being able to diagnose diabetic retinopathy in the supermarket, by untrained personnel. So we're seeing patient-side help of getting screening, which is doctorless. Doctorless, oh my gosh. But you still want to have a doctor, to get a treatment if you need a treatment. So, your point about the patient side, I think, is that we have to acknowledge that even though we tend to hog the AI space, thinking about how it's going to help us, it's going to help patients, if they get accurate screening diagnoses, and they have a human in the loop, whether it's a nurse, a physician, a pharmacist, whatever. And so, the human in the loop part can't be emphasized enough. But if you can do the screening... just think. You know, all the dermatologists and family doctors that are dealing with skin issues, or pediatricians dealing with ear infections, how much decompression occurs when they have those diagnoses ruled out, accurately, with AI tools.

Emily Silverman

You talk about the analogy to a self-driving car, and how in the self-driving car business, there's five levels. Level 5 is fully autonomous; Level 4 is mostly autonomous. Level 3 is conditionally automated, where a human can take over, and Level 2 would be something minimal, like cruise control or lane-keeping. And you say in the book, that it's unlikely medicine will ever get beyond Level 3 machine autonomy. So this idea that there needs to be a human in the loop. Can you talk a little bit more about that, and how we might work together with AI as a team? And what that means for us as physicians, and our identity, and all those sorts of things?

Eric Topol

Yeah, I'm so glad you brought up the driverless car story, because we can learn so much from it. It's, you know, years ahead of where we are in medicine. And it's multi-modal, you know, lots of different layers of data. It's getting processed in real time to drive the car, and not run people over or have accidents. But what's interesting is the Level 5, which is totally autonomous, all the time, under any weather conditions, any road conditions. We will never get there. But even though that's the case, we have people like Elon Musk, and other people who are big into AI and cars, saying we will get there. No, it's impossible. And Level 4 is probably unlikely to be achieved, although that's kind of the reset goal. But the hype of driverless cars, where originally you saw these videos where everyone's in a driverless car and there are no drivers anymore. We're not going to see that. Because there's things like fog and ice and snow and rain and construction zones and whatnot. So that's really a lot like medicine. I mean, when you think about it, we're never going to be getting rid of doctors, and no doctor has to be worried about their job. But they should be thinking about how they can have more efficiency, more time when they need it with patients, so they can switch to deep mode, and slow mode, because of this support. If you think about the potential here, we have a unique opportunity, even if it's kind of a Level 3 equivalent in medicine, where we get the synergy of patients and clinicians to achieve something that gives us the gift of time, which is what I consider the end goal here: we get back our lives. The reason we went into medicine, which is "I want to care for patients." That's why I did this, folks, but I can't do it. And so, if we get back to that, and the patient-doctor relationship, we have a big win. I hope we'll get there someday.

Emily Silverman

Two of the specialists that you focus on in the book are Radiology and Pathology. And I thought this was really interesting, because these specialties are all about pattern recognition. And pattern recognition is what AI does best. Things like making a complex diagnosis from a human and their story and their symptoms, that's a little bit harder. But something like reading an image... That feels, at least to me, more doable from a machine standpoint. And, you talk about some of the studies that have been done, and the outcomes and when, you know, they're comparing the machine-read scans with the human-read scans, and the performance. And you envision a future where some of that rote work of just reading ordinary scans, day after day, for hours, that could be outsourced to the machine, so that the physician can focus on other types of work. And you even propose that, maybe one day, Radiology and Pathology could fuse into a single specialty. Talk a little bit about this vision, because it feels radical, but also it makes so much sense. And so, I'm curious if you could tell our audience about that.

Eric Topol

Right? Well, pattern recognition is something we all do as clinicians when we're seeing patients and looking at their data, whether it's scans or labs or various things, but the radiologists and pathologists, as you point out, are doing that the most. They're looking at scans and slides throughout their day, throughout their professional life. Whatever number of scans radiologists read (typically 50 to 100)... Each scan, of course, has all sorts of frames and loops and videos and whatnot, but 50 to 100? Well, you could say, "You know what, you could read 500 a day with AI support." Or you could say to the radiologist, "You know what? We get a lot of scan requests that are unnecessary; could you be the gatekeeper? And could you, by the way, go over the results with the patient? Because their surgeon's telling them they need surgery, and you may have a different view about this from your experience." So, the whole idea of the radiologists living in the basement and working in the dark in their pajamas, whatever, could be revamped, you know. And, you could actually wind up talking to patients. I've talked to a lot of radiologists who cherish the idea of being able to communicate with patients, and avoiding unnecessary radiation from unnecessary scans. So, it's enriching what these pathologists do... Who would have ever thought that a pathologist, looking at a slide of a potential cancer, would not only be able to make the diagnosis of cancer, but where it's coming from (which sometimes is difficult to determine), what the driver mutations are, the prognosis of the patient, destruction mutations in the genome... all from looking at the slide with AI support. So each of these particular specialties gets to another level of discernment, with machine help, and accuracy. So not just accuracy, but these machine eyes see things that they could never see, because they're trained on orders of magnitude more images than they will see in a lifetime. So the other thing to keep in mind: 50% of physicians are below average. So it helps bring up the rear, if you will, for those radiologists who are easily distracted, or not as experienced, or for whatever reason, they're below average. Or pathologists. It brings them up to a higher level of accuracy. But then there's another dimension to this, which is really exciting. And I didn't see this four years ago, and it's just exploding. Which is seeing things in images that humans will never see. So let's just take an example: the retina. Now we know, which we weren't fully aware of a few years ago, that the retinal photo will help make the diagnosis of kidney disease, pre-neurodegenerative diseases, heart disease risk. It will tell us about diabetes control. A paper came out today: it'll help diagnose hyperlipidemia, blood pressure control. It's basically a gateway to the whole body. And ophthalmologists are just happy to be able to interpret it for their eye diseases. Who would have thought it could pick up hepatobiliary disease from a retinal picture? And then, the cardiogram. As a cardiologist, I spend a lot of time reading cardiograms. I would never have expected I could tell the hemoglobin to the decimal point, or the age and sex of the patient, their ejection fraction, valvular heart disease, all sorts of other diagnoses that are hard to make, the pulmonary capillary wedge filling pressure of the heart. I mean, all this stuff from a cardiogram! That I'll never be able to see. I've been reading cardiograms for 35 years. I can't imagine this.
So this is what is extraordinary: machine eyes seeing things that we will never be able to see. It's somewhat humiliating, that machines can do this. But at the same time, why not get their help? Why not lean on them, when we know it's really accurate, useful information, and we work together?

Emily Silverman

I was stunned by the section in the book about the scans of the retina, where you say, "As we learned from a Google study of more than 300,000 patients, retinal images can predict a patient's age, gender, blood pressure, smoking status, diabetes control via hemoglobin A1C, and risk of major cardiovascular events - all without knowledge of clinical factors. Such a study suggests the potential of a far greater role for eyes as a window," you say, "into the body, for monitoring patients." But you know that old saying, that the eyes are the window to the soul?

Eric Topol

Right, right.

Emily Silverman

And I didn't know that you are able to tell those things from an EKG: things about pulmonary capillary wedge pressure. These are things that normally I associate with measuring on an ultrasound of the heart. Can you tell us a bit about the cardiology updates, as a cardiologist? What's happening? And what's still in the research phase, and what is actually being used in a clinical setting? Because, when I work in the hospital, I'm not yet seeing any of this technology.

Eric Topol

Right, right. Right. Well, if you were at Mayo Clinic, you would see the extra electrocardiogram readouts. So you would get, likely, ejection fraction, diagnoses like hypertrophic cardiomyopathy and pulmonary hypertension. So, you would get a whole bunch of things that you don't get anywhere else, because they did some of the initial publications in this space. But, perhaps to me, the most exciting thing is the smartphone ultrasound or echocardiogram. So this blows me away. As long as you know the heart is on the left side of the chest, and you put the probe somewhere on the chest, on the left side (as long as the patient doesn't have situs inversus, right?), the AI will tell you to move it up or down, or clockwise or counter-clockwise. And just like when you're depositing a check in your bank account, and it auto-captures when you did it right, it auto-captures the video loop of your echocardiogram. We don't even know it. And then you get an auto-interpretation. And all of a sudden, you now have people in the hinterlands of Africa, India, other low- and middle-income countries, where they're able to do an echocardiogram or any smartphone ultrasound, with no training, and get a good interpretation, all AI-driven. The acquisition of the images and the interpretation. And what you can see is that, eventually, when these probes are really cheap, when you can just pop them on the bottom of your smartphone, patients will be imaging themselves. Of course, then you get to some scary thoughts of the expectant mother imaging her fetus, you know, every few hours, or crazy things like this. But, if you're a patient with heart failure, instead of having to go into a clinic, and you're concerned about your breathing or whatever, or just your checkup, you could just send in your image. And there are studies like that being done right now. So, the cardiology space is getting a lot of AI, both in imaging, in electrocardiograms, and in echo right now. And, of course, there are other ways that cardiology is getting charged up with some AI tools. But all of this stuff is early. I mean, just think. Here we are, mid-2023, but what's it going to look like in just a couple/few years? If we embrace it; if we use it in positive ways.

Emily Silverman

Do you ever worry about information overload? Because there are some parts of the book where you emphasize the importance of research, and really understanding the value of certain measurements, the value of certain interventions. There are examples where cancer screening, you know, early cancer screening: you catch more cancers earlier, but the outcomes don't actually change when it comes to things like being cancer-free or mortality data. So do we need to invent a whole other field on top of AI that's dedicated to interpreting the enormous data streams that are coming out of this AI? Like, is a patient capturing their own echo useful? Is it of value? Who decides? How do we measure that? How do we make sure that there aren't just more contributions to this problem of waste in medicine?

Eric Topol

Right now, we have some flagrant examples of what happens when you do unnecessary tests, like the ultrasound of the thyroid that was done in Korea, which made all these diagnoses of thyroid cancer and never changed any outcomes. We have really dumbed down medicine, where if you're 40, as a woman, you should get a mammogram every year. If you're 50, you should have a colonoscopy. We use age as a single criterion for a lot of things. That's really dumb, because there are so many other features that we could put into it. Then you get to the point of, "Oh, what about with AI?" We could make things worse; we could have more incidentalomas and more TMI stuff. So, we have to come up with the right balance where we get smarter. We have so much information now about any given patient, or we could get that information. But rather than using retinal photos, or their genome (parts of their genome, particular genes of interest)... Rather than using that, we just use these reductionist criteria of age, when we know that people of a given age could be physiologically much younger or older. I mean, help me. This is just so dumb. So, I do think that, with the help of AI, we'll discern risk at a better level, to know about screening. Because, we've talked about diagnoses, but one of our real foibles in medicine is, you know, the old Bayes' theorem problem of doing screening tests on people at low risk. We do that all the darn time. Let's only do the screening tests, or use these tools, on the people that really need them. And if AI can help direct that, to find the people at increased risk, rather than us being stupid, hopefully that will be an advance.

Emily Silverman

When I imagine medicine in 5/10/20 years, I almost can't imagine medicine, because the rate of change is so high. And, you know, I'm envisioning walking into the hospital, and there are no stethoscopes, and everybody just has a pocket ultrasound. And I'm envisioning all of these new data streams, and all of these new ways of thinking about, and understanding, diagnosis and treatment. What do we need to be doing with medical education, to start embracing, responsibly embracing, these changes? You talk in the book about how the whole thing about being a med student for so many years was about memorizing and test scores, and how emotional intelligence and things like that aren't really assessed as much. But even beyond emotional intelligence is this different type of domain knowledge. You say, "Future doctors need a far better understanding of data science, bio-informatics, bio-computing, probabilistic thinking, the guts of deep learning neural networks, algorithms, and understanding how they work, and also the liabilities of these algorithms." You talk in the book a bit about things like bias being coded into the algorithms. So, what should med students be learning these days? And, you know, because there's the old dogs, you know, the... It's funny. I don't mean to imply that you're at all an old dog, because you're more up to date on these topics than probably most medical students. But where do we target our efforts? And how does the medical education landscape shift?

Eric Topol

I wish I was a medical student, or a young physician, now, because the excitement going forward is so palpable, extraordinary. But, by the same token, I'm really discouraged that we have no medical schools in this country that have AI in their curriculum. This is about taking deep data of a person to prevent their illness from ever occurring, an illness that they otherwise would be highly likely to manifest. What's so exciting about the future is fulfilling a fantasy that has never even been approximated, with that person's data and information and the knowledge that we can use, with the help of AI. So, the problem we have now is we still believe doctors need to memorize all this stuff. Basically be a brainiac, or some semblance of a brainiac; get perfect scores on their MCAT; have a really high GPA. And here's your ticket to medical school. It's totally the wrong way we should be selecting the future physician workforce. Because these large language models are not going to go away. They're going to be a very important resource. Yes, they will make mistakes that have to be overridden, and there has to be oversight, for sure. But the brainiac era is over. We shouldn't be picking people on their MCAT scores anymore, or their GPA. We should be doing interviews and seeing how a person interacts, in terms of: Do they exude the ability to develop an interpersonal bond? Do they communicate well? Do they show any sign of compassion or trust or empathy, or whatever that kind of sense is? You know, the smell test of a person being able to relate to other people. That's the humanity in medicine that we need. And AI will emphasize that, in the years ahead. So, sure, we want people who are intelligent, but we have another, auxiliary path to promote intelligence. What we need most of all is those people who will establish presence, who will truly care. Who the patient knows, "This doctor has my back. He or she really does care about me. And that's going to help me get through whatever illness I have, or will help me prevent one that I'm at risk of getting." So, that's the future of medicine in a nutshell. But there isn't a medical school in the world that's gearing up for that, both with respect to the students already enrolled, let alone the ones they will admit.

Emily Silverman

Wow, that felt kind of like a mic drop to me. So I'm just absorbing. I'm absorbing that the era of the brainiac is over. It's just so true. We're in a new world: new technology, new priorities. And I really appreciate everything that you just said. As we wind to a close, Eric, is there anything else that you'd like to say to our audience about AI? About Deep Medicine? Or about the future, the future of health care?

Eric Topol

Nah, I know we've covered it well. I still remember the first time I came across your work - when you wrote about comedy, and your enthusiasm for comedy and your talent. And, you know, I think you exemplify the dynamic aspects of physicians, all the things that you've done beyond caring for patients directly. What I hope is that our lives as physicians, and clinicians in general, will improve. I mean, I think we're in a pretty desperate situation right now. COVID didn't help that at all. But I never give up hope. If there ever was an eternal optimist, I might be the one that would be cited, because I don't want to accept where we are, where we've been, as our resting place. And, I do really think we have a path forward. And I love seeing how people can do things that are outside of their initial vision of what they were going to do. Like you. I hope all of us can find that: the things that are enriching and fulfilling, where we feel like we're on a mission to do something. Whether you're doing research that will help patients, or you're doing things that will help physicians in general, that is our potential to actualize. But you can't do that if you can barely get through a day and your mental health is so compromised. Hopefully, we will get out of that situation. We need a remedy, and I'm banking on this being the one.

Emily Silverman

Well, we'll leave it there. I have been speaking to Dr. Eric Topol about his book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. I learned so much from this book, and I'm really inspired. I've already been Googling online courses to learn more about AI. So, I just think you're either on the ship, or you're not on the ship, because the ship is going. So, I hope this episode inspires you all to learn more about AI as well - in general, but also its applications to medicine. And Eric, thank you so much for being our guide.

Eric Topol

Well, thank you, Emily. If we could do a little bit in AI what you've done for storytelling in medicine, we'll have achieved something. Thanks very much.

Note: The Nocturnists is created primarily as a listening experience. The audio contains emotion, emphasis, and soundscapes that are not easily transcribed. We encourage you to listen to the episode if at all possible. Our transcripts are produced using both speech recognition software and human copy editors, and may not be 100% accurate. Thank you for consulting the audio before quoting in print.

Emily Silverman

You're listening to The Nocturnists: Conversations. I'm Emily Silverman.

Artificial Intelligence seems to have found its way into almost every aspect of our lives, and healthcare is no exception. From diagnosis to treatment to communication to clinical documentation, AI is already reshaping the very foundations of how we practice.

Today’s guest, cardiologist Eric Topol, was writing about AI way before it went mainstream. His book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again,' unravels the mysteries behind AI, delves into its potential to revolutionize diagnosis, prognosis, and more – and it was published in 2019! Clearly this book was way ahead of its time.

Eric is a Professor of Molecular Medicine, Editor-in-Chief of Medscape, and Founder and Director of the Scripps Research Translational Institute. He’s published 1,200 peer-reviewed articles (and is one of the top 10 most cited researchers in medicine) and he’s authored 3 books: The Creative Destruction of Medicine, The Patient Will See You Now, and Deep Medicine, which I mentioned before. Eric also publishes a Substack newsletter “Ground Truths.”

In my conversation with Eric, we discuss what we understand and don’t understand about how AI works, how we grapple with that uncertainty, the ways AI is or can be applied in healthcare, and ultimately, his optimistic thesis that AI can actually help us bring humanity back into medicine.

But first, here’s Eric reading an excerpt from his book Deep Medicine:

Eric Topol

We're still in the earliest days of AI in medicine. The field is long on computer algorithmic validation and promises, but very short on real-world clinical proof of effectiveness. But with the pace we're seeing, in just the past few years, with machines outperforming humans on specific, narrow tasks, and likely to accelerate and broaden; it is inevitable that narrow AI will take hold. Work flow will improve for most clinicians, be it by faster and more accurate reading of scans and slides, seeing things that humans would miss, or eliminating keyboards, so that communication and presence during a clinic visit is restored. At the same time, individuals who so desire will eventually gain the capacity to have their medical data, seamlessly aggregated, updated and processed, along with all the medical literature to guide them, whether for an optimal diet or their physical or mental health. All of this is surrounded by the caveats that individuals must own and control their medical data, that doctors actively override administrators who desire to sacrifice enhanced human connection, in favor of heightened productivity, and that intensive steps to preserve privacy and security of data are taken.

Emily Silverman

I am sitting here with Dr. Eric Topol. Eric, thank you for being here today.

Eric Topol

It's great to be with you, Emily.

Emily Silverman

So Eric, I've had my eye on this book for quite some time. I hadn't read it until recently. And I want to tell you what sparked me to finally read it. And what it was, was two things. One: ChatGPT came out, and I started playing with it. And then, two: a friend of mine, named Kevin, who's a reporter at The New York Times, published a transcript of a conversation that he had with Bing's Chatbot "Sydney". And in that conversation, the chatbot tried to persuade him to leave his wife. And, I've always been interested in trends and technology and just interesting things, but I had never felt so... I don't think panic is necessarily the right word. But, I was like, "I am behind. I need to learn everything that I can about AI, right now." And I started watching these YouTube videos about language processing, and transformers. And of course, I only understood about 42% of it. But I realize now, I needed to catch up and read your book. As you said, the title is Deep Medicine: How Artificial intelligence Can Make Healthcare Human Again. So, I appreciate you coming on to the show. Even though this book came out in 2019, it's just so obviously relevant to everything that's happening right now. My first question is, since the book came out... To me, it feels like a lot has changed. Has, in fact, a lot changed? Or is it more just the public catching up with things that were already happening in the background?

Eric Topol

I think it's really the latter, Emily. That the public hadn't had the "wake-up" call. And, really, what happened with you, with ChatGPT, a billion unique users of ChatGPT in 90 days. There's never been anything like that in history. A billion! And, of course, it's still... Who knows how many now, months into it. So that is what woke the world up to AI. And you touched on transformer. It's that transformer architecture that we didn't have when I wrote the book, but you could see it coming. I remember talking to some of the top AI gurus and saying, "We don't have a way to process all this data with the images and the text and the voice and videos. What are we going to do?" And they say, "Oh, we'll have it. It's coming." And, "We're working on it." And, here it is. So. When GPT-4 came along, just in March, that was a biggie, because that was the first chatbot that was multimodal, that could take videos and images and text and speech and put it all together; process it. So, it was incubating four years ago when the book was published, but now it's getting legs, in a big way.

Emily Silverman

In the book, you do such a great job laying out the basics of Artificial Intelligence and machine learning. And I think one of the units, I guess, of technology, that is good to understand, or at least try to understand, is this idea of the DNN (the deep neural network). And, in the book, you talk about how there's these different layers of processing. There's an input layer; there's an output layer, and then there's all these layers in between. And you really say that, this DNN, this neural network, it's really just a tool. It's almost like a steam engine. It's really just a tool, that is fueling what is now a revolution. And we can use it for, as you said, images, words, language, data. But the problem is, those middle layers: We don't quite understand what they're doing or how they work. Is that true? And can you talk about that?

Eric Topol

Well, I love your steam engine metaphor.

Emily Silverman

It's your metaphor. It was in your book.

Eric Topol

It was? forgot about that. Oh, wow. Okay. I guess I should have re-read it. Any rate... That's funny. So, the layers, the so-called artificial neurons... (It's kind of a funky term.) But it's basically, once the inputs of hundreds of thousands, if not millions, of inputs. Whatever it is. It could be an image; it could be a text. It could be anything. That input goes through these layers to differentiate the features, that the machine leads to interpretation (or often called prediction). In fact, the first year of deep neural networks is kind of called Predictive AI. And then, the transformer model is now what we call Generative AI, which we'll talk about a bit more, because it does a lot more than just generate things. But, the number of layers is proportional to the complexity of the input data. The idea is to try to simulate the brain, although it requires a hell of a lot more power, which is another big problem that we're seeing ushered in right now. But, going through all these layers, eventually we get to the output whereby the processed solution, interpretation, prediction, whatever is made. And what we don't really understand is the magic of the deep neural network. How does it see these things? We know it can be trained by a gazillion inputs to see things that we can't see. So they have these things called saliency maps, where it visually tries to deconstruct those layers to see, "What is it picking up that we can't see?" So we could get smarter, and see the things that the machines can't see. And, the saliency maps are only partial. So the black box of these deep neural networks, is that we don't really understand what's going on in those hundreds (if not thousands) of layers of, basically, deconstructing that image, or text, or speech. On the other side, on the output side, is what is extraordinary, because we get image interpretation that is typically far better than the best human eyes. Up until now, it was all about images: X rays and CAT scans, PET scans, ECGs and retinal photos, and that kind of... Skin lesions. But, of course, now everything's changed. Because it's not just images. The change of deep neural networks, to this transformer model which has set up the potential for multi-modal, for different forms of data.

Emily Silverman

So we have the inputs, the outputs, the magic happening in between (that we don't understand). You know, you could feed in a gazillion chest X-rays, and then train the model, let's say, to pick up pneumonia in a way that a radiologist maybe couldn't, and things like that. But then these transformers, or this technology that's called a transformer, arrives and changes everything. What is a transformer? What did it change? And why was that so important for the technology, for a lay person?

Eric Topol

Yeah, yeah. Well, I guess the best story about it is, interestingly, it was invented by Google. And they had a pre-print, five years ago or so, about the transformer model architecture. And what's interesting is, it could have upended their own search business. But, kind of like the story of Kodak or Blockbuster, not ever wanting to challenge their main technology, they didn't pursue it, and they let OpenAI, which then teamed up with Microsoft, run circles around them. But basically, the transformer model architecture adds another piece to the deep neural network: attention. So that it's able to ingest different modalities of data. That's really what set up the large language models, which are based on that architecture. It isn't just a turbocharged input capability with attention; it's also massive amounts of graphics processing units, computer operations (flops) at levels that we never thought would be possible. So, it's a combination of different building blocks that have gotten us to these large language models. If it was just that there was an alternative way to deal with input data, it wouldn't have been enough. It took all these other pieces to get to where we are today, which is still a work in progress. If GPT-4 is considered the leading chatbot of today, it's going to be seen as obsolete in the months ahead, for sure.
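
As a rough picture of what "attention" means here, this is a minimal sketch of the scaled dot-product attention operation at the core of the transformer architecture, written in plain NumPy. The token count and dimensions are made up for illustration; a real model stacks many such layers with learned projections.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """The core transformer operation: each token's output is a weighted
    mix of every token's value, with the weights computed from the data itself."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each token "attends" to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ V

# Four "tokens" (words, image patches, ECG segments...), each embedded in 8 dimensions.
rng = np.random.default_rng(1)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # (4, 8): same shape in and out, but every token now reflects the others
```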

Emily Silverman

The months. Not years, but months.

Eric Topol

Yeah. Well, remember, ChatGPT was November 30th. And GPT-4, which is a big jump ahead, was the middle of March. So that's like four months. It's amazing. So, the progress here is inevitable. And that's, of course, why we have some people talking about the existential threat. Whereas others are talking about (like we are) its potential: revamping and rebooting a lot of things we do in healthcare and medicine.

Emily Silverman

Well, I want to stay in healthcare a bit, but we must get to the existential piece eventually. Maybe we'll try and save that for the end. But I want to home in on what you said here about "the black box": the fact that this magic, we don't understand how it works. And, you had a section of your book that talks about this. And you say, "We already accept black boxes in medicine. For example, electroconvulsive therapy is highly effective for severe depression, but we have no idea how it works. Likewise, there are many drugs that seem to work even though no one can explain how. Should we extend the same tolerance for uncertainty to AI that we do with these interventions?" But then you go on, and you say there's something called the AI Now Institute that says "No". And then the European General Data Protection Regulation went into effect and agreed: "No," they demand an explanation. They say, "We have a right to an explanation." So I'm curious. Where has this gone? And where do you personally fall?

Eric Topol

This is really a critical question you're asking. No surprise, of course, that you would come up with this. But, the issue here is, as you say, we hold machines potentially more accountable than we do traditional medicine. Besides the examples you gave, we don't even know how anesthetics work, but we give them in every operation.

Emily Silverman


Or Tylenol. I think the mechanism of action of Tylenol, we still don't know.

Eric Topol

Don't know; don't know. So, the question is: If you had a large, rigorous trial of 100,000 people, and half got the AI intervention, and half got a sham AI (whatever reasonable control), and it showed you saved people's lives (whatever endpoint you like), would you say, "I'm not using that until I understand it"? That's the debate. Because, in medicine, we accept things that work, with compelling evidence. But are we going to do the same for machines? Now, on the other hand, because of this explainability drive that you alluded to, with the European authorities and... All of us, we'd like to know; we're curious. If we're going to use it to treat people or to make diagnoses, we sure want to know. So, the idea of deconstructing neural networks, to get to the explanation, is a work in progress. And there are many who are sanguine that we will know; that we've got to keep working on this. But right now, there's a black box, and we've got to deal with it. The good part is we don't have compelling data yet. So we don't have to make that choice so much. But we will. And we're seeing it, of course, in the medical image space. We're seeing more randomized trials. And, you know, if you're a gastroenterologist and you use machine vision, you'll find more polyps. Whereas if you don't have machine vision during your scoping procedure, you'll miss things. So, those have been established by multiple randomized trials. Do we know exactly how the machine vision picks up the polyps? No. But do we care? I mean, should the patient who goes through all that prep, and has a risk of colon cancer... Should they not be entitled to have this help during their colonoscopy? These are the kinds of questions we have to grapple with, because accepting proof, without the full story, is something that is going to become more and more common.

Emily Silverman

And in healthcare and medicine, who will decide? Because this AI Now Institute you mention, that's not healthcare-specific. That's a general institute, at NYU. The European Union, also general, not healthcare-specific. Will these decisions land at the level of the FDA? At the AMA? Like, who do you think is going to be in the chair, making these decisions? Or who should be?

Eric Topol

Well, that's a good unknown, because the FDA is having a really hard time dealing with this, because these neural networks and the large language models are autodidactic. They are insatiable for input. And so the more they get, the smarter the output gets. So the FDA... They can't even figure out how to not freeze an algorithm, much less approve something without its full explainability. The FDA is really struggling with this so far. They've given clearance (usually short of approval, via the so-called 510(k) mechanism), but some with actual frank approvals, to over 500 algorithms that use deep neural networks. Most of them are image-related, but not all, and they freeze every one of them. And they so far have not demanded explainability. So, at the FDA level, they'll take proprietary datasets that the medical community never gets to see. These companies don't even publish the papers. And the buy-in, or the implementation, has been slow, because it's not transparent.

Emily Silverman

Medicine tends to be pretty slow to adopt technology in general, as is demonstrated by the fact that many of us still use pagers and fax machines. In this case, is that a good thing? Is it good to have the brakes on and to be really intentional about what we take up and what we don't? Or do you think that we're too slow? And that there are exciting benefits to this technology that we're shutting out of the profession, out of fear or out of culture or out of ego, or things like that?

Eric Topol

I think every clinician needs to understand some of the aspects of AI, the nuances. It was really one of the reasons I wrote the book, Deep Medicine. I spent a few years researching it, because I didn't know anything about this topic. I mean, I had to learn. And I thought that if we get everyone grounded to some extent, which isn't the case now, then you'll know better when you can use it; when you can trust it. Because it's invading our lives. It already has. Our music, the movies and TV we're gonna watch, our navigation systems. I mean, it has truly taken over our lives. A lot of it's invisible, but for people taking care of patients... And patients... This is something that's not going to go away. And so, it's really critical, to make the decisions that you're asking about, that you don't do it without knowledge of the downside, or of what I call its pluripotency. It really has got, like, potency we've not seen before. But, it comes at a price, potentially. And that's what we have to be alert to, cognizant of: both its potential and its liabilities.

Emily Silverman

The book opens with a personal story from you about your knee surgery. And you talk about the difference between "shallow medicine" and "deep medicine". And we kind of hear about these definitions through the anecdote of your knee. And one thing I love about this book is that it takes an optimistic posture. The subtitle is "how artificial intelligence can make healthcare human again". And so you're really making the case, in this book, that AI can help get us from shallow to deep. Could you say a few words, maybe, about your knee story, and this idea of shallow versus deep, and what you mean when you say that?

Eric Topol

I had a knee replacement, because of a condition I had as a child called osteochondritis dissecans. That was actually the first thing I wrote about in the book: getting roughed up by my orthopedist (the same orthopedist to whom I had referred all my patients for knee and hip replacements). And the first line of the book is from him: "You should have your internist prescribe anti-depression medications." And my wife and I looked at each other bug-eyed, saying, "Are you kidding me?" I was in desperate pain. I had what, to me, looked like a gangrenous leg. I mean, it was all blue and markedly swollen. And this is weeks post-op. I wasn't able to sleep; I was in pain. I was trying every cockamamie remedy known to mankind, with no relief. I was in the worst shape. I was thinking about suicide. So, being roughed up by a doctor, basically saying, "Well, you have depression, and you should see an internist," in that state. And then not having the diagnosis made. I mean, my wife actually made the diagnosis 'cause she was trying to help me, and she said, "I think you have this condition called arthrofibrosis." And she was right. I had never heard of it before. And it was this massive inflammation reaction to the prosthetic device of the knee. And my orthopedist never mentioned it. And there would have been ways to help prevent that, or treat that more effectively. And rigorous physical therapy, which is what I kept getting prescribed, was making the inflammation worse. And I finally saw a physical therapist who rescued me, who said, "Stop this physical therapy; it's got to be gentle. And you gotta go on high-dose, anti-inflammatory ibuprofen." And then I started to feel better. And I never got normalized. But I got rescued from the sense that I can't live like this. And it helped me identify with people in chronic pain. When I talk to a group, and I say, "How many of you, or your loved ones, have been roughed up by a doctor?", everybody raises their hand. Because we don't have time. I think this orthopedist is actually a very good surgeon, and actually a good person. But he had six rooms that he was going between, and every one is like two minutes, and he just can't deal with it. So, what I thought, Emily, is, "How are we going to get out of this terrible rut?" We've got depressed doctors, burnout, rush-to-do-everything, squeezed to the max, can't deliver care. I was on the receiving end, experiencing that firsthand. Is there a way to get out of this? And I actually think... I hope that we have a path. It isn't a sure thing, for reasons that we'll probably get into. But, I don't know any other way. I don't know of another solution to the mess that we're in right now.

Emily Silverman

I'm wondering about an alternate history. What would a version of this knee story have sounded like that included Artificial Intelligence decision support? How do we go from that shallow medicine, "one size fits all" assembly line to deep medicine? Or what could have been, if we had been using this technology in a humane way?

Eric Topol

Shallow medicine: it's reflexive instead of reflective. It's like System 1 thinking (of Kahneman) instead of System 2, where we actually have more than a few seconds to put thought into what is going on here. So, that is where GPT-4 works really well. If I had put my symptoms into GPT-4, which has up-to-date citations, and said, "What is going on here?", it wouldn't have taken my wife weeks to research things to help me. I would have had the answer right there. It would have said, "You probably have arthrofibrosis. And you need to stop this physical therapy." Having access to AI that's smart, which we haven't really had before, would have helped me; it would have saved a lot of time. It could have given me an error. Right? And that's why you don't want to have only one conversation. You might want to have it two or three times to make sure that you're getting the same answers. You can audit the chatbot, or AI, by going through the same conversation on a second or third run. But, I think the answer here is, I would have had the right diagnosis, the right treatment, and I wouldn't have suffered. And if we can keep people from suffering, and we use this as a tool, as clinicians, together, I think that (just in my example) would make a big difference.
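
That auditing idea can be made concrete with a few lines of code. This is a minimal sketch only: ask_model is a hypothetical placeholder for whichever chatbot you use (not a real API), and the prompt is invented for illustration.

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to whatever chatbot you use; not a real API."""
    raise NotImplementedError

def audit(prompt: str, runs: int = 3) -> Counter:
    """Ask the same clinical question several times and tally the distinct answers.
    Divergent answers across runs are a signal to involve a clinician before acting."""
    return Counter(ask_model(prompt) for _ in range(runs))

# Example usage (hypothetical prompt):
# answers = audit("Persistent swelling and stiffness weeks after knee replacement: "
#                 "what diagnoses should be considered?")
# if len(answers) > 1:
#     print("Answers disagree across runs -- treat with extra skepticism.")
```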

Emily Silverman

There are a lot of applications for AI in medicine, and I want to get into some of the other ones in a bit. But let's stay with diagnosis for a moment, because I can tell that that's a really important one for you, given what you went through. And I think a lot of us listening probably have had similar experiences. I know that I have. So when we're thinking about AI and diagnostic support, there are really two applications that I'm aware of. One of them is this idea of putting in a constellation of symptoms and getting a list of diagnoses that pop out. And I know that there's a software called Isabel that has been used in this way. The other application that I've heard about is crowd-sourcing. And it doesn't even have to be physicians receiving the information. There are even versions of this where it's actually lay people who decide to take up some detective work, and help people figure out what it is that they're dealing with. Like in your scenario, it was actually your wife who figured it out, maybe after a few hours of intensive internet research. So, talk a little bit about symptom-checkers and crowd-sourcing. And maybe there are other ways that AI can help us with diagnosis? And, what is the state of that technology right now?

Eric Topol

Yeah, you reviewed where things have been. I think they're going to be largely superseded by the large language models. And the reason for that is they have the medical domain knowledge, which is trained on the entire internet, and Wikipedia, and books, and whatever you can think of as inputs, but now at a point where they can also have add-on medical-specific training. So, there will be the human in the loop, those kinds of people who want to help in doing searches. But pretty quickly now... I mean, if you do a GPT-4 conversation, you'll see that we don't need those services anymore, because you'll get the answer. And the exciting new part is you can put in your scans. You could actually put a copy of your X-ray into Med-PaLM 2 or GPT-4, and it will start to interpret that, added to whatever symptoms you describe. So, right now we have the beginning of AI on the patient side. We have smartwatches that help diagnose heart rhythm abnormalities. We have, in many countries now, a kit you can get from the drugstore, which is an AI kit to tell you if you have a urinary tract infection or not. We have the ability to diagnose skin lesions and cancers through a smartphone photo, or children's ear infections through a smartphone attachment. And these are all algorithms. Being able to diagnose diabetic retinopathy in the supermarket, by untrained personnel. So we're seeing patient-side help with screening, which is doctorless. Doctorless, oh my gosh. But you still want to have a doctor, to get a treatment if you need a treatment. So, your point about the patient side, I think, is that we have to acknowledge that even though we tend to hog the AI space, focusing on how it's going to help us, it's going to help patients, if they get accurate screening diagnoses, and they have a human in the loop, whether it's a nurse, a physician, a pharmacist, whatever. And so, the human-in-the-loop part can't be emphasized enough. But if you can do the screening... just think. You know, all the dermatologists and family doctors that are dealing with skin issues, or pediatricians dealing with ear infections: how much decompression occurs when they have those diagnoses ruled out, accurately, with AI tools.

Emily Silverman

You talk about the analogy to a self-driving car, and how in the self-driving car business, there's five levels. Level 5 is fully autonomous; Level 4 is mostly autonomous. Level 3 is conditionally automated, where a human can take over, and Level 2 would be something minimal, like cruise control or lane-keeping. And you say in the book, that it's unlikely medicine will ever get beyond Level 3 machine autonomy. So this idea that there needs to be a human in the loop. Can you talk a little bit more about that, and how we might work together with AI as a team? And what that means for us as physicians, and our identity, and all those sorts of things?

Eric Topol

Yeah, I'm so glad you brought up the driverless car story, because we can learn so much from it. It's, you know, years ahead of where we are in medicine. And it's multimodal, you know, lots of different layers of data. It's getting processed in real time to drive the car, and not run people over or have accidents. But what's interesting is Level 5, which is totally autonomous, all the time, under any weather conditions, any road conditions. We will never get there. But even though that's the case, we have people like Elon Musk, and other people who are big into AI and cars, saying we will get there. No, it's impossible. And Level 4 is probably unlikely to be achieved, although that's kind of the reset goal. But the hype of driverless cars... Originally you saw these videos where everyone's in a driverless car; there are no drivers anymore. We're not going to see that. Because there are things like fog and ice and snow and rain and construction zones and whatnot. So that's really a lot like medicine. I mean, when you think about it, we're never going to be getting rid of doctors, and no doctor has to be worried about their job. But they should be thinking about how they can have more efficiency, more time when they need it with patients, so they can switch to deep mode, and slow mode, because of this support. If you think about the potential here, we have a unique opportunity, even if it's kind of a Level 3 equivalent in medicine, where we get the synergy of patients and clinicians to achieve something that gives us the gift of time, which is what I consider the end goal here: we get back our lives. The reason we went into medicine, which is, "I want to care for patients. That's why I did this, folks, but I can't do it." And so, if we get back to that, and the patient-doctor relationship, we have a big win. I hope we'll get there someday.

Emily Silverman

Two of the specialists that you focus on in the book are Radiology and Pathology. And I thought this was really interesting, because these specialties are all about pattern recognition. And pattern recognition is what AI does best. Things like making a complex diagnosis from a human and their story and their symptoms, that's a little bit harder. But something like reading an image... That feels, at least to me, more doable from a machine standpoint. And, you talk about some of the studies that have been done, and the outcomes and when, you know, they're comparing the machine-read scans with the human-read scans, and the performance. And you envision a future where some of that rote work of just reading ordinary scans, day after day, for hours, that could be outsourced to the machine, so that the physician can focus on other types of work. And you even propose that, maybe one day, Radiology and Pathology could fuse into a single specialty. Talk a little bit about this vision, because it feels radical, but also it makes so much sense. And so, I'm curious if you could tell our audience about that.

Eric Topol

Right. Well, pattern recognition is something we all do as clinicians when we're seeing patients and looking at their data, whether it's scans or labs or various things, but the radiologists and pathologists, as you point out, are doing that the most. They're looking at scans and slides throughout their day, throughout their professional life. Whatever number of scans radiologists read a day (typically 50 to 100)... Each scan, of course, has all sorts of frames and loops and videos and whatnot, but 50 to 100? Well, you could say, "You know what, you could read 500 a day with AI support." Or you could say to the radiologist, "You know what? We get a lot of scan requests that are unnecessary; could you be the gatekeeper? And could you, by the way, go over the results with the patient? Because their surgeon's telling them they need surgery, and you may have a different view about this from your experience." So, the whole idea of the radiologist living in the basement and working in the dark in their pajamas, whatever, could be revamped, you know. And, you could actually wind up talking to patients. I've talked to a lot of radiologists who cherish the idea of being able to communicate with patients, and of avoiding unnecessary radiation from unnecessary scans. So, it's enriching what these specialists do. Who would have ever thought that a pathologist, looking at a slide of a potential cancer, would not only be able to make the diagnosis of cancer, but where it's coming from (which sometimes is difficult to determine), what the driver mutations are, the prognosis of the patient, other mutations in the genome... all from looking at the slide with AI support. So each of these particular specialties gets to another level of discernment, and accuracy, with machine help. So not just accuracy, but these machine eyes see things that they couldn't ever see, because they're trained on orders of magnitude more images than they will see in a lifetime. So the other thing to keep in mind: 50% of physicians are below average. So it helps bring up the rear, if you will, for those radiologists who are easily distracted, or not as experienced, or, for whatever reason, are below average. Or pathologists. It brings them up to a higher level of accuracy. But then there's another dimension to this, which is really exciting. And I didn't see this four years ago, and it's just exploding. Which is seeing things in images that humans will never see. So let's just take an example: the retina. Now we know, which we weren't fully aware of a few years ago, that the retinal photo will help make the diagnosis of kidney disease, pre-neurodegenerative diseases, heart disease risk. It will tell us about diabetes control. A paper came out today: it'll help diagnose hyperlipidemia, blood pressure control. It's basically a gateway to the whole body. And ophthalmologists were just happy to be able to interpret it for their eye diseases. Who would have thought it could show hepatobiliary disease from a retinal picture? And then, the cardiogram. As a cardiologist, I spend a lot of time reading cardiograms. I would never have expected I could tell the hemoglobin to the decimal point, or the age and sex of the patient, their ejection fraction, valvular heart disease, all sorts of other diagnoses that are hard to make, the pulmonary capillary wedge filling pressure of the heart. I mean, all this stuff from a cardiogram! That I'll never be able to see. I've been reading cardiograms for 35 years. I can't imagine this.
So this is what is extraordinary: machine eyes seeing things that we will never be able to see. It's somewhat humiliating that machines can do this. But at the same time, why not get their help? Why not lean on them, when we know it's really accurate, useful information, and we work together?

Emily Silverman

I was stunned by the section in the book about the scans of the retina, where you say, "As we learned from a Google study of more than 300,000 patients, retinal images can predict a patient's age, gender, blood pressure, smoking status, diabetes control via hemoglobin A1C, and risk of major cardiovascular events - all without knowledge of clinical factors. Such a study suggests the potential of a far greater role for eyes as a window," you say, "into the body, for monitoring patients." But you know that old saying, that the eyes are the window to the soul?

Eric Topol

Right, right.

Emily Silverman

And I didn't know that you are able to tell those things from an EKG: things about pulmonary capillary wedge pressure. These are things that normally I associate with measuring on an ultrasound of the heart. Can you tell us a bit about the cardiology updates, as a cardiologist? What's happening? And what's still in the research phase, and what is actually being used in a clinical setting? Because, when I work in the hospital, I'm not yet seeing any of this technology.

Eric Topol

Right, right. Right. Well, if you were at the Mayo Clinic, you would see the extra electrocardiogram readouts. So you would likely get ejection fraction, diagnoses like hypertrophic cardiomyopathy and pulmonary hypertension. So, you would get a whole bunch of things that you don't get anywhere else, because they did some of the initial publications in this space. But, perhaps to me, the most exciting thing is the smartphone ultrasound, or echocardiogram. This blows me away. As long as you know the heart is on the left side of the chest, and you put the probe somewhere on the chest, on the left side (as long as the patient doesn't have situs inversus, right?), the AI will tell you to move it up or down, or clockwise or counterclockwise. And just like when you're depositing a check in your bank account and it auto-captures when you did it right, it auto-captures the video loop of your echocardiogram. We don't even know it. And then you get an auto-interpretation. And all of a sudden, you now have people in the hinterlands of Africa, India, other low- and middle-income countries, who are able to do an echocardiogram, or any smartphone ultrasound, with no training, and get a good interpretation, all AI-driven. The acquisition of the images and the interpretation. And what you can see is that, eventually, when these probes are really cheap, when you can just pop them on the bottom of your smartphone, patients will be imaging themselves. Of course, then you get to some scary thoughts of the expectant mother imaging her fetus, you know, every few hours, or crazy things like this. But, if you're a patient with heart failure, instead of having to go into a clinic when you're concerned about your breathing or whatever, or just for your checkup, you could just send in your image. And there are studies like that being done right now. So, the cardiology space is getting a lot of AI, both in imaging, in electrocardiograms, and in echo right now. And, of course, there are other ways that cardiology is getting charged up with some AI tools. But all of this stuff is early. I mean, just think. Here we are, mid-2023, but what's it going to look like in just a couple or a few years? If we embrace it; if we use it in positive ways.

Emily Silverman

Do you ever worry about information overload? Because there are some parts of the book where you emphasize the importance of research, and really understanding the value of certain measurements, the value of certain interventions. There are examples with cancer screening, you know, early cancer screening: you catch more cancers earlier, but the outcomes don't actually change when it comes to things like being cancer-free or mortality data. So do we need to invent a whole other field on top of AI that's dedicated to interpreting the enormous data streams that are coming out of this AI? Like, is a patient capturing their own echo useful? Is it of value? Who decides? How do we measure that? How do we make sure that there aren't just more contributions to this problem of waste in medicine?

Eric Topol

Right now, we have some flagrant examples of what happens when you do unnecessary tests, like the thyroid ultrasound screening that was done in Korea, which made all these diagnoses of thyroid cancer and never changed any outcomes. We have really dumbed down medicine, where if you're 40, as a woman, you should get a mammogram every year. If you're 50, you should have a colonoscopy. We use age as a single criterion for a lot of things. That's really dumb, because there are so many other features that we could put into it. Then you get to the point of, "Oh, what about with AI?" We could make things worse; we could have more incidentalomas and more TMI stuff. So, we have to come up with the right balance, where we get smarter. We have so much information now about any given patient, or we could get that information. But rather than using retinal photos, or their genome (parts of their genome, particular genes of interest)... Rather than using that, we just use this reductionist criterion of age, when we know that people of a given age could be physiologically much younger or older. I mean, help me. This is just so dumb. So, I do think that, with the help of AI, we'll discern risk at a better level, to know about screening. Because, we've talked about diagnoses, but one of our real foibles in medicine is, you know, the old Bayes' theorem problem of doing screening tests on people at low risk. We do that all the darn time. Let's only do the screening tests, or use these tools, on the people who really need them. And if AI can help direct that, to find the people at increased risk, rather than us being stupid, hopefully that will be an advance.
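
To see why screening low-risk people backfires, here is a tiny worked Bayes' theorem example in Python. The test characteristics and prevalence figures are invented for illustration, not taken from the episode; the point is only how sharply the positive predictive value falls when prevalence is low.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Bayes' theorem: the chance that a positive screening result is a true positive."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# The same (hypothetical) test, 90% sensitive and 95% specific, in two populations:
print(positive_predictive_value(0.001, 0.90, 0.95))  # low-risk (0.1% prevalence): ~1.8% of positives are real
print(positive_predictive_value(0.10, 0.90, 0.95))   # high-risk (10% prevalence): ~67% of positives are real
```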

Emily Silverman

When I imagine medicine in 5/10/20 years, I almost can't imagine medicine, because the rate of change is so high. And, you know, I'm envisioning walking into the hospital, and there are no stethoscopes, and everybody just has a pocket ultrasound. And I'm envisioning all of these new data streams, and all of these new ways of thinking about, and understanding, diagnosis and treatment. What do we need to be doing with medical education, to start embracing, responsibly embracing, these changes? You talk in the book about how the whole thing about being a med student for so many years was about memorizing and test scores, and how emotional intelligence and things like that aren't really assessed as much. But even beyond emotional intelligence is this different type of domain knowledge. You say, "Future doctors need a far better understanding of data science, bio-informatics, bio-computing, probabilistic thinking, the guts of deep learning neural networks, algorithms, and understanding how they work, and also the liabilities of these algorithms." You talk in the book a bit about things like bias being coded into the algorithms. So, what should med students be learning these days? And, you know, because there's the old dogs, you know, the... It's funny. I don't mean to imply that you're at all an old dog, because you're more up to date on these topics than probably most medical students. But where do we target our efforts? And how does the medical education landscape shift?

Eric Topol

I wish I was a medical student, or a young physician, now, because the excitement going forward is so palpable, extraordinary. But, by the same token, I'm really discouraged that we have no medical schools in this country that have AI in their curriculum. This is about taking deep data about a person to prevent an illness from ever occurring that they otherwise would be highly likely to manifest. What's so exciting about the future is fulfilling a fantasy that has never even been approximated, with that person's data and information and the knowledge that we can use, with the help of AI. So, the problem we have now is that we still believe doctors need to memorize all this stuff. Basically be a brainiac, or some semblance of a brainiac; get perfect scores on their MCAT; have a really high GPA. And here's your ticket to medical school. It's totally the wrong way to be selecting the future physician workforce. Because these large language models are not going to go away. They're going to be a very important resource. Yes, they will make mistakes that have to be overridden, and there has to be oversight, for sure. But the brainiac era is over. We shouldn't be picking people on their MCAT scores anymore, or their GPA. We should be doing interviews and seeing how a person interacts, in terms of: Do they exude the ability to develop an interpersonal bond? Do they communicate well? Do they show any sign of compassion or trust or empathy, or whatever that kind of sense is? You know, the smell test of a person being able to relate to other people. That's the humanity in medicine that we need. And AI will emphasize that, in the years ahead. So, sure, we want people who are intelligent, but we have another, auxiliary path to promote intelligence. What we need most of all is those people who will establish presence, who will truly care. Where the patient knows, "This doctor has my back. He or she really does care about me. And that's going to help me get through whatever illness I have, or will help me prevent one that I'm at risk of getting." So, that's the future of medicine in a nutshell. But there isn't a medical school in the world that's gearing up for that, both with respect to the students already enrolled, let alone the ones they will admit.

Emily Silverman

Wow, that felt kind of like a mic drop to me. So I'm just absorbing. I'm absorbing that the era of the brainiac is over. It's just so true. We're in a new world: new technology, new priorities. And I really appreciate everything that you just said. As we wind to a close, Eric, is there anything else that you'd like to say to our audience about AI? About Deep Medicine? Or about the future, the future of health care?

Eric Topol

No, I think we've covered it well. I still remember the first time I came across your work - when you wrote about comedy, and your enthusiasm for comedy and your talent. And, you know, I think you exemplify the dynamic aspects of physicians, all the things that you've done beyond caring for patients directly. What I hope is that our lives as physicians, and clinicians in general, will improve. I mean, I think we're in a pretty desperate situation right now. COVID didn't help that at all. But I never give up hope. If there ever was an eternal optimist, I might be the one that would be cited, because I don't want to accept where we are, where we've been, as our resting place. And, I do really think we have a path forward. And I love seeing how people can do things that are outside of their initial vision of what they were going to do. Like you. I hope all of us can find that: the things that are enriching and fulfilling, where we feel like we're on a mission to do something. Whether you're doing research that will help patients, or you're doing things that will help physicians in general, that is our potential to actualize. But you can't do that if you can barely get through a day, and your mental health is so compromised. Hopefully, we will get out of that situation. We need a remedy, and I'm banking on this being the one.

Emily Silverman

Well, we'll leave it there. I have been speaking to Dr. Eric Topol about his book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. I learned so much from this book, and I'm really inspired. I've already been Googling online courses to learn more about AI. So, I just think you're either on the ship, or you're not on the ship, because the ship is going. So, I hope this episode inspires you all to learn more about AI as well - in general, but also its applications to medicine. And Eric, thank you so much for being our guide.

Eric Topol

Well, thank you, Emily. If we could do a little bit in AI what you've done for storytelling in medicine, we'll have achieved something. Thanks very much.

Transcript

Note: The Nocturnists is created primarily as a listening experience. The audio contains emotion, emphasis, and soundscapes that are not easily transcribed. We encourage you to listen to the episode if at all possible. Our transcripts are produced using both speech recognition software and human copy editors, and may not be 100% accurate. Thank you for consulting the audio before quoting in print.

Eric Topol

I think it's really the latter, Emily. That the public hadn't had the "wake-up" call. And, really, what happened with you, with ChatGPT, a billion unique users of ChatGPT in 90 days. There's never been anything like that in history. A billion! And, of course, it's still... Who knows how many now, months into it. So that is what woke the world up to AI. And you touched on transformer. It's that transformer architecture that we didn't have when I wrote the book, but you could see it coming. I remember talking to some of the top AI gurus and saying, "We don't have a way to process all this data with the images and the text and the voice and videos. What are we going to do?" And they say, "Oh, we'll have it. It's coming." And, "We're working on it." And, here it is. So. When GPT-4 came along, just in March, that was a biggie, because that was the first chatbot that was multimodal, that could take videos and images and text and speech and put it all together; process it. So, it was incubating four years ago when the book was published, but now it's getting legs, in a big way.

Emily Silverman

In the book, you do such a great job laying out the basics of Artificial Intelligence and machine learning. And I think one of the units, I guess, of technology, that is good to understand, or at least try to understand, is this idea of the DNN (the deep neural network). And, in the book, you talk about how there's these different layers of processing. There's an input layer; there's an output layer, and then there's all these layers in between. And you really say that, this DNN, this neural network, it's really just a tool. It's almost like a steam engine. It's really just a tool, that is fueling what is now a revolution. And we can use it for, as you said, images, words, language, data. But the problem is, those middle layers: We don't quite understand what they're doing or how they work. Is that true? And can you talk about that?

Eric Topol

Well, I love your steam engine metaphor.

Emily Silverman

It's your metaphor. It was in your book.

Eric Topol

It was? forgot about that. Oh, wow. Okay. I guess I should have re-read it. Any rate... That's funny. So, the layers, the so-called artificial neurons... (It's kind of a funky term.) But it's basically, once the inputs of hundreds of thousands, if not millions, of inputs. Whatever it is. It could be an image; it could be a text. It could be anything. That input goes through these layers to differentiate the features, that the machine leads to interpretation (or often called prediction). In fact, the first year of deep neural networks is kind of called Predictive AI. And then, the transformer model is now what we call Generative AI, which we'll talk about a bit more, because it does a lot more than just generate things. But, the number of layers is proportional to the complexity of the input data. The idea is to try to simulate the brain, although it requires a hell of a lot more power, which is another big problem that we're seeing ushered in right now. But, going through all these layers, eventually we get to the output whereby the processed solution, interpretation, prediction, whatever is made. And what we don't really understand is the magic of the deep neural network. How does it see these things? We know it can be trained by a gazillion inputs to see things that we can't see. So they have these things called saliency maps, where it visually tries to deconstruct those layers to see, "What is it picking up that we can't see?" So we could get smarter, and see the things that the machines can't see. And, the saliency maps are only partial. So the black box of these deep neural networks, is that we don't really understand what's going on in those hundreds (if not thousands) of layers of, basically, deconstructing that image, or text, or speech. On the other side, on the output side, is what is extraordinary, because we get image interpretation that is typically far better than the best human eyes. Up until now, it was all about images: X rays and CAT scans, PET scans, ECGs and retinal photos, and that kind of... Skin lesions. But, of course, now everything's changed. Because it's not just images. The change of deep neural networks, to this transformer model which has set up the potential for multi-modal, for different forms of data.

Emily Silverman

So we have the inputs, the outputs, the magic happening in between (that we don't understand. You know you could feed in a gazillion chest X-rays, and then train the model, let's say, to pick up pneumonia in a way that a radiologist maybe couldn't, and things like that. But then these transformers, or technology that's called a transformer, arrives and changes everything. What is a transformer? What did it change? And why was that so important for the technology, for a lay person?

Eric Topol

Yeah, yeah. Well, I guess the best story about it is, interestingly, it was invented by Google. And they had a pre-print, five years ago or so, about transformer model architecture. And what's interesting is, they could have upended their search. But, kind of like the story of Kodak or Blockbuster, not ever wanting to take on their main technology to challenge it. They didn't pursue it, and they let Open AI run circles around them. And then teamed up with Microsoft. But basically, the transformer model architecture is one that is adding another piece to the deep neural network of attention. So that it's able to ingest different modalities of data. Really what set up the large language models, which is based on that architecture. It isn't just a turbo charge input capability with attention, but it's also massive amounts of graphic processing units. Computer operations are flops at levels that we never thought would be possible. So, it's a combination of different building blocks that have gotten us to these large language models. If it was just that there was another alternative way to deal with input data, it wouldn't have been enough. It took all these other pieces to get to where we are today, which is still a work in progress. If GPT-4 is considered the leading chatbot of today, it's going to be seen as obsolete in the months ahead, for sure.

Emily Silverman

The months. Not years, but months.

Eric Topol

Yeah. Well remember, Chat GPT was November 30th. And GPT-4, which is a big deal jump ahead, was middle of March. So that's like four months. It's amazing. So, the progress here is inevitable. And that's, of course, why we have some people talking about the existential threat. Whereas others are talking about (like we are) its potential: revamping, and rebooting of a lot of things we do in healthcare and medicine.

Emily Silverman

Well, I want to stay in healthcare a bit, but we must to get to the existential piece, eventually, Maybe we'll try and save that for the end. But I want to hone in on what you said here about "the black box". So the fact that this magic, we don't understand how it works. And, you had a section of your book that talks about this. And you say, "We already accept black boxes in medicine. For example, electroconvulsive therapy is highly effective for severe depression, but we have no idea how it works. Likewise, there are many drugs that seem to work even though no one can explain how. Should we lend the same or extend to the same tolerance for uncertainty that we do with these interventions as we do to AI?" But then you go on, and you say, "There's something called the AI Now Institute that says 'No'. And then the European General Data Protection Regulation went into effect, and agreed 'No'; that they demand an explanation. They say, "We have a right to an explanation." So I'm curious. Where has this gone? And where do you personally fall?

Eric Topol

This is really a critical question you're asking. No surprise, of course, that you would come up with this. But, the issue here is, as you say, we hold machines potentially more accountable than we do traditional medicine. Besides the examples you gave, we don't even know how anesthetics work, but we give them in every operation.

Emily Silverman


Or Tylenol. I think what the mechanism of action of Tylenol, we still don't know.

Eric Topol

Don't know; don't know. So, the question is: If you had a large, rigorous trial of 100,000 people, and half got the AI intervention, and half got a sham AI (whatever reasonable control), and it showed you saved people's lives (whatever endpoint you like), would you say, "I'm not using that until I understand it?" That's the debate. Because, in medicine, we accept things that work, with compelling evidence. But are we going to do the same for machines? Now, on the other hand, because of this explainability drive, that you alluded to, with the European authorities and... All of us, we'd like to know; we're curious. We're going to use it to treat people or to make diagnoses; we sure want to know. So, the idea of deconstructing neural networks, to get to the explanation, is a work in progress. And there are many who are sanguine that we will know; that we gotta keep working on this. But right now, there's a black box, and we've got to deal with it. The good part is we don't have compelling data yet. So we don't have to make that choice so much. But we will. And we're seeing a course in the medical image space. We're seeing more randomized trials. And, you know, if you're a gastroenterologist and you use machine vision, you'll find more polyps. Whereas if you don't have machine vision during your scoping procedure, you'll miss things. So, those have been established by multiple randomized trials. Do we know exactly how the machine vision picks up the polyps? No. But do we care? I mean, should the patient who goes through all that prep, and has a risk of colon cancer... Should they not be entitled to have this help during their colonoscopy? These are the kind of questions we have to grapple with, because accepting proof, without the full story, is something that is going to become more and more common.

Emily Silverman

And in healthcare and medicine, who will decide? Because this AI Now Institute you mention, that's not healthcare specific. That's a general topic. NYU, European Union, also general, not healthcare specific. Will these decisions land at the level of the FDA? At the AMA? Like who do you think is going to be in the chair, making these decisions? Or who should be?

Eric Topol

Well, there's a good unknown, because the FDA is having a really hard time in dealing with this, because these neural networks and the large language models are autodidactic. They are insatiable for input. And so the more they get, the more smart the output gets. So when the FDA... They can't even figure out how to not freeze an algorithm, no less approve something without its full explainability. The FDA is really struggling with this so far. They've given a clearance (usually sub-approval of the so-called 510(k) mechanism), but some with actual frank approvals, to over 500 algorithms that use deep neural networks. Most of them are image-related, but not all, and they freeze every one of them. And they so far have not demanded explainability. So, at the FDA level, they'll take proprietary datasets that the medical community never gets to see. They don't even publish the papers, these companies. And the buy-in, or the implementation, has been slow, because it's not transparent.

Emily Silverman

Medicine tends to be pretty slow to adapt technology in general, as is demonstrated by the fact that many of us still use pagers and fax machines. In this case, is that a good thing? Is it good to have the brakes on and to be really intentional about what we take up and what we don't? Or do you think that we're too slow? And that there are exciting benefits to this technology that we're shutting out of the profession, out of fear or out of culture or out of ego, or things like that?

Eric Topol

I think every clinician needs to understand some of the aspects of AI, the nuances. It was really one of the reasons I wrote the book, Deep Medicine. I spent a few years researching it, because I didn't know anything about this topic. I mean, I had to learn and I thought that if we get everyone grounded to some extent, which isn't the case now, then you'll know better when you can use it; when you can trust it. Because it's invading our lives. It already has. Our music, our movie, TV we're gonna watch, our navigational systems. I mean, it has truly taken over our lives. A lot of it's invisible, but as people taking care of patients... And patients... This is something that's not going to go away. And so, it's really critical that, to make those decisions that you're asking, if you do it without knowledge of the downside, or the what I call pluri-potency. It really is... got, like, potency we've not seen before. But, it comes at a price, potentially. And that's what we have to be alert to, cognizant of, both its potential and liabilities.

Emily Silverman

The book opens with a personal story from you about your knee surgery. And you talk about the difference between "shallow medicine" and "deep medicine". And we kind of hear about these definitions through the anecdote of your knee. And one thing I love about this book is that it takes an optimistic posture. The subtitle is "how artificial intelligence can make healthcare human again". And so you're really making the case, in this book, that AI can help get us from shallow to deep. Could you say a few words, maybe, about your knee story, and this idea of shallow versus deep, and what you mean when you say that?

Eric Topol

I had a knee replacement, because of a condition I had as a child called Osteochondritis. Dissecans. That was actually the first thing I wrote about in the book: getting roughed up by my orthopedist (the same orthopedist who I had referred all my patients with knee and hip replacements). And the first line of the book is from him: "You should have your internist prescribe anti-depression medications." And my wife and I looked at each other bug-eyed saying, "Are you kidding me?" I was in desperate pain. I had what, to me, it looked like a gangrenous leg. I mean, it was all blue and markedly swollen. And this is weeks post op. I wasn't able to sleep; I was in pain. I was trying every cockamamie remedy known to mankind, with no relief. I was in the worst shape. I was thinking about suicide. So, being roughed up by a doctor, by basically saying, "Well, you have depression, and you should see an internist," in that state. And then not having the diagnosis made. I mean, my wife actually made the diagnosis 'cause she's trying to help me, and she said, "I think you have this condition called arthrofibrosis." And she was right. I never heard of it before. And it was this massive inflammation reaction to the prosthetic device of the knee. And my orthopedist never mentioned it. And there would have been ways to help prevent that, or treat that more effectively. And, rigorous physical therapy, which is what I kept getting prescribed, was making the inflammation worse. And I finally saw a physical therapist who rescued me, who said, "Stop this physical therapy; it's got to be gentle. And you gotta go on high-dose, anti inflammatory ibuprofen." And then I started to feel better. And I never got normalized. But I got rescued from the sense that I can't live like this. And it helped me identify people with chronic pain. When I talk to a group, and I say, "How many of you, or your loved ones, have been roughed up by a doctor?", everybody raises their hand. Because we don't have time. I think this orthopedist is actually a very good surgeon, and actually a person. But he had six rooms that he's going to and every one is like two minutes, and he just can't deal with it. So, what I thought, Emily, is, "How are we going to get out of this terrible rut?" We've got depressed doctors, burnout, rush-to-do-everything, squeezed to the max, can't deliver care. I'm on the receiving end, experiencing that firsthand. Is there a way to get out of this? And I actually think... I hope that we have a path. It isn't a sure thing, for reasons that we'll probably get into. But, I don't know any other way. I don't know of another solution to our mess that we're in right now.

Emily Silverman

I'm wondering about an alternate history. What would a version of this knee story have sounded like that included artificial intelligence decision support? How do we go from that shallow-medicine, one-size-fits-all assembly line to deep medicine? Or what could have been, if we had been using this technology in a humane way?

Eric Topol

Shallow medicine is reflexive instead of reflective. It's like System 1 thinking (of Kahneman) instead of System 2, where we actually have more than a few seconds to put thought into what is going on here. So that is where GPT-4 works really well. If I had put my symptoms into GPT-4, which has up-to-date citations, and said, "What is going on here?", it wouldn't have taken my wife weeks to research things to help me. I would have had the answer right there; it would have said, "You probably have arthrofibrosis. And you need to stop this physical therapy." Having access to AI that's smart, which we haven't really had before, would have helped me; it would have saved a lot of time. It could have given me an error, right? And that's why you don't want to have only one conversation. You might want to have it two or three times to make sure that you're getting the same answers. You can audit the chatbot, or the AI, by going through the same conversation on a second or third run. But I think the answer here is, I would have had the right diagnosis, the right treatment, and I wouldn't have suffered. And if we can keep people from suffering, and we use this as a tool together as clinicians, that, I think (just from my example), would make a big difference.
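As a rough sketch of the "run the same conversation two or three times" audit described above, here is a small, purely illustrative Python snippet. It assumes the OpenAI Python client and an API key in the environment; the model name, the prompt wording, and the exact-match tally are placeholders for the sake of the sketch, not anything recommended in the book or the interview.

```python
# Illustrative sketch only: re-asking a chatbot the same clinical question
# several times and comparing the answers, as a rough consistency check.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name and prompt are
# placeholders, and this is not medical advice or a validated workflow.
from collections import Counter

from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Weeks after a knee replacement I have severe pain, marked swelling, "
    "and stiffness that worsens with aggressive physical therapy. "
    "What is the single most likely diagnosis? Answer in a few words."
)

def ask(n_runs: int = 3) -> Counter:
    """Ask the same question n_runs times and tally the answers."""
    answers = Counter()
    for _ in range(n_runs):
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": PROMPT}],
            temperature=0.2,
        )
        answers[response.choices[0].message.content.strip().lower()] += 1
    return answers

if __name__ == "__main__":
    tally = ask()
    print(tally)
    # If the runs disagree, that's a signal to dig deeper or ask a clinician,
    # not to trust any single answer.
```

In practice free-text answers rarely match word for word, so a real check would compare the diagnosis being named rather than the exact string; the point is simply that disagreement across runs is a signal to keep digging and to involve a clinician.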

Emily Silverman

There are a lot of applications for AI in medicine, and I want to get into some of the other ones in a bit. But let's stay with diagnosis for a moment, because I can tell that that's a really important one for you, given what you went through. And I think a lot of us listening probably have had similar experiences. I know that I have. So when we're thinking about AI and diagnostic support, there are really two applications that I'm aware of. One of them is this idea of putting in a constellation of symptoms and getting a list of diagnoses that pop out. And I know that there's a software tool called Isabel that has been used in this way. The other application that I've heard about is crowd-sourcing. And it doesn't even have to be physicians receiving the information. There are even versions of this where it's actually lay people who decide to take up some detective work and help people figure out what it is that they're dealing with. Like in your scenario, it was actually your wife who figured it out, maybe after a few hours of intensive internet research. So, talk a little bit about symptom-checkers and crowd-sourcing. And maybe there are other ways that AI can help us with diagnosis? And what is the state of that technology right now?

Eric Topol

Yeah, you reviewed where things have been. I think they're going to be largely superseded by the large language models. And the reason for that is they have the medical domain knowledge, which is trained on the entire internet, and Wikipedia, and books, and whatever you can think of as inputs, but now at a point where they can also have add-on medical-specific training. So there will still be the human in the loop, those kinds of people who want to help by doing searches. But pretty quickly now... I mean, if you do a GPT-4 conversation, you'll see that we don't need those services anymore, because you'll get the answer. And the exciting new part is you can put in your scans. Into Med-PaLM 2 and GPT-4 you could actually put a copy of your X-ray, and it will start to interpret that, added to whatever symptoms you describe. So right now we have the beginning of AI on the patient side. We have smartwatches that help diagnose heart rhythm abnormalities. We have, in many countries now, a kit you can get from the drugstore, which is an AI kit to tell you whether you have a urinary tract infection or not. We have the ability to diagnose skin lesions and cancers through a smartphone photo, or children's ear infections through a smartphone attachment. And these are all algorithms. Being able to diagnose diabetic retinopathy in the supermarket, by untrained personnel. So we're seeing patient-side help with screening, which is doctorless. Doctorless, oh my gosh. But you still want to have a doctor, to get a treatment if you need a treatment. So, your point about the patient side, I think, is that we have to acknowledge that even though we tend to hog the AI space, thinking about how it's going to help us, it's going to help patients, if they get accurate screening diagnoses and they have a human in the loop, whether it's a nurse, a physician, a pharmacist, whatever. And so the human-in-the-loop part can't be emphasized enough. But if you can do the screening... just think. You know, all the dermatologists and family doctors dealing with skin issues, or pediatricians dealing with ear infections: how much decompression occurs when they have those diagnoses ruled out, accurately, with AI tools.

Emily Silverman

You talk about the analogy to a self-driving car, and how in the self-driving car business there are five levels. Level 5 is fully autonomous; Level 4 is mostly autonomous; Level 3 is conditionally automated, where a human can take over; and Level 2 would be something minimal, like cruise control or lane-keeping. And you say in the book that it's unlikely medicine will ever get beyond Level 3 machine autonomy. So this idea that there needs to be a human in the loop. Can you talk a little bit more about that, and how we might work together with AI as a team? And what that means for us as physicians, and our identity, and all those sorts of things?

Eric Topol

Yeah, I'm so glad you brought up the driverless car story, because we can learn so much from it. It's, you know, years ahead of where we are in medicine. And it's multi-modal, you know, lots of different layers of data being processed in real time to drive the car, and not run people over or have accidents. But what's interesting is Level 5, which is totally autonomous, all the time, under any weather conditions, any road conditions. We will never get there. But even though that's the case, we have people like Elon Musk, and other people who are big into AI and cars, saying we will get there. No, it's impossible. And Level 4 is probably unlikely to be achieved, although that's kind of the reset goal. But the hype of driverless cars, where originally you saw these videos of everyone riding in a driverless car, no drivers anymore... we're not going to see that. Because there are things like fog and ice and snow and rain and construction zones and whatnot. So that's really a lot like medicine. I mean, when you think about it, we're never going to get rid of doctors, and no doctor has to be worried about their job. But they should be thinking about how they can have more efficiency, more time when they need it with patients, so they can switch to deep mode, and slow mode, because of this support. If you think about the potential here, we have a unique opportunity, even if it's kind of a Level 3 equivalent in medicine, where we get the synergy of patients and clinicians to achieve something that gives us the gift of time, which is what I consider the end goal here: we get back our lives. The reason we went into medicine, which is "I want to care for patients." That's why I did this, folks, but I can't do it. And so, if we get back to that, and to the patient-doctor relationship, we have a big win. I hope we'll get there someday.

Emily Silverman

Two of the specialties that you focus on in the book are Radiology and Pathology. And I thought this was really interesting, because these specialties are all about pattern recognition. And pattern recognition is what AI does best. Things like making a complex diagnosis from a human and their story and their symptoms, that's a little bit harder. But something like reading an image... that feels, at least to me, more doable from a machine standpoint. And you talk about some of the studies that have been done, and the outcomes when they're comparing the machine-read scans with the human-read scans, and the performance. And you envision a future where some of that rote work of just reading ordinary scans, day after day, for hours, could be outsourced to the machine, so that the physician can focus on other types of work. And you even propose that, maybe one day, Radiology and Pathology could fuse into a single specialty. Talk a little bit about this vision, because it feels radical, but also it makes so much sense. And so, I'm curious if you could tell our audience about that.

Eric Topol

Right. Well, pattern recognition is something we all do as clinicians when we're seeing patients and looking at their data, whether it's scans or labs or various things, but the radiologists and pathologists, as you point out, are doing that the most. They're looking at scans and slides throughout their day, throughout their professional life. Well, whatever number of scans radiologists read in a day (typically 50 to 100)... each scan, of course, has all sorts of frames and loops and videos and whatnot, but 50 to 100? Well, you could say, "You know what, you could read 500 a day with AI support." Or you could say to the radiologist, "You know what? We get a lot of scan requests that are unnecessary; could you be the gatekeeper? And could you, by the way, go over the results with the patient? Because their surgeon is telling them they need surgery, and you may have a different view about this from your experience." So the whole idea of the radiologist living in the basement and working in the dark in their pajamas, whatever, could be revamped, you know. And you could actually wind up talking to patients. I've talked to a lot of radiologists who cherish the idea of being able to communicate with patients, and of avoiding unnecessary radiation from unnecessary scans. So it's enriching what these specialists do. Who would have ever thought that a pathologist, looking at a slide of a potential cancer, would not only be able to make the diagnosis of cancer, but where it's coming from (which sometimes is difficult to determine), what the driver mutations are, the prognosis of the patient, destructive mutations in the genome... all from looking at the slide with AI support. So each of these particular specialties gets to another level of discernment and accuracy, with machine help. And not just accuracy, but these machine eyes see things that they couldn't ever see, because they're trained on orders of magnitude more images than a radiologist or pathologist will see in a lifetime. The other thing to keep in mind: 50% of physicians are below average. So it helps bring up the rear, if you will, for those radiologists, or pathologists, who are easily distracted, or not as experienced, or, for whatever reason, are below average. It brings them up to a higher level of accuracy. But then there's another dimension to this, which is really exciting. I didn't see this four years ago, and it's just exploding. Which is seeing things in images that humans will never see. So let's just take, for example, the retina. Now we know, which we weren't fully aware of a few years ago, that a retinal photo will help make the diagnosis of kidney disease, pre-neurodegenerative diseases, heart disease risk. It will tell us about diabetes control. A paper came out today: it'll help diagnose hyperlipidemia, blood pressure control. It's basically a gateway to the whole body. And ophthalmologists are just happy to be able to interpret it for eye diseases. Who would have thought you could pick up hepatobiliary disease from a retinal picture? And then there's the cardiogram. As a cardiologist, I spend a lot of time reading cardiograms. I would never have expected you could tell the hemoglobin to the decimal point, or the age and sex of the patient, their ejection fraction, valvular heart disease, all sorts of other diagnoses that are hard to make, the pulmonary capillary wedge filling pressure of the heart. I mean, all this stuff from a cardiogram! That I'll never be able to see. I've been reading cardiograms for 35 years. I can't imagine this.
So this is what is extraordinary: machine eyes seeing things that we will never be able to see. It's somewhat humbling that machines can do this. But at the same time, why not get their help? Why not lean on them, when we know it's really accurate, useful information, and work together?

Emily Silverman

I was stunned by the section in the book about the scans of the retina, where you say, "As we learned from a Google study of more than 300,000 patients, retinal images can predict a patient's age, gender, blood pressure, smoking status, diabetes control via hemoglobin A1C, and risk of major cardiovascular events - all without knowledge of clinical factors. Such a study suggests the potential of a far greater role for eyes as a window," you say, "into the body, for monitoring patients." But you know that old saying, that the eyes are the window to the soul?

Eric Topol

Right, right.

Emily Silverman

And I didn't know that you were able to tell those things from an EKG: things about pulmonary capillary wedge pressure. These are things that normally I associate with measuring on an ultrasound of the heart. Can you tell us a bit about the cardiology updates, as a cardiologist? What's happening? What's still in the research phase, and what is actually being used in a clinical setting? Because when I work in the hospital, I'm not yet seeing any of this technology.

Eric Topol

Right, right, right. Well, if you were at Mayo Clinic, you would see the extra electrocardiogram readouts. So you would get, likely, ejection fraction, diagnoses like hypertrophic cardiomyopathy and pulmonary hypertension. You would get a whole bunch of things that you don't get anywhere else, because they did some of the initial publications in this space. But perhaps, to me, the most exciting thing is a smartphone ultrasound or echocardiogram. This blows me away. As long as you know the heart is on the left side of the chest, and you put the probe somewhere on the chest, on the left side (as long as the patient doesn't have situs inversus, right?), the AI will tell you to move it up or down, or clockwise or counter-clockwise. And just like when you're depositing a check in your bank account and it auto-captures when you did it right, it auto-captures the video loop of your echocardiogram. We don't even know it. And then you get an auto-interpretation. And all of a sudden, you now have people in the hinterlands of Africa, India, other low- and middle-income countries, who are able to do an echocardiogram, or any smartphone ultrasound, with no training, and get a good interpretation, all AI-driven: the acquisition of the images and the interpretation. And what you can see is that, eventually, when these probes are really cheap, when you can just pop them on the bottom of your smartphone, patients will be imaging themselves. Of course, then you get to some scary thoughts of the expectant mother imaging her fetus, you know, every few hours, or crazy things like this. But if you're a patient with heart failure, instead of having to go into a clinic, and you're concerned about your breathing or whatever, or it's just your checkup, you could just send in your image. And there are studies like that being done right now. So the cardiology space is getting a lot of AI, both in imaging, in electrocardiograms, and in echo right now. And, of course, there are other ways that cardiology is getting charged up with AI tools. But all of this stuff is early. I mean, just think. Here we are in mid-2023, but what's it going to look like in just a couple or a few years? If we embrace it; if we use it in positive ways.

Emily Silverman

Do you ever worry about information overload? Because there are some parts of the book where you emphasize the importance of research, and really understanding the value of certain measurements, the value of certain interventions. There are examples with cancer screening, you know, early cancer screening, where you catch more cancers earlier, but the outcomes don't actually change when it comes to things like being cancer-free or mortality data. So do we need to invent a whole other field on top of AI that's dedicated to interpreting the enormous data streams that are coming out of this AI? Like, is a patient capturing their own echo useful? Is it of value? Who decides? How do we measure that? How do we make sure that this doesn't just contribute more to the problem of waste in medicine?

Eric Topol

Right now, we have some flagrant examples of what happens when you do unnecessary tests, like the thyroid ultrasound screening that was done in Korea, which made all these diagnoses of thyroid cancer and never changed any outcomes. We have really dumbed down medicine, where if you're 40, as a woman, you should get a mammogram every year; if you're 50, you should have a colonoscopy. We use age as a single criterion for a lot of things. That's really dumb, because there are so many other features that we could put into it. Then you get to the point of, "Oh, what about with AI?" We could make things worse; we could have more incidentalomas and more TMI stuff. So we have to come up with the right balance, where we get smarter. We have so much information now about any given patient, or we could get that information. But rather than using retinal photos, or their genome (parts of their genome, particular genes of interest), we just use these reductionist criteria of age, when we know that people of a given age could be physiologically much younger or older. I mean, help me. This is just so dumb. So I do think that, with the help of AI, we'll discern risk at a better level, to know about screening. Because we've talked about diagnoses, but one of our real foibles in medicine is, you know, the old Bayes' theorem problem of doing screening tests on people at low risk. We do that all the darn time. Let's only do the screening tests, or use these tools, on the people who really need them. And if AI can help direct that, to find the people at increased risk, rather than us being stupid, hopefully that will be an advance.
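To make the Bayes' theorem point above concrete, here is a small, purely illustrative calculation; the sensitivity, specificity, and prevalence figures are made up for the sketch and do not come from the book or the interview.

```python
# Illustrative arithmetic only, with made-up numbers: how the positive
# predictive value (PPV) of the same screening test collapses when the
# population being screened is at low risk (low prevalence).
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Bayes' theorem: P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical test: 90% sensitive, 95% specific.
for prev in (0.001, 0.01, 0.10):
    print(f"prevalence {prev:>5.1%}  ->  PPV {ppv(0.90, 0.95, prev):.1%}")

# prevalence  0.1%  ->  PPV  1.8%   (almost all positives are false)
# prevalence  1.0%  ->  PPV 15.4%
# prevalence 10.0%  ->  PPV 66.7%   (screening the higher-risk group pays off)
```

The test itself does not change between the three lines; only the risk of the people being screened does, which is exactly why directing screening toward higher-risk groups matters.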

Emily Silverman

When I imagine medicine in 5, 10, 20 years, I almost can't imagine medicine, because the rate of change is so high. And, you know, I'm envisioning walking into the hospital, and there are no stethoscopes, and everybody just has a pocket ultrasound. And I'm envisioning all of these new data streams, and all of these new ways of thinking about, and understanding, diagnosis and treatment. What do we need to be doing with medical education to start embracing, responsibly embracing, these changes? You talk in the book about how the whole thing about being a med student for so many years was about memorizing and test scores, and how emotional intelligence and things like that aren't really assessed as much. But even beyond emotional intelligence is this different type of domain knowledge. You say, "Future doctors need a far better understanding of data science, bio-informatics, bio-computing, probabilistic thinking, the guts of deep learning neural networks, algorithms, and understanding how they work, and also the liabilities of these algorithms." You talk in the book a bit about things like bias being coded into the algorithms. So, what should med students be learning these days? And, you know, because there are the old dogs... It's funny. I don't mean to imply that you're at all an old dog, because you're more up to date on these topics than probably most medical students. But where do we target our efforts? And how does the medical education landscape shift?

Eric Topol

I wish I were a medical student, or a young physician, now, because the excitement going forward is so palpable, so extraordinary. But by the same token, I'm really discouraged that we have no medical schools in this country that have AI in their curriculum. This is about taking deep data on a person to prevent an illness from ever occurring that they otherwise would be highly likely to manifest. What's so exciting about the future is fulfilling a fantasy that has never even been approximated, with that person's data and the knowledge that we can use, with the help of AI. So, the problem we have now is that we still believe doctors need to memorize all this stuff. Basically be a brainiac, or some semblance of a brainiac; get perfect scores on their MCAT; have a really high GPA. And here's your ticket to medical school. It's totally the wrong way to be selecting the future physician workforce. Because these large language models are not going to go away. They're going to be a very important resource. Yes, they will make mistakes that have to be overridden, and there has to be oversight, for sure. But the brainiac era is over. We shouldn't be picking people on their MCAT scores anymore, or their GPA. We should be doing interviews and seeing how a person interacts, in terms of: Do they exude the ability to develop an interpersonal bond? Do they communicate well? Do they show any sign of compassion or trust or empathy, or whatever that kind of sense is? You know, the smell test of a person being able to relate to other people. That's the humanity in medicine that we need. And AI will emphasize that in the years ahead. So, sure, we want people who are intelligent, but we have another, auxiliary path to promote intelligence. What we need most of all are people who will establish presence, who will truly care. Where the patient knows, "This doctor has my back. He or she really does care about me. And that's going to help me get through whatever illness I have, or will help me prevent one that I'm at risk of getting." So, that's the future of medicine in a nutshell. But there isn't a medical school in the world that's gearing up for that, both with respect to the students already enrolled, much less the ones they will admit.

Emily Silverman

Wow, that felt kind of like a mic drop to me. So I'm just absorbing. I'm absorbing that the era of the brainiac is over. It's just so true. We're in a new world: new technology, new priorities. And I really appreciate everything that you just said. As we wind to a close, Eric, is there anything else that you'd like to say to our audience about AI? About Deep Medicine? Or about the future, the future of health care?

Eric Topol

Nah, I think we've covered it well. I still remember the first time I came across your work - when you wrote about comedy, and your enthusiasm for comedy and your talent. And, you know, I think you exemplify the dynamic aspects of physicians, all the things that you've done beyond caring for patients directly. What I hope is that our lives as physicians, and clinicians in general, will improve. I mean, I think we're in a pretty desperate situation right now. COVID didn't help that at all. But I never give up hope. If there ever was an eternal optimist, I might be the one that would be cited, because I don't want to accept where we are, or where we've been, as our resting place. And I do really think we have a path forward. And I love seeing how people can do things that are outside of their initial vision of what they were going to do. Like you. I hope all of us can find that: the things that are enriching and fulfilling, where we feel like we're on a mission to do something. Whether you're doing research that will help patients, or you're doing things that will help physicians in general, that is our potential to actualize. But you can't do that if you can barely get through a day, and your mental health is so compromised. Hopefully, we will get out of that situation. We need a remedy, and I'm banking on this being the one.

Emily Silverman

Well, we'll leave it there. I have been speaking to Dr. Eric Topol about his book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. I learned so much from this book, and I'm really inspired. I've already been Googling online courses to learn more about AI. So, I just think you're either on the ship, or you're not on the ship, because the ship is going. So, I hope this episode inspires you all to learn more about AI as well - in general, but also its applications to medicine. And Eric, thank you so much for being our guide.

Eric Topol

Well, thank you, Emily. If we could do in AI a little bit of what you've done for storytelling in medicine, we'll have achieved something. Thanks very much.
