One advocate shares her vision for the crucial role patients must play in the future of health care AI.

Tradeoffs’ coverage on diagnostic excellence is supported, in part, by the Gordon and Betty Moore Foundation.

Episode Transcript and Resources

Episode Transcript

Note: This transcript has been created with a combination of machine ears and human eyes. There may be small differences between this document and the audio version, which is one of many reasons we encourage you to listen to the episode above! 

Dan Gorenstein (DG): Yes, we know, almost everyone in health policy these days seems to bring up artificial intelligence at some point. Hospitals…

Clip: Hospitals are finding new tools to use in their digital arsenal.

DG: Tech companies…

Clip: Elon Musk, Bret Taylor, Sam Altman: they want to build the next generation of GenAI companies.

DG: Lawmakers, too…

Clip: We are going to see regulation, the only question is whether the U.S. moves first. 

DG: All of these groups are sitting down to write the rules of the road for how this new technology will revolutionize health care or … maybe blow it up.

But what about patients? The reality is that our actual care hangs in the balance. 

Andrea Downing (AD): What scares me is decisions about us are being made without us in a way that can cost our lives and our health.

DG: Today … we sit down with one advocate to understand her vision for the crucial role patients must play in the growth of health care AI.

From the studio at the Leonard Davis Institute at the University of Pennsylvania, I’m Dan Gorenstein. This is Tradeoffs.

****

AD: My name is Andrea Downing and I am co-founder and board president of a patient-led nonprofit called the Light Collective. 

DG: Andrea’s journey as a patient advocate began nearly 20 years ago in a doctor’s office in Austin, Texas.

AD: When I was 25 years old, my genetic counselor and doctors sat me down because we had an incredibly strong history of cancer in my family. And so we took a genetic test, my mom and me, and I tested positive for a genetic mutation. And what that meant was I had up to an 87% chance of developing breast cancer, and up to a 60% chance of developing ovarian cancer in my lifetime.

DG: This genetic bombshell upended Andrea’s life. But through online support groups — many on Facebook — Andrea connected with other women who shared similar risks for breast cancer. The groups became an anchor for her.

AD: And then in March of 2018, Cambridge Analytica happened.

News Clip: Facebook stock tanking after allegations data firm Cambridge Analytica secretly harvested the personal information of 50 million unsuspecting FB users.

AD: And I asked myself, “If our support groups and vulnerable communities are residing on a platform like Facebook, how can data be scraped or misused in ways that could harm us instead of benefit us?”

DG: Andrea had started her career working in Silicon Valley so she used her know-how to look under the hood of Facebook’s tech.

AD: I will never forget this just chilling, visceral moment when I started to do this research, and I found ways that our support groups could be infiltrated, could be hacked, and used against us in ways that I wasn’t aware of. And around this time we just realized, okay, we’ve got to do something differently. So we founded the Light Collective and began to do this work giving people a path to learning about technology that affects them in health care.

DG: So, Andrea, you’d already spent years thinking about how to protect patients from the dangers of health technology. Then all of a sudden, health care AI bursts onto the scene: computer programs trained to perform tasks normally done by humans, like diagnosing illness, scheduling patients, or determining whether something is covered by insurance. What was the moment that made you realize, damn, patients need to be a part of this AI conversation, and they’re not?

AD: So in 2022 people were kind of coming to the Light Collective and saying, “Oh, this is a patient group that is learning about technology.” So I started getting invited to all of these meetings where I would be, like, a patient on a panel being asked to talk about it. But at the same time, I started recognizing, okay, if I’m the only patient in the room, I can’t represent everybody. We have to get these grassroots, diverse communities who are impacted by this to come ready to engage in these conversations about shaping policy, about shaping, you know, this new technology in a way that’s designed for our benefit rather than to harm us.

DG: And this may seem obvious, but why in that moment did you feel it was so important to have patients involved? Because, like, I can imagine people saying most patients are not doctors, most patients are not coders. Why should people without any kind of medical or AI expertise be involved in making decisions about health care AI?

AD: Well, we bring a lot. So the first thing we bring is a unique view of what the priorities and outcomes need to be for a new technology to work. The second thing we bring is a really good understanding of what threats or harms may befall a community if we don’t proactively design a technology in partnership with researchers and tech companies.

DG: You talk about threats. What happens if patients really, at the end of the day, are not in these conversations? What are you already seeing play out around health AI that is scaring you?

AD: When we create predictive models with AI that are used on patients without their knowledge or consent, we can’t get a second opinion, and we’re making life or death decisions about a patient without even helping them to understand or be empowered by that decision. So it’s really easy for us to just build a predictive model or a new startup that makes claims that are totally snake oil; they’re not validated, they’re not proven. And then if we don’t get patients into these discussions, we lack accountability to the group that is actually going to be affected or harmed if something goes wrong.

DG: So it sounds like you have had folks come to you and ask you to provide a patient perspective on AI but it felt more like tokenism, and you’re trying to get patients a legitimate seat at the table with regulators, health systems and developers. What has happened when you try to get a reservation at the table?

AD: [Laughs] Well, I’m not going to name names here, but there were definitely some codes of conduct and early initiatives before 2022 where we knew about them, our names were kind of floated to be in these rooms, and they were like, “No.” So, yeah, in the early days we were not invited to the table. But I do want to point out that’s changing. There are certain initiatives that have really reached out to us with open arms.

DG: And these are groups made up of developers, hospital systems?

AD: Yeah, there’s this group out of Duke called Health AI Partnership, and they reached out to us and said, “Hey, you know, we believe in what you’re doing and we need to find ways to support this work.” And then within the National Academy of Medicine, we’ve been given the opportunity to come to what’s called a cross-cutting workgroup with federal agencies, health systems, developers and researchers. It’s also a bit of a test for us to say how far can we move the needle, how far can we push for change? 

DG: After the break we talk with Andrea about what some patients may need to overcome their skepticism of AI, how to bring greater diversity to these conversations and what success looks like. 

MIDROLL

DG: Welcome back. We’re talking with Andrea Downing, co-founder of the health tech patient advocacy group The Light Collective, which recently released a list of seven rights for patients — things like security, privacy and transparency. Andrea, we’ve heard lots of people in government and industry say they want patients at the AI table, but I think there are legitimate questions about what meaningful patient engagement with AI looks like.

So let’s just take one of these rights that you have outlined: patient-led governance. You describe this as patients “co-creating the rules that govern how AI is designed and used in health care.” In your dream world, what does it actually mean for patients to be co-creating AI rules?

AD: It means that we not only have a seat at the table, we are driving innovation that is for patients and by patients. And what that looks like is drawing from a whole body of research. It includes really good diversity and representation. It includes early and ongoing engagement, and it means truly sharing power instead of this uneven power dynamic that we see today.

DG: One thing that struck me, Andrea, as I was preparing for this interview, was the fact that so many patients do not trust AI. A 2023 Pew survey found 6 in 10 Americans would be uncomfortable with their doctor relying on artificial intelligence in their own care. What, in your personal experience, helped engender trust for you with this new technology? 

AD: So there is this core of me that is all about facing fear. And it goes from, you know, that first story I told you about my experience with BRCA to facing that fear in Cambridge Analytica and saying, “Okay, yeah, this is scary. The only way to deal with that is to dive right in with a sense of curiosity and meet all the people who are doing this and learn about it.” That’s just who I am. And luckily, we have so many people in the Light Collective that have that same mindset.

DG: I guess what I hear you saying is we need enough patient advocates and that those people, that sort of beachhead of individuals can begin to go back to their respective communities and share with them what is being learned. And that is how, through relationships, larger trust is built.

AD: Yes. When you’re in a place of fear, it’s time to learn. It’s time to learn about the science. It’s time to organize your community. It’s time to teach each other so you have strength in numbers. And then build bridges with the experts who want to do the right thing and figure it out together.

DG: So you have, Andrea, forced your way to the table as a patient advocate, and that has been tough. But you’re also quick to point out that you’re White and cannot speak for the vast diversity of patient experiences. What is one concrete thing you’ve done to ensure more diverse patient voices are also being heard?

AD: When we put together the Patient AI Rights Initiative, the majority of advocates who co-authored the document are people of color and representing advocacy organizations serving their communities. And we did this because these advocates and these communities have direct experience encountering disparities and bias in health care.  

DG: Final question, Andrea: What does success look like for you? Like, if we had you back on the show in a year, what would need to have happened for you to feel like it’s been a good year for patients and health care AI?

AD: What success looks like is taking what we crafted in the Patient AI Rights Initiative and putting it into practice in a way that is legally binding or the standard of practice for anybody developing AI in health care. If you are a technology company, if you are a federal agency, if you’re a health system, we want to partner. Change in health care takes time and it’s hard. So I don’t know all of the answers of how we would operationalize this, but we hope that that would be the next phase of work.

DG: Andrea, thanks so much for taking the time to talk to us on Tradeoffs.

AD: Thank you for having me.

DG: You can see The Light Collective’s full Rights for AI Patients along with more Tradeoffs reporting on AI in health care on our website tradeoffs.org.

I’m Dan Gorenstein, this is Tradeoffs.

Episode Resources

Episode Credits

Guest:

  • Andrea Downing, President and Co-Founder, The Light Collective

The Tradeoffs theme song was composed by Ty Citerman. Additional music this episode from Blue Dot Sessions and Epidemic Sound.

This episode was produced by Ryan Levi, edited by Dan Gorenstein, and mixed by Andrew Parrella and Cedric Wilson.

Additional thanks to: Ysabel Duron, Valencia Robinson, Mark Sendak, Christine Von Raesfeld and the Tradeoffs Advisory Board and our stellar staff!

Ryan is the managing editor for Tradeoffs, helping lead the newsroom’s editorial strategy and guide its coverage on its flagship podcast, digital articles, newsletters and live events.