Generationship
22 MIN

Ep. #15, Mother of All Life with Gülin Yilmaz

about the episode

In episode 15 of Generationship, Rachel Chalmers sits down with Gülin Yilmaz of Rosette Health. This episode dives into the current state of healthcare in the US and discusses how large language models (LLMs) can play a crucial role in enhancing patient-provider communication and education. Gülin shares insights on mitigating the risks of AI hallucinations, the pros and cons of AI hype, and the transformative potential of AI tools in improving diagnostic speed and accuracy. Discover how technology is reshaping the role of healthcare professionals and empowering women to access top-tier care, regardless of their location or income.

Gülin Yilmaz is Co-founder & CEO of Rosette Health. She is an AI/ML product leader who is also an expert in maternal health. She has a passion for building next-generation products and for helping pregnant people get the care they deserve and need. Prior to founding Rosette Health, Gülin was head of product at Stork Club. She has spent time as a COO, VP of product innovation, and product manager at companies such as Facebook, CompactCath, and Carrum Health.

transcript

Rachel Chalmers: Today I am thrilled to welcome Gülin Yilmaz to the show. Gülin is an AI/ML product leader who is also an expert in maternal health. She has a passion for building next-generation products and for helping pregnant people get the care they deserve and need.

Prior to founding her startup, Rosette Health, Gülin was head of product at Stork Club. She has spent time as a COO, VP of product innovation, and product manager at companies such as Facebook, CompactCath and Carrum Health. She received both her MS in behavioral decision making and her MBA from Stanford University.

Gülin, thank you so much for coming on the show.

Gülin Yilmaz: Thank you for having me, Rachel.

Rachel: It's a pleasure. What is wrong with reproductive healthcare right now and where might large language models help?

Gülin: It is safer to be pregnant and give birth in all high-resource countries, and several moderate-resource countries, than it is in the United States. We have preventable deaths, most of which have mental health as a root cause, high rates of severe maternal morbidity, high rates of premature birth, and, last but not least, racial disparities in care.

It is absolutely disgraceful to know that black people are two to three times more likely to die in childbirth when you normalize the data for income and education level. It's just unacceptable.

Rachel: Serena Williams, who is married to billionaire Alexis Ohanian, almost lost her life in childbirth despite being one of the wealthiest black women in America.

Gülin: Yeah, absolutely. And Rachel, the root causes are multifactorial and very complex. But I do think that LLMs can make a big difference in facilitating communication, patient education and empowerment.

In fact, I recently graduated from the NIH's I-Corps program along with Professor Ellen Tilden, who is our clinical medical director. And through that program we've had 103 conversations in eight weeks with clinicians, health systems and payers. And I understood, perhaps for the first time, how fragmented our healthcare system is in the United States.

There are silos everywhere, and given the long episode of care for pregnancy and postpartum, these silos are even more prevalent. Postpartum care is separate from prenatal care, separate from mental health, separate from neonatal care and pediatrics. And I think LLMs could help, starting with providers and then moving to the patients, by bridging the gap between the patient and the provider.

In fact, there are multiple levels of translations that need to happen from medical language to layman's terms, right, everyday terms. Cultural interpretation is another layer. And again, last but not least, language translation. If we make all of this happen, then this can enable providers to be more empowered to communicate with their patients.

It would enable shared decision-making and respectful maternal care. On the patient side, patient education and empowerment, and really giving decisions over one's own body back to the person.

Rachel: So your hypothesis is that the poor maternal outcomes in the United States come from a couple of main sources. One is poor communication between patients and care providers, and another is poor communication across the care providers themselves.

And so the language models can help by improving communication from patient to provider and by having a sort of institutional memory of the state of this particular pregnancy that can be transferred from provider to provider.

Gülin: Absolutely.

And I do not mean to undermine in any way other complex root causes, such as lack of universal healthcare, right, and systemic racism. However, I do think that one of the easiest ways that LLMs can come in to help is the facilitation of that communication, because we know that when we listen to patients, they are able to express most of what's going on and their preferences, resulting in better outcomes.

Rachel: How dangerous might hallucinations be in this context?

Gülin: Yes, hallucinations are a very important concern, especially with AI applied to the healthcare industry. Obviously, patient safety comes first.

This is why we are putting a lot of emphasis on building a system that has guardrails in place. And I think these guardrails are a combination of state-of-the-art techniques, if you will, as well as collecting high-quality data, in addition to making sure that we are designing a human-centric system, whether that's for the patient or the provider, where the human experts are still in charge.

Rachel: So this is not just a skin in front of GPT-4. This is a model that you've trained specifically to help women and babies not die.

Gülin: Exactly, yes. As I was mentioning earlier, we use a combination of techniques that includes retrieval-augmented generation, an AI technique that aims to reduce hallucinations in AI models.

We use an LLM as a judge, we have a human in the loop, and on top of that we have our own proprietary technology, which we continue to develop. We'll touch on this a little bit later on, I'm assuming. But trust and patient-centeredness are very much key to our product offering.

Rachel: So this is how you think about risk mitigation. You have retrieval augmented generation, so you're pulling in data from the real world to sanity check the output of the AI.

You have AI sitting in judgment on one another. You have a nurse practitioner or a doula checking the information. And all of this is designed to reduce the risk of a hallucination.

Gülin: That is correct. And also Rachel, we are serving providers first with AI simulated patients. So we are not directly communicating with the patients, we're serving providers first.

You could think of it as a simulation training, and then next phase will be to add the patient in the loop with the provider still in the loop. So we're taking it step by step.

Rachel: Oh, that's really clever. So you're doing a test where you know no fetus is involved. And so the providers are both learning to use the system and learning to trust the system. And the system is learning to be trustworthy before you introduce a pregnant person into the mix.

Gülin: Correct.
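For readers curious how the guardrails Gülin describes might fit together in practice, here is a minimal, hypothetical sketch in Python of a retrieval-augmented generation pipeline with an LLM-as-judge check and a human-in-the-loop fallback. All function names and data are illustrative placeholders standing in for real LLM calls; this is not Rosette's implementation.

# Hypothetical sketch only, not Rosette's code: chains naive retrieval,
# a stand-in generator, a grounding check ("LLM as a judge"), and a
# human-in-the-loop escalation when the check fails.

from dataclasses import dataclass


@dataclass
class Draft:
    answer: str
    sources: list[str]  # document IDs the answer claims to rely on


def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    # Naive keyword-overlap retrieval; a real system would use embeddings.
    query_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(query_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]


def generate_answer(query: str, context_ids: list[str], corpus: dict[str, str]) -> Draft:
    # Stand-in for an LLM call instructed to answer only from the retrieved context.
    snippets = "; ".join(corpus[doc_id] for doc_id in context_ids)
    return Draft(answer=f"Per the retrieved guidance ({snippets}) ...", sources=context_ids)


def judge(draft: Draft, context_ids: list[str]) -> bool:
    # Stand-in for an LLM-as-judge call: here, simply verify that every
    # cited source was actually retrieved (a basic grounding check).
    return bool(draft.sources) and all(s in context_ids for s in draft.sources)


def respond(query: str, corpus: dict[str, str]) -> str:
    context_ids = retrieve(query, corpus)
    draft = generate_answer(query, context_ids, corpus)
    if not judge(draft, context_ids):
        return "Flagged for clinician review."  # human in the loop
    return draft.answer


if __name__ == "__main__":
    corpus = {
        "postpartum-warning-signs": "postpartum warning signs include severe headache and heavy bleeding",
        "prenatal-visit-schedule": "prenatal visit schedule by trimester",
    }
    print(respond("Which postpartum warning signs need urgent care?", corpus))

The design point the sketch tries to capture is the one Gülin describes: anything the judge cannot verify against retrieved sources is escalated to a human rather than returned to the user.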

Rachel: The fear of course in all of these industries is that AI will be used to replace jobs. Some of my best friends are nurses. What are some strategies for healthcare workers to coexist with these kinds of AI tools in workplaces of the future?

Gülin: Such a good question. For me, the general idea is to aim for AI acting as a force multiplier and not as a replacement for humans. We strongly believe that humans will never be out of the loop. That human connection matters a lot.

But what can we do, given the rising cost of healthcare, given the provider shortage, given the rural parts of America where you can't meet face-to-face with providers? As an industry, I think we will collectively get there. And solutions which are AI-native, built with the intent of being force multipliers, will, I believe, thrive in the future.

We're already seeing concrete examples of such products, such as digital scribes for providers. They aim to improve the amount of quality eye-to-eye, if you will, one-on-one time spent between the patient and the provider, which makes complete sense to us. They save time on the provider side so that the provider can spend time connecting with the patient.

There is a set of different technologies being developed to improve the speed and accuracy of diagnoses made by providers. And at Rosette, we are developing products and technologies aiming to extend the reach of providers, both in time and location.

Rachel: How does your background in behavioral decision making at the graduate level inform this product that you are building at Rosette?

Gülin: Oh, this is such a good catch, Rachel. So I've always been fascinated with how human beings decide to trust or not trust each other.

When I was at Stanford doing a master's in the engineering school, I took decision and risk analysis classes, but I also complemented them with behavioral decision-making classes that take emotional decision making into account. I basically designed my own concentration. It was really fun.

I wrote a paper on behavioral decision making, and in particular I was trying to predict mathematically how humans trust based on the source of information, which is so relevant. This was almost 20 years ago now.

Later on, I worked in trust and safety, both at PayPal and Meta. And at Meta I worked on identifying social determinants of trust. So my background and personal interest in how humans trust each other are directly applicable to my work right now at Rosette, as we put humans, again patients and providers, at the center of what we're building.

So we ran many experiments and learned, for instance, that trust is multifaceted. We've talked about hallucinations and, of course, we want accurate information, but there's more than that. There's emotional safety, there is the human connection: is there someone who cares at the end of the day? Because if they do, then I'll share.

But then there's also, wait, I'm very ashamed. I will not share. I will share with AI, not with a human, unless you make sure that I'm safe. So there are many different facets to it, and I'm fascinated every day.

I hope I'm answering your question. Trust in humans is at the center of our design, and I think my background informs me, or at least reminds me how important it is, not to lose sight of customer interviews.

Rachel: No, I think that's a really good answer. And obviously this conversation makes me reflect on the births of my own children who are in college now. So at the time that you were doing your masters, I was making decisions about my pregnancies and they were some of the most consequential decisions I will ever make.

And I switched providers from my first child to my second. I wanted a more human-centric birth experience. And for my second, I had a doula who also worked with a friend of mine, and my OB-GYN was also a friend of a friend. It wasn't just the hierarchy because the first hospital I went through was very, very highly rated.

It was this peer network of people who had had to make similar decisions, whom I trusted highly because of our personal connections, feeding into that set of decisions about which providers to trust to get a good outcome.

So you are exactly right, the emotional safety, the combination of official rating from a trusted third party institution, but also being in my city and being enmeshed in this community of women who were raising children and having that network as well, to bring another perspective to bear.

Gülin: Yeah, it's very multifaceted. Yeah.

Rachel: And the AI will need to operate in the world of that complexity. It will need to be able to integrate itself into those whisper networks as well as the official hierarchies.

How does that background in decision making inform the choices you make in your own career? People have told you that startup founder is a very risky job, right?

Gülin: Yes, they have. I left a lucrative salary behind.

Rachel: You left Meta, that's good money.

Gülin: Yes, yes. You know, Rachel, after becoming a mother, which was almost a decade ago now, I felt like my value system changed; it was as if that value function changed almost overnight.

And basically I found myself in this wonderful place of being able to work hard only on topics that really, really spoke to my heart. And having fallen in love with becoming a mother and with babies, I'm a birth doula in training, I wanted to serve women, starting with mothers and babies. I don't know exactly how my background plays into it, but I know exactly how my motherhood plays into it.

Rachel: That's a really great perspective, and it puts your idea about the force multiplier into a really great context as well. You are wanting to serve women, and obviously middle-class and wealthy women get the best care. It's wanting to extend that care to everybody, to broaden the scope of the people who get those services.

Gülin: Yeah, that's a great way to look at it. I am very compelled to serve mothers and babies and to protect that one wonderful connection, which then becomes a protective layer against mental health diseases and addictions.

And then, being a techie also at heart, I have been thinking about how we can bring the latest technology into the service of women so that they can access the best care, as you put it, from anywhere in the world and independent of their income level.

But to answer your question, I think in my own choices for my career, I was looking at it from a trust perspective again, and I do think that I trust my own desires and intuition quite a bit, in addition to the guidance I get from a wonderful group of coaches and mentors.

Rachel: Yeah, pregnancy does teach you to trust your intuition more than any other single experience in life, I think.

Gülin: Yes, yes.

Rachel: As a startup founder, do you worry about the intensity of the hype around artificial intelligence and machine learning right now?

Gülin: I want to say yes and no. Yes, I worry, because I'm seeing many companies, not just in the healthcare vertical but in other verticals as well. We're part of accelerator programs, one of which is the Alchemist.

Many companies are doing what looks to me like the exact same thing: AI agents, automated calls. And I wonder how they will differentiate themselves over time and whether they have a solid business model to survive the hype.

As for my company, I think we are being very thoughtful because yes, we're dreaming very big, but we are combining that big dream and vision with an actual business model that makes sense, that can bring in venture scale returns.

So that's the part where I say no: if you have a business model to start with and you are using AI to extend it, or, to come back to the phrase, to use it as a force multiplier, then we have nothing to worry about.

Rachel: What are some of your favorite sources for learning about AI?

Gülin: So I want to answer that question in two ways. There's our wonderful AI scientist team that brings me up to speed, and they have a whole bunch of resources that they check.

They're in Reddit communities. They read blogs such as towardsdatascience.com. They are on Twitter and YouTube. I know that they have multiple Slack channels that they're in. They're going to conferences, they're following many people, thought leaders in the space, such as Andrej Karpathy on Twitter and YouTube. And we're also reading Hacker News from YC.

So while they're doing that, cherry-picking and tagging me in our Slack channels to bring me up to speed, I am reading actively in the healthcare AI space. Not just deep technology, not just healthcare, but both. I love reading a16z's Julie Yoo's thought pieces. She's a general partner on the Bio + Health team.

And I love working with Professor Ellen Tilden from OHSU, who's our clinical medical director, as I may have mentioned. And she is thoughtfully choosing and curating relevant and trustworthy publications as it applies to our space.

And then, last but not least, I'm a bit geeky in this space. One of my favorite things to do on a Friday evening, after spending quality time with my son and husband, is to have non-time-bound reading time on PubMed. Basically, I go through recent papers from 2022, 2023, 2024, so very recent academic papers, especially on AI applied to maternal and neonatal health.

Rachel: I love that vision of you on a Friday night, kicking back with PubMed and a glass of wine. You are living your best life. That's wonderful.

Gülin: Exactly.

Rachel: Gülin, I am going to make you God emperor of the solar system. For the next five years, everything shakes out exactly the way that you think it should. What does our world look like five years from now?

Gülin: Oh, this is just so fun to imagine. So I'm going to tell you my answer, but I want to tell you why I'm giving that answer. I strongly believe that in the near future, providers will encounter, deal with, and give service to patients in quite different ways, in different contexts.

Because of all the information that is currently accessible, patients have clinical and basic medical information a lot more easily. Diagnosis is happening faster. Radiology images are now being read by AI, like recent mammography, in addition to the radiologist and the doctor.

I think the role of the doctor is already shifting from diagnosis to patient support, I want to say patient education and patient mental care. So given that shifting role and the human connection, in a way I'm super excited, because the human connection gets elevated again. All else is automated, of course, with humans in the loop to make decisions, but the patient-provider connection matters a lot.

With that hypothesis and thesis in mind, I see everyone having access to always-on, trustworthy, professionally trained support and care in their pockets, in their hearts and minds, accessible from anywhere, not just pregnant people and families. I think everyone can, and this technology can enable it.

Rachel: I mean, it's continuing a trend that we've already seen with nurse practitioners becoming able to prescribe medication. Doctors no longer have a monopoly on healthcare, and the value has obviously shifted to that continuity of care, that continuity of support.

I can see how AI could fit into that trajectory and continue extending it.

Gülin: Yeah.

Rachel: Last question. My favorite question, if you had your own Generationship on a journey to Alpha Centauri, what would you call it?

Gülin: The first name that came to my mind is Gaia, the Greek goddess of Earth, mother of all life. I imagine this mother of all life, this divine power, exploring the stars. That's what came to mind.

Rachel: That's absolutely beautiful. Gülin Yilmaz, thank you for coming on the show. It's been a delight to talk to you, and very best wishes with all your endeavors.

Gülin: Thank you, Rachel. Thank you for having me.