Committee Hearing · House

PA House Health — 2026-03-24

March 24, 2026 · HEALTH · 15,394 words · 18 speakers · 123 segments

Representative Dan Frankel

Good morning. It's 9 o'clock. I'm going to bring this informational hearing of the Health and Communications and Technology Committees to order. We are having a hearing on artificial intelligence in the healthcare sector, and I want to start by asking the members who are with us to briefly introduce themselves and I'll start on my far left.

Representative Valerie Gaydos

Thank you, Mr. Chair. My name is Valerie Gaydos, from the 44th District, Allegheny County.

Representative Joe Ciresi

Joe Ciresi, the 146th District, Montgomery County.

Representative Dan Frankel

Okay.

Representative Kathy Rapp

Representative Kathy Rapp, I'm from the 65th, Warren, Forest, and Crawford Counties.

Representative Jason Ortitay

Jason Ortitay, the 46th District, Allegheny and Washington Counties.

Representative Joe D'Orsie

Joe D'Orsie, 47th Legislative District, York County.

Representative Charity Krupa

Charity Krupa out of the 51st District in Fayette County.

Representative Jamie Walsh

Jamie Walsh, 117th Legislative District, Luzerne County.

Representative Brad Roae

Brad Roae, the 6th District, Crawford and Erie.

Representative Sean Dougherty

Sean Dougherty, 172nd District, Philadelphia County.

Representative Napoleon Nelson

Napoleon Nelson, 154th Legislative District in Montgomery County.

Representative Jim Prokopiak

Morning, Jim Prokopiak, 140th District, Bucks County.

Representative Ben Waxman

Hi, I'm Ben Waxman, 182nd in Philadelphia County.

Representative Nikki Rivera

Good morning, Nikki Rivera, House District 96 in Lancaster City.

Representative Kristine Howard

Good morning, Kristine Howard, the 167th District in Chester County.

Representative Dan Frankel

Thank you, members. I'm Representative Dan Frankel, the Majority Chair of the Health Committee, and I welcome all of our testifiers and guests today, both here in person and virtually. This is the second half of a conversation about AI and health care. During the first part of the meeting, we heard from academics and experts about what the evidence shows about the use of AI in health care so far. We learned that spending tends to increase in health care with all new technology, and so far that seems true for AI, but also that we're seeing evidence of some improvements in care and reduction of physician burden. We learned that those who are using AI have varying abilities to evaluate that AI, and that a central tension to consider is how AI can reduce burnout for frontline caregivers, not add to it. Today we'll be hearing about artificial intelligence in practice. We'll hear from hospital systems, frontline providers, and insurers about how they deploy AI and what it looks like in use, including, hopefully, what's working and what isn't. With that, I welcome my other chairs. We have my counterpart, Chair Rapp of the Health Committee; the majority chair of the Communications and Technology Committee, Representative Ciresi; and the ranking member, Representative Jason Ortitay. Chair Rapp, do you have any comments?

Representative Kathy Rapp

Thank you, Mr. Chairman. I'd just like to take a moment to welcome all the testifiers. I'm sure any information that you give us will be very helpful in making decisions on legislation. So I welcome you today and look forward to all of your information.

Representative Dan Frankel

Thank you, Chair Rapp. Chair Ciresi?

Representative Joe Ciresi

Thank you very much, both Chairman and Chairwoman, for having us today. This is an interesting time for all of us as we look at the changing times, not only in technology but in health care, and at how it will affect us in the future. The things we do today will affect tomorrow and beyond. We heard a lot of interesting testimony at the last hearing, and I know today we'll hear more. I look forward to working with the Health Committee on this, along with my colleague Chairman Ortitay, and to looking at ways we can make sure that all of our citizens are safe and have the best care they possibly can have. So thank you both for having us.

Representative Dan Frankel

Chair Ortitay.

Representative Jason Ortitay

Thank you, Mr. Chairman. I'm looking forward to continuing the conversation. I know we've had many hearings on this so far and a lot of discussions behind the scenes on certain pieces of legislation. So it's good when we can have panels like this that give us really good, solid information that we can work with and help refine into legislative successes. So thank you.

Representative Dan Frankel

Thank you. Our first panel is on hospitals, and I'd ask each of the testifiers to briefly introduce themselves in terms of their background. On the first panel we have Dr. Adusumalli and Dr. Kruklitis. I hope I pronounced those correctly. While everybody's sitting down, we also have virtually with us Representative Inglis and Representative Shusterman; Representative Rossi, who's here; and Representative Kosierowski, who joined us as well. Great. Dr. Adusumalli?

Dr. Srinath Adusumalli

That was perfect. Thank you. Good morning, everyone. My name is Srinath Adusumalli, and I'm a practicing cardiologist and health informaticist, as well as Vice President and Chief Health Information Officer at Penn Medicine, particularly the University of Pennsylvania Health System. I also serve as an Associate Professor of Cardiovascular Medicine and Informatics at the Perelman School of Medicine and Adjunct Professor of Healthcare Management at the Wharton School at the University of Pennsylvania. Chair Ciresi, Chair Ortitay, Chair Frankel, and Chair Rapp, as well as members of the House Communications and Technology and Health Committees, thank you so much for holding this joint informational meeting on health AI. We at Penn Medicine appreciate the opportunity to share our perspectives on this dynamic space. I apologize that I could not be there in person with you today; I have a previously scheduled enterprise town hall locally, also on health AI, at least partially, later this morning. The following slides, which I'll pull up here shortly, represent the highlights of our written testimony, which has also been submitted. Can I get a confirmation that you all can see these slides? Yeah, we can see them. Amazing. Thank you. So I'm hoping to spend the next five minutes or so taking you through some of the highlights of our submitted testimony, including some of the use cases where we're seeing benefit and some of the opportunities around health AI. First, I wanted to tell you a little bit about how we're articulating the case, the platform, for health AI here at Penn Medicine. Number one, we think that change is imperative. The status quo in healthcare delivery, we think, is simply not sustainable, and that's for all the reasons I think you all have already discussed: demand exceeds, and will continue to exceed and surpass, the capacity of our current systems to deliver care in the way we currently deliver it.
We know that our clinicians, our nurses, our care teams all together have a tremendous amount of administrative burden, driven in part, but not exclusively, by the EHR. We know that our health system, our employers, and our patients face increasing financial pressures, particularly around health care and its affordability. And we also know the nature of our field is changing, very much so, both in the domain of consumerism, in terms of more autonomy and agency, which we think is good for our patients and consumers, and in the democratization of medical information, so that more people, including our care teams, can engage with it. And if you haven't read the article by internist Dr. Dhruv Khullar at Cornell called "The Role of Doctors Is Changing Forever," it's a great read; I would highly recommend it in relation to health AI. The other thing I wanted to do was ground the rest of my comments in a common mental model around health AI. I know you all are familiar with this already, but one point I would like to make from this slide is that AI in healthcare, we believe, is not new. What is new is the increasing accessibility to it. Everyone has AI on their minds. But we know this has been a journey, particularly in healthcare, since the 1950s, when AI started as simple if-then rules-based systems and then progressed through the machine learning era, the deep learning era, and finally this relatively newer era since 2022 around large language models and generative AI. This is a great table from a JAMA paper on this topic: it started back in the '50s with those if-then rules, then came the deep learning era, built around use cases like speech recognition and diabetic retinopathy screening, and now, again, the era of foundation models and generative AI embedded within healthcare delivery.
One of the other ways that we like to frame AI within our organization is with a focus on the A in AI, and that A is augmented intelligence: how can we use AI technologies to enhance and augment our clinicians and our entire care teams, elevating human reasoning, judgment, and empathy? To us, that means a laser focus on human-centered design. How we design these systems, and particularly how the human and the computer interact, is so critical. We think that's an area that has traditionally received less focus, but it is very critical at this moment in time. And then also, how are these tools integrated into workflow so the benefits can be surfaced there? There are a couple of key areas of focus for most health systems, including ours, around AI: using it to bolster access to, and appropriate triage to, safe, high-quality care and diagnostic testing; changing from a reactive healthcare delivery system, in general in our country, to a more proactive one; using it to train, recruit, retain, and ideally delight (not necessarily a word we typically use with health IT) our clinicians, consumers, and patients; reducing administrative burden across teams; improving the quality, safety, and equity of care delivery; and then ideally, since we're all working on this collectively as a community, becoming thought leaders in how to do this responsibly. We think that all needs to be grounded in broad organizational competencies, across the missions, in productive utilization of health AI. Despite everything I mentioned that is changing, there's a lot that's not changing, including the critical nature of the patient and care team connection.
And we try to emphasize that in our work. So, to give you a couple of use cases around how we're using generative AI for health, I wanted to tell you about James. James is a patient who suffered a fall requiring a short hospital stay, and imaging in our hospital revealed a 1-centimeter incidental pulmonary nodule requiring immediate follow-up. He didn't have a primary care clinician, and this finding was lost to follow-up. But we've been using generative AI to scan our records, detect overdue recommendations, and then triage those patients to appropriate sites of care. That has already shown us the ability to detect cancers that otherwise would have gone undetected. One use case. Number two is around ambient intelligence. I believe you all have heard much about ambient intelligence, the idea of translating a conversation into a fully formatted note. These are just some quotes here showing how this has transformed the nature of how our care teams work, in terms of reducing documentation burden. But the two quotes at the bottom, "far less time savings than anticipated" and "the amount of time needed to check the note," show that we have work to do in terms of the context we need to provide to these models so that they deliver the most accurate content possible. And finally, we're using generative AI in another very promising use case, this one for patients. I was in clinic yesterday, and we've built a ChatGPT equivalent that's embedded in the EHR and can extract insights and summarize tomes, practically Moby Dick-length medical records, into summarized content that I can take action on at the bedside. That surfaces findings that, in many cases, may not have otherwise been surfaced at all. So it's a new mode of interaction with the EHR.
I just want to let you know that the way we're thinking about evaluating AI is very rigorous, all the way from anchoring ourselves in the problems to be solved, through single-site and multi-site review, to scaled deployment with stage gates in between, and in some cases all the way to randomized-level testing. We have a data and AI governance group, as many health systems do, that focuses on the areas depicted here. And we're trying to work through considerations on ensuring human-centric AI, everything from implications for our labor force to implications for the environment, although I will admit many of these questions don't yet have clear answers. We're trying to provide these answers to our organization as well. So with that, a couple of closing comments, and then I would love your questions. We think that going forward, augmented intelligence will be woven through healthcare journeys, and we need to learn how to incorporate it safely, responsibly, and effectively. We'd like to generate space to learn by fostering innovation and safe, monitored pilots in service of patients and clinicians. We would love to collaborate with you and other practitioners across the state to develop effective and implementable safeguards, and then to right-size those guardrails as well, recognizing that our current system is far from perfect and that there are many improvements we can realize in current practice by leveraging effective human-computer collaboration. With that, thank you for your time, and I look forward to our conversation.
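
The overdue-recommendation scan described in the testimony can be approximated, in greatly simplified form, by a rules-based pass over radiology report text. This is an editorial sketch only: Penn Medicine's actual pipeline uses generative AI rather than regular expressions, and every field name, pattern, and window value below is an illustrative assumption.

```python
import re
from datetime import date, timedelta

# Hypothetical follow-up phrases a screen might look for in report text.
FOLLOWUP_PATTERN = re.compile(
    r"(follow[- ]?up|repeat (ct|imaging)|recommend surveillance)", re.IGNORECASE
)

def find_overdue_followups(reports, today, window_days=90):
    """Return ids of reports whose recommendation lapsed unaddressed.

    `reports` is a list of dicts with illustrative keys: id, text,
    report_date (datetime.date), followup_completed (bool).
    """
    overdue = []
    for r in reports:
        if FOLLOWUP_PATTERN.search(r["text"]) and not r["followup_completed"]:
            # Flag only once the recommended window has clearly passed.
            if today - r["report_date"] > timedelta(days=window_days):
                overdue.append(r["id"])
    return overdue
```

In this toy version, a report like James's ("1 cm pulmonary nodule; recommend follow-up CT in 3 months") with no completed follow-up would surface for triage once the window elapses; a production system would also need to resolve negations, dates stated in free text, and follow-ups completed outside the home system.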

Representative Dan Frankel

Thank you, Dr. Adusumalli, and if you'll stick around, we'll have the testimony from Dr. Kruklitis, and then open it up for comments and questions. Dr. Kruklitis.

Dr. Robert Kruklitis

Good morning. Good morning. On behalf of the Guthrie Clinic, I would like to thank Chairs Frankel, Rapp, Ortitay, and Ciresi, as well as the House Health and House Communications and Technology Committees, for allowing me to speak today on the impact of AI in health care. My name is Dr. Robert Kruklitis. I am an executive vice president and chief clinical officer at Guthrie, as well as a practicing pulmonary and critical care physician in the Commonwealth. Guthrie is an integrated rural health care system. We have six hospitals, three of which are located in Pennsylvania. We cover over 11,000 square miles with 10,000 employees. We consider ourselves thought leaders in utilizing AI to improve outcomes and support clinicians. In our written testimony, we outlined several areas where Guthrie is leveraging AI. Given time constraints, I'm going to focus on one important clinical area supported by Guthrie's Pulse Center. So first, what is Guthrie's Pulse Center? It's a transformative care model that provides 24/7, 365-day access to clinical support throughout our network. The people working at the Pulse Center are highly trained, Guthrie-employed professionals. Their jobs involve leveraging technology, including AI, to improve patient outcomes, enhance patient experience, and optimize operational efficiency. It's really important to note that these teams do not replace our bedside staff; rather, they augment the bedside staff. Our patients can opt out, but very few, if any, ever do. And every single clinical decision at Guthrie is ultimately made by a human. So what I wanted to speak to you about is how we're using AI in sepsis. What is sepsis? Sepsis is a severe, life-threatening condition. It's probably the leading cause of death at most hospitals throughout Pennsylvania, as well as the nation. And we know how to treat sepsis, but first we need to diagnose it.
So imagine, if you will, that I'm in one of Guthrie's intensive care units, taking care of a patient in bed one. I'm talking to the nurse, I'm reviewing the chart, and I'm interviewing the patient. At the same time, completely unbeknownst to me, we're getting critical information back on a patient on the other side of the unit, maybe in bed eight. This patient is having a fever. Their blood pressure is starting to drop. They have an elevated white blood cell count and an elevated serum lactate. The x-ray shows an infiltrate at the top of the left lung. With all of this information, we'd be able to make a diagnosis of sepsis and begin treatment. But the problem is that I may not know about that information, because I'm caring for the patient on the other side of the unit. This is how Guthrie is starting to use AI. Our AI is continuously looking across all of our patients for any signs that might be concerning for sepsis. At the first signs of sepsis, it will ping a nurse in the Guthrie Pulse Center. The nurse reviews the case, and if they have any concern, they'll page me and say, hey, Dr. Kruklitis, you really need to go see the patient in room eight as soon as possible; we think that patient might have sepsis. But making a diagnosis is not all that AI is helping us with. You see, the treatment for sepsis is complex and time-sensitive, right? We need to start antibiotics. We need to draw blood cultures. We need to order various lab tests and administer IV fluid resuscitation. AI can keep track of my treatment to make sure that we're meeting those time-sensitive deadlines, that we're not falling behind. And with this technology, we are seeing real impacts at Guthrie. We've seen a considerable decrease in the mortality of our patients. Now, I think it's fair to say that numerous recommendations could be made regarding AI in health care. I'm going to focus on one.
We highly recommend that Pennsylvania establish and support pilot programs to help hospitals adopt AI and integrate these technologies into care delivery. The reality is that many hospitals do not have the financial capability to develop innovative care models on their own. While we're eager to help others implement AI care models like the Pulse Center, we cannot do so without financial support. I can say with confidence, and with extensive data, that AI has great potential to strengthen care delivery. As legislators, you can help make this possible. And Guthrie, with our award-winning and nationally recognized Pulse Center, wants to be an active partner in this work. We're really encouraged to see the Pulse Center highlighted in the state's Rural Health Transformation Plan. Thank you so much for allowing me to testify.
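
The screening logic described in the sepsis testimony (fever, falling blood pressure, elevated white count, elevated lactate) can be sketched as a simple rules-based check. This is an editorial illustration, not Guthrie's actual system: the thresholds are common textbook values, the field names are invented, and a deployed system would combine a trained model with the nurse review step described above.

```python
# Illustrative sepsis screen over the latest observations for one patient.
# All keys and cutoffs are assumptions for demonstration purposes.

def sepsis_flags(vitals):
    """Return the list of concerning signs present in a vitals/labs dict."""
    flags = []
    if vitals.get("temp_c", 37.0) >= 38.3:          # fever
        flags.append("fever")
    if vitals.get("sbp_mmhg", 120) < 90:            # dropping blood pressure
        flags.append("hypotension")
    if vitals.get("wbc_k_per_ul", 7.0) > 12.0:      # elevated white count
        flags.append("leukocytosis")
    if vitals.get("lactate_mmol_l", 1.0) >= 2.0:    # elevated serum lactate
        flags.append("elevated lactate")
    return flags

def should_alert(vitals, threshold=2):
    """Ping the monitoring nurse when enough signs co-occur."""
    return len(sepsis_flags(vitals)) >= threshold
```

The key design point the testimony emphasizes survives even in this toy form: the alert goes to a human (the Pulse Center nurse) for review, and every clinical decision remains with the care team.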

Representative Dan Frankel

Thank you, Dr. Kruklitis. I appreciate the testimony, and I welcome any questions. I'm going to open it up to the members. Representative Nelson.

Representative Napoleon Nelson

Thank you. Thank you both so much for your testimony. Doctor, you gave a great example of how you're using AI across the hospital; the example you provided was sepsis. Help me understand. If an AI system is identifying that a patient is showing signs of sepsis, I assume there will be some sort of warning signs that, in this case, the AI decision support system can identify, even though nobody else is there in the room. If that information is passed along to a nurse or a doctor, what then is the expectation within your hospital for follow-up from that caregiver?

Dr. Robert Kruklitis

So the AI is looking for signs that might be concerning for sepsis and passes that to the physician or the bedside nurse. They then do their regular clinical work. They start to evaluate: is this data accurate? Are there really signs of sepsis? Has it already been recognized? Perhaps, in some cases, the antibiotics have already been started, the cultures have already been drawn, the treatment has already been initiated.

Representative Napoleon Nelson

Great.

Dr. Robert Kruklitis

It's a double check, making sure we're doing the right thing. In other instances, we may find the AI is the first to notice. We've got data being populated in all of our patients' records throughout the day, and it's impossible for us to be in those records simultaneously, looking at every single patient the instant the information comes in.

Representative Napoleon Nelson

And so I guess the question for me is: if a physician or a nurse who's caring for that individual patient is suspicious of, or not trusting of, AI, and they disregard the red flag that the system is providing, saying that there are signs here that this patient is having some of those elevated temperatures, how does that work within your system? Can a physician say, yeah, yeah, yeah, I'm not as interested because I don't trust you yet?

Dr. Robert Kruklitis

You know, I haven't seen caregivers, nurses, and physicians disregard clinical information because they don't trust AI. Certainly, what we think is very important is that once we've got this information, we need to evaluate it. Whether I got that information through AI or not, it triggers me to review the medical information, and we make a diagnosis based on our interpretation at that point.

Representative Napoleon Nelson

Thank you. Thank you.

Representative Dan Frankel

I want to note that we're also joined by Representative Arowski, Representative Khan, Representative Brown, and Representative Friel Otten. Representative Waxman.

Representative Ben Waxman

Thank you, Chairman, and thank you so much for hosting this, both the Health Committee and the Communications and Technology Committee. I very much have a growing concern about the fact that a very, very small number of companies in the world control most of the AI technology and its development and deployment. And I'm wondering if I could ask one of you to just describe the systems that you use. How did you obtain them? What was the procurement process? What agreements do you have in place with some of these larger AI companies, or do you have your own system that you control completely?

Dr. Srinath Adusumalli

I can start. We have a process that we have implemented called the New Technology Review Committee. It's not only for artificial intelligence, but for all kinds of new technologies that touch our electronic health record system. As part of that, we're doing continuous environmental surveillance to look at current and emerging solutions across three buckets. One is the set of solutions produced by our electronic health record vendor, which are deeply integrated with workflow. The second is solutions produced by vendors in particular spaces, solving particular areas of importance for the organization. And the third is areas that are underserved, white spaces, as we might call them, where we might build something ourselves. Our objective for each of those buckets is to do a comprehensive analysis of what solutions are on the market, what data they have, and then how we put them together in our environment, because we feel our responsibility to our clinicians and our patients is essentially the product of how all those tools work together. So you're correct that there are a number of foundation companies producing much of the AI, and there are a number of electronic health record platforms, but there's also a burgeoning environment of other companies producing products that solve for various use cases. And so we try to include those in our ecosystem of solutions.

Dr. Robert Kruklitis

Yeah, I guess I would just add we have a very similar technology and AI governance committee. Certainly no decisions are made by a single individual. It's really important for us to start with the clinical problem that we're trying to solve. You know, don't start with the technology and then figure out how to use it, but rather what is the issue that we're trying to improve upon and then try to find the appropriate technology to help solve that problem.

Representative Ben Waxman

I guess perhaps I didn't phrase my question properly, so let me try to re-ask it, if that's okay. I'm trying to understand the intersection between the facilities where you work and these very, very large AI companies that have emerged over the past couple of years. Specifically, what goes into purchasing one of these systems and deploying it? Do you have an ongoing contract with these companies? AI and the use of AI is becoming critical across so many fields, including, obviously, the medical field, and my concern remains that there's a small number of very large companies at this point that control most of the technology. So I'm just trying to understand how you navigate that, and especially how you're dealing with the associated costs. Are you purchasing systems that you can use in perpetuity? Are they somehow rented? This is what I'm trying to get at.

Dr. Srinath Adusumalli

So I guess a couple of points on that. There are a small number of foundation labs, the companies we might all think of, who provide many of the models that underlie the technologies we have. That being said, based on those sets of models, there are lots of other companies that then build products on top of them, whether that's the EHR vendor or other companies, say a startup. And those are typically the companies we're interacting with, because they've taken a model, which in and of itself might not be that useful out of the box for our clinicians, and actually turned it into something useful, with outcomes that are meaningful for patients and clinicians. In terms of the agreements, many of these are software-as-a-service models, meaning the models run in the cloud. And then there's a variety of ways one could pay for them, whether that's an agreement that covers an enterprise, an all-you-can-eat model, or perhaps a pay-per-seat or pay-per-use model. It varies across the board.

Representative Ben Waxman

Great. Thank you so much. Let me ask you, while we're on kind of the topic of the contracts, how much negotiating power do you have over liability if there's a malfunction with respect to that? I mean, if something goes wrong, is the developer carrying any of the liability?

Dr. Srinath Adusumalli

Yeah, I'm not sure that I know how to directly answer your question. You know, in health care, it's obviously critically important to have backup systems and processes. Our philosophy has been that we're building this on top of standard bedside care. We always want to have a process to care for patients, regardless of whether there are AI or any other technological problems or malfunctions.

Representative Ben Waxman

Thank you.

Representative Dan Frankel

Representative Eric Nelson.

Representative Eric Nelson

Thank you, Mr. Chair. Right here in the middle. So with the onset of AI, we've really seen significant growth in use. We have a bunch of kids, and recently somebody broke a bone, smashed her finger. As we go to the hospital, we hear of AI reading the x-rays, and that it's a much faster process and maybe even more accurate. But when you were providing your short testimony, you were talking about the need for additional money to be able to do AI. I'm looking at it through a very different lens, where we have the talent in nursing and the talent in doctors. Can you share how AI can streamline and help reduce medical costs? Because it sounds like you might be asking for more money in an already stressed system, versus a tool to help reduce costs.

Dr. Robert Kruklitis

Yeah, I think the ultimate goal is to provide higher quality care more efficiently at lower cost, right? And I think that is super important in healthcare; costs in healthcare are really not sustainable. So I think we need to look to technology to try to accomplish those goals. The broken bone example is another use case of AI, right? Patients are coming into the emergency departments, a number of scans are being performed simultaneously, and the queue of these images is building up in a radiologist's work queue. AI now can start to help prioritize: better look at this scan first; there's a potentially critical finding on this scan. We, of course, need to have physicians overread those.

Representative Eric Nelson

But going into the arena of cost, are you referring to short-term costs to help establish the system, which will then deliver savings? Or are you referring to an ongoing, additional billing line item for AI support services? Where are these cost savings going to show up? How do we benefit a very stressed health system by reducing costs with AI?

Dr. Robert Kruklitis

Yeah. So obviously there are startup costs to acquire this technology and to utilize it; there's a cost associated with these companies. I think we need to use the technology ultimately to be more efficient and effective, in order to bring down overall health care costs. How do we redeploy our health care providers? There are so many jobs in health care; there are so many needs. If we use this technology to be more efficient, we can expand our reach and provide better care for the population of patients that we're serving.

Representative Eric Nelson

Thank you.

Representative Dan Frankel

Thank you. Dr. Adusumalli, maybe you can address specifically: what does it cost your system to have this technology?

Dr. Srinath Adusumalli

Yes. To address the comment, a couple of thoughts on cost, and even on the quality, safety, and equity, of health AI: it is not purely a technology question, of course. In both cardiology and my other profession, informatics, we talk about how a lot of the outcomes of technology are driven by people and process; that's 80 percent of it. So in order to reduce costs and derive the outcomes, in health care as in other industries, you need to redesign, or even reimagine, the people and the processes in health care, knowing what the art of the possible is with technology. There are two ways we're thinking about doing that, and I think many are. One is being more proactive, identifying disease earlier in the disease process. For example, rather than someone being admitted with end-stage heart failure who needs a left ventricular support device, in my field of cardiology, can we catch that patient before they had their myocardial infarction, their heart attack, because we treated their blood pressure and their cholesterol and all of that better up front? How can we identify those progressing along the spectrum of risk and connect them to the appropriate therapies? That's one way. Another is that, oftentimes, our methods of delivering care have been anchored on visits: one must have an ambulatory visit in person, brick and mortar, or be in the hospital to receive care. What health AI also allows us to do, in combination with other modalities of care like virtual care, is to continuously monitor and continuously be in touch with patients in ways that we can't achieve just by having a human call and reach out. So that is another way of being more in touch and proactively, continuously managing disease over time.
The costs to the system, just to address that question as well, are driven by a couple of factors. One is the cost of the technology itself; as I mentioned, there are a variety of models for paying for it. But as we discussed earlier, you also have to build the whole support structure needed to validate these systems, and then, once a system is in production and running, it needs to be monitored. You need to be able to detect drift as the system continues to work, and to decide whether the system needs to be decommissioned. So there need to be structures for that, within each health system and, I think, more broadly at a state or national level.

Representative Dan Frankelassemblymember

All right. I'm going to close out this panel with Representative Gatos.

Valerie Gatosother

Thank you, Mr. Chair. I'm a big fan of AI in medicine. I think, as you demonstrated, when you're looking at CAT scans and things like that, many times a young doctor now has the experience of multiple doctors, because AI is able to pick up things that a young doctor might not. But my question is a little more remedial. Do doctors use AI when they do patient notes?

Sri Dussamaliother

And the answer is yes. We're increasingly leveraging the technology. First and foremost, electronic medical records have become so large, with so much information, that AI can provide a vital resource in pulling the important information to the front. Some records are, frankly, impossible to read; they're so large. So pulling the pertinent information into the foreground, to make sure critical data is not missed, is certainly one way we're using AI to help with documentation. Ambient listening is another. Maybe some of you have experienced it: you go to your physician, they ask if they can turn on their microphone, the conversation proceeds face-to-face with your doctor, and this populates a medical progress note or a consultative note. In some respects, the last 10 or 15 years may go down as one of the worst eras in which to practice medicine. Why do I say that? Because we all had to become data entry clerks. I personally had patients say to me, "Are you going to look at me, or are you just going to type into the computer?" The ambient listening technology is alleviating that burden. I can now be in the room, present with the patient, face-to-face, making eye contact, listening, more engaged, and not have to worry about typing into a computer. That, I think, is a really great use case for AI.

Valerie Gatosother

So you're kind of going where I was going with that. At some point, though, is that an AI technology that would end up replacing doctors? Because now you wouldn't need that physician in the room, because you could just be talking to AI and it would actually solve your problem. That's kind of where I was going with that.

Sri Dussamaliother

Geez, I don't think so, and I sure hope not. I think that critical clinical decision-making skills are really necessary in order to understand what problem the patient really has, how we interpret that, how we diagnose it, and then how we treat it.

Valerie Gatosother

So what are hospitals doing to ensure that this is integrative and augmentative, not replacing? Maybe it's just the inevitability of what we're all concerned about with AI: replacing certain jobs, particularly in the information sector. When we've seen doctors use AI just to summarize their notes, without the listening devices, I think it's like spellcheck. It should be augmentative, not replacive, if that's a word. We're seeing that people are getting sloppy in their records, and then that adds to the data that's out there, and it becomes a vicious cycle. That's the concern people have, particularly in medicine. When a mistake like that is made in medicine, and I think this references some of the other questions, it can be fatal. When it's legislation or law, that's a different story. But the concern people have with medicine is: what is the oversight, internally, to make sure that information is correct? It's just a concern, and I'm not sure how you can 100% eliminate it. People's tendency is to get a little lazy, perhaps, when technology can replace things, and we're seeing it in all sorts of industries. A simple example is spellcheck: you get documents back that someone didn't reread, and spellcheck got it wrong. So it's just a concern we have. But thank you for your answer.

Sri Dussamaliother

That is an absolutely critical and relevant concern. I would say we view this as absolutely augmentative. The other thing: I was in clinic yesterday using ambient listening all day, and what it allowed me to do was take my eyes away from the computer and focus on the patient and on what I am there to do: reasoning, judgment, empathy, relationships. The documentation happens in the background. What we need to be critically attuned to is how we design the interaction afterwards to ensure the accuracy of the documentation: first, that the tools are as good as they can be, so we monitor that quality; and second, that we design the interaction to promote active review.

Representative Dan Frankelassemblymember

Thank you. I want to thank both of our panelists for taking the time to be with us and inform our committees today. We really appreciate it.

Representative Dan Frankelassemblymember

And we're going to move to our next panel, a panel of two providers: Registered Nurse Lori Kreider and Dr. Kirkland Kaith. I hope I got that correct. Again, we'll listen to each of your testimonies and then have an opportunity for members to question and comment. Okay. I'll start with Lori Kreider.

Lori Kreiderother

Good morning, chairs and members of the committee. Thank you for having me. My name is Lori Kreider. I'm a registered nurse with over two decades, almost 29 years now, at Hershey Medical Center. I currently serve as SEIU Healthcare Pennsylvania chapter president and vice president at Penn State Hershey. On behalf of the 25,000 frontline caregivers in our union, 2,100 of whom are nurses from Hershey, from hospitals to home care, thank you for inviting the workers to speak about AI. First, I want to say that I am an optimist. I am encouraged about the prospect of AI making our jobs easier, especially with charting and patient monitoring, which I think are very important: less time spent by caregivers like myself away from the patient, sitting at the desk and charting. Over the years, our health care system has been caught in a staffing crisis. Nurses and caregivers are overworked, burned out, and continue to leave the bedside. Patients are sicker and more complex, and our system and workers are being stretched to the max. At Hershey Medical Center, just last week, we saw our beds at 111% capacity. That means patients are sitting in the emergency room with no place to go. We don't have enough beds for the patients coming in, and we can't find placement for the patients who are ready to be discharged. In that high-pressure environment, any tool that can safely streamline the paperwork and get nurses like myself back to the patient's bedside is a win. But we must be intentional. AI can be a powerful co-pilot, but it must never be the pilot. As a union, our position is simple: AI must be used to maximize, not remove, the human-to-human connection that is the heartbeat of nursing. There's a phrase we use in nursing: pain is what the patient says it is. AI can process data points, but it cannot provide empathy.
It cannot hold a grieving daughter's hand or sense the subtle shift in a patient's breathing that signals a turn for the worse. We must distinguish between technology that supports care and technology that merely replaces care to maximize profits for big tech or insurance executives. Consider an example from my own backyard. At Hershey, we have a wonderful program called the Emeritus Nurse Program, where retired RNs who no longer work at the bedside, due to age, physical conditions, whatever, are hired to come in, work as many or as few hours as they want, and handle patient discharges. This allows those experienced nurses to provide direct support and counseling to patients making the difficult transition out of the hospital. They help educate those patients, go over their discharge instructions, and confirm that all their medications are correct. Some of that could be done through AI. But there is nothing like having somebody sit there and talk back and forth about the discharge, somebody who can read the look on a patient's or family member's face that says, "I don't understand what you're saying to me. Could you say it again, or explain it a better way?" It's a really good program, and the retired nurses love it, because it gives them just enough nursing after years at the bedside. If we move programs like these to an exclusively remote, AI-assisted process to cut labor costs, we lose the wisdom of those nurses and the safety of that human touch. Efficiency should never be a euphemism for abandonment. Furthermore, we must address the efficiency myth. We fear that administration will use AI as a justification to increase patient loads under the guise that the computer is doing half the work. Let me be clear: AI does not change the laws of physics, or the physiological needs of a critically ill patient in an ICU.
The clinically proven minimum standard for safe staffing in an ICU is one nurse to two patients. No amount of algorithmic monitoring can replace the physical presence required to titrate life-saving medications, or to prone a patient in respiratory distress, meaning put them on their stomach, as we did so often during COVID. Currently, many hospitals at times fail to meet these basic safety standards due to lack of staffing. In this age of AI, we need the protections found in House Bill 106, the Patient Safety Act. AI should be a tool that helps us finally achieve safe patient limits by offloading administrative burdens, such as charting and the eMARs. It should never be used as an excuse to load already overburdened nurses with more patients. A more efficient nurse is still just one human being with two hands; AI cannot be used to stretch those hands across three or four ICU beds. Better staffing leads to better outcomes. That's what we always say. AI should be a tool that helps us achieve those safe patient limits by freeing us from the machinery of bureaucracy, not by becoming a machine that stands between us and our patients. We've seen the risk of black-box technology portrayed in popular culture; those of you who have watched The Pit saw how AI inaccuracies in a clinical setting can lead to devastating outcomes. In health care, people make mistakes and systems make mistakes. But when a human makes a mistake, there is accountability and a consequence. When an algorithm makes a mistake, sometimes based on biased data that can disproportionately impact marginalized racial or economic groups, who is held liable? To ensure AI serves patients and workers rather than just profit margins, we need firm legislative guardrails. At SEIU Healthcare PA, we are advocating for several core principles in the approach to AI in health care. Number one, human in the loop.
The standard that all clinical assessments supported by technology must ultimately be made by human decision makers.

Lori Kreiderother

Number two, transparency is mandatory. Clinicians must understand how an AI reached a recommendation in order to trust and verify it. Patients must always be informed and must consent when AI is involved in their care, and AI should never be able to pose as a health care professional. Number three, worker voice is essential. Frontline workers must have a seat at the table. We are the experts on how care is delivered. We are working to bargain standards into our contracts about the implementation of AI, but all health care workers, union or not, should have a voice. We are calling for labor-management committees to oversee the implementation of new patient care technology. And number four, corporate responsibility. As big tech builds massive data centers in our state, often receiving hundreds of millions of dollars in tax exemptions, they must pay their fair share to support the very hospitals and nursing homes where their software is being deployed. Let's use this technology to give nurses their time back so we can do what we are called to do: care for people. And keep this in the back of your mind: would you want a robot or AI giving you your sponge bath or placing your IV? Thank you.

Representative Dan Frankelassemblymember

Thank you, Ms. Kreider. Dr. Kaith.

Kirkland Kaithother

Thank you, chairs and committee members. My name is Dr. Kirkland Kaith. I'm a board-certified psychiatrist, currently a fellow in forensic psychiatry at the University of Pennsylvania; prior to that I completed college, four years of med school, and then four years of psychiatry training in Philadelphia. I'm here testifying on behalf of the Pennsylvania Psychiatric Society, our state district branch of the American Psychiatric Association, an organization of nearly 1,500 psychiatrists practicing in our commonwealth. We also seek to advocate for the patients and families we devote our practice to helping. I thank you for the opportunity to share our organization's experiences and hopes for the use of artificial intelligence in the care of patients with mental illness. We've noted with some concern efforts to greatly restrict this modality, which has real potential to significantly improve care. At the same time, as we've said in written testimony before, I should start by acknowledging that there are a lot of risks to AI in clinical settings, particularly in mental health. I'm sure plenty of people here today have seen in the news AI hallucinations and how they might impact patients who are particularly lonely and seeking emotional support from chatbots. That's not what I'm here to talk about today. Similar to some of our fellow testifiers, I think it's important to think of AI in the context of augmented intelligence: how it can assist providers, in my case mental health providers, in addressing disparities in care and improving the quality of care we bring to our patients. Our member psychiatrists have countless examples of patients who spent too much time interacting with AI before coming to see us, and at the same time, examples of where utilizing AI in our clinical practice can help us see more patients.
With that in mind, what I'm talking about today, the use of AI in the clinical setting, falls under what I'll term decision support, as well as assistance with record keeping. In this way, some of what I'll describe isn't too different from what you've heard today from other health care providers. Decision support is already present; it was one of the main innovations of the electronic medical record that a database can identify things like drug-drug interactions, which, especially with some of the more complex psychiatric medications we use, can be particularly useful. It can also assist in the dosing of medications, since certain medications are dosed by the patient's weight. Integrated in the office, this can be a tool that improves efficiency and also the rigor with which we try to be as evidence-based as we can be in our care. The other thing that is especially important in the care of psychiatric patients: patients with a diagnosis of major depressive disorder, for example, or other psychiatric conditions, often have a lower life expectancy. One of the greatest contributors to that is actually chronic medical conditions that go unaddressed, whether due to the impacts of mental illness or to the fact that the patient keeps coming to health care providers and being seen as "the psych patient." One hope for AI and decision support is that it could prompt providers to also check on mental health patients' medical concerns, or make sure they're going to a primary care provider. Some of our medications, for example, require screening of the patient's blood sugar at least every six months to a year to ensure there aren't side effects going unnoticed.
The other thing that I think is important to discuss is ambient listening, similar to what you've heard earlier in testimony today. It has particular relevance in mental health. Other specialties, before AI, had what we call scribes: people who could be in the room to help document a clinical encounter, with the attending physician verifying the note, as a way to see more patients in a day without cutting back on clinical care. In psychiatry that was really a barrier, because we have the privilege of hearing some of the most confidential things our patients might share with us, things that are often not available to other providers in the electronic medical record. AI presents the possibility of ambient listening that can assist us in writing our notes in psychiatry as well, doing so in a way that would need to be HIPAA-compliant and able to support, if needed, an even higher level of confidentiality, which is often required for certain patient populations. Also, as was said today, nobody likes to be in the exam room, or the consult room in psychiatry, and see their provider looking at a computer, especially their mental health provider. And under certain models for medication management, a psychiatrist, a physician specializing in mental health, may only be allotted a certain amount of time to see patients and is expected to use part of that time charting while seeing the patient. AI can really assist with that.
Another consideration is how AI can be used to develop and update what are called treatment plans. In mental health, for example, where I practice in the city of Philadelphia, every single patient whose insurance is administered through CBH, a Medicaid provider, has to have a treatment plan submitted every six months. That is something where augmented intelligence can really reduce administrative time while ensuring things like this don't fall through the cracks. I do think it's important to note some issues that need to be addressed as AI continues to develop and modernize. Our field of psychiatry has a specific level of nuance that it's essential we don't miss with AI. One example I found interesting: we have a lot of patients who might be speaking about things that aren't reality-based. I've had patients talk about how their pet might be listening to them in a way that a human would, and not, as you'd imagine, the way a normal cat would. The AI will focus on the subject of their conversation and not on the fact that this might be, for example, a delusion, a belief that is false and fixed and needs to be recognized by a provider. The other part is requirements for notes. Every medical note, including one by a psychiatrist, who is still a medical doctor, has to meet insurance requirements. AI has the potential to assist in writing notes so that psychiatrists and mental health providers can focus on the aspects of the note where we really want to ensure that level of nuance, and then verify the other aspects. For example, a medical review of systems, which the AI can both help providers remember to check on and help them document.
Finally, and this is even more in a development stage, there's plenty of data available through the electronic medical record. On the forensic side, I was reviewing a case that had three years of outpatient records; I believe that was almost 700 pages. Psychiatry notes tend to be longer, and there's also some repetitive information in these charts. Yet that is a lot of rich detail that no provider might have time to fully go through, and there could be very important details in it, especially for a patient whose particular warning signs, say, that they might be having thoughts of harming themselves, might not be as apparent to a provider just starting with them as to one who has known them for years. So AI presents the ability for us to better organize what is often too much information presented to us in medicine. Beyond that, as I mentioned earlier, protecting patient privacy is important, particularly in mental health, and when I say mental health I include substance use treatment. That needs to be a concern in considering how AI is going to be regulated in health care, along with ensuring patients are properly informed and that these systems are integrated in a manner where patients are aware of them. It can also be very important that these systems not become a barrier. I have patients who, for a variety of reasons, may not trust a computer listening to them, and that should not mean they are unable to get care because they can't fill out a form or a scale online when trying to get an appointment with a provider who might have a long wait list, and they can't get through on the phone.
So with that, in summary, the Pennsylvania Psychiatric Society looks forward to using AI for the decision support and record keeping it can provide, and to further discussions with the legislature and the administration on how best to use this exciting opportunity to add value to the care we seek to provide. Thank you all for your time today, and I'd be happy to answer any questions you might have.

Representative Dan Frankelassemblymember

Thank you, Dr. Kaith. Let me ask you to start. Apparently there are a number of AI-enabled mental health devices that the FDA has approved, things like EndeavorRx, an ADHD video game treatment for children; DaylightRx, a cognitive behavioral therapy device; and NightWare. Can you comment on the efficacy of those?

Kirkland Kaithother

I think it's a very good question. The first thing to consider, and I'm not particularly familiar with some of these examples, is whether a product is FDA-approved, meaning it has gone through rigorous testing, versus FDA-cleared. The separate thing that's important, through the APA, is that the use of AI in mental health does have a lot of potential to extend care, though there really should be a certain point at which a real person is involved, especially if there are warning signs. An example: I had a peer who developed, through AI coding, a platform to help patients with an alcohol use disorder. This was an app where the patient isn't continuously speaking with a provider, but it works alongside their provider, and the provider has vetted and reviewed it themselves. In terms of mental health applications that are entirely separate from a provider, I can have our organization get back to you with opinions, but I don't know that I'd be able to comment on that.

Representative Dan Frankelassemblymember

Chair Rapp.

Kathy Rappother

Thank you, Mr. Chairman. I was very intrigued by your testimony, certainly with the increase in mental health concerns from children to adults. When Representative Gatos was talking about the spellcheck example, you were talking, basically, about the AI misinterpreting, and I think your example here illustrates it. Do you mind if I read it?

Kirkland Kaithother

Go ahead.

Kathy Rappother

You didn't mention it, but the AI system was interviewing a patient with a psychotic disorder, schizophrenia. The patient suffered from delusions, one of which he described as his belief that his cat was in fact an alien spying on him, and he was not sure if he should take it to a shelter. But the AI recorded this in a note as "patient having trouble with his cat," a very different thing. That's actually quite alarming, the misinterpretation in the notes. So I think those issues, just like the spellcheck example, the misinterpretation, or not actually grasping the entire situation, are really pretty critical.

Kathy Rappother

Well, "having trouble with your cat" is a lot different than "I think my cat's an alien and he's spying on me." I'm sorry, I shouldn't even chuckle at that. But when I read this, I was hoping you would mention it, because it hits home as a really good example of how the AI did not really record or grasp what was going on in the whole interview process. So thank you. It's a lot for us to think about as we move forward with AI in medicine, not just physical health but mental health as well. Thank you so much.

Kirkland Kaithother

Thank you, Representative.

L

Thank you, Mr. Chairman.

Representative Dan Frankelassemblymember

Chair Cerisi.

L

Thank you, Mr. Chairman. Thank you both for your testimony; I find it very intriguing. The one question: when you record these conversations, who has access to them? How do we know that someone isn't going to access the conversation, the meeting? How do we know AI isn't going to share it somewhere else? What are the safeguards that you've seen in place for that?

Kirkland Kaithother

Yeah, I think that's a really good question, and that's a conversation that also has to happen at the level of the health care system. When I think of AI hearing anything that is highly sensitive, I think it's essential that this is integrated within a health care system, where, for example, the applications that provide the electronic medical record, one is called Epic, would integrate AI within that platform. And, I'm probably not going to do the most justice to the technical terms, but organizations and companies in finance already do this with very sensitive data: they run a version of AI that is kept separate from the greater cloud of information. That is where I think it's really important to be prudent, so that there isn't a chance this information could cross over. And it's really encouraging that other testifiers today have talked about how they try to implement AI in a responsible manner at a health care system level.

L

And I know we talk about the technology being new, and understanding it from the bedside. Of course, 100 percent, we wouldn't want a robot to do any of this stuff. I said it, I think, the last time: I harken back to that one episode, I'm a big Star Wars fan, where the robots delivered the babies, and you think, is this where we're going with technology? So it is something that I think we need to be cognizant of. But in your profession, over the last 25 years, when you look at AI, what would be exciting to see AI be able to do, more so in the future than now? Maybe not so much at the bedside, but other things that maybe we're not looking at it doing?

Lori Kreiderother

Definitely we would want it to cut back on our charting. Our charting is so redundant. We're switching to Epic. We are in Cerner right now, but by the end of the year we'll be in Epic. I think that's going to help us a lot, because we'll be able to communicate among a lot of the systems. We'll get a patient coming from, say, Mount Nittany, which has a different system than we have, so we're relying on photocopies of their charts, and the doctors are trying to read through and pull stuff out of there. But if they have Epic and we have Epic, then it just transfers over, which will save us a lot of time, and we won't miss things. A lot of times we'll miss medications, or treatments that have already been done, or blood work that has already been done, and then we're doing a lot of repetitive work, which is a cost. It will be cost-effective that way. You already did a CT scan yesterday; why are we doing another one today at another hospital? And it's a lot of repetitive work for the nurses: we just did this blood work, why are we doing it again? Because we can't see the results from the other hospital, or whatever the case may be. It gives me more time at the bedside with the patient, to do care, do a real good bed bath versus a real quick wipe-down, comb their hair, sit with them, talk to them, get to know them, make them feel more at ease so they won't need anti-anxiety medications. A lot of these patients are by themselves; they come from all over. I want to spend more time at the bedside. I hate having to spend all my time charting. I used to do it by hand, so the computer is definitely better, but I used to do it by hand. We also have a system we call the VICU, a virtual ICU, which runs throughout our hospital and is staffed by ICU nurses who sit, I'm not even sure where they are. Somewhere. They used to be in St. Louis. We have a different system now. But basically, they can camera in to us, and they can see our patients and monitor our rooms. It's not AI, but they can see, and they can talk to us. If we have concerns and we don't have a physician available, we can hit the button and they'll come up and say, hey, can you just look over their chart? They have full access to the patient's chart and monitoring; they watch their monitors. They're kind of a second set of eyes, as we tell the patients. They'll help us with admissions. If we get a new admission from an outside facility or from the emergency room, they'll come on, and you can see the person up on the screen; they'll introduce themselves and ask all the admission questions that we may not have time to ask, get all that paperwork out of the way, and then they'll flag us or communicate with us through our TigerText system about things we may need to look at: okay, this patient has had diarrhea for several days, maybe we should put them on precautions for this, that, or the other thing. They're wonderful. Sometimes nurses think they're kind of a pain, because they're like, hey, did you see this? Yes, I saw it. But I've had those nurses monitoring my patient's cardiac rhythm, and they saw the patient going into a life-threatening rhythm that I didn't see, because I was in with another patient. We were able to get that patient off to the operating room, and they ended up with a pacemaker. But those are a human set of eyes.

L

Thank you.

Representative Dan Frankelassemblymember

Ms. Kreider, what kind of frontline training has there been for members, you know, for your coworkers? And have any coworkers lost their jobs because of AI?

Sri Dussamaliother

Not to my knowledge. Now, we are really, like, 100% into this Epic. So actually, from now until January, none of us are even allowed to take long-term vacations because we're implementing this Epic system. So to my knowledge, not at the bedside, no. In fact, we need more nurses at the bedside. And I don't think it's a lack of nursing. I think it's people not wanting to work at the bedside, because it is so difficult. I mean, you know, if I didn't go to the chiropractor once a week and get monthly massages and go to the gym and swim and all that stuff, I don't know if I could do it after all these years.

Representative Dan Frankelassemblymember

We appreciate that you and your coworkers do do it. Any other comments or questions? Well, thanks to both of you. We appreciate your time this morning and helping inform our committee.

Sri Dussamaliother

Thank you. Thank you.

Representative Dan Frankelassemblymember

We'll now turn to our final panel, our insurers. We have Michael Yantis, Michael Barber, and Jonathan Greer. Please join us. And you guys are well known to many of us, but please introduce yourselves.

Dr. Robert Kruklidisother

Good morning. Just so everybody understands, I mean, it's a very busy day in the Capitol.

Representative Dan Frankelassemblymember

So we've had people coming and going, mostly going, apparently. I wouldn't take it personally. Understood. Understood. And we're joined by Representative Torres. Go ahead.

Dr. Robert Kruklidisother

All right, great. Thank you, Chair Frankel, Chair Rapp, Chair Cerisi, and Chair Ortitai. We appreciate everyone's passion and interest in this issue. Michael Yantis with Highmark; Michael Barber, my colleague, is the AI expert. He is the person from whom you want to hear. I am the public policy person for Highmark. We operate as Highmark Blue Shield here in central Pennsylvania, but we're part of the larger Highmark Health family, which includes the Allegheny Health Network hospital and health system in western PA. We participate in the Medicaid and Medicare programs. We also have a presence in Delaware, West Virginia, western and northeastern New York, as well as 30 counties in Missouri and two in Kansas. So that is the lens through which Michael will present the ways in which we are using artificial, or augmented, intelligence to help create a remarkable health experience for our members and customers. So, Michael.

S

Thank you. Yeah, Michael Barber, Senior Director of Responsible AI at Highmark Health, at the enterprise level. So I operate across all business units, managing our AI policies, our AI governance process, and AI governance approvals for use cases in production. I also sit on the University of Pittsburgh Responsible Data Science Advisory Board. Oh, is that not on? The light was on. It's just too far away. Is that better? Yes. So, yeah, I also sit on the University of Pittsburgh Responsible Data Science Advisory Board and on a couple of committees at IUP. I teach two classes in AI ethics at IUP as well. So happy to be here. Thank you again. Great conversation. Listening to a lot of the earlier testimony, I would agree with most of what I heard. We are doing a lot of the same things. Some of the things that I didn't hear quite as much about that I think are important are the ways we use AI to identify fraud and waste. I think that's very important in controlling the cost of health care. And with the rise of easier access to some AI systems, that actually creates more fraud, and so now we've got to counteract that with our own AI. We actually have some systems that can identify images, you know, CAT scans, X-rays, MRIs, that were generated by AI and aren't real, which is a very interesting space. We're testing that now. And then we can also use AI to gather information from past claim submissions, identifying providers who submit the same thing over and over with just very minor changes. A human would have a hard time doing that, going back through all of that information, where an AI system can bring that to the front very, very quickly. We also use AI in processing claims. We still get claims that are faxed in, in this day and age, and so we have optical character recognition systems that can read those faxes, digitize the data, and, again, gather the relevant information for a claims examiner.
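[Editor's note: purely as an illustration of the repeated-submission pattern described above, here is a minimal sketch of how near-duplicate claim lines could be surfaced for human review. This is not Highmark's system; the data, similarity measure, and threshold are all hypothetical.]

```python
from difflib import SequenceMatcher
from collections import defaultdict

def flag_near_duplicates(claims, threshold=0.9):
    """Group claim descriptions by provider and flag pairs that are
    nearly identical: the 'same thing over and over with very minor
    changes' pattern a human would struggle to spot at scale."""
    by_provider = defaultdict(list)
    for provider_id, description in claims:
        by_provider[provider_id].append(description)

    flagged = []
    for provider_id, descriptions in by_provider.items():
        for i in range(len(descriptions)):
            for j in range(i + 1, len(descriptions)):
                ratio = SequenceMatcher(None, descriptions[i], descriptions[j]).ratio()
                if threshold <= ratio < 1.0:  # near-identical but not an exact repeat
                    flagged.append((provider_id, descriptions[i], descriptions[j], round(ratio, 2)))
    return flagged

claims = [
    ("prov-1", "CT scan, abdomen, with contrast, 2026-01-03"),
    ("prov-1", "CT scan, abdomen, with contrast, 2026-01-04"),
    ("prov-2", "Routine office visit, established patient"),
]
print(flag_near_duplicates(claims))
```

A production system would use claim codes, amounts, and dates rather than free-text similarity, but the routing idea is the same: the system only surfaces candidates; an examiner makes the call.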
So those are some of the ways that we're trying to control cost. We are also working very hard to improve our member and patient experience. We've got some direct member-facing AI things in operation right now, where somebody calls in and the typical kind of chatbot that answers the phone recognizes certain questions and will reroute that call to an AI system that can pull on information in the background and actually have a conversation and answer questions. Obviously, when people call in to a call center, you want to get your answers accurately, quickly, without multiple transfers, without hitting the button 15 times to try to get a human on the phone. So we're working very hard in that space. The other thing we're doing in our call center space is with training of new call center reps, human reps. We've got an AI system that they can actually interact with and have kind of simulated calls, so that they can get through their training faster. They can expose themselves to some of the more challenging types of calls that would come in. And that helps tremendously; we've seen a great reduction in staff turnover. You know, typically call centers have very high turnover. Anything we can do to reduce that not only saves cost overall for the entire system, but allows people to be operating at the top of their license. They're more educated. They've got better experience. And by rerouting some of the simpler things to an AI system, they can concentrate on the things where we really need a human to answer the questions.

Lori Kreiderother

Good morning. My name is Jonathan Greer. I am the president and CEO of the Insurance Federation of Pennsylvania, which is a state trade association representing insurers in every line of insurance in Pennsylvania, including the commercial health insurers with a presence in the state.

U

Joining me this morning is Megan Barber, who is our Executive Director of Government Affairs.

Lori Kreiderother

Thank you for the opportunity to come before you this morning to be part of this discussion on a really fascinating topic. Megan and I testified in December before Chairman Cerisi's committee on a bill, Representative Venkat's bill, that seeks to regulate the use of AI in health care and health insurance. And we regard this as an opportunity to speak not just on the regulation of AI, but really the promise that it has, both today and going into the future.

Sri Dussamaliother

I will turn it over to Megan to speak to some specifics, but the common refrain that you hear about health insurance and health care generally is, it's too expensive and it's too cumbersome. Those are the two big problems you hear about. AI has the potential, already and going into the future, to cut down on both. And so we understand some concerns about any novel technology. I think 30 years ago we were having the same concern about what this internet thing was going to do to our lives, right? And I think that there's been a comment, or a word used, that really bears repeating, and that is responsible use. The responsible use of this technology is very important. It's a responsibility that we take very seriously. We have internal controls within our members, and I'm sure Highmark does as well: governance committees, protocols. You know, the NAIC, the National Association of Insurance Commissioners, has protocols that we follow that are constantly being evaluated, because this technology, by its very nature, is changing. So we are trying to keep up with those changes as it relates to our governance of it. The only other thing I'd like to add, which I think is worthy of mention from an insurance standpoint, is that the NAIC is very active on this issue. There is a national meeting, as we speak, in San Diego. Insurance Commissioner Mike Humphreys is a leading voice in that conversation. He is, I think, the leader on this issue nationally. So it's obviously something that he takes very seriously as well, and we're working with him and his team as this progresses. But I'll turn it over to Megan just to speak to some examples of how we're using this.

Dr. Robert Kruklidisother

Thank you, Jonathan, and good morning, all. The two areas that I wanted to focus on and dive into a little bit are how insurers are using AI, both in terms of care management and clinical determinations. On the care management side, from a clinical operations perspective, AI's value has become very evident, similar to how the providers described their care management protocols this morning, in early identification of members who are in need of a greater level of care and management than others. AI tools help insurers identify those members who need that more intensive care management. They allow for early outreach, for care management individuals to work directly with members to route them to the appropriate types of appointments, to the appropriate level of care. And in practice, what we're seeing this do amongst our members is actually reduce avoidable hospital admissions and manage costs for payers, but also, most importantly, for patients themselves, because they're getting that care earlier and reducing the level of care that they need downstream, as well as the costs associated with that care. We've also seen it improve prior authorization and utilization management. Prior authorization, as I'm sure you all are very familiar, has often been seen as one of the most burdensome aspects of health care, particularly on the payer side. But with the responsible implementation and use of AI, what we're seeing is those prior authorization times actually go down. It has historically been a slow process: there was a manual review of records that providers and payers alike had to comb through in order to get that prior authorization approved, and we would see our members become very frustrated with that process, understandably so. But with the implementation of AI, that has all been automated, and we have seen times go from upwards of a week down to less than a day, and in many cases a number of hours.
So AI is transforming that prior authorization process as well as utilization management. It analyzes large volumes of clinical information in literally what we're seeing as microseconds. It improves accuracy by reducing human error and also, importantly, by improving consistency across cases. So instead of introducing subjectivity in those reviews, what we're seeing is standard application across the board that is based on clinical evidence, guidelines, and best practices. By streamlining prior authorization, we're seeing a reduction in delays in needed care and, ultimately, better outcomes for patients. And I do think it bears repeating, and Mr. Barber said this, as well as the providers that you heard from this morning: from the insurance perspective, we also see this as a tool of augmentation, and in no case is it replacing those clinical determinations being made on prior authorization or anything else. The next point that I wanted to touch upon, which Mr. Barber has also spoken to, so I'll make this short, is that we are also seeing AI overall begin to make health care on the payer side less expensive and less cumbersome. Many of the administrative tasks that we were relying on humans to do, where the processes were often manual, are being automated: claims intake and review, documentation extraction and classification, contract and invoice analysis, and call summarization and routing. And by developing these applications and using them widely, we are freeing up time to be spent on complex clinical issues or direct consumer support rather than paperwork, which we're also seeing make our members ultimately happier, because we're able to spend more time with them doing the more complex tasks in management as opposed to some of the more rudimentary ones. The last thing I was going to touch upon was fraud, but I think we've heard how that has been helpful.

Sri Dussamaliother

So, Jonathan, is there anything that you wanted to close out with?

Dr. Robert Kruklidisother

Okay.

Sri Dussamaliother

Thank you.

Dr. Robert Kruklidisother

Thank you.

Representative Dan Frankelassemblymember

Representative Gatos.

Valerie Gatosother

Thank you, Mr. Chair, and thank you, panelists, for speaking on this topic. So we all know that health care costs are skyrocketing; you mentioned that. And, of course, some of the uses of AI are to reduce fraud, waste, and abuse. Can you expand on maybe some of the specifics of how exactly you're doing that, and perhaps what kind of savings you've seen in the time that you've been implementing AI?

Sri Dussamaliother

So the image recognition and fraudulent image identification tools that we're using are still in an experimental phase. United Concordia Dental, which is one of our business units, is employing a system now that looks at dental X-rays that are submitted with a claim and identifies whether they actually match what's listed on the claim. And there are a surprising number of mismatches that come in that then have to go back to the dentist, and then they resubmit. And, you know, that back and forth not only adds cost, it adds delays. If the patient has a co-pay, you know, that's all uncertain during that back and forth period. So that system actually works quite well in just recognizing that a dental X-ray matches the claim. At the health plan, Highmark Inc., we're experimenting now with a vendor who's got a product that identifies fake images, basically, and can do so by looking at the resolution of the image and comparing that with actual MRIs or X-rays. A generated image can oftentimes have a different resolution that's identifiable as non-real. And so that's one of the ways that they're doing that. It's really interesting. We don't have any final results on that one yet. It's brand new.
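[Editor's note: a toy sketch of the idea described above, comparing basic properties of a submitted image against what images from a real modality typically look like. The modality profiles here are hypothetical, and real detectors use far richer signals than dimensions and bit depth; this only illustrates the cheap first-pass screen.]

```python
# Hypothetical per-modality profiles: (min_width, min_height, allowed bit depths).
EXPECTED_PROFILES = {
    "xray": (1024, 1024, {12, 14, 16}),
    "mri": (256, 256, {12, 16}),
}

def looks_suspicious(modality, width, height, bit_depth):
    """Return True if the image's basic properties don't match the
    profile for the claimed modality; suspicious images go to a human."""
    profile = EXPECTED_PROFILES.get(modality)
    if profile is None:
        return True  # unknown modality: route to a human reviewer
    min_w, min_h, depths = profile
    return width < min_w or height < min_h or bit_depth not in depths

# A small 8-bit consumer-format "x-ray" gets flagged; a plausibly real one passes.
print(looks_suspicious("xray", 512, 512, 8))
print(looks_suspicious("xray", 2048, 2048, 14))
```

As in the testimony, a screen like this only flags candidates for review; it never makes the payment decision itself.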

Valerie Gatosother

Can you estimate perhaps what kind of savings that you're looking at by implementing AI?

Sri Dussamaliother

So we have –

Valerie Gatosother

I know that's a tough question.

Sri Dussamaliother

Yeah, it is a tough question. So what I will say is, in my world, in responsible AI and AI governance, one of the discussions we don't have when approving an AI system is cost or savings. What we evaluate are things like transparency, efficacy, and potential harms when the system makes a mistake or is wrong. We make the assumption that if an AI system provides the service that we expect it to, and in the right way, with the right kind of transparency and ethical treatment of people, then any savings in cost, either for us or for the members or patients, will follow. So I actually don't personally have a lot of those kinds of numbers. I know that what we see are things that have been determined to be problematic, and so, you know, there's a problem to solve, and this is the way we'll solve it.

Valerie Gatosother

Well, I'd certainly like to see our state government use AI to be more efficient. I currently have a piece of legislation, House Bill 979, which will try to encourage all the agencies to self-audit, and they've actually come up with a number that it could save the Commonwealth nearly $2 billion. And it's just patterned after making all the agencies audit. So I certainly would love to hear the results from what the private sector does, because I think we can probably learn from that and maybe try to reduce some fraud, waste, and abuse in state government.

Sri Dussamaliother

Yeah, happy to follow up afterwards. Thank you. I can gather some information. I mean, you know, fraud is obviously an escalating problem in the finance world as well for the same reasons. And oftentimes it's not fraud. It's just mismatching of records. So, I mean, I certainly don't want to impugn anybody on this, but I think that we can be a lot more efficient, and this is where AI can help us in all different sectors.

Valerie Gatosother

Thank you.

Representative Dan Frankelassemblymember

Representative Torres.

S

Thank you, Mr. Chairman. Thank you, panelists. Learning a lot. Can you explain a bit more about how AI is used in prior authorization determinations? Does AI make the initial determination, and is that initial determination then reviewed by a clinician? And what role, if any, does AI play in the actual prior authorization process?

Dr. Robert Kruklidisother

Yeah, so Highmark, with Allegheny Health Network, our hospital system, is currently engaged in a partnership with a vendor named Abridge. That is our ambient listening system that has been discussed multiple times here today. We are adding a component to that system where, as it listens to a conversation between a patient and a provider, it will pick up the need for a prior authorization, and it will actually prepare the submission. The clinician still has to hit the button to send, but it cues that up for the physician in such a way that it knows it will be approved. So there are a great many prior authorizations that we auto-approve, just because there's no reason to delay certain things. We don't auto-deny anything at all, so any adverse decisions are always reviewed by a human. And there again, the AI can help by surfacing the right information to allow that person to make the correct determination. But, yeah, in that space, we're looking for near real-time prior authorization approvals.
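[Editor's note: the routing policy described in this answer, auto-approve is allowed but a denial must always go to a human, can be sketched as follows. This is an illustration only, not Highmark's implementation; the service codes and criteria flag are hypothetical.]

```python
from dataclasses import dataclass

# Hypothetical low-risk service codes that never warrant a delay.
AUTO_APPROVE_CODES = {"99213", "71046"}

@dataclass
class Decision:
    status: str       # "approved" or "pending_human_review"
    reviewed_by: str  # "system" or "medical_director"

def route_prior_auth(service_code, criteria_met):
    """Illustrative routing: the system may auto-approve, but it never
    auto-denies. Anything it cannot approve is queued for a medical
    director, mirroring the guardrail described in the testimony."""
    if service_code in AUTO_APPROVE_CODES or criteria_met:
        return Decision("approved", "system")
    return Decision("pending_human_review", "medical_director")

print(route_prior_auth("99213", False))
print(route_prior_auth("00000", False))
```

The key design property is the absence of any "denied" branch in the automated path: adverse determinations exist only downstream of the human reviewer.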

S

Okay. Because I sit on a hospital board in my district, for the last nine years I have seen the changes in implementing Epic, where we had a lot of physicians who were very unhappy about the changes with Epic, and we were kind of encouraging them: you've got to get along and make it work. But AI does seem like it has a great opportunity to move ahead. And we do know that every time you have AI help you, the doctor or clinician has to check it, has to review it, has to approve it. So there's that safety.

Dr. Robert Kruklidisother

And I would just quickly add to that. I think you folks in the legislature actually had the foresight to provide the appropriate guardrails for prior authorization. Several years ago, you passed Act 134, which provided a framework for prior authorization and utilization management. And that law requires that a medical director actually issue an adverse benefit determination or a denial. So not only does it indicate that a human must do it, it actually sets the qualifications for that person: a medical director has to be the person that does that.

S

Excellent. Thank you.

Dr. Robert Kruklidisother

Jonathan and Megan, your members?

Sri Dussamaliother

Yeah, it's a very good question, and Mike speaks to a very topical question about prior authorization. I think it's important to note, and Mr. Barber said this, that relatively few claims are actually subject to prior authorization. Now, I realize the ones that are, are the big ones, orthopedics and things like that, but in the overall scheme it's low single digits in terms of the total number of claims that are subject to prior authorization. For those that are, Mike is right:

Dr. Robert Kruklidisother

an adverse benefit determination

Sri Dussamaliother

has to be made by a clinician. So the AI is there to guide that decision, to inform that decision, but the AI is not making that decision. And if for some reason the information that came out of the AI framework was somehow inconsistent with everything else the clinician saw, that would be the moment where the clinician would say, wait, this doesn't make sense. The AI is saying one thing here, and everything else I'm seeing says something else. That's the importance of the human interaction.

S

Going back to something I had asked before: with the vendors that you guys are dealing with, do they accept liability when something goes wrong, when they get something wrong? I mean, are you guys indemnified?

Dr. Robert Kruklidisother

That's a great topic. It's another hearing maybe. So I spend a lot of time with our legal team on AI-specific contracting with our vendors. We work with hundreds of vendors who supply everything from pencils to elaborate AI systems. In the AI world, we are very, very cautious and conservative about what we allow our vendors to do with our data. In the interest of member and patient privacy, in the interest of not allowing vendors to improve their product with our data, to train models that are available to other people with our data. We're very specific about that in our contracting. We require pretty stiff indemnification clauses. But, you know, it's an interesting topic to me because in a lot of ways it feels like vendors can fly under the radar a little bit and we're on the hook for everything they do. You know, I don't know what the answer is to that. I know that we're very strict about it and, you know, sometimes the vendors aren't so happy with us because of that. But, you know, we do what we can. And there's also the issue of, like, very large vendors, you know, even we can't – they won't entertain red lines, right, even with us. So I can only imagine how much more difficult it must be for smaller organizations, you know, to try to hold vendors accountable for things. It's an interesting space.

S

You know, I don't know if that's – we were talking earlier. I don't know if that's at the state level because vendors are operating across multiple states. We are operating across multiple states, but it is an interesting nuance with the development of more modern AI,

Dr. Robert Kruklidisother

where, back to a much earlier conversation today, the foundational large language models, there aren't a whole lot of them, right? There are a lot of vendors who are making use of those and then kind of modifying things. The foundation model sits in the background, but then they customize some system on top of it. So because of that, it used to be that a vendor developed their model. They had their thing. It was all contained at that vendor, and it was much easier to hold people accountable. Now there's sort of a layer of onion skins where they're accessing other people's models, you know, and to try to push accountability all the way through is difficult.

Sri Dussamaliother

If I may just add, it's a great question. I think what you're really getting at is, is it a product liability issue or is it medical malpractice? I think it's going to depend, on a case-by-case basis, on what happened, and it will ultimately come out in discovery as part of litigation. However, it kind of speaks to what we've been talking about before, which is responsible use. Responsible use is an interdisciplinary function. We monitor it constantly for any unintended bias, any inaccuracy. That's the governance process that we all employ to ensure that it's being used responsibly. And it will change; as the technology changes, so too will the governance of the technology. But, as I said at the outset, that's something we take very seriously. It's a responsibility that we have, and I think that we all collectively see that and are acting accordingly.

S

Thank you.

Representative Dan Frankelassemblymember

Thank you.

Representative Dan Frankelassemblymember

Chair Rapp, any closing remarks?

D

No, but thank you to all the panels. I think this was very informative, and like a lot of other hearings, Mr. Chair, it gives us a lot more questions for future hearings that we would like to have answered for any legislation that would be discussed. And I think the liability issue is critical, that we try to get a handle on some of those liability issues as well. So I appreciate everyone's testimony here today. Thank you, Mr. Chair.

Representative Dan Frankelassemblymember

Again, I want to thank this panel and all the panelists today. It's been obviously very informative for us. As you probably know, there have been a plethora of bills dealing with AI across many different disciplines here in the Capitol. And I think it's helpful to have this committee in our kind of corner of public policy be informed. And you guys did a great job. All the panelists, we really appreciate it. And with that, I'm going to adjourn this hearing of the House Health Committee and the Communications and Technology Committee, and appreciate it again.

Sri Dussamaliother

Thank you.

Dr. Robert Kruklidisother

Thank you.

Source: PA House Health — 2026-03-24 · March 24, 2026 · Gavelin.ai