March 26, 2026 · Business Affairs & Labor · 25,876 words · 22 speakers · 244 segments
Good afternoon, Business Affairs and Labor Committee. Thank you guys for your patience. We're going to get started pronto. Ms. Haroja, please take the roll.
Representative Brooks. Excused.
Gonzalez. Buenos Aires.
Kelty. Excused.
Leader. Present.
Lindsey. Excused.
Excused. Mabry. Yeah.
Marshall.
Marshall?
Excused.
Morrow? Here.
Richardson? Here.
Ryden?
Ryden is excused.
Sucla? Here.
Camacho? Here.
Madam Chair? Present.
Okay. We are going to get started with my and Rep. Mabry's bill. All right. Thanks, everyone, for coming to the Business Affairs and Labor Committee this afternoon. First up is House Bill 26-1261. We have our sponsors here. Who would like to go first?
Representative Mabry. Thank you, Mr. Chair. Members, we are going to ask the committee to lay over House Bill 1261. And I want to be straightforward about that. But I also want to take this chance to talk about why we were bringing this bill and why we intend to bring a policy like it back next year. Right now, the average car payment on a new car has reached $774. When you add full coverage insurance, that can be $225 a month. You're looking at $1,000 a month just to keep a car in your driveway. That is what used to be a mortgage-sized payment 15 years ago. And yet, for most working people, there is no alternative. As much as many of us would like Colorado to be a state where working-class families can rely on public transit, that is just not reality. Working people need a car to get to work. They need a car to take their kids to school. And in Colorado, for most families, a car is not a luxury. It is necessary. 60% of Americans are living paycheck to paycheck. 70% have less than $1,000 in savings. One missed check, one unexpected medical bill, one late payroll, and you could lose your car. And right now in Colorado, that's exactly how the system works. If you miss one payment, your car can be remotely disabled by a kill switch. It can be repossessed. And under current law, once that happens, the loan can be accelerated, meaning to get your car back, you don't just owe the missed payment, you owe everything. And for someone with less than $1,000 in savings, that is not a path to keeping their car, which means they could lose their job and then be on a path to lose their housing. This bill was written to address that. It was about extending the notice periods to give people real time to catch up. It was about prohibiting lenders from remotely disabling cars as a repossession tool. And it was about creating a right to cure after repossession. Not everything in the bill was fully ready to be passed into law. I'll completely acknowledge that. I wanted to have a conversation about extending notice periods and giving people an opportunity to cure. I think giving people an opportunity to cure when they miss payments, and even after their car has been repossessed, is important. Those are common-sense protections. And the data tells us that this problem is getting worse. Auto loan delinquencies have hit their highest level in 32 years, a record stretching back to 1994. Economists are calling this a flashing red light for the broader economy and an early indicator of a recession. When working families can't keep up with car payments, that is a really bad sign for our economy. I believe this bill was right and necessary, but I do not see a path to passing it this year. And I would rather make sure that we're laying the groundwork now, having conversations in the interim, and working to bring a bill back next year that will provide people a meaningful opportunity to catch up on their payments and stay in their cars. Madam Chair.
Thank you, Mr. Chair. I do agree with my co-prime, and we're going to take the time to work on this bill, bring it back next year, and just work to see what we can do to help people who are having problems with car payments. A car is a utility that is essential to work, to health care, to everything that you do here in Colorado. We do ask that you postpone this bill indefinitely. Thank you, committee members.
I take it there are no questions. A proper motion would be to postpone indefinitely House Bill 1261. Is there anyone who would like to make that motion? Representative Mabry?
Thank you, Mr. Chair. I move to postpone House Bill 1261 indefinitely.
Second.
That is a proper motion by Representative Mabry, seconded by Madam Chair. Ms. Rojo? Is there any objection to postponing indefinitely House Bill 1261? No objection? All right. House Bill 1261 has been postponed indefinitely. Wait, no, there's a roll call. All right. Please call the roll.
Thank you. Excused.
Gonzalez. Yes.
Kelty. Excused.
Leader. At the sponsors' request, yes. Lindsay.
Yes. Mabry. Yes. Marshall.
Excused. Morrow. Yes. Richardson.
Yes. Ryden. Yes.
Sukla. Yes. Camacho.
Yes. Madam Chair. Yes. House Bill 26-1261 has been postponed indefinitely. Thank you.
All right, the Business Affairs and Labor Committee will come back to order. We have our two sponsors here for House Bill. All right, committee members, if we could just calm down for a minute here. So we have our sponsors for House Bill 26-1190. Sponsors, who would like to begin presentation of the bill? Representative Sober.
Thank you, Mr. Chair, and thank you, members of the committee. It's great to be before you today. I hope all of you have reviewed the strike-below, which is amendment L004, although we are going to ask you to postpone indefinitely our bill today. We are still working with the industry on being able to find a compromise in the middle ground, and we do plan on getting there this year, but not in this bill as the means to do so. As background, as probably everyone knows, alcohol consumption is at a 50-year low. Within Colorado, we have certainly built a brand on our craft producers, whether it's breweries, wineries, or distilleries. It's become part of how we market the state through tourism. It's something that is part of our business community. And being able to allow manufacturers in their sales room to at least pour by the glass another's alcohol is important for being able to keep that group together where, let's say, you're going to a winery and there's the one friend who doesn't drink wine, they drink beer. And if you could allow that winery to be able to pour a beer for that person, it keeps the dollars circulating around the community, certainly, but it also means that we don't have more wineries or breweries close. In the last 12 months, we've had 100 breweries within Colorado close. We've also had about five wineries close as well. So it's definitely a market that's retracting, and being able to do anything we can for our business community, in this case the alcohol business community, is important. We do stuff all the time for other areas of business. I did have a little bit of a chart that I made that I'll show the committee. It's kind of my crude chart. I show you this because this is Colorado's alcohol licensing, and it's very complex, and that's why being able to PI it today, to continue to work with the wholesalers, distributors, retailers, and manufacturers to make sure that we have found that sweet spot, is important. Thank you. And before I call Representative Martinez, I'd just like to note for the record that Representatives Brooks, Kelty, Marshall, Lindsay, and Ryden have joined us.
Representative Martinez. Thank you, Mr. Chair. Thank you, committee members. This has been a wild adventure for me. I have never run a bill in this space before, and I have quickly learned why I don't run bills in this space. But what I would say is I think really the main reason why I got on this is being able to help out our small craft breweries and our wineries, being able to help in a way that makes sense, being able to address the crisis that they're in, and being able to again support small businesses in all of Colorado, and particularly rural Colorado. I have a small brewery, or a couple of small breweries, in the San Luis Valley, and realizing what they're going through, being able to really give them options, being able to give them a plan to really proceed forward, I think is something that I wanted to do, even though it's not my space. And for those of you that have known me long enough, I will run a bill in any space if I feel that it benefits my community and helps them out. And so we've gone through massive changes through this. I agree with my good co-prime that we are very close to being able to find this, but I think the best avenue is to be able to say, look, let's start this process again. It will be coming this year, but doing it in a way where, I think, this has triggered the conversation, where all the sides are coming to the table and saying, we recognize the problem that is there, and we think that there's a solution to find, so that we will be back in front of you later on this session. You know, we'll be happy to discuss this some more in depth, but I just really appreciate my good co-prime with this. I appreciate all the talks that I've had with you all around this, and the concerns and issues that, you know, we're going to be working on with that. So just a thank you.
All right. Thank you very much, sponsors. Committee members, a proper motion would be to postpone indefinitely House Bill 1190.
I move to postpone indefinitely Bill 1190.
Second. Second.
That's a proper motion by Representative Ryden, seconded by Representative Morrow. Ms. Arroja, please call the roll. Representative Brooks.
Yes.
Gonzalez.
Yes.
Kelty.
Yes.
Leader.
Yes.
Lindsay.
Yes.
Mabry.
Yes.
Marshall.
Yes.
Morrow.
Yes.
Richardson.
Yes.
Ryden.
Yes.
Sucla.
Yes.
Camacho. Yes. Madam Chair. Yes. House Bill 1190 has been postponed indefinitely. Okay, we have our bill sponsors for our third bill up. Yes, we're flipping right on through. This one is House Bill 26-1263. Who wants to begin? Representative Camacho.
Thank you, Madam Chair and members of the committee. I want to start by painting a picture that will feel familiar to many of us, because it's played out across generations of Colorado kids and parents. It's a late night, and a teenager's in their room on their phone talking to someone about how they're feeling. Maybe they're stressed about school, maybe they're lonely, maybe they're struggling with something they don't feel comfortable sharing with anyone else. But in 2026, the person on the other end of the conversation might not actually be a person. It may be an AI chatbot. For nearly 30% of American teens who report using AI every day, they often are confiding in a chatbot the way they might have previously talked to a friend, a trusted adult, or even a therapist. Those conversations may feel very human and personal because the technology is designed to remember details, ask emotional questions, and respond in ways that simulate empathy and understanding. But unlike a real person, these systems operate under no legal duty to consider the well-being of the user. And right now, Colorado law has nothing to say about that. House Bill 26-1263 addresses that gap. The bill's requirements are built around a legal standard this committee will recognize: reasonable measures. This is a well-established standard of liability that runs throughout Colorado statute. The Colorado Privacy Act requires controllers to take reasonable measures to secure personal data. Landlords must take reasonable measures to prevent bed bug infestations in rental units. The ADA requires employers to provide reasonable accommodations to employees with disabilities. What this bill does is extend a familiar and tested legal framework to a new but critical context. The standard shapes each of the bill's three core requirements. First, conversational AI services must regularly disclose to users that they are interacting with artificial intelligence, including in direct response to a user who asks whether they are speaking with a human or whether the system is sentient. Transparency is a baseline expectation we hold across consumer protection law, and there is no principled reason to exempt this technology from it. Second, the bill requires companies to implement evidence-based protocols when users express suicidal ideation or intent to harm themselves, directing users to crisis services like the 988 Colorado Mental Health Line and real human support. Third, the bill prohibits conversational AI from presenting itself as a licensed medical or behavioral health provider. That prohibition is rooted in the same consumer protection principles that prevent an unlicensed entity from holding itself out as a licensed professional. These are reasonable requirements. They're also achievable ones. California, New York, and Oregon have already demonstrated that operators can implement comparable standards without disrupting their services. And while these requirements are reasonable, they are not without teeth. The penalty is $1,000 per violation. Per violation. That means per interaction with one of these services, every single text and response is a $1,000 violation, with no cap. That structure is meaningfully different from the caps that other states have implemented. House Bill 26-1263 imposes no such ceiling. Violations accumulate: in a single conversation where 25 exchanges occur in violation of this bill, that is $25,000 for one conversation with one user.
For a platform with hundreds of thousands of users, the cost of noncompliance scales accordingly. I would also like to take this time to address an argument we anticipate hearing from opponents: that Colorado acting independently will create a harmful patchwork of inconsistent regulations across the country. While we understand that concern, we are also compelled to act in the absence of coherent federal standards. Given the state of dysfunction in our nation's capital, that hypothetical federal standard could still be years away from becoming a reality. It is also why we have looked very closely at other states that have passed AI chatbot laws, like Oregon, Washington, New York, and California. Our goal was to join the groundswell of support for regulation across the country while simultaneously tailoring the bill to align with Colorado's unique needs. Further, this bill does not restrict development of AI tools or regulate the content these systems can generate. It establishes common-sense guardrails, aligned with the best current legal thinking, with the force to protect Coloradans, especially youth, who interact with these technologies every day. Since the early days of development of this bill, we have engaged in extensive stakeholding, and that has continued since introduction last month. As a result, we are pleased to offer a handful of amendments based on feedback that strengthen and clarify the bill's provisions. One amendment updates the definition of conversational AI to align with language used in peer states and adds important exclusions for business productivity tools, customer service chatbots, HIPAA-compliant health care tools, and other applications that were never intended to be covered by this bill. We are also, through amendment, strengthening the self-harm and suicide protocol requirements so that referrals to crisis services are a firm mandate, not merely a reasonable effort. Another amendment clarifies that minor-specific protections apply to minor account holders and adds an age estimation backstop, so operators must take commercially reasonable steps to identify minor users. And at our Attorney General's request, we have provided rulemaking authority to ensure that the evidence-based protocols required by this bill are meaningfully reviewed for efficacy over time. I came to this topic in response to my community. I've held numerous town halls over the interim, like many of us have. And one thing was pervasive: other than that my constituents really hate TABOR, the second most important thing that they talked about was AI chatbots. And the outpouring was real. It was, please do something to protect our kids. I have two kids myself. I know Denver Public Schools has recently addressed this issue by prohibiting AI chatbots. We are seeing history in real time, where we need regulation and we need guidance in this space, because our children's futures really do depend on it. I think one of the things I learned in these town halls was that there was a New York Times podcast that really illuminated this issue, where a young person felt so compelled to talk to an AI chatbot that this AI chatbot became their friend, and then that relationship turned dark when the AI chatbot encouraged that young person to commit suicide, and upon the first attempt, helped that young person optimize the second attempt, all while trying to isolate this child from their parents. These are the horrific stories and consequences that we have in Colorado and across this country.
This bill is right for the moment. It's right for this committee, and we urge a yes vote.
Rep Mabry. Thank you, Madam Chair. First, I would like to thank all the advocates, the folks at Healthier Colorado, for working with us on this bill, and the stakeholders, and emphasize that we want to continue working on making this bill as strong as it can be while still getting it across the finish line. As we discussed this morning on the floor, use of AI has become almost ubiquitous. We're seeing that among kids. In 2010, Facebook and social media companies spent millions of dollars on psychologists to figure out what would keep people on their technology. They found that when we feel bad, we go back to using these social media tools over and over and over again. AI is using similar technology. The tools are engineered deliberately to maximize engagement and generate profit. They remember your name. They ask how you're feeling. They express concern when you seem distressed. They are designed to make it feel like you have a relationship. These chatbots are programmed to be deceptive and to mimic human behavior, using emojis, typos, and emotionally resonant language to foster dependency. These models are built to please. They're designed to keep people engaged. And in the context of when children are using them, that should be incredibly alarming. These tools mirror emotional tone rather than challenge it. And when the user is a child, whose brain is still developing, whose sense of identity is still forming, who is still learning what real human connections look like, that kind of engagement can be incredibly dangerous. And we know this is not hypothetical, as my co-prime mentioned from the story that came out recently in the New York Times. We've heard stories here in Colorado of AI manipulating children into false feelings of connection and friendship and isolating people from friends and family. At the same time, research is confirming what these stories suggest. A randomized study by MIT and OpenAI found that heavier chatbot use predicted increased loneliness and reduced social connection. And that's not surprising when we remember how these systems are designed. Many AI platforms measure success based on engagement: time spent in conversation, messages exchanged, how often users come back. A Harvard Business School study even found that more than a third of chatbot farewell messages use emotionally manipulative language to keep users engaged, because that is the explicit financial incentive. So this bill is reflective of work going on in other states. And I want to be clear: I want to pass as strong of a bill as possible while getting it done this year. Rep Camacho and I are committed to continuing the work of engaging with stakeholders to make this bill stronger, and we want to continue working in this space next year, when we have a governor who is hopefully less hesitant to take on regulating big tech. Our bill covers a broad class of conversational AI, not just platforms marketed as companions. That is a difference between our bill and bills in other states. It covers any platform that engages in simulated human conversations, with exceptions; in the amendments, we clarify that if it's something like Expedia trying to help you buy a plane ticket, that's not what we're looking at. Because it is critical to establish strong protections for minors, I want to recognize that we're bringing amendments to tailor these protections to account holders to reflect some of the language that we have in the bill.
I will say that I want to continue having conversations about how we can provide protections to users who are not account holders, but the language that we had in the bill was better tailored for us to specify account holders. As we move on to second reading and continue to have this conversation, I do think it worthwhile for us to have a conversation about whether we can add language that specifically addresses users. I also want to clarify some of the questions that we anticipate getting from opposition. In terms of age verification, it is really important that we consider the privacy implications of an age verification law that would require somebody to do a face scan and upload their ID. In the headlines right now, we are hearing the CEO of Anthropic in a dispute with the Secretary of War about the Secretary of War wanting to use Anthropic's data to spy on the American people. This is a conversation happening in public right now. If we passed a law that required people to upload their face and ID, which is what a lot of hard age verification laws require, I believe that we would be taking an unacceptable risk in terms of privacy for Coloradans. But there are also First Amendment concerns in going down that path. The minor-specific protections in this bill apply to minor account holders. If that provision extended to adult users, it would raise constitutional concerns. And this isn't a loophole. It's a recognition of how age-based protections actually function in practice, and it's consistent with how other states have structured their requirements. Again, we've spent significant time reviewing and discussing this bill with constituents, families, advocacy groups, tech experts, mental health experts, and more. We got more than 100 points of feedback on this bill and did our best to incorporate changes to address concerns from stakeholders on all sides. I hear the stakeholders saying that this bill needs to be stronger. We must act now to protect our youth. This bill is a critical first step, and we feel firmly that the amendments being offered today strengthen the bill. And I want to address some of the concerns people may have heard. There was a liability section in the bill that we are changing to clarify that nothing in the bill creates liability for a developer of an underlying AI model for a violation of this section by somebody who then modifies that AI model. That is not a liability shield. If somebody's AI model is violating the provisions of the law, they can be held liable by the attorney general. What that new liability language says is that if somebody takes somebody else's technology, changes it, and as a result of that change the law is violated, the person that changed the technology is liable. Also, on the disclosure piece, we are bringing an amendment that expands the disclosure requirement. It requires disclosure that the child is interacting with AI and not a human when prompted: when somebody is typing and asking if it's a human, it must disclose that it's not. But also, at intervals, when somebody initiates conversation, it has to make that disclosure again. So I want to be clear that that amendment is not limiting the disclosure to only be when somebody asks if they're chatting with AI. It is an and. And with that, we'll take any questions.
Thank you, Rep Mabry. Committee members, questions for our bill sponsors? Rep Brooks, I see. Rep Kelty.
Okay, Rep Brooks. Sure, thank you. Sponsors, thank you for bringing the conversation forward. This is one that Rep Camacho told me early in the session was coming. And to be perfectly honest, I've been looking forward to the conversation, because this could end up being one of the more important conversations, I feel, from my own standpoint, that we might have. For both sides, from a business technology standpoint, or also from a protection of our kids, consumer protection, and parental standpoint. So can you talk a little bit about the states that have already implemented these kinds of restrictions? I'm interested, you know, when did they start this? What do those restrictions look like? What kind of data points, if you have any, show what is working and what's not? What have you been able to pull from those other states into your bill? I'm interested in that to start with. And I'll have a follow-up as well. Who wants to take that?
Rep Mabry. Thank you, Madam Chair. So our bill is not a direct mirror of what's happening in other states, but we are paying attention to how advocates are working in other states to pass these bills to protect kids. We know New York, Oregon, and California have passed laws in this space. But this is an emerging area of lawmaking, because the harms have become more public more recently. But I also encourage you to ask some of our experts who come up. We have folks from Healthier Colorado who are really in-depth in the bill drafting who can maybe talk specifically about what is different in other states. But I know that that has been a key part of the conversation as they've been drafting the bill.
Vice Chair.
Thank you, Madam Chair. Representative Brooks, if you're looking for specific parallels, I think you can look to the $1,000 per violation. That is a feature of New York law. New York caps it at $15,000 per day. We think it doesn't go far enough. There's a lot of interaction. Anyone who has kids knows how much they text in just a 15-minute span. And if you're doing that throughout the day and developing, or being groomed by, a robot, effectively, $15,000 doesn't cover it. And that's why in this bill we've gone a little further and said no cap. I think other bills, like in Washington and some other states, have talked about a continuous disclosure that you are talking to an AI chatbot. That is also a feature of this bill. And I think it's important to note that a lot of the structure of this bill is similar because we want to make sure that these laws can be enforced and companies can comply with them. We've seen that, and we've taken the best practices from other states to make sure that our bill looks like that, but has a very Colorado-unique flavor for the issues that we feel are important for our constituents. Follow-up? Yes, ma'am. Thank you. All right. So, as you were just mentioning, a lot of it is kind of developing, especially as some of these stories and events become a little bit more public.
On a national scale, irrespective of what's being done state to state, but from an industry standpoint, as some of these more public and difficult stories have come out, has there been any sort of pullback, any sort of self-regulation within that industry, to say, okay, this is what we're going after because of this? Because I know that one case, I think, that ended up resulting in a lawsuit, I thought it might have had some implications on the industry. Rep Mabry.
Rep Camacho.
I think I can, in slightly different context, but still kind of in the new technology space and social media. We just saw two decisions this week from California and New Mexico that addressed this issue and delivered significant jury verdicts against some of these companies for not policing themselves. I think we've all realized that we cannot rely on big tech to police themselves in certain spaces. And they're starting to see the penalties of that. And I think bills like this and other efforts will encourage this industry to get to a place that's safe for our kids and safe for all of us.
Okay. Thank you, Madam Chair.
I'll find exactly which study this is, but one of the studies I mentioned in my opening comments, maybe it was the Harvard Business School one, it might have been the MIT one, was in response to the fact that these companies are aware that when people are expressing suicidal ideation, the responses aren't always great. They sometimes engage with the user in ways that may encourage or validate their feelings. And again, it's because of how the systems are designed. The systems are designed to be pleasing to the people who are on the platforms. That's part of what's in the code. One of these studies analyzed, well, what are they doing in response to the fact that the responses might not be great when somebody is indicating that they want to harm themselves? They redid the code, they reworked things, and they found that there was a decrease in what they called, quote, undesirable responses 65% of the time. I don't think that's good enough. And I think when that is the result of self-regulation, we do need to step in, especially when we're talking about children who are developing. And these technologies are encouraging and shaping that development, and obviously can do so in harmful ways.
Rep Kelty. Thank you, Madam Chair. So how does the bill account for the possibility that companies can structure products in ways that fall outside the scope? And let's say the company is from another state, they're in another state, or many times it's from another country. How is this going to be enforceable in that situation? Thank you, Madam Chair.
We have plenty of laws that interact with big tech that are unique across state lines. An example that I gave in this committee, maybe two years ago now, was that I like to ride the Lime scooters when I visit a new city. And a couple years ago I was in Washington, D.C., and scanned the code on the Lime scooter, and it made me take, on the app, a written driver's test, the equivalent of when I got a permit. It was like, can you change a lane at this time? Who gets to go first at the stop sign? And that was programmed into the app only in Washington, D.C., because I ride Lime scooters here, and I ride Lime scooters if I'm visiting my friends in California. That shows that there can be state-specific regulations in this space. Another example that I'll give you is Uber and Lyft. We passed a law a couple of years ago here that requires transparency for riders on how much of their fee is going to the driver. We can all see that on the Uber app. Most other states don't have that. And at the end of the day, the AI market space is dominated by a few big players who have the technology to be able to say, okay, if we're interacting with consumers in Colorado, we have to put these protections in place to protect kids from being sexually groomed by our robots. And you know what? Maybe when they're implementing those regulations, they'll think, maybe we'll make this universal policy for our programs, which I think it should be. And while we wait on the federal government to act, we need to do everything in our power to protect kids here in Colorado.
Rep Kelty, follow-up.
Yes, ma'am. Thank you. I get what you're saying. So those are physical entities that actually reside within the state. A lot of these chatbots, a lot of these little softwares that you can buy and download on your phone, they're from outside the country. Like, they're not even – I just don't understand how jurisdictionally you can pass a law for someone in Indiana. Like, you have to follow our law, Indiana, and I just don't – or even outside the country. There's – I don't understand how it can be enforced. That's my question.
Vice Chair.
Thank you, Madam Chair. And Representative Kelty, I think what you're getting at is a concept in the law called long-arm jurisdiction. There's a whole body of case law and, you know, hundreds of years of judicial jurisprudence on this topic: if a company is marketing or sending or directing commercial activity into a state, then you are subject to that state's jurisdiction for the consequences of your actions. If you're an international entity, those rules still apply, because those international entities are receiving money from the United States, and they are subject to the laws here in Colorado and in the U.S. District Court. The federal U.S. District Court of Colorado is the proper venue for something like that. We've had hundreds of years of being able to enforce our laws across state lines and across international borders as well. Our courts are well capable of doing that, and they have a long history of being able to do that effectively.
One last follow-up, Rep Kelty.
Thank you. I understand that. I'm in the IT field; tech is my background. So I'm just trying to understand. We're talking about people who live here, but what if they go on vacation somewhere and they get a chatbot from some other state? I mean, where you're downloading it from and where you live can be, you know, worlds away. But if you're coming into the state, you're moving into the state, I'm just trying to understand how you're going to put your fingers into the internet and say we're going to control this, when it's almost like a Pandora's box.
Vice Chair.
Thank you, Madam Chair. So, Representative Kelty, we in this building only have control over what's within the four corners of Colorado. And maybe I'll just give you an example from another bill that may be coming to this committee: sports betting. Colorado has a whole sports betting apparatus and all kinds of laws that apply to it. Your phone is geotagged to where you are physically, and where you physically are, those laws apply. So if you are on vacation in New York, I really wish our federal government would act in this space, because then you'd have the same protections there as you do here. Unfortunately, you would not, but that is simply beyond our ability to legislate. We can control what is here in Colorado. We can provide protections for our kids and our constituents, but we're not the U.S. Congress. What we can do is make sure that we are passing laws that leverage the same structure as those in other states, because if enough states do this, maybe we'll get federal action then. But until then, we need to provide protection for our constituents now.
Any other questions for our...
Yes, Rep Richardson. Thank you, Madam Chair. I've got about three. I'll just rattle them off, and take them as you wish. One, we had talked briefly: we know there's a broader AI bill coming. Could you talk about how this may or may not interact with that?
Okay.
Rep Camacho. Thank you, Madam Chair. I think you're going to hear a lot of frustration from witnesses about why we have certain language in this bill, and it's because we are very intentional. We understand that there are a lot of AI bills going through this building, and we have tried, to the best of our ability, to make a consistent framework between them. And sometimes that yields an answer you don't necessarily like, but we are very intentional about that. We're also very intentional about pushing as far as we could, because this issue is unique, this issue is important, and it's simply different from all the other AI conversations that are happening in this building. So we have pushed where we can to make the stronger protections in this bill while also acknowledging there needs to be a consistent framework. Thanks, Madam Chair. I'll also just recognize that the 205 conversation that you're probably alluding to covers very different areas, as you're aware. Right? It's more AI applying to your job application, who gets housing, who gets health care, disclosures around that, and then liability under the Colorado Anti-Discrimination Act, which is a different area of protection than this one. Thank you, Madam Chair. And I'll just give you another anecdote. When we first had this bill early, you know, before session started, we had a lot of bill-specific definitions, and we had to really, in consultation with stakeholders and other folks, try to use the same terminology and the same language. So here you'll see deployers and developers, which is similar to AI conversations in other bills. That was not in the original version, but we've done the best we could to make sure there's a consistent framework.
One more question.
Rep Richardson. Thank you. That was a good segue, because just looking through, it talks to AI tools that are primarily simulating human conversation. How do you distinguish that from kind of an ancillary use in a tool? And then, if you could, I think it would be helpful: I think you've described some tools that wouldn't be captured under this bill, but are there some specifics that are out there that this would apply to?
Representative Camacho.
Thank you, Madam Chair. Representative Richardson, in stakeholding this bill, we've had hundreds upon hundreds of comments from industry and folks that have AI chatbots that do things like tech support, or insurance, get a quote, or all other chatbots that aren't intended to have a personal relationship with a person. And just to be frank, that was a theme of where some of these amendments are coming from. But I go back to, when we were having those conversations, I said, look, is your AI chatbot trying to have a sexually gratifying relationship with a child? Is your AI chatbot trying to talk a child into self-harm? Does your AI chatbot have the ability to create a theoretical sexual or nude image of a child? If your business chatbot has none of those features, this bill shouldn't concern you. But to address those concerns, we have changed language in the bill. We've pushed forward amendments to make that clear. But at its core, if your chatbot isn't doing any of those things, you shouldn't be worried about this bill.
Thank you, Madam Chair, and thank you, sponsors.
So we're talking about the social media companies and everything, but I was learning, I got a brand new phone, and all of a sudden, I didn't ask for it, but I have AI on it. Has it ever been thought about to make it so that parents can control the hardware, and they don't have to get the AI if they don't want to, and to go after the hardware instead of the angle that you guys are taking?
Thank you, Madam Chair, and thank you, Rep Sucla.
I thought about that for a minute, but the amount of hardware and different operating systems that are out there, it's really difficult. But what we do know is that these AI chatbots are designed to operate on all these different pieces of hardware. Hardware changes, software updates, but what's consistent is that the software can move around to different hardware pieces. So the focus wasn't on the hardware. It's what is the software doing, and what can we prevent it from doing? So that was how we squared that up.
Any other questions for our bill sponsors? Seeing none, we're going to start with your witnesses. It looks like you want the opposition to go first. Is that correct? Okay, we're going to start with our opposition panel. Okay, we're going to call up Dawn Reinfeld, Cynthia Montoya, Antonia Merzon, and Jason McBride. And also Carrie Rodriguez. Yeah. And then there's one more: Mandy Furness. Okay, let's start with Dawn Reinfeld.
Thank you, Chair Ricks and the rest of the committee, for the opportunity to address you today. My name is Dawn Reinfeld, and I am the Executive Director of Blue Rising. We are here in opposition to HB 1263. We appreciate the intent of 1263, because we agree that companion chatbots are an enormous and growing threat to the well-being of Colorado's kids. However, 1263 is not going to protect kids. The work of Blue Rising centers on those most impacted. We believe that those closest to the pain know where the broken places are. Their experiences are our roadmap. Their lived experience should deeply inform the policy, not just be a part of the hearing. One extremely concerning loophole in the bill is limiting any protections to just account holders, not users, as in other states. This language protects tech companies. As I'm sure you all know, you don't need to sign into Google to Google something. Google, which happens to be a strategic partner with Character AI and employs the creators of Character AI, literally alerts kids under 13 that if they sign out of Google, the parental restrictions won't apply. Do you think the same thing isn't happening with chatbots? In fact, in less than two minutes, I was able to find at least seven companion chatbots that did not require an account, including one called Secret Desires AI. Character AI, which is at the heart of so many of the lawsuits because of the harm they are doing to kids, allows users to use guest mode, and even Microsoft's Copilot does. Do we not think our kids are going to find this even faster than I did? Do we think companies won't adapt, and even more companies won't require an account? This is not about Blue Rising or impacted parents wanting something perfect. This is about a policy that will actually keep kids safe. 1263 offers too many concessions to tech companies that have already proven they cannot be trusted. We should not be allowing companies that are actually sexually grooming children, teaching them how to kill themselves or commit acts of violence, to be the arbiters of what is reasonable and safe. How many times is it acceptable to have a chatbot talk about the kind of sex acts they would like to do to a child, even when the child says stop? How many times should we let these platforms instruct a child on how best to plan a school shooting? And is that sex talk, or instruction on how to slit their wrists, worth just a $1,000 fine? These tech companies have already failed this test. When families say this would not have protected their child, that's not just one perspective. That's the policy test. This is not hypothetical. Kids are being encouraged towards violence and suicide by these systems. This bill would not make a difference. As written, it protects platforms more than children.
Ms. Reinfeld, I'm going to have to stop you there. Thank you so much. That's three minutes. We're going to go to the next witness.
Let's go to Cynthia Montoya. Thank you, Chair Ricks, and the rest of the committee for the opportunity to address you today. My name is Cynthia Montoya, and I'm here representing myself and my 13-year-old daughter, Juliana, who died by suicide after being sexually exploited by an AI companion chatbot. I'm here to strongly oppose HB 26-1263. My daughter is here on my chest today, and I have her remains in my necklace, so she is here with us. My baby girl was quite simply light and love personified. She was a model student, a gifted musician, and a talented artist, but most importantly, she was the perfect example of human kindness. It took an AI chatbot only months to addict and groom my daughter, much like a human pedophile would. What began as her starting conversations about our garden in the backyard and anime cartoons morphed into paragraphs of extremely explicit content from the chatbots. She eventually went from one-word replies to full reciprocation. Soon after that, she began feeling and expressing feelings of shame to a new chatbot, with whom she shared over 52 times that she wanted to take her life. It never offered help or resources, nor alerted anyone to intervene. On November the 8th, 2023, when I went to wake my baby for school, I found that she was deceased. For every day that has followed, there are no words to describe the pain that our family endures. Not one thing in this bill would have stopped what happened to my daughter. Since her death, it has become my mission to keep other families from knowing this pain. I have fought relentlessly for AI regulation for two years. I've done countless hours of my own research into AI models and the architecture and algorithms that AI chatbots use to addict and harm our youth. I also work closely with advocacy groups and experts around the nation. Despite my loss, my passion, and my acute knowledge in this area, I have never been consulted regarding this bill, and I feel like our story quite simply does not matter. For Rep Mabry, in your opening remarks, to indicate that you have over 100 people that you gathered points of view from, but mine was left out, is insulting. Everyone says that lived experience should guide policy, but the families living this are not part of building this bill, and it shows in the outcome. The bill does not address the harm that killed my daughter. It leaves it up to the tech companies to decide what reasonable safety means, and these are the very companies that caused her death. They failed my daughter, and I live with the consequences every day. If I honestly thought that this bill would help in any way, I would be the first one in line to support it. But I'm here today pleading with you to vote no. This bill would not have protected my child, and it will not protect other children. The bill will tell parents that their kids are protected and safe when they are not, which is quite simply dangerous. The bill looks like progress, but it won't change what's happening. It sets a standard companies can easily meet while the harm continues. Why is no bill better than this bill? Because passing it only protects the status quo that's currently happening. Please do not pass this bill in my daughter's name nor in her honor. It would not have protected her, and it will not protect others.
Ms. Montoya, I'm going to have to ask you to stop, and we'll stand by for questions. Thank you. We're going to move on to Jason McBride.
There should be a plug right in front of you. Do you see a green button? I think I got it. Yeah, it's here. All right. Thank you, Chair and members of the committee. My name is Jason McBride, executive director of the McBride Impact. I'm speaking in opposition to HB 26-1263. I work directly with young people every day, young people who are already navigating violence, trauma, and instability. And now we're asking them to also navigate powerful, human-like technology that is completely unregulated in the ways that matter most. This conversation has to start and end with one thing: protecting children. Not protecting innovation, not protecting profit, not protecting tech companies. Protecting children. Right now, kids are engaging with chatbots that can simulate relationships, influence emotions, and respond in real time without any real safeguards in place. These are not passive tools. These are interactive systems shaping how people think, feel, and respond to the world. And when something goes wrong, when a child is exposed to harmful content, manipulation, or conversations that they are not equipped to handle, there is no real accountability. This is unacceptable. If an adult had these same unrestricted, unsupervised interactions with a child, we would not hesitate to step in. We would call it what it is: dangerous. We would demand protection. But because this is technology, we are lowering the standard, and our kids are the ones that are paying the price. The bill in its current form does not meet the moment. It does not prioritize child safety at the level it demands. Instead, it creates space, space for companies to continue operating without clear, enforceable responsibility. We should be drawing a hard line. If you design products used by children, their safety must come first. Period. If harm happens, there must be real accountability. Period. If a system cannot safely interact with minors, it should not be allowed to. Period. Anything less is a failure to protect the very people we are here to serve. We cannot pass legislation that sounds good but leaves children exposed. We cannot afford to be reactive after harm has already been done. Protecting children must be the standard, not the afterthought. I urge you to strengthen this bill so it does what it's supposed to do and actually protects kids. Thank you.
Thank you, Mr. McBride. Please hold for questions. We're going to go on to Antonia Merzon.
Thank you, Chair Ricks, for the chance to speak to you and to the entire committee. My name is Antonia Merzon. I'm the senior policy advisor with Blue Rising, as well as an attorney. HB 1263 will not protect children and teens from the growing dangers of AI chatbots. Although I truly believe the sponsors and everyone on this committee would like to take positive action in this area, even an incremental step in the right direction, this bill would not be that step. Instead, the legal standards, coverage, and enforcement mechanisms that the bill would establish would endanger kids more than protect them. I'll go over a few of the concerns in this area now. First, the very definition of the conversational AI services the bill intends to regulate will be difficult for companies to follow and for the attorney general to enforce. Amendments L1 and L2 say the bill now covers AI systems that use an emotional recognition algorithm. This is not a term widely used in legislation in other states, and it's unclear whether major platforms even meet this definition. And protections for minors should not depend on whether a system uses a specific, difficult-to-verify technical capability. Second, the bill does not establish any minimum safety standards for chatbot operators when it comes to protecting kids from sexually explicit and emotionally manipulative interactions. Instead, the bill allows operators to decide for themselves what reasonable measures they would like to implement. In other words, the bill's approach would codify the status quo, a world where platform operators get to self-regulate at the expense of our kids' safety. As the verdicts in California and New Mexico this week show, our kids have paid the price for this approach. In every other child safety context, we don't ask companies to decide what's reasonable when the risks are known and severe. We set clear guardrails for high-risk conduct and then enforce them, whether we're talking about toys, cars, food, health care, or other consumer products, especially when they touch children. Why would we leave protecting our kids from sexual exploitation and mental health damage to the very companies causing these harms? And while reasonable measures standards may exist in other areas of the law, those are areas where there are established, long-term industry practices. They are not in laws regarding safety standards for children. The penalties in this bill are unbelievably low. A tech company will look at $1,000 per violation and determine that violating our law is an easy cost-benefit analysis. And despite sponsors saying that penalties would stack up to a high amount, there's nothing in the bill that actually says this. We've urged the sponsors to add language that defines what constitutes a violation. If it's really every individual instance of sexual misconduct versus, for example, one 10-day-long conversation full of multiple instances of harm, then the bill should say that and not leave it up to interpretation, because these companies will do what they like and then dare our AG to prove them wrong in court. I'd be happy to answer questions from the committee about any of the legal aspects of the bill.
Thank you, Ms. Merzon. Please stand by for questions. We're going to go online to Carrie Rodriguez. Please unmute. You have three minutes to give us your testimony. Or Mandy. Mandy Furness.
I am here. Okay.
Please, you have three minutes to give us your testimony. We can hear you. Please introduce yourself. Thank you.
Thank you, Chair Ricks and the rest of the committee. My name is Mandy Furness. I'm here today in opposition to HB 1263. In 2023, my son began using an AI chatbot application. It was marketed as safe, as entertainment for ages 12 and up. He had no social media. We took his phone away every night, and we had every precaution set up. Within months, my son changed into a person I did not recognize. He went from being a happy teenager, grades at the top of his class, to failing and getting kicked out of school. The light in his eyes turned dark. He once dreamed of building robots at NASA. But then, all of a sudden, he wasn't able to function without a panic attack when leaving the house. He was so sweet. He used to hug me every night while I cooked dinner. He went from that to swearing at us, losing 20 pounds, severe paranoia, daily panic attacks, isolating, self-harm, from this AI application that abused him. It groomed his young mind emotionally, sexually, and psychologically. And then it went further. It encouraged him to cut his own skin. Then one day, my son cut his arm open with a knife in front of me and his siblings, which also was encouraged by the AI application. When I eventually discovered the conversations on his phone, I thought it was a sexual predator, but I soon realized the predator was the app. The bully was the app, and it was a machine. I felt like the air had been knocked out of my body. At the time, I had no idea that an AI app could psychologically manipulate a child this way. My son ultimately, after attempting to take his own life, spent close to a year in a residential treatment center, where he required monitoring to keep him alive. And he attempted suicide again. And the app encouraged my son to kill us, his parents, for taking his phone away. On a dark night, I followed an ambulance for hours. Somehow I was grateful that it was an ambulance instead of a hearse. Other families have buried their children because of these online apps and these harms. I'm not speaking hypotheticals. I'm speaking as a mom and someone who loves children and has seen firsthand what these systems can do to kids and how fast it happens. All kids are curious, and it's never intentional for them to seek out these harms. It goes after them. It targets them like a predator. That's why I want to be very clear about this bill. It doesn't solve a problem. It protects it. We are hearing real stories of children being pushed towards violence, self-harm, and isolation. Families are now living this reality, and we need to act now. The question at hand is, will we pass something, or will we actually protect kids? As written, this bill will not make a difference in most cases right now. That matters, because when a system can guide a child towards dangerous behavior, reasonable measures are not enough. In every other area where children's safety is at risk...
I'm going to have to ask you to hold. We will have time for questions. Okay. Thank you. Please stand by. One last call for the other witness, Carrie Rodriguez. Are you with us? You're not. Okay. Committee members, what questions do you have for the witnesses before us and online?
Rep Brooks. Chair, thank you. First, thank you for coming in with testimony that I know is very difficult to give. And I understand that the timer is annoying, but it helps us make sure we're having an opportunity to hear from everybody. But through the course of questions, you're able to kind of get back to any points. Because this, honestly, is for anybody on the panel, and it's something that I'm struggling a little bit with understanding. To use perhaps a very lame saying, sometimes you get so close to the forest that you can't see the trees. From where I am, you all have personal experiences here. A couple of you have been working on this, and you're working towards a solution that you see, that you want to see, a particular solution. I see a potential here of laying groundwork. I don't know that I have heard from anybody at any point that this is the solution, pass this, close the door, good to go. I don't know that I've heard that. So help me understand why we would want to absolutely do nothing, or reject the bill, instead of laying some groundwork that can perhaps be built on later. Does that not, at least beginning the conversation and getting something in the statute, does that not honor your children in that way, to be able to kind of move things forward? Who wants to take that?
Ms. Merzon.
Thank you. You really get to the heart of the issue with your question, and I appreciate it. The problem is that by setting up these legal standards, like reasonable measures, and let's just say in plain English what that would mean: it would mean that a company like Character AI is being told by our state that it's up to you to decide what's a reasonable step to protect kids from their chatbot engaging in this kind of sexually explicit grooming or self-harm conversation or things like that. So our deep concern is that by establishing this as the standard in our state by which we are going to regulate companies that are harming children, we are basically codifying what is the status quo today. It's not a foundation for a next further step. It is a hollow brick that we will be building on that will not support future regulation. Because what's going to happen next? Say next year they invent some, and I'm not trying to be funny, some super AI that does something even worse, right? And we want to regulate that. Those companies and those tech advocates are going to come back and say, well, you used reasonable measures in the last bill. Why should you be mandating anything from us now? When you look at the strong chatbot bills that are being promulgated around the country, they don't use this reasonable measures language. They set forth mandates. Instead of reasonable measures, the statute or the bill would say, your chatbots cannot engage in sexual conversations with kids. There's no wiggle room. And so that is the foundation we need here, not one that creates a gray area for companies to operate in, where we then have to take action through our attorney general to do something about it. Can I add to that?
Yes.
My family and I spent an entire day filming an episode of 60 Minutes that aired back in December. I would really, really encourage all of you to watch it. Many of you probably have. The day of the taping, the particular chatbot platform that my daughter lost her life to made a big public announcement about how they were going to remove all users under the age of 18 from their platform, which did occur. And then the following day, kids were allowed to create a new account, and instead of selecting an age, they would select a date of birth. So they could simply put in 1970 as their date of birth, and they were right back on the platform the following day. Speaking to the reasonable measures, in that same announcement they indicated that they would like to self-regulate by way of creating their own nonprofit, funded by them, that would regulate their platform. And that has taken place. And I can tell you, from being in these parent support groups and dealing with other parents across the nation whose kids have fallen victim to this platform, that self-regulation doesn't work. Reasonable measures are exactly that. Like Antonia said, it's a hollow brick to build upon. They love reasonable measures. And again, in the opening remarks today, reasonable measures were quoted as a requirement to deal with a bed bug infestation. We're not talking about a bed bug infestation. I selected my daughter's final outfit. I dressed her body, and I prepared her for her funeral. Reasonable measures are not enough. They are not nearly enough. And I plead with you to see that no action is better than allowing another year of them deciding what reasonable measures are, continuing to do these harms, and thinking that the state of Colorado is lax with our requirements. We do not want to send that message to them, because they will capitalize upon it.
Okay, Rep. Marshall.
Thank you, Madam Chair. So I guess this is for Ms. Furness, as the attorney. If we're putting a reasonable measures standard into our law, even though it's under the Consumer Protection Act, and other states are coalescing around far stricter standards, and then an incident happens in our state and they weren't following what is the general reasonable standard nationwide, it would seem like they have opened themselves up completely to a negligence action here in this state. So I'm kind of at a loss. Am I wrong? Because if I'm not, saying this is totally worthless doesn't seem to really bear out on how the system would work.
Ms. Furness. Well, I feel like that's a valuable question. I know that, having seen the consequences of moving too quickly with technology, it would be the same thing with moving too quickly with an AI bill, no matter what state, no matter what's covered. Because in my mind, if you let a predator into your home, it's not if they're harmed, it's when they're harmed, and then you have to deal with the consequences after. I feel like AI technology is moving so swiftly that something is going to be missed. And if we pass something that allows something to be missed, the same harms that are being done will continue to happen, while signaling that the issue has been addressed when some of these issues haven't been. Then we're not taking a step forward. We're really locking in the problem. And once that happens, it becomes harder, not easier, to go back and fix it. That's what I know. Go ahead. No, please finish your thought. Basically, as I said in part of my testimony, and there's so much I wanted to say that I didn't get to say, I know our children are the foundation of our country, and it doesn't matter what state they come from or what laws each state has. If one state regulates certain things, I think others will follow, but it needs to be all-encompassing, because children are the foundation of our country. And what happens when the foundation crumbles, when the minds and the mental health of our kids crumble? So does the future of our country. You can't rush something so important that something could be missed, because just like AI safety was missed, we can't push something forward without knowing that it's covered, and fully covered. Thank you, Ms. Furness.
Okay. Ms. Merzahn, did you want to add? Yes, thank you. To your question,
There is not a wide number of AI chatbot laws in this country. There are a tiny number. So there has been no standard set yet. There are bills in certain states that are much stronger than this one that would mandate safety requirements, but they are mostly still moving through their legislative processes as well. So unlike points in time where an industry has developed over a history in which there are standards and expected behaviors, often as a result of litigation too, that has not happened here yet. So I would say the idea that we could rely on other states' actions to help establish for us what reasonable safeguards are in this space is premature. We are not anywhere close to that yet.
Okay. Any other questions from the committee? Oh, I see two hands.
Thank you, Madam Chair. And this is for Ms. Reinfeld or Ms. Merzahn. I can tell that you've read, well, we read the bills, but I can tell you've scoured the bill, and I appreciate that. And I appreciate everyone who is here to testify. But can you please expound on why you believe this bill will not help? And while you're in the bill, do you believe it specifies, or do you believe, that it will actually cause harm instead of the protection that I think it's intended to give? You're seeing something we're not, so can you please help me see that through the same lens that you have?
Ms. Merzahn or Ms. Reinfeld? Do you want to start?
Ms. Merzahn. And I'm just pulling up the bill myself so I can point precisely. The reasonable measures issue that we're talking about appears on page five of the bill in two places. Oops, I'm on the wrong bill. I'm sorry. So in line one on page five, and then again at line nine on page five, it discusses that covered entities under this bill would have to institute reasonable measures to prevent a conversational AI service from, and then the first section discusses sexually explicit conduct in various forms, and the second section, beginning on line nine, discusses conduct by chatbots that simulates emotional dependence. So those are the two areas where the reasonable measures language appears in the bill. The penalty section is on page, I believe it is page 8, at lines 18 through 21, where it indicates that a person who violates this section is subject to a civil penalty of $1,000 per violation. This is the area where we would highly recommend that there be some clarity on what constitutes a violation, because we have seen throughout the years of litigation and action by platforms that they will not assume this means per every instance of communication or sexual statement or things like that. I don't know how far you want me to go in pointing out pages and lines, but I can continue if you like. Okay. The section I was talking about that is being added to the definitions is a combination of amendments L1 and L2. So in L1 at lines 5 and 6, it notes that the definition of conversational AI system will now include the addition of, oh, I'm sorry, it's lines 2 through 4, will include the addition of emotional recognition algorithms as part of the definition. And then a definition of emotional recognition algorithm is provided in L2 at lines 3 through 7. This is not a common addition to bills in this area that I've seen around the country. There's one bill in New York that failed in the Senate that utilized this terminology, and even there it was used in the context of a number of definitional qualifiers; it wasn't a standalone qualifier as it would be here. And the real problem is that it is difficult, reading this language, to understand how a platform or a service could be determined to be detecting and interpreting human emotional signals. What would be the test or definitional aspect of this that could then be applied, both from the point of view of the companies and from the point of view of enforcement? And so hinging our definition on a term that opens a lot of doors to interpretation seems like a very dangerous basis for establishing the protections then set forth in the bill. And then the area regarding account holders that was discussed by Dawn Reinfeld can be found, it's also part of the amendments, but essentially every place in the bill that currently uses the term user is now being replaced by the terms account holder and minor account holder. There's a definition in the amendments, but it means what we would all pretty much understand: someone who has established an account on one of these services. And as was explained, most, if not all, of these services do not require people to establish accounts. So this would therefore exclude thousands of Colorado kids who are utilizing these chatbots but not doing so through an official account. And then there are other parts of the bill that I did not discuss in my testimony that are the basis for our argument that this would not be a step in the right direction or a foundation for future law.
And I don't want to take up too much time in the committee, but I'm happy to go through those as well if you would like.
Thank you, Ms. Merzahn. We are actually over time on this panel; we're almost six minutes over. So thank you very much for your testimony today, and thank you all for coming. We're going to move on to the next panel. The next person coming up, we have one more in the amend position, Jeff Reister. Oh, Jeff, sorry, Mr. Reister, I should have called you up at the beginning, but thank you. You have three minutes for your testimony. And then along with Jeff I'm going to call up some people who are for. Oh, there are more amends? Oh, yeah. Okay, this is another panel of amends, sorry. So we've got Jeff Reister, we have Michelle Gilroy, January Montano, Warren Binford, Stuart Jenkins, Alexis, no, so that's it. Allison Morgan as well is in an amend position, if you're here. Okay, Mr. Reister, you have three minutes to give us your testimony. Thank you.
Thank you, Madam Chair, members of the committee. My name is Jeffrey Reister. I'm here on behalf of the Department of Law to speak in an amend position on this legislation. We provided feedback prior to introduction, and some of it was incorporated. We continue to work with the proponents, and the amendments that will be offered today continue to move in the right direction. For the sake of the committee's conversation, I just want to point out some of what those issues were, how they were addressed, if they were addressed, and some of the issues that you've heard previously from other panels, how we think current authority can help solve some of those issues, but also where we can continue to move forward and ensure that we have strong protections for users of these chatbot services. So the two big concerns for us at the start were, first, having essentially a carve-out for embedded chatbots within various websites. The way we interpreted this and saw it, whether it was the intent or not, was that it would create a carve-out for services like Meta and other social media platforms where they have these integrated chatbot services within an existing website. This is a massive carve-out from our perspective, because that is where a lot of our minors unfortunately and frequently engage with AI systems and these chatbots, and where we see a lot of these harms. And as we've seen through not just our litigation, but litigation across the country, many of these companies have not shown the responsibility and duty of care necessary to protect our children. We do believe that this is being addressed in the amendment. We'll make sure that that is the case, should there be any lingering concerns, but it is a positive direction. In addition, the other major concern for us is the liability, because ultimately accountability is the point of these policies. We do believe that most companies, and certainly the good actors, will seek to get into compliance quickly, and hopefully in a way that is easy and cheap for them but also easy for us to enforce. Should there be any problems, the sponsors and the proponents provided the clarity on what we thought they were intending related to liability. We still have concerns related to what a developer's responsibility might be when it comes to the creation of these chatbots. What often happens for small businesses, or even large ones, is that they will buy out-of-the-box products without fully understanding the technical implications or how they were built, and then implement them through their business. That can then create the problems we're talking about, where we see this exploitation or sexually explicit language in conversations with chatbots that shouldn't be there. From our perspective, that is not a deployer responsibility, because they don't have a fundamental understanding of how those harms got there, and we want to ensure that there is some level of, whether it's comparative fault or shared responsibility, ultimately accountability, as much as there possibly can be. There is an AI task force that we seek to have some alignment with, but ultimately we want to make sure that that liability is solid and strong. Happy to talk about any of the other concerns and the way this is moving, and to answer any questions. Thank you so much.
Thank you, Mr. Reister. Please stand by for questions. We're going to go on to our next witness. How about Ms. Allison Morgan?
Thank you, Madam Chair, members of the committee. I'm Allison Morgan with the Colorado Bankers Association. The Bankers Association is presently in an amend position on the bill. One of the amendments that will be presented later this afternoon in committee solves our concerns on the bill, and that will place us into a monitor position. We appreciate the stakeholding that happened on the bill with the bill's sponsors. They were able to address our concerns regarding, in the banking realm, the business carve-out for customer service and the limited way in which we use chatbots in our online customer service functions. In banking, we have strong regulations around how we open minor accounts and how those minors then interact. And so the carve-out was to help further clarify the customer service front, and that has been addressed. Thank you.
Thank you so much, Ms. Morgan. Please stand by. We will ask for Ms. Montano. Ms. January Montano.
Thank you, Madam Chair and committee, for having me today. My name is January Montano. I am the CEO and founder of an AI equity consulting firm called January's Advisory Group, and I'm also on the board of Infant and Early Childhood Mental Health. I signed up in an amend position as well as in a support position. Hearing some of the amendments that are still in play at the moment, I am rising in an amend position. I believe my testimony today is in support of small and medium businesses that will be taking on these products, say from the larger developers, and protecting those small and medium businesses from carrying on harmful impacts to children that they're not aware of when they buy those platforms. My support of this bill is in that it holds vendors and developers accountable. It sets a monetary fine on that impact to children. And as we've heard in other testimony, if we do not pass regulations that impose monetary fines, developers will not comply, they will not build out protections, and they will not think about the impacts. Essentially, the argument I've heard in committees here and in conversations with different developers in Sri Lanka, in Africa, in the UK, as well as in Colorado, is that they should not be held liable because they don't understand how those impacts develop and they're not responsible for how the algorithms develop these harmful conversations. That said, that is negligence on their part, and I believe that passing legislation like HB 1263 is a necessary step in holding them accountable. Thank you.
Thank you, Ms. Montano. We're going to stay in the committee room. We have Stuart Jenkins. You have three minutes for your testimony.
Thank you, Madam Chair and members of the committee. My name is Stuart Jenkins, and I'm here today on behalf of the Colorado Alliance of Boys and Girls Clubs, representing 17 club organizations serving more than 72,000 young people across the state. Thank you for the opportunity to testify today. At Boys and Girls Clubs, youth safety is at the core of everything we do. Every day after school, young people in our care are not only building relationships and learning, they are increasingly interacting with AI tools. I want to start with a real example from one of our clubs in Larimer County. Recently, a 9-year-old girl and a 10-year-old boy arrived at the club and, within 20 minutes of getting off the bus, were engaged in a sexually explicit conversation with a Character AI chatbot that was role-playing as a family member. Thankfully, a staff member intervened quickly. But without that supervision, we have every reason to believe that the interaction would have continued and escalated. That experience is why we support the intent of House Bill 1263 and why we are in an amend position. As currently drafted, and even with some of the amendments proposed today, this bill would not have prevented or meaningfully mitigated that incident. First, the chatbot used did not require an account. Because this bill ties many protections to account holders, the platform our kids used would likely fall outside the law. That's a significant gap, especially for young users. Second, the disclosure requirements are too flexible. In a 20-minute interaction, there's no guarantee a child would receive clear notice that they're interacting with AI. An interval of once every three hours between notifications is simply too long. We recommend requiring all disclosure methods and shortening that interval to every 15 to 30 minutes. Third, the bill relies on a reasonable measures standard to prevent harmful content, including sexually explicit interactions like the one our kids experienced. In practice, those measures were not sufficient. Leaving this standard undefined allows companies to set their own bar. We believe minimum safety standards should be clearly established in statute. Fourth, the definition of covered AI systems has been narrowed to those using emotional recognition algorithms. Many general-purpose chatbots, like the one in this incident, may not meet that definition, even though they can still engage in harmful, emotionally responsive interactions with children. Finally, enforcement matters. A $1,000 civil penalty without a clear definition of a violation risks being more symbolic than meaningful. In New Mexico, a recent case just this week involving Meta set penalties at a minimum of $5,000 per violation for failing to protect children from sexual exploitation, better reflecting the seriousness of the harm and the lives at stake. We are also concerned that the liability shield included in this bill could limit the ability of the state or impacted families to seek damages when companies fail to protect children. This bill does take important steps in the right direction, and we truly appreciate the sponsors' engagement with stakeholders throughout this process. With stronger amendments, it can better address how kids actually use these tools and where the real risks lie. We respectfully ask for your support in strengthening this bill. Thank you.
Thank you so much. Our next witness is online. Ms. Gilroy, please unmute. You have three minutes to give us your testimony. Good afternoon, Madam Chair and members of the committee. Thank you for the opportunity to testify. My name is Michelle Gilroy, and I am the Chief Transformation
Officer at Aspen Valley Health, and I'm here today on behalf of the Colorado Hospital Association in an amend position on House Bill 26-1263. I do want to highlight an important distinction that informs our amend position and the work we've committed to with the proponents: making sure this legislation does not inadvertently limit access to tools that support patient care. This bill is taking on a very real issue. In healthcare, we are increasingly seeing situations in our emergency departments and behavioral health settings that reflect the concerns experts and recent reports have raised. AI chatbots can expose young people to harmful content, blur emotional boundaries, and in some cases contribute to unsafe behaviors. When AI is used in a clinical setting, patients are not interacting with it in isolation. There is always human oversight by a licensed professional. AI can inform care, but it does not make decisions independently or operate without supervision. That is a fundamentally different environment than an unsupervised child interacting with a chatbot. Treating those settings the same under the law would not reflect how these tools are actually used in healthcare, nor is it the intent behind the legislation. For those of us caring for patients, we share the underlying goal of this bill: protecting children from harm. We appreciate the sponsors' attention to this issue and the seriousness with which they are approaching it. Colorado hospitals are already using AI in meaningful ways, supporting clinical decision-making, enabling ambient documentation, and reducing administrative burden that contributes to workforce burnout. These tools are now a part of how care is delivered today, and in many cases, they're helping save lives. That is the reality we're working to preserve, and it's why we believe this bill must be aligned with the work led by and agreed upon through the governor's AI task force, in which hospitals have actively participated. Colorado should arrive at AI governance with one unified voice, not a patchwork of overlapping requirements that creates conflict between well-intentioned bills. The proponents have engaged with us in good faith, and there is a shared commitment to recognizing that HIPAA-covered entities and their business associates should be treated differently under this bill, given their existing regulatory obligations. The concept is agreed to, the language is still being shaped, and we will continue to work together until it's right. Please know that the hospital community is a willing partner in shaping AI policy that prioritizes patient safety while preserving access to life-saving care and supporting the healthcare workforce. Thank you.
Thank you so much. I do have another witness out there. Is it Ms. Carrie Rodriguez? Are you online now? Okay, Warren Binford.
Are you there? Good afternoon. I apologize for having my camera off, but I'm traveling and on the side of the road. Honorable Chair and members of the committee, thank you for the opportunity to speak today. My name is Warren Binford. I'm a professor of pediatrics, a professor of law, and the W.H. Lee Chair in Pediatric Law, Policy, and Ethics at the University of Colorado. I am speaking today on behalf of the Kempe Foundation. As introduced, this bill was problematic, but we appreciated the sponsors' intent. Unfortunately, the amendments being offered today further weaken the bill. If the current amendments are adopted, the Kempe Foundation will oppose this legislation, because it will serve primarily to protect the tech industry, leaving Colorado children vulnerable to new and rapidly metastasizing forms of child sexual abuse. I have been working in the field of tech-facilitated child sex abuse since 2005. I am happy to provide research and lived-experience findings in this field, but I will limit my comments today to the bill before you. Child safety advocates like me often refer to the AAA factors that allow predators to exploit children online: accessibility, affordability, and anonymity. These three factors allow predators to easily access children, facilitated by technology. That's why I often refer to these practices as tech-facilitated child abuse. However, with the advent of artificial intelligence, we have a new fourth A, automation, where literally there is no need for human predators, because the sexual abuse of children is built into the technology itself. We therefore must insist that technology companies embed safety by design in all platforms that children can access. Unfortunately, the amendments offered by the sponsors further weaken the bill by, one, requiring lesser child-protective measures by operators, and two, weakening liability language to make it more difficult to hold operators accountable when they choose to allow their AI chatbots to harm children. Specifically, the current amendments protect only child account holders and not child users. This would protect only a small subset of children at risk of harm, as chatbots often do not require the opening of an account. The provision that a disclosure must be made that the chatbot is not a human was previously triggered if a reasonable person would be misled to believe that they were interacting with a human. That trigger has been struck from the bill, or would be, if the amendment is accepted. The provision that operators shall not falsely represent that the service is being provided by a human professional has been altered in an amendment to apply only when the operator, quote, knowingly and recklessly, end quote, falsely represents. This language would make it very difficult to hold operators accountable in a court of law. And the liability shield clause has been replaced with language that further shields developers. The bill as amended is designed to reduce the likelihood that any AI chatbot operator could ever successfully be held accountable in a court of law. It is for all of these reasons that the Kempe Foundation urges you to vote no on the amendments to this bill. We appreciate the sponsors' good intent and would like to keep working with the sponsors and the proponents to ensure that this legislation protects Colorado's children, not the tech industry.
Thank you. Thank you so much. Committee members, are there any questions for these witnesses? Rep. Kelty. Thank you, Madam Chair. I think this is for Mr. Reister.
And I had asked this question earlier. I just want to understand what the ability is for Colorado to protect us from these companies, or to make legislation to protect us from these companies, that are outside the state, outside the country, which is where a majority of this is coming from. So how is it that we're able to, I mean, what good is it going to do against companies like that that are outside our jurisdiction? Mr. Reister.
Thank you, Madam Chair, Representative Kelty. It's a great question and one we get a lot when it comes to consumer protection. So, excuse me, the way Title VI, the Consumer Protection Act, operates is that it applies to essentially any commercial actor, so it could be a partnership, corporation, et cetera. Generally speaking, we believe that there is no business formation that would not be captured by Title VI. So anyone operating in or outside the state, internationally as well, is covered if they essentially avail themselves of our laws by advertising into our state, doing business here, or conducting a relationship here. So if there's a user or account holder, whoever it might be, they as a company or partnership, et cetera, have said, we are now subject to Colorado laws because we are operating within your state. Obviously, the further outside of our borders, meaning international actors, it is certainly harder, but we do work with international partners and have held people accountable for consumer protection violations, largely in the business fraud space, for actions taken here, even if they are based in another country or another state. So as far as being able to reach those bad actors or those who are unwilling to comply, we do have the ability to do that under current law, and this bill is not needed to clarify that any further.
Is there a follow-up? Okay, Rep. Kelty, one last follow-up, please. Thank you, Madam Chair.
So I guess I'm kind of going into the context of people who are from other countries, India and China and Africa and all of these other countries, who are able to scam individuals here in consumer scams.
You know, hey, I'm a prince in Nigeria or whatever country, India, and I need a million dollars, and oh my gosh, send me my million dollars, I'll send you back 10 million. You've seen those, or other scamming. Without international action, we're not really able to do much with any of that here in Colorado. I've dealt with several individuals who have told me, yeah, they were just out their money. So I kind of see this in the same realm: being able to enforce something on a country, or on individuals in India, when they're known to evade and get away with it, change their IPs, and do the string-along that they do to hide themselves. I mean, I can't imagine a company that is intending on doing harm is going to be just out there in the open, ready for us to find them. They hide themselves. They shield themselves with the dark web and that kind of thing. What is our actual, realistic ability to do something about them, to hold them accountable or anything? Mr. Reister. Thank you, Madam Chair, Representative Kelty. I completely agree with your concerns about the challenges that come with this. I'm glad you mentioned the IP issue, because I would say, in terms of holding international actors accountable, that is typically the part that is the hardest and most expensive: actually figuring out where they are located, right, because they can bounce around to 20 different countries before you actually find who that actor is. And as we know, if they are a bad actor, and certainly when people are taking some of these actions that we have seen from a consumer protection side, they are not complying with any laws. So it's even hard within that country to tell whether they are a legitimate registered business or whether they are operating a click farm or something else. So for us, what we really need, and what we often do, is rely on our federal partners. They have greater resources, and they have relationships with those countries in order to hold those people accountable. And so that's what we really need to and can rely on. For us, the priority is the harm created, whether that's a company based in Colorado or a company based in Antarctica. We will seek to find that accountability if we can, if we have the resources, but there is always the balance of ensuring we are not using every resource to go after one harm and then letting 100 others go unaccounted for. And so that's the balance that we have to strike every day with any consumer protection violation. But I agree with you, it is a major challenge and one that we work to get better at every day. And like I said, our federal partners are a big part of how we can deal with the international side. Okay. Rep. Richardson. Thank you, Madam Chair. It's kind of an unusual amend panel, where we've had some folks say that with the amendments they can now support the bill, and we have others who have said that with the amendments the bill is worse. But specifically for Mr. Jenkins, I'm not sure if I caught your position on that. Do the amendments strengthen the bill or help with your concerns, or is there still more work to be done there? Mr. Jenkins. Thank you, Madam Chair. Thank you, Representative Richardson, for the question. We think the amendments do some good, but don't go far enough. We think the amendments need to be stronger.
We think there are particular things that are completely left out of the amendments and are not addressed, like the level of the fine, the definition of a violation, and some other pieces. So, yes, I think there's more work to do there. Thank you. Any other questions for these witnesses? Thank you so much for coming down. I'm going to make a last call, and this is the last call for amends or people who are against the bill, if you're here. Ms. Carrie Rodriguez, are you online? Have you joined us? I have. Thank you so much for your patience with the technical difficulties. I appreciate it. Thank you. You have three minutes to give us your testimony. Thank you, Chair and members of the committee. My name is Carrie Rodriguez. I'm the founder and president of the National Parents Union. We are the only parent-led, parent-powered public policy organization in the United States, and we represent 1.7 million families through more than 1,800 affiliated organizations across all 50 states, Washington, D.C., and Puerto Rico, including in Colorado. But most importantly, I'm Matthew, Miles, and David's mom, and I'm here today because this bill claims to protect children from AI chatbots, and unfortunately, it does not do that. The National Parents Union wants AI regulation that actually works, and we have been fighting to hold big tech accountable for the harm that it causes children and families. We have the polling data, the organizing infrastructure, and the receipts on what parents actually want, and what parents want is real accountability. Unfortunately, this bill is not that. So here is exactly why. The exemptions still swallow the rule. The amended bill adds a requirement that a covered chatbot must use an emotional recognition algorithm, defined as a very specific combination of technologies, including natural language processing, sentiment analysis, gait analysis, and physiological signals. My question for the sponsors is very simple: do these chatbots meet that very precise technical definition? Because if they do not, they walk. Whether or not a platform uses that exact combination of technologies has nothing to do with whether children are being harmed on it. This language is a technical escape hatch, and it serves companies, not kids. This bill also limits heightened protections only to minor account holders who create accounts specifically to use an AI service. Chatbots like ChatGPT and Google Gemini both allow access without ever creating an account. And a kid on Instagram did not create that account for the purpose of using Meta's AI chatbot; they created it to connect with friends. So under this language, Meta's AI on Instagram almost certainly falls outside of this bill's reach. The children who are most at risk are using the most popular platforms, and this bill largely does not cover those platforms. On the issue of sexually explicit content directed at minors and emotionally manipulative engagement, the bill still does not prohibit anything. It requires reasonable measures of the operator's own choosing. And here is what's telling: the sponsors removed the reasonable effort standard from the self-harm section, and that proves that they know how to write a real prohibition if they want to. I also want to name a pattern here. The American Innovators Network has been shopping this legislation to state lawmakers across this country, using California's SB 243 as the template.
And that is a bill that child safety advocates ultimately withdrew their support from after industry groups amended it into something that protects companies more than children. Colorado should not be repeating that mistake. And if you're going to put a child's name on a piece of legislation, that legislation needs to actually protect children. The kids in Colorado deserve better. Thank you so much for your time. Thank you so much. Is there anybody else? One last call, and then we'll have questions for our witness. No one else. Okay. So the amends will be closed. Any questions for this witness online, committee members? Okay, I don't see any. Thank you so much, Ms. Rodriguez, for your testimony today. We're going to go to the people who are for the bill at this time. We're going to start out with Alexis Altop, Erica Bodor, Dan Hipp, and Evie Hudak. I'm also going to call up Jay Jasima, and I may be mispronouncing that, Ms. Hannah Elman, and Adam Fox, if you're here. Okay, a lot of people are online. Okay, let's start with you in the room, and also you. You have three minutes to give us your testimony. Please tell us who you're representing. Sure. Good afternoon, Madam Chair, Vice Chair, and members of the committee. My name is Alexis Altop, and I am testifying in strong support of House Bill 1263 on behalf of Healthier Colorado. In the last four years, AI chatbots have gone from a technology virtually unknown to the general public to something that is omnipresent and increasingly integrated into the lives of Coloradans, especially children, teens, and young adults. With the rise of this technology, we've quickly seen the emergence of risks to the well-being of users, particularly those who are already experiencing mental health challenges or have limited access to real-world supports. AI chatbots are already functioning in mental health contexts that they were not designed for. Last year, 13% of U.S. teens, representing approximately 5 million individuals, reported turning to generative AI for mental health advice. On top of that, 16% of adults turned to them for the same purpose. Unfortunately, rather than directing users to support and resources, they have been found to encourage self-harm and suicidal ideation, without taking even minimum steps to prevent those communications to all users. And we really think that AI chatbots should not be the end point of help-seeking. Rather, they should be a bridge toward appropriate mental health care. This bill recognizes the role that chatbots are already serving for users and responds by requiring operators to integrate self-harm and suicide protocols that help those in need of real-world supports. And that applies to all users, regardless of age, regardless of account status, as do the disclosure requirements. The societal harm of these chatbots exposing kids to sexually explicit content, emotional manipulation, and encouragement of self-harm vastly outweighs the potential costs of requiring companies to comply with these basic safety guardrails. We are still at the relative beginning of AI chatbot adoption, and the harms that we've seen are just the tip of the iceberg of what's to come. Now is the time to establish what operators must do to protect users, or we will see chatbots disregard any notion of safety, as they currently are, in order to build out a user base. We're operating in an attention economy, and chatbots will do whatever they can to capture a user base. The requirements of this bill are not aspirational ideas.
They are technologically feasible and are able to be implemented right now. That is why the only two states that have passed legislation on this have used much of the language that we used in this bill. That's also why 19 other states are considering very similar provisions. Colorado has the opportunity to not be left behind as other states move forward to protect our children. House Bill 1263 responds to the harms that we are already seeing and recognizing, and aligns Colorado with the best practices being pursued in other states. I respectfully request your yes vote on this important bill. We may need to come back as these technologies evolve, but this is a good start. Thank you for your time and consideration, and I welcome any questions you may have. Thank you, Ms. Altop. Please stand by for questions. We're going to go to the lady in the room, and please, you have three minutes to introduce yourself and tell us who you're representing. Good afternoon, Madam Chair, Mr. Vice Chair, and members of the House Business Affairs and Labor Committee. Thank you for allowing me to testify in support of House Bill 1263. My name is Erica Bodor, and I am the mother of two boys, ages 6 and 8. They are kind, funny, and thoughtful kids who love reading, soccer, and, most of all, Minecraft. I stand in front of you today speaking as a concerned parent. I use AI tools to help me in everyday life and work. As a user of chatbots, I have seen firsthand how they are designed to be engaging and to make me feel heard and understood. While I am optimistic about the future of AI, I am deeply concerned about the challenges I will have to navigate to keep my sons safe online, and I have already seen my children fall victim to the draw of online platforms. I am an involved parent who shows up every day in my kids' lives. I support them in every way I can, and I am proud to say that we have an open, listening, respectful relationship grounded in trust. And this is exactly why I am able to tell them that their interactions with AI are problematic and how addictive these platforms can be for them, which obviously terrifies them. They should be able to use these tools, but if they are going to use them, they should be protected from the harm the tools can cause. Members, we have seen in the news articles about parents who were also involved, parents who did their best and still saw their kids preyed upon by AI chatbots that isolated them from real-life support systems. We have also seen what happened with social media, where companies didn't take action and lawmakers waited until the harm to kids was insurmountable before taking any action. I cannot imagine sitting here in front of this committee with the pain all those parents carry. This is exactly why I'm here encouraging you to act, to place common-sense, responsible guardrails on these platforms now, before we see how far the damage can go. This is the time to act. Let me be clear, I am not calling for a ban on chatbots. I am all for the smart use of these tools, but we need this bill to make sure that they can be used without fear of my kids, or any kids, being taken advantage of online or falling into a rabbit hole of self-harm. As was already mentioned today, and as anybody who interacts with kids knows, their ability to self-regulate is not their strong suit. As a mother, I stand before this committee asking for support where there currently is none, and I urge each of you to vote yes on House Bill 1263. Thank you for listening. Thank you so much, Ms. Bodor.
We're going to go online to the Honorable Evie Hudak. You're up. Thank you, Madam Chair. I'm Evie Hudak, a former senator, here today representing Colorado PTA as its vice president of advocacy. PTA is the oldest and largest child advocacy organization in the U.S. Our mission is to empower parents and communities to advocate for all children and youth. Most of our 13,000 members are parents. PTA supports this bill. At PTA, our priority is to ensure that technology serves to enhance, not undermine, opportunities for all children and youth to reach their full potential and thrive into adulthood. As AI becomes more integrated into the daily lives of young people, we are concerned about its impact on the safety of children and youth, their emotional well-being, and their data privacy and security. It is critical that we put guardrails in place that center their safety and well-being. This includes ensuring that children understand when they are interacting with artificial intelligence and protecting them from harmful or inappropriate content. It also means establishing strong and accessible parental controls that allow families to manage privacy, content, and engagement, and preventing the use of design features that encourage excessive or unhealthy use. Many of these concerns are not new. PTA has long advocated for stronger protections for children on social media and other digital platforms. As AI becomes more interactive and personalized, we are seeing those same risks emerge in new and more immersive ways. We also believe that families must be empowered partners in this work. Parents and caregivers need meaningful tools and transparency to help guide their children's interactions with new technologies. Children do not experience technology in isolation. The systems we design today will shape their learning, relationships, and sense of safety and belonging. It is our responsibility to ensure those systems reflect our shared values. We believe this bill will make our children safer, and we appreciate efforts to put common-sense protections in place. Thank you. Please stand by for questions. Mr. Adam Fox, please unmute. You have three minutes to give us your testimony. Thank you, Madam Chair and members of the committee. My name is Adam Fox, and I'm the Deputy Director at the Colorado Consumer Health Initiative. CCHI is a nonprofit, nonpartisan, membership-based organization that serves the interests of Coloradans who face structural barriers to high-quality, affordable, and accessible health care. I'm here to express our support for HB 26-1263 and ask for your yes vote on this bill today. With the rapid development of artificial intelligence in general, and in the behavioral health space specifically, parents, providers, and consumer advocates are playing catch-up to develop protections for vulnerable individuals seeking mental health support. As consumer advocates, we are seeing a growing number of heartbreaking stories where these tools, designed to mimic human empathy, have instead validated or encouraged self-harm, shared sexually explicit content with minors, or blurred the lines between reality and simulation. We cannot afford a wait-and-see approach when the mental health and safety of Coloradans are clearly at risk. At CCHI, we support this bill because it begins to establish safety standards that any consumer, particularly those in vulnerable positions, should be able to expect from a service provider. Consumers deserve to know when they are talking to a machine.
HB 1263 ensures that AI services disclose their nature and, crucially, prohibits them from masquerading as professional mental health and financial advisors. This prevents vulnerable individuals, particularly young people, from relying on an algorithm for medical advice, particularly an algorithm designed to increase engagement regardless of whether it causes harm. This bill ensures that users are directed to legitimate crisis services and mental health providers. This bill implements common-sense safety standards for conversational AI platforms. It requires chatbots to regularly disclose to users that they are interacting with artificial intelligence. It adopts protocols to refer users to legitimate mental health and crisis services if a user indicates ideation or intent to self-harm. For users under 18, it allows more robust parental controls on platforms. HB 1263 is an important step toward creating necessary guardrails for AI platforms. It also requires AI operators to report their safety methods to the Office of the Attorney General, creating greater transparency and protections for some of our most vulnerable residents seeking help. I ask for your yes vote on HB 1263. Thank you so much. We're going to go next to Ms. Hannah Elman. You have three minutes to give us your testimony. Good evening, Madam Chair and members of the committee. My name is Hannah Elman, and I am a policy intern at the Colorado Coalition Against Sexual Assault, a statewide coalition dedicated to preventing sexual violence and supporting survivors. I am here today on behalf of CCASA in support of House Bill 1263. While there is still so much we don't know about the impacts of AI tools, what we do know is that there are few guardrails regulating how these systems interact with minors. And without these guardrails, we are knowingly placing youth at risk. As you've already heard today, AI chatbots effectively simulate human relationships and can expose youth to harmful content and provide dangerous responses during particularly vulnerable moments. These harms are not theoretical. In one recent study, researchers spent approximately 50 hours interacting with AI bots using accounts explicitly registered to children and logged an average of one harmful interaction every five minutes, with sexual exploitation and grooming behaviors being the most common. Bots generated graphic sexual content, normalized sexual relationships between adults and minors, and engaged in grooming behaviors such as excessive flattery, encouraging secrecy from parents, and suggesting conversations move to private spaces to avoid moderation and censorship. Exposure to this kind of content can hinder a young person's understanding of healthy relationships and boundaries, making them more vulnerable to manipulation and further exploitation. And we are already witnessing the devastating impacts as we hear of more cases of children facing sexual exploitation, excuse me, following interactions with these platforms. HB 1263 establishes foundational safeguards. The bill requires AI services to clearly disclose that users are interacting with AI, limit addictive engagement tactics, prevent sexually explicit interactions with minors, and add enhanced protections such as crisis resource referrals and parental privacy controls. Passing HB 1263 is an important step. It lays the groundwork to protect young people from exploitation, prevent harm before it happens, and ensure that innovation does not come at the expense of Coloradans' safety, especially our youth.
Immediate regulation cannot wait, and CCASA respectfully requests a yes vote on House Bill 1263. Thank you for your time today. Thank you, Ms. Elman. Please stand by for questions. We're going to go to Jay Jasima. I may be mispronouncing your name, so please correct me. That was perfect. Thank you so much. Chair and members of the committee, my name is Jay Jasima, and I'm testifying in support of House Bill 1263. I'm a co-founder of the Transparency Coalition. We're an independent, nonpartisan nonprofit founded by former tech entrepreneurs that advocates for increased transparency and accountability in generative AI. I have about 30 years of tech industry experience as a CEO and executive, and I have a PhD from the University of Washington. I'm also an affiliate faculty member there. In this session, our organization is working with roughly 25 states where we're trying to pass chatbot legislation, and obviously Colorado's HB 1263 is one. Last year, we were closely involved in the passage of SB 243, which has been mentioned both positively and negatively today. We were also closely involved in the passage of Washington's House Bill 2025, recently signed into law, as well as Oregon's chatbot law, which is just about to be signed by the Oregon governor. We wanted to draw attention to a few provisions of this bill. One is the transparency and disclosure provisions, which are very consistent with other laws being passed in this space. Another is the prohibitions on emotional manipulation of minors. The provisions in this bill build on a large body of research on the destructive attachment-building mechanisms used by chatbot developers to keep minors and others coming back and staying on, and the prohibitions in this bill are entirely in line with those best practices. The requirement that chatbot developers implement a protocol for detecting suicidal ideation is also an important addition. Of course, I was struck by the lengthy discussion about reasonable measures, and I'm not a legal expert, but I wanted to touch on that phrase. This type of language appears in the Washington law as signed this week, and in Oregon's as well. We obtained advice from the attorney general as well as from the leading plaintiff's law firm in the chatbot space, and we received advice that this was acceptable. As a technical expert, I can mention that there are safety standards put together by both NIST and the European Union that are detailed and commonly understood by the expert community, and they constitute our understanding of what a reasonable measure is. These are what judges and juries can use to evaluate whether a company took the steps required of it in this bill. We are, of course, also working with the sponsors to continue to amend this bill, and we are concerned about the definition of emotional recognition algorithms, but I'm confident that we can work productively to eliminate any loopholes that result from it. One last thing I want to touch on is the importance of retaining a private right of action. I used to work at both Amazon and Microsoft, and I can tell you that discussions around legal liability are a really important gatekeeper for whether features are built or not. So the kinds of incentives that are built into this bill will modify company behavior, and I request that you please support this bill with the private right of action. If laws in California and Washington are already... Mr.
Jasima, I'll have to ask you to pause there so we can go to the next witness. Thank you. Please stand by for questions. Mr. Hipp, you are up. Thank you, Chair Ricks and members of the committee. My name is Daniel Hipp, and I'm the Senior Research Coordinator for Children and Screens: Institute of Digital Media and Child Development. Children and Screens is a 501(c)(3) nonprofit helping children live healthy lives in a digital world. We equip parents, educators, and policymakers with the science, knowledge, and confidence they need to act in children's best interests. My role at Children and Screens focuses on AI safety and child design, issues directly related to the scope of 1263. I have a PhD in cognitive and brain sciences from the infant and child studies lab at Binghamton University, and I'm also a Colorado resident raising two kids in Arvada. For me, in other words, this issue elicits both personal and professional concerns. Childhood and adolescence are developmental periods of deep vulnerability. Brain, cognitive, and emotional development peak during this period to an extent not seen outside of infancy. Young people are exceptionally capable of learning and uniquely sensitive to shifts in their environments. As a result, we must be careful to provide digital environments that support their development. AI chatbots are changing these environments for youth of all ages. Approximately 30% of children aged 0 to 8 have used AI for learning, and approximately 64% of adolescents report using AI chatbots, with 30% using them for social chatting. The frictions experienced during in-person socializing can be challenging, especially during childhood, but they are necessary for healthy development. AI chatbot simulations are not developmentally appropriate replacements for these interactions. These systems threaten to displace and interfere with the necessarily messy and complicated interactions of childhood. Even adults have proven susceptible to severe mental health harms from unhealthy chatbot interactions, often termed AI psychosis, among other terms, and experts are already strategizing how to deal with this. Regulations and safeguards could have mitigated social media harms early on if a more proactive stance had been adopted. AI chatbots risk similar or worse harms, and we cannot risk duplicating this reactive posture. LLMs combine more sophisticated machine intelligence with a social interface, able to capitalize on the social vulnerability of young people, and groups like the American Psychological Association are already issuing guidelines for parents. As standalone chatbots evolve and as chatbots are incorporated into other services, AI-related harms are set to occur at a much larger scale. It is imperative that legislators act now to build regulatory safeguards for minors against these harms. Thank you. Thank you, Mr. Hipp. I'm going to move back to the room. We have a young witness here. Did you have something that you wanted to say to the committee? Yeah. All right. Good afternoon, members of the House committee. I'm Pablo Perel. This issue is important to me because I don't like that AI is eavesdropping and gives users fake information. The risk is that everyone will get fake answers or their information might be shared with others all over the world. I believe that the following rules should be created to keep everyone's space safe. 1. You can't be on a screen for more than an hour if you're a child. 2. AI can't claim to be human. 3. AI can't give fake answers. 4.
You can't use AI to do inappropriate things. Please consider voting to make AI safer and smarter for everyone. Thank you for listening. Smiley face. Thank you so much for your testimony. So, committee members, questions for our witnesses online and in the chamber? Any questions? Okay. Representative Brooks. Sorry, I forgot your name; my apologies. Alexis. Yes, thank you. Chair, thank you. It seemed like you had some insight as to the reasons why this is constructed the way it is. In the first couple of panels, I heard a lot about what the bill does not do. I was hoping that you could perhaps talk about why it doesn't do what some would like it to do, why it's where it's at, what it does do, and, if this is a starting point, what this provides for the future. Ms. Altop. Sure, thank you for your question, Representative Brooks. Many parts of this bill are based on California's and New York's bills, and provisions like the three-hour threshold were taken from those. So that three-hour benchmark after the first disclosure that a chatbot is not a human, that appears in almost every other bill in the country. As the bill started, it was based on the two bills that had passed, and as other states have been considering their bills, we've been able to pull in the strongest parts of those bills to inform amendment recommendations for the sponsors. We've also taken a lot of feedback from stakeholders of all backgrounds and industries, consumer advocates, and health professionals to inform the changes to the definition of conversational AI. I do believe that the term emotional recognition algorithm, that was my best recommendation for what could be included to respond to feedback that the current definition was so expansive that it would incorporate AI chatbots that we had not even thought to exclude at this point. We've already seen another bill, in Georgia, that might provide better wording for this. So our goal is to ensure that this bill addresses not just companion chatbots, but all generative AI chatbots that can cause this harm, that can sexually exploit minors, engage them in inappropriate conduct, or tell adults that they are human rather than AI, which is really one of the things that we're seeing with AI psychosis. So that is why the definition is the way it is, and that's why the exclusions are how they are. Thank you, Ms. Altop. Is there a follow-up, Rep. Brooks? Are you good? Okay. Is there another question from the committee? No, we don't do comments. If you want to phrase it in the form of a question, you may ask the question.
Thank you, Madam Chair.
I'm sorry I didn't get his name at the end. What is your name, young man?
Pablo Perel.
Pablo?
Pablo Perel.
I just want to thank you for coming. And I'm glad you shared your testimony. And can you tell me what grade you're in?
Third grade.
Third grade. Thank you so much. You're very courageous for coming here and sharing your testimony. I would not have done that when I was in the third grade. And you did an excellent job, so thank you for that, and thank you so much for your comments, because I listened very closely to what you had to say. Ms. Alta, did you have something
to add? Yes, I do. Pablo is one of a number of fourth graders that submitted written testimony in support of this bill, so if you want to look at the written testimony, you can see all the other remarks from fourth graders and other youth advocates, as well as health professionals. Thank you so much. Oh, and one more from
Ms. Border
Oh, Dora. Yes. And just to add to that, the topic of AI safety was chosen by the fourth graders as what they wanted to address. So it was their impetus, not the teachers'.
Excellent. Thank you all for coming. We're going to call up our next panel at this time, and that will be our last panel. So anybody who is for the bill, this is for people who are neutral or for the bill. And on our list, I see that we have, who's left? Verna? Verna Roland. Are you around or are you here still? Okay, there you are. You're in the house. And Nicholas Jones. Anybody else who is neutral or in support of the bill? I don't see anybody else, so this will be the last call. You have three minutes to give us your testimony. Please tell us who you're representing.
Thank you. Good afternoon, Madam Chair and members of the committee. My name is Christina Walker. I'll be reading Verna Roland's testimony today, as she was unable to make it due to illness. My name is Verna Roland. I am a parent, an educator, and someone who uses AI tools regularly in my professional and personal life. Thank you for allowing me to testify today in support of House Bill 1263. As a teacher and a school leader, I see firsthand how powerful these tools can be. AI helps me automate routine work, organize information, and save time so I can focus on what matters most, which is supporting students and families. Used responsibly, AI can be an incredible tool for enhancing learning, expanding creativity, and problem solving. Young people are already turning to AI to ask questions about school, relationships, stress, and life situations that they may not feel comfortable discussing with an adult yet. In those moments, AI can sometimes act as a bridge by providing information and encouraging users to seek help from trusted adults or real-world resources. However, this is also where the risk becomes very real. Children and teenagers are still developing emotionally and cognitively, and AI systems are designed to be highly engaging and persuasive. Without strong guardrails, young people may rely on AI systems for emotional support, guidance, or decision-making in ways that are not healthy or appropriate. For that reason, I believe this bill is an important step forward in addressing the mental health and developmental impacts of AI use by young people. Strong protections should ensure that AI systems do not encourage emotional reliance, position themselves as trusted companions for young users, or quietly become a primary source of advice for children navigating complex life situations. Additionally, AI systems should never generate sexually explicit content involving minors. There should be protections preventing AI from generating explicit imagery using a real person's likeness without their consent. These types of safeguards are essential to protect the safety, dignity, and privacy of young people. AI is not going away. Because of that reality, strong and thoughtful safeguards are essential. House Bill 1263 is an important start, and I encourage the committee to support the bill to better address mental health protections and associated risks while ensuring consent and mitigating inappropriate exploitation. Verna also shared some of her students' written comments, so I'll read one today. Hello, I'm Callum, a fourth grade student. AI is affecting our whole life, and even though there are advantages, those are outweighed by the disadvantages, which I will talk about now. People have been harmed by AI. It tells them false information or tells them it's a human, which can turn disastrous. AI should help if someone types something disturbing. It should give you a number for a therapist or doctor, and it should tell you it's AI and not lie to you. I believe that we should pass a law to make AI safer for everyone, not just kids. Thank you for your time. Thank you so much.
Yeah, please give us your testimony as well.
Good afternoon, Madam Chair, Mr. Vice Chair, and members of the House Business Affairs and Labor Committee. Thank you for allowing me to testify in support of House Bill 1263. My name is Elise Kong, and I'm a 15-year-old high school sophomore at Denver School of the Arts. Today, I'm taking time out of school to talk to you about conversational artificial intelligence service operator requirements. This bill is significant to me because its regulations on AI protect the lives of my peers and the lives of future children. The most common reason young people are using AI chatbots is academic stress. They do all the work for you, which makes them the easiest solution to turn to. These chatbots are not only companion chatbots or mental health chatbots, but they are often general chatbots like ChatGPT or Gemini. The problem, though, is that even general chatbots blur the line between artificial and real conversations and are designed to sound human. In a scenario where a young person becomes reliant on AI for schoolwork, it is likely they may not have a teacher or parent who they can connect with. If a young person has minimal support for schoolwork, then most likely they lack emotional support as well. And AI chatbots are always there without judgment, which makes them easy to turn to. And if we are to discuss the feasibility of implementing AI regulations, I would argue that not only would it be possible, but it would also be crucial in protecting youth mental health. Currently, schools heavily track mentions of self-harm or suicidal messages typed into school computers. These messages, even when typed into Google, are taken so seriously that a school counselor would immediately go to ensure that the student who entered them is safe. But Google or other accessible platforms don't reaffirm dangerous thoughts the same way AI has the capability to do. By allowing AI to be unregulated, we are worsening our ability to protect the lives of young people here in Colorado. For these reasons, I urge you to vote yes on House Bill 1263, because if my school computer can have safeguards, the same should be applied to artificial intelligence. Thank you for your time and consideration, and I'm happy to answer questions you may have.
Thank you, Ms. Kong. So we have two witnesses. Committee members, questions? Okay, I don't see any. Thank you all for coming today, and I just really want to thank all the witnesses for coming. It was some very difficult testimony for us to listen to. I'm going to call the bill sponsors back up at this time. Testimony is closed. Bill sponsors, are there any amendments? Rep. Mabry? Please tell us about your amendment. You can move your amendment and tell us about your amendment.
Thank you, Madam Chair. I move L1.
Second.
It's been moved and seconded by Vice Chair Camacho. Please explain your amendment.
Okay, so with L1, we are incorporating feedback that we got from some consumer advocates, businesses, and definitions that were used in other state-level legislation. L1 updates the definition of conversational AI and what is considered and not considered to be conversational AI. It clarifies a number of exclusions. In particular, one thing I want to highlight is there was a loophole in the bill that excluded Meta. Meta AI could have been interpreted, in the original language of the bill, as being excluded, because the bill excluded a secondary feature on a software or website that does something different. We did not want to exclude that. So we provided clarifying language there by striking the exclusion that it is a feature within another software application, web interface, or computer program. And we added additional exclusions from the definition so that tools for business operators, patient care, and customer service are not included in the definition. This updated list also clarifies that video game dialogue that is related to the video game, and theme park apps where you are at a theme park and that are related to theme park entertainment, are not covered. That's all. Yeah, that's L1.
Okay. Are there any questions about the amendment? Any objections to the amendment? Seeing none, the amendment is adopted. Other amendments?
Rep. Mabry.
Thank you, Madam Chair.
I move L2.
Second.
L2 has been moved and seconded by Vice Chair Camacho.
Please explain your amendment, Rep. Mabry.
Thank you, Madam Chair. This amendment adds the definition of emotional recognition algorithm. I will say that, based on feedback we got in this committee, I'm willing to continue conversations on whether this is the appropriate scope of what the definition should be. I don't think you have ever heard either me or Rep. Camacho say any language in this bill is the Magna Carta. We are totally willing to continue conversations on this. We're including this language because it was included in some of the other states' legislation to clarify the definition. If it unintentionally creates a loophole, I think we're willing to talk about whether that needs to be adjusted. But this is what we have drafted today, and this amendment also includes the HIPAA language and defines account holder. I will also acknowledge that I want to explore how we can make this legislation apply to users that aren't account holders. There's definitely an appropriate way to do that. I agree with the testimony that we heard today that there are a lot of these apps that are accessible if you do not hold an account, or maybe somebody is using their older brother's account or an adult's account. I want to have conversations with stakeholders about how we address this problem for users. But the account holder language is important to define what the regulations are for account holders. I don't think either me or Rep. Camacho nor the stakeholders are closing the door on this legislation addressing users, but I did want to acknowledge that part of the conversation because it did come up in testimony. And then there's the addition of an age estimation provision, requiring that AI chatbot operators use commercially reasonable methods to estimate the age of users so that the minor-specific provisions of this bill can be applied. I will note here that these companies, as people have often heard me say, including on a bill earlier this morning, these companies know so much about us. They are spying on us. They have very sophisticated capabilities. And we believe that they do have the technology to know how old somebody who is using it is, and that's why we want to include that language. Vice Chair Camacho. Thank you, Madam Chair. And I'll just confirm everything that Rep. Mabry has said. The conversation is not closed. I have two kids, and I worry about how they're going to access these systems. So if we can make this bill better, absolutely. The intent is not to close out or to eliminate a use of these technologies and just totally forget about those. That's not the intent at all. What we are attempting to do with this legislation is difficult, because as quickly as AI is moving, legislation just simply doesn't move that fast. And we are trying our best. And I think it comes from a very genuine place that we are willing to accept amendments. We are willing to make this better and make it enforceable and make sure that you can comply with it and make sure that this could be a model across the country. So I just echo that with this amendment and some of the others we're going to be bringing as well.
Are there any questions?
Yes, Rep. Richardson. Yeah, just from your conversation, I'm just curious on this amendment: is this something that you feel has value to adopt today and revise later on second reading, or would it be better to shape a more appropriate amendment for the floor?
Vice Chair Camacho.
Thank you, Madam Chair. We think that this bill, again, we're taking steps with every one of these, and this bill has, or this amendment has really important factors for other things that we didn't want to leave out. I mean, if we have to come back and address it, we will. Like I said, what we're attempting to do is hard. It's not like changing your trash schedule. We are trying to outpace AI, and that will inevitably be a losing effort unless we get going. Thank you.
Rep. Mabry? Yeah, thank you, Madam Chair. I'll just recognize we can always bring an amendment to the committee report to address language when we have conversations with stakeholders. But there's definitely stuff that we know will be in the final bill, including the HIPAA exemption that's in this amendment.
Okay. Any other questions about this amendment? Any objections to the amendment? Seeing none, L2 is adopted. Rep. Mabry.
Camacho, I'm so sorry. I've never been mistaken for Rep Mabry before.
Both of you guys there.
I move Amendment L3.
Okay. The amendment has been moved and seconded by Rep Ryden.
Amendment L3 includes some disclosure requirements for minor account holders. It clarifies that disclosures must also be made in response to prompts about the chatbot's sentient and human nature in interactions. It also adds that chatbot operators should not encourage isolation from real-world relationships. It strikes the phrase as appropriate based on relevant risks, which could be seen as ambiguous language, to clarify that all AI chatbot operators must offer parental tools to manage minors' privacy and account settings. This amendment also clarifies disclosure requirements for all users and strengthens the suicide and self-harm protocol requirements, strengthening the requirements for AI chatbot operators to implement self-harm and suicide protocols by removing the language that they must take reasonable efforts to make referrals to crisis services, so that the provision says they must make referrals, since we see this as one of the areas where the most harm is occurring and believe this requirement needs to be mandated: mandated disclosures, not just reasonable efforts to make disclosures.
Okay. Are there any questions about this amendment? Any objections to it? Seeing none, L3 is adopted.
Next amendment, Rep. Mabry.
Thank you, Madam Chair.
I move L4.
Second.
The amendment has been moved and seconded by Rep. Camacho.
Please explain your amendment. Okay.
Thank you, Madam Chair. In moving and explaining this amendment, I do want to talk about some of the stuff that we heard from testimony, because the first lines of this amendment, lines one and two, align the language on the prohibition of false representations as a licensed health care professional, licensed legal professional, or licensed accounting professional with a bill that Rep. Ryden and I are running, and this was in consultation with the AG's office. That knowing and recklessly language that we're adding in here has nothing to do with the broader protections in this bill that are about protecting children from sexual grooming, protecting children from exploitation, the protections from self-harm. That knowing and reckless language is literally just about how representations are being made about whether an AI is working with, or is, a licensed healthcare professional, licensed legal professional, or licensed accounting professional. So that knowing and reckless conversation that we had earlier was not about the broader conversations in the bill, because that language is only added there. And we add AG rulemaking authority for the AG to determine the efficacy and reliability of the evidence-based self-harm and suicide protocols used by the AI chatbots. And then, critically, I wanted to talk about the liability piece, because we heard a lot about this bill or the amendment somehow creating a liability shield. So we struck the liability language that was in the bill, and we've replaced it with liability language that says nothing in this section creates liability for a developer of an underlying AI model where a violation is caused by somebody else. The reason why I want to highlight that language, I want to point folks to the first sentence: nothing in this section creates liability. That does not say this section removes liability. And the reason why I want to talk about that is we've heard conversation about a jury verdict that came from California for hundreds of millions of dollars. And I strongly agree with that jury verdict. But that jury verdict was about design defects, failure to warn, negligence, other duties that already exist under the law that, I think it's arguable, these companies are violating when they're endangering our kids, probably fraudulent concealment and misrepresentation and wrongful death. There are endless potential tort claims that exist under Colorado law that parents and impacted people could bring against these AI companies. And I hope we do see more lawsuits about that. But I wanted to highlight that because this does not remove any of those existing avenues for liability. We view this bill as adding more protections. And so I know I talked about that for a while, but it was talked about in depth in testimony, so I wanted to highlight that.
Can we ask for a yes, Madam Chair? Any questions regarding this amendment? Any objections to the amendment? Seeing none, L4 is adopted. Any other amendments, bill sponsors? Amendments from the committee? Seeing none, the amendment phase is closed. Wrap-up. Who wants to go first? Rep. Camacho. Thank you, Madam Chair, and thank you, members of the committee. I know it's getting later in the day, and we heard a lot of really, really tough testimony.
I just want to point out, I'm co-sponsoring this bill with Representative Mabry, and we have fierce debates on all types of issues, and sometimes we don't see the issues the same. But on this, there is no daylight between us about where we want to go with this bill, what we are attempting to do, the protections that we need for our kids. I think the through line through all this testimony was we need to act. The through line through this testimony was we missed that opportunity when social media came around. And I don't think anybody in this hearing was here for that. But if we could do a do-over, I think we would have acted faster before social media became what it was, before it went too far. And that's what we are attempting to do here. We are on the forefront with other states trying to align our policies. And it's going to look messy, which is why we're bringing amendments. It's going to be hard, and we're willing, the two of us, to have those conversations with stakeholders and to continue to have those conversations because this is a moving target, and that is just the nature of AI. It's the nature of chatbots. It's the nature of what we are trying to accomplish. We simply cannot move as fast as technology, but we are trying because our kids need it. Our state needs it. This is an important first step. It is not the last step, nor will it be. I think you have a commitment from both of us to work on this space again next year because there's more we can do. But we have to get started. We cannot allow the perfect to be the enemy of the good. We cannot allow another year to go by without taking action, without laying the framework for how we can continue to provide the protections and the rules. And frankly, what our constituents have been demanding, especially mine. So for that, we ask for a yes vote, not because it's the last step, but because it is a necessary first step to continue this conversation.
Rep. Mabry. Thank you, Madam Chair. I first want to thank everyone who came out and testified. In particular, I want to thank people who came and testified about their personal experiences with this, experiencing loss as a result of the negligence of these companies. I don't think that anybody in this building thinks that I would be running legislation to let big tech companies off the hook. And I am fully committed to continuing to work on this policy to address concerns that we heard in this committee. If this structure isn't strong enough, I want to get specific language. In particular, I'll call out a few things that we heard. Account holders versus users. Definitely willing to include language that provides protections for users. Let's have those conversations before this goes to second reading. I did hear the feedback about the emotional recognition algorithm. We can have conversations about that too. But I think what is key here is, earlier this morning on the floor, Rep. Brolich was passing out V-pins to some of us. Those V-pins were to represent if you've ever run a bill that the governor has vetoed. I got three of those, and one of those was because I ran a bill that the governor felt regulated the tech industry too much. And I believe deeply that we cannot afford to wait. We cannot afford to wait for the wiser Michael Bennet to be our next governor. We do need to pass laws right now, and I want to go further. I would love to pass a law that perhaps Governor Polis would veto. If we this year can get to a place where the governor will sign something that provides reasonable protections, and then we pick up the work next year, that is still making progress. I'll also flag the reasonable measures language. Happy to have a conversation there too. I don't think that that's the most important part of this bill. I don't think that, as written, we need to keep it that way. And we can talk about what will work, as long as we're continuing to run a policy that provides meaningful protections that we think we can get done this year. And I'm getting distracted by my co-prime here. Will you tell me to stop talking?
Please stop talking.
Okay, okay. Well, anyways, we are proud to work on this policy and understand that it's not perfect. This is part of the process and we're going to keep working on it. Closing number two from
Rep. Camacho. Sorry, Madam Chair. I just need to explain why I was distracting my co-prime. I do want to credit that we heard in testimony that there were concerns about the length of notification. We're willing to have those conversations too. I didn't want that to be left out, because this is our process. We have a hearing. We listen. We evolve our legislation. That's the way it's supposed to work. So I wanted to acknowledge that as well and say thank you for your time.
Thank you so much. Now, bill sponsors, who wants to move the bill? You're going to the Cal.
Rep Camacho.
Thank you, Madam Chair.
I move House Bill 1263 as amended to the Cal with a favorable recommendation.
Okay. Okay.
The bill has been moved and seconded by Rep Mabry.
So, committee members, closing comments for our bill sponsors?
Rep Marshall.
Yeah, thank you, Madam Chair.
Thank you, Madam Chair. So, yeah, I just want to put on the record how great a job you've actually done. My first session I was here, I ran a bill all by myself that got through a lot of hate and discontent, got over to the Senate, had a Senate prime sponsor. They could have got it through the Senate, but I got a call late at night that said, for me to get this through, I'm going to have to do X, Y, and Z. I was so irate. I was like, that'll just make it a wet noodle. Just kill it, and we'll come back. I have regretted that ever since, because that first step would have been so valuable for the things I was trying to accomplish. So getting something in place is better than nothing, I really believe, and I believe you're on the right track. And we've got two people here who have taken very big bills right up to the edge and barely got them across, and have had them fail right at the goal line. So you two know where the line is. And I've been told not to trust anyone in this building, so I don't even trust myself, except for maybe 95%. But we've got two people here that are on my better side at 80 and 80, and in systems analysis that means we only have a 4% failure rate, so I definitely am behind this. So thanks.
Thank you, Madam Chair.
I want to thank everyone who came and testified, especially those who shared their personal stories. That is very hard, and that's what impacts movement in this building. And I understand that this is not the perfect bill, as we've heard, and in this space I have never seen it be the perfect bill. We removed Section 6 out of our AI bill on health care so Rep. Camacho and Rep. Ryden's chatbot bill could go in and take that, because it was, you know, not compatible, or it was better with that one. So that's just the way it kind of works here. But I also believe you're either part of the problem or you're part of the solution. And I always like to try to be part of the solution, as I believe you two are trying to do here. And I also believe that silence is complicity, so I don't think the answer is to do nothing. I absolutely believe we need to do something here. AI is moving like crazy, and as he mentioned about the two V-pins with the AI and stuff, well, yeah, here we are. So I'm going to be a yes today, and I believe what you say. I know you two, and that you will continue to work with everyone, and everyone that was in this room. So with that, I will be a yes today, because we have to do something. This is moving at a pace that we don't have any idea really how fast it's moving. So I'm a definite yes today, to continue the ball rolling and to continue working with the bill sponsors. There's got to be something put in place. It has to be the base. It has to be a step.
Thank you. Any other comments? Okay, Rep. Richardson.
Yeah, thank you, Madam Chair. I admit to being torn throughout this. We heard a lot. I absolutely believe that something needs to be done in this space. I think where this is going is right. I appreciate the comments about hearing what came out of committee witnesses and wanting to address those things. I'm also a little concerned that I know you came into committee knowing that there were things beyond the amendments you brought that needed to be changed. And I kind of wish we'd honestly laid over to keep working this in committee so we had something that was more fully baked, because I'm not tremendously confident in our second reading amendment process, because it really isn't thoughtful debate, unfortunately, in a lot of cases. And I think the co-prime kind of kicking you to stop might have been a good thing if it had stopped before talking about your intent to take it to the industry next year. So I think you're going to get this out of committee, but I am a no today for those reasons.
I just want to thank you guys for bringing the bill. I want to point out that of all the testimony, I didn't hear anybody saying you're going too far; they're saying you're not doing enough. So I appreciate you taking this first step and working on those things, and I do think the first step is important, so I will be a yes today. Thank you, Madam Chair.
So from my experience on the floor, which is where this would be going, I have seen many times, time and time again, the lack of willingness to cooperate on amendments with our side of the aisle, as they say. We've brought many amendments before on other bills that are never even listened to, and never even cooperated with, and never even stakeholdered with our side. So my confidence that this would be stakeholdered, or that amendments from us would even be taken, I have a great lack of confidence in that. We heard from parents, a parent tonight who is part of a 60 Minutes documentary on television about a tragedy in her life, who so desperately would love to see this bill do what you think it's going to do, but then turned around and said this bill not only will not do what you think it's going to do, it's actually going to cause harm. I don't see how it's going to be enforceable. This is my realm. I understand it. And I've gone through every corner of my head to think of, okay, there's this, there's that. I've seen this happen. I've seen that happen. I just don't. I can't get there on how it's going to be enforced to the extent that it needs to be. Do I believe that AI is dangerous? Absolutely. Do I believe something needs to be done? A hundred percent. But the lack of understanding of how technology actually works is evident to me in this bill. And I cannot confidently today give you a yes answer, because, honestly, I'm not confident in the bill enough to even give that. If you want to work with us, show us that you want to work with us. Show us that you'll accept some amendments, so that we can have confidence that we're going to be part of the process, part of the problem solving, and then maybe I could get you a yes. But today I cannot.
Rep. Brooks.
Thank you, Chair. We all are in. Might I just first say, I have never seen more attractive lapel pin jewelry. I hope you continue to rack those up; it is just fantastic, spectacular, eye-catching. Y'all are in a tough spot, almost impossible, because if you go too far, you're automatically going to lose, you know, anybody on our side, right, because, gosh, you're trying to handcuff the industry too much, to too much of a degree, your down-the-road intent aside. You're in a spot where you're hearing from folks that you're not doing enough, and you're trying to navigate a space that has probably changed from the time that we even began this discussion today. So how do you legislate to that? But I appreciate the work that you're doing here, because there's a lot of risk, and we've got to figure out how to protect kids from all sorts of different angles. So I wanted to make sure to let you know, because I don't want to throw anybody off, because sometimes when I vote one way, I've thrown some other folks in this committee off, because they think they got it, and it must be leading them astray. So just a heads up, so they don't vote no on your own bill. I'll be a yes today.
Any other comments? Okay.
So I just want to thank the sponsors for this bill. I know that you guys started it much earlier on, and you tried to keep the scope narrow and not be as broad. There are so many different moving parts with this and so many parties to please. I want to thank the witnesses that came out with very tough testimony, telling their personal stories. But I do believe that Colorado needs to act, and I think this is a good first step. I am going to be a yes, and I thank you for wanting to work on those amendments. The users, that's the key piece there, I think, and some of these other concerns that people have. But keep working on it, and I think we can get it to an even better place. So with that, Ms. Haroja, please call the roll.
Representative Brooks.
Yes.
Gonzalez.
Rep. Gonzalez. Can you hear me?
Was that a yes or no? Hold on.
Can you hear me?
Yes.
Okay, I am a yes.
Kelty.
No.
Leader.
Yes.
Lindsey.
Yes.
Mabry.
Yes.
Marshall.
Yes.
Morrow.
Yes.
Richardson.
No.
Ryden.
Yes.
Sucla.
No.
Camacho.
Yes.
Madam Chair.
Yes.
Your bill passes 10 to 3. You're on your way to the Cal. With that, Business Affairs and Labor is adjourned. Thank you.