March 17, 2026 · Privacy and Consumer Protection · 36,233 words · 21 speakers · 1000 segments
My mic's on. Good afternoon. We're going to call this hearing of the Privacy and Consumer Protection Committee to order. This is an informational hearing that we are hosting today on online safety controls. And I want to start by thanking all of our panelists for attending and participating in our hearing today. And of course I want to thank the Privacy Committee staff, which is, I believe, one of the best in the building, as well as the Rules Committee Sergeants' office and other support staff for helping to organize this hearing. This hearing was really born out of my experience serving on this committee for now seven years.
We'll begin with the lived experience of parents navigating social media with their children, who tell us that the systems are broken for their kids and aren't working. And then we kept hearing from industry that the answer is parental controls. And so I decided that we needed to put these two perspectives together and have a real moment to talk about parental controls: how they work, what they are, what they're doing. Do people know they exist? Are people using them? How and why? And how can they fail children? And how do we navigate a path forward online that is safer for California's children? I want to really acknowledge all of the speakers who are here today to have that conversation with us. As I said, we're going to start with the Hinkses, who will share what it means to be parents navigating this world. We're going to hear from folks representing children.
It's a voice that often isn't loud enough in this building. But we have two incredible organizations, Common Sense Media and Children Now, who will be here to speak on behalf of California's children. I also owe a huge debt of gratitude to the four companies, California companies I will say, Google, Meta, OpenAI and Roblox, who all agreed to be here. Often we have to go a second round to get people to participate, but every single one of the companies that we invited in the first round agreed to come and have a conversation with us about parental controls. I really want to express my gratitude.
I think this is an important conversation. Thank you for being here to have it with us.
Then lastly, we'll have a panel to discuss potential solutions with some experts that research this space every day. And I think this conversation will hopefully help us navigate a path forward in the social media space that is informed by what is happening online, the realities of these products, and the experts' research. Because I know that all of us
hopefully are committed to a safer online future for California's children.
I will say that I'm entering this conversation personally with a fundamental question in my mind: if we know that, in some cases, these online spaces are designed to be addictive and to keep our children engaged, can any amount of time be safe for them?
And so that's where I come at this. But I also think that it is the reality that our kids are growing up with. And so we need to figure out, you know, what is the way for California to create the safest spaces for our children. So with that, I want to turn it over to my colleagues.
If they have any opening remarks. Assemblymember Lowenthal or Wicks?
I'll be very brief.
First of all, I just want to thank our chair. This committee has led with moral clarity in a way nowhere else in the United States has, actually, including our federal government. And I am grateful as a father. Thank you, Madam Chair, for today and every day that we do this work. And I also want to thank everybody that has come here. I believe we are all a community together. All of us ultimately want the same things: healthy consumers, robust businesses, and a future in which this generation surpasses the generation before it, and so forth. And so I look forward to having individual relationships with each and every one of you. And I know that everybody on this committee feels this way. And it's just a joy that you showed up today. So thank you so much.
Thank you, Madam Chair, for pulling together this hearing and for your leadership in this space. And I also want to thank my colleague here from Long Beach, who's been a tremendous leader as well. I've been working in this space since day one when I got to the Legislature, and, oh my, has technology changed in those almost eight years. I've done a number of bills, many of which have ended up being challenged in the courts, and I continue every single year to figure out how we keep our children safe. One thing I'm inspired by, honestly, is the fact that you have lawmakers who are first and foremost parents before they're Democrats or Republicans. We have a bipartisan group of parent lawmakers who are just trying to figure out how to keep our kids safe. That is the goal, and we welcome industry in that conversation and in being part of the solution to that problem. We love our tech companies; they're a big part of our economic engine in California, and they need to make sure that our children are safe. I think we can have all of those things, and I obviously appreciate the expertise and the diverse points of view of the children's advocates who are part of this conversation as well. I also know that often what we do in California leads not only the nation but the globe. We have regular conversations with our counterparts in the European Union and in the UK and other places as well. We're looking at what other countries are doing, modeling work from them, and learning from some of their lessons. But I think we all stand here committed to our number one job. I've always said the most important thing we need to do is keep our community safe, and from my perspective, most specifically our children. That is my goal, my mission, in an incredibly complex, constantly evolving technological space. We also want to create legislation that can be implemented, that is doable. And that's why I always welcome conversation with the opposition. I genuinely love conversation with the opposition, because you learn more about what you're trying to do in that context. And I think you get better policy when you are really in the weeds trying to figure out how to adhere to these guardrails, but in a way that can be implemented. So with that, excited to be here
and thanks for your leadership.
Thank you, Assemblymember. With that, we will start our first panel. As I mentioned, our opening remarks will be from Victoria and Paul Hinks, who are advocates for social media safety. So if you want to come up. And as you get comfortable, I just want to express our gratitude for you being here. I think you provide a really critical, humanizing voice to any conversation around social media.
So thank you for being here. Thank you.
Thank you so much for having us. Good afternoon. My name is Victoria Hinks, and I'm a survivor parent; we lost our daughter, Alexandra Hinks. Everyone knew her as Owl. Forever 16. We lost her to suicide 587 days ago. She was a beautiful girl, inside and out. She was kind. She was a cross country runner. She wanted to be a preschool teacher one day and have a family. This is a loss that has so profoundly changed our family's life, and it's left me living with severe PTSD. Since her death, I've dedicated my life to speaking out about the ways that social media can impact vulnerable young people and families. And I share this story so that other families will not have to endure the horrible tragedy that happened to our family, and so that hopefully it can help bring awareness, accountability, and stronger protections for others, so no other family has to go through what we went through.
We didn't solve car deaths with parental controls. We fixed the product itself by implementing mandatory seatbelt laws. Owl would be graduating from high school, from Redwood High School in Marin County, in June. And while her friends are all eagerly awaiting their college acceptance letters, we've been eagerly awaiting her headstone finally being put up. And I brought pictures of that for you all today. These so-called parental controls never worked. She found a way around them, and we never really stood a chance. And this is why the work that you all are doing is so important to us. Because this could be anyone's child; it doesn't discriminate, Republican or Democrat. She had a bright future ahead of her. The grief that we live with is the most painful thing ever. And it could be anyone's child. We thought this was something that could never happen to us. So thank you so much for having us here.
Thank you, Victoria.
Good afternoon. My name is Paul Hinks. I'm Victoria's husband, Alexandra's father. I have been a software engineer in Silicon Valley and San Francisco for over 30 years. We as a family considered ourselves to be tech savvy, and our children grew up in a house full of gadgets: video games, smart TVs, speakers. Our house is pretty much controlled by an app. We never let our children have a TV in their rooms. We always discouraged prolonged tech use, and we didn't allow devices at the dinner table or out in public. And we held off getting them phones until much later than their peers. Her older sister had already been through this successfully; we weren't starting from scratch. When we finally gave in to the inevitable and bought 13-year-old Alexandra an iPhone, we thought we did everything right. We researched the dangers. We made her sign a contract acknowledging that the phone was our property, that we controlled it, and that we could take it away from her at any time. No phones at night. She happily agreed to show us what was happening on it and to keep track of her own usage. We set up screen time limits, age-appropriate content restrictions, and a firm 9 p.m. curfew, after which the phone could only be used to play music or call her family. We felt prepared. This was an Apple device. They make great devices that just work. So what could go wrong?

We weren't naive parents stumbling into this blindly. We thought we genuinely understood the dangers our daughter was being exposed to. We'd attended meetings at school where online bullying was discussed, and the contagion of self-harm and eating disorders among teenagers. We took it seriously, but the threat felt manageable, local. Even her school friends talking among themselves: the kind of thing that could be sorted out with a phone call to another parent. After all, we had all the parental controls on. We had devices on our network that were supposed to filter out dangerous websites. No random stranger from across the world would affect our child. What we didn't realize was that the dangers were coming from inside some of the apps that Apple told us were trusted. Initially, we did not allow social media at all. Slowly we added apps as our daughter grew older and wanted them to keep in touch with friends. Each app had its own parental controls, and we set them up to keep her as safe as possible. But again, surely the app manufacturers had their customers' best interests at heart. Surely they would not allow dangerous content to reach the screen of a teenager.

What had worked at 13 did not work at 15. Our daughter began obsessing over her phone. She seemed very fragile and upset all the time. We didn't know the cause; there were probably many. She was transitioning from middle school to a high school that none of her friends were attending. Her older sister, who she was very close to, had left for college. She was desperate to make friends, and some of the people she chose were not great people. She felt isolated and turned more and more to social media for companionship. We were aware of this, but it wasn't a major concern at the time. Surely social media's major benefit was to keep her in touch with friends from her old school. And we had all the controls and limitations turned on. Surely nothing bad could be going on. She was happy to show us the apps when we asked, but she had ways of hiding things she did not want us to see. When we finally accepted that something was seriously wrong, we had lots of fights.

We began to suspect that the phone was causing her problems. We restricted her use of social media to one hour a day, not realizing that these restrictions were broken: she could simply tap to ask for more time and stay on the phone as long as she wanted. We took the phone away from her for days, weeks at a time. That helped. She would apologize and ask for the phone back so she could keep in touch with friends. Her therapist told us that taking the phone away was actually harmful and isolating, and that letting her use it, even to listen to music, would help. So we agreed to this. Who wants to totally isolate their teenager from their friends?

These devices can be made safe. Consider what happens when a company issues a device to an employee. There is an IT department. There are policies. There are people whose job it is to ensure that that device is configured correctly, that dangerous content cannot reach it, and that someone is accountable if it does. The company has legal obligations. The device manufacturer has contractual obligations. The chain of responsibility is clear. But when a parent buys a device for their child, there is no IT department. There are settings buried in menus that most people cannot find, and that are ambiguous as to their effect, connected to restrictions that can be bypassed with a tap. There are app manufacturers shielded from liability by law, and a platform company that takes no responsibility for what is displayed on its screens. The chain of responsibility leads nowhere. Nobody is accountable. The companies don't care. They will happily feed a 15-year-old girl content about self-harm if that will keep her engaged and scrolling for longer. And the people paying the price are children. Our daughter was presented with content that painted suicide as a rational and reasonable way to deal with her problems. Eventually she was able to use social media to find the best way to kill herself. Thank you.
Thank you both so much for being here. I know this cannot be easy, but your advocacy absolutely makes us better.
Thank you so much for having us. Thank you.
We'll now move to the first panel, which is an overview of the types of parental controls, the challenges, and the reasons for failure.
We have Sunny Liu, Director of the Stanford Social Media Lab; Lashawn Francis, policy analyst and advocate for Children Now; and Monica... how do you pronounce her last name?
Buffon.
Did I get that right?
Yeah.
...who is a PhD and the founder and CEO of Clara, Clear AI Risk Assurance. They will be opening our first panel, and then we'll take questions after they finish.
Victoria, thank you so much for sharing your story and for having the courage to be here. As a mom myself, I started to research online harms because of tragedies like this. Like so many parents, we simply want to protect our children. Madam Chair and committee members, today I will share our research at the Stanford Social Media Lab on the challenges parents face in digital parenting. The views presented here are my own and should not be interpreted to represent the views of the university. So I'll start my presentation.
We asked about 500 parents and kids across the United States. I know the next panelists will talk about children's perspectives, so I want to briefly highlight the key findings here. We asked kids aged 10 to 18 what they wish their parents knew about their social media use and online world. The answers were mostly: trust them more, give them clearer guidelines, and have clear expectations. When we asked parents what they are most concerned about in their children's online experiences, the top concerns were excessive use, harms and risks, privacy, and the impact on mental and social well-being. If we look at all those different perspectives, we can see both alignment and misalignment. Children and parents are aligned on the goal: they want a safe and healthy online world. The misalignment centers on the approach: how to set boundaries, and what the right way is to control that.
So what are parents doing now? What are the ways they try to prevent harms and protect their children online? What they do is use parental controls. So what are parental controls? Parental controls are the tools and features parents use to manage their kids' digital access: screen time limits, content filters, app limits. For example, Apple's Family Sharing, Google's Family Link, and third-party apps like Bark, Qustodio, and Net Nanny. Those tools definitely exist. But we're still here today talking about how to protect our children and reduce harms. Clearly those tools are not sufficient to prevent the harms we're discussing today.
I want to share our lab's research on the core challenges parents face in really protecting their children online. The first is that digital parenting is challenging. The second is that tech is complicated. Third, there are constraints on parental controls. And last, there are accessibility and equity gaps.
Digital parenting is challenging. A Pew report suggests that two-thirds of parents today think that parenting is harder than it was 20 years ago because of technologies like social media and smartphones. Digital parenting is just one part of parenting, and parenting is challenging. Here's what one mom shared with us: there is a pressure to be everything, everywhere, all at once for your children. The sense of constantly needing to do more, to be around more, to be more of this and more of that, within an environment that doesn't really support parents online, makes it even more challenging. Parents have to constantly understand and navigate this complicated online safety world. Here's what one parent shared with us: "It's a struggle to make sure my child doesn't see inappropriate content, images or pornography, to know who he interacts with, and to make sure he's not bullied." In our research at the lab, we identified 22 types of harms young people can encounter online, from cyberbullying to sextortion to harmful content, online hate, and algorithmic risks. Parents have to constantly navigate those evolving technologies and evolving harms. And third, there is a knowledge gap. Kids know these technologies better than their parents. Parents always feel that they are one step, or even ten steps, behind their kids on what's happening online. Those three points make digital parenting really challenging.
And tech is complicated. There are so many different platforms, features, interfaces, and products. Parents have to constantly navigate all those different settings, and as soon as they figure them out, there are updates and they have to relearn everything again. Settings at the device level usually don't work at the app level, and app-level settings will not work at the device level. I have a 16-year-old daughter who loves to use Instagram. I deleted it from her phone, and now she uses it on her laptop, which might be even riskier, and might interfere with her studies and her life even more.
The third point is that there are constraints on parental controls. Kids circumvent: they find all the different ways to bypass parental controls. Some parents cut the Wi-Fi at midnight; then the kids go to a neighbor's house for a connection. And protections and controls can backfire as well. Overly controlling or too-restrictive approaches can sometimes erode parent-child cohesion, erode trust, and increase conflict in families. Screen time is the number one conflict in families now. Lastly, I want to highlight accessibility and equity gaps.
Not all parents have the time and energy to constantly moderate. We have single-parent families, families where parents work multiple jobs, and caregivers like grandparents and older siblings. They don't have the time and energy to constantly monitor. Tools do exist, but not every family can afford those tools; third-party apps, from Qustodio to Bark, cost from $10 to $40 per month. Our research shows that current parental controls don't work, for four main reasons: digital parenting is challenging, tech is complicated, there are constraints on parental controls, and there are accessibility and equity gaps.
As we can see, it's really a complicated issue, and the stakes are so high. For this reason, I'm so glad that the committee takes this seriously and has brought such a wide range of stakeholders here. I hope that my research will help frame the discussion. I look forward to hearing from the other panelists and witnesses, and I look forward to your questions in the discussion.
Thank you so much. Ms. Francis.
Thank you so much, Madam Chair and Members. My name is Lashawn Francis, and I'm with Children Now. We are a statewide research, policy, and advocacy organization focused on the whole child. Our organization also leads the Children's Movement, a California network of more than 6,000 direct service, parent, youth, civil rights, faith-based, and community groups dedicated to improving children's well-being. Our goal overall is to sound the alarm about how kids are doing in our state with regard to mental health, addiction, and online spaces (not well), and the data makes it clear that digital spaces are both a reflection and a driver of that crisis. I know that today the header of this hearing is social media, but I'm going to talk broadly about digital spaces. I grew up in a time of AOL online chat rooms, and obviously that has changed. So I'm going to say "digital spaces" more broadly, because the iteration of things is constantly changing; that's just the nature of tech.

In 2021, Children Now wrote a letter to the governor asking him to declare a state of emergency for California's youth due to the mental health crisis. That declaration was never made, and the urgency around the mental health crisis for kids remains today. The connection between mental health, addiction, and digital spaces has never been clearer. According to our 2025 youth poll, about 94% of young people in California report experiencing regular mental health challenges, with one-third describing their mental health as fair or poor. Nearly all of those reporting poor mental health, 98%, were youth of color. More than 1 in 3 LGBTQ youth in California seriously considered suicide in the last year. For transgender and nonbinary youth, that number climbs to nearly 4 in 10. Indigenous youth in California bear the highest rate of suicide deaths among any youth group, by a wide margin. On overdoses: fentanyl has transformed the crisis entirely. Adolescent drug fatalities remain more than twice pre-pandemic levels: 708 deaths nationally in 2023 compared to 282 in 2019. The National Crime Prevention Council estimates that 8 in 10 fentanyl overdose deaths are connected to social media contact, with dealers actively using these platforms to reach young people. Psychiatrists warn that generative AI affirms, enables, and fails to challenge delusional beliefs. The connection between digital spaces and these mental health and addiction outcomes is no longer speculative. According to our youth poll, nearly a third of California young people say social media has been harmful to their mental health. About one in three report being cyberbullied, and roughly seven in ten say social media contributed to a negative body image.

So what has the industry offered as a solution? Parental controls. One of the things that I really do want to flag in this talk, and one of the reasons why I spent the majority of my introduction on the state of the mental health and addiction of young people, is that we're not actually answering a tech problem; we're answering a child safety problem. Once we understand that, I think the solutions will be clear. We need to be a little clear-eyed about what parental controls actually are and where they come from. The design and definition of parental controls have so far been dictated by tech companies themselves. That means the industry has controlled the narrative around what safety looks like, and too often it looks good on paper while doing very little in practice.
When companies use parental control features as a public relations shield, it allows them to sidestep the deeper systemic problems: harmful design, exploitive engagement algorithms, and inadequate privacy protections. A 2025 report titled "Teen Accounts, Broken Promises" tested 47 of Instagram's teen safety and parental control features and found only eight worked as intended. Most were ineffective, unavailable, or easy to bypass. Fairplay found that parental controls do not accurately reflect what a teen is actually experiencing online. Parents are not notified by default when their child reports a post or account, and children can easily open a "finsta" account with no indication appearing in parental supervision tools. In 2025, pediatric experts warned that YouTube Kids still allows low-quality and borderline harmful content to slip through even when parental controls are enabled, because creators can self-label videos as "for kids" and game the system with friendly thumbnails and keywords. These aren't isolated glitches. They reveal a pattern: parental controls designed to look like protection without actually having to provide it.

Young people see through that. When we talk to youth about technology and online safety, parental controls are rarely what they bring up. In fact, when I bring them up, they actually chuckle. And it's not because they don't care about safety. It's because they know these tools don't work. Many of their parents aren't fully equipped to manage or understand how these systems work; setting them up requires technological skill, time, and patience that parents simply don't have. And even when parents do engage with these tools, young people say the controls are set up in such a way that they can easily navigate around them. So when I ask what would be effective, because I know they care about their safety online, they say that instead of focusing on parental controls, they want online literacy, digital responsibility, and corporate accountability. They understand that the online environment they inhabit is not shaped by personal choices alone; it's engineered by the design decisions tech companies make about platforms, algorithms, and engagement tools. In their view, teaching young people to critically evaluate content and understand data practices is more empowering than any parental dashboard. Young people also want their parents to be educated, not just on how to use parental controls, but on how to have open, informed conversations about tech. They want collaboration, not surveillance. When parents understand digital culture, social media norms, gaming communities, content creation spaces, they connect with their kids on a human level rather than a policing one. Importantly, the approach to digital safety needs to evolve as children grow. One of the things we see very often is that we write legislation where the tech rules apply to a 3-year-old in the same way that they would apply to a 17-year-old.
That is not sufficient.
Perhaps what's most telling is this. When I spend time with youth advocates and ask why they keep using platforms they clearly dislike, their answers reveal just how much the stakes have changed. They tell me they feel compelled to participate not for entertainment, but because school announcements live on social media, political activism happens on social media, and job opportunities are shared online. For today's young people, these platforms are not a fun pastime like my AOL chat rooms. They are infrastructure. Opting out isn't really a choice. That is precisely why the burden of safety cannot rest on families alone. The real question before us is not how to build better parental controls. It's how to shift the conversation entirely: away from tech companies defining what digital safety means, and toward families, young people, and policymakers outlining what is expected from corporations that provide products to our kids. This should be no different than the safety protocols for vehicles, car seats, toys, cribs, and the like. We need policymakers to come together with urgency to examine which rules and regulations need to change, address the structural crises in our digital spaces, and put meaningful guardrails on corporations. Because our children do not feel that they have the ability to leave these digital spaces that are offering them different ways to engage in life. The resources and reforms we pursue must reflect the full scope of this, both offline and on. Thank you for your time.
Thank you for that insight. And now we will turn to Dr. Buffon, on the phone.
Okay.
Okay, you can hear me.
All right.
I'm Monica Buffon. I am a PhD-trained social psychologist and positive psychologist. I did most of my research on well-being and empathy until I transitioned to tech itself, where I spent seven years. And unlike a lot of other researchers, I was on growth and safety teams, so I understand the full stack pretty well. For the last two years, I worked on age assurance, particularly on the youth well-being team. In this space, I founded a company, a very, very new nonprofit. The goal is to do research-based advocacy, to build the right products in this space, to have the right conversations, and to suggest the technical solutions that can actually work.
And so I want to start by maybe breaking the ice a little bit here. Last night was the Nine Inch Nails concert; I don't know if anyone here was there. When my parents were parenting me, their biggest worry was that I might like Marilyn Manson and that I was going to go to the Love Parade. And so they said no to both. Today, parenting is so much more difficult, because we don't actually know what the kids are seeing on these different apps; a lot of it is hidden from review. I often get the question: is it algorithmic change that we need? Do we need to raise the minimum age? Do we need to change parental controls? And I think my answer is that this isn't the right question, because we need a lot of different layers. And we need a lot of layers because every family is different. We're not going to convince every family to be as strict as possible, and we're not going to convince every family to be as loose as possible. The rules that we make have to work for the conservative Christian parent that wants to shield their children from certain ideologies, and for the parent of the LGBTQ teenager that wants to protect their child from hate speech, and so on and so forth. And so we need the whole stack to address the problem. If raising hands were appropriate, I would ask which of us in the room has changed their child's age up on a device because things otherwise weren't working and things were broken.
Like me, a lot of us have. A lot of us have noticed that when we put on parental controls, things we want them to be using don't work anymore, be it that they can't listen to an Audible book for bedtime, or that they can't get sent Apple Cash so that we can have them try independence and go to the store with their friends after school. It really needs to be a redesign of the whole system. So I think, again, we need everything. Age assurance is the number one barrier, right? Because if we have kids on with false ages, because things otherwise break, then we can't protect the child at the end of that, because the child will be assumed by the app to be an adult. So privacy-preserving age assurance is really, really important. I get the question: can't we just verify everyone with an ID? My personal opinion is that that may not be the right approach, because a lot of adults don't want to do it, and so it just leads to circumvention by adults. But it can also lead to circumvention by parents, because 70 to 80% of parents say that they're really concerned about the privacy of their children, about data breaches, and so on. The good news is we have a lot of technology now that can go beyond IDs and beyond those approaches, with unobtrusive ways to get to age assurance. And California passed this amazing device-level age assurance law, which is really great because it opens up a lot of privacy-preserving methods for children.
Then there are device-level controls. Those are really great, but they're very high level. You can set things like screen time at the very high level, but you can't actually touch what happens in the app. It's kind of like Vegas: what happens in the app stays in the app. You can't really see it, you can't really influence it. And global regulation usually can't touch individual countries' legislation, so you've got to understand how that works. Then there's the app level; that's where you can set different controls. But as Sunny was saying, every single app has its own interface and different symbols, and some things exist on one app but not on another, so it gets very thin on what you can actually control. And then there are the third-party tools, which are great gap fillers, but they have the same problem: they can't really see very deeply inside the different apps, and they cost money, which is an equity issue that both of my previous speakers have spoken about. Again, I think privacy-preserving age assurance is really, really important, along with having multiple different layers. Parents often help kids: the younger the child, the more likely it is that when they're on something, the parent has helped them get on it. Kids, of course, can also get around it. And what we see now is kids moving to less safe apps. There are all these apps coming out, some of which you probably haven't heard of before; I hadn't really either before I started doing the research. There is Yubo and Lemon8 and Locket Widget and Coverstar. And some of them have atrocious things happening on them. A lot of them are actually trying to do the right thing; they're trying to be safe for children. But if you're starting out, you're not going to have a huge safety team, right? So you're limited.
So when kids leave TikTok and Instagram, they might go to these other places. And so we just have to make sure that we make them safe everywhere. Right now it's about 120 hours of setup required of a parent, between initial setup, setting up every single app, and doing all the monitoring. These controls are often hard to find; multiple clicks, like often if you...
Did you say 120 hours?
100 to 120 hours a year, yeah. This is research-based; it's based on the expert opinion of people that have tried it. It just takes a long time. You have to find all the different settings, you have to set everything up; it's just a lot of work. Then at the end you have a lot of awesome dashboards that have a lot of data and no information. And then the settings break. A lot of parents give up. Then there's parent linking: a lot of times it's just a link that gets sent to an email, or some QR code being scanned, so that's very easy to get around. Sometimes there's adult verification, but we have hardly any cases of actual parent or guardian verification. A lot of times parents need their own account on the app in order to supervise, which I also think is unacceptable. There is silent graduation a lot of times: there was a famous example where kids got an email, "hey, you can soon unlink yourself from supervision." Obviously we don't want to do that. Kids usually can remove supervision unilaterally; if the parent is lucky, that app has decided that parents should get a notification, which is also not always the case. And then there are false positives. Every few days I get a "your daughter got a nude picture" alert, and it's never a nude picture; it's just 12-year-olds taking really bad pictures and sending pictures of their warts and things like that. So again, parents can only control and see the tip of the iceberg. Things like screen time controls, and some content, like sensitive content, can be blocked. But what does the algorithm optimize for? What kind of profiling is there? What kind of advertising? What autocompletes in search? What about posting privacy? On one of my daughter's apps, when you do a challenge, a dance challenge, suddenly it becomes public. My engineering husband had to flag that, and so she no longer has that app. But you find these things out over time.
Which app was that?
That was Coverstar. So AI chatbots are the next unregulated frontier. Kids use these apps but no one is really empowered to watch. Really. OpenAI is the only app that has had any meaningful in my opinion controls here and H checks and really I think especially for AI, it really is very disappointing to me personally as someone that has worked in tech because we have seen social media and so the fact that a lot of age vacation is just a checkbox and that there isn't parental controls is a really big concern, especially with how powerful these apps really are. And parents really are in the dark. When you look at research, parents don't know how these apps really work, like what to worry about, how to keep their kids safe. Teachers say the same things. And of course we've already seen some
pretty bad harms happen to children.
The data and tools, and this is one of my last points here, absolutely exist within the companies. Companies have the data; companies have the capabilities. And now, with large language models, it really is in reach. It used to be harder, admittedly, to classify content and to provide some of these controls, but it is absolutely possible, and it is being used in other ways. For companies, of course, safety investments can create a lot of competitive disadvantage; age-verifying everyone loses a lot of adults, and youth are important to the business. And this isn't some earth-shattering fact, right? Every company, be it Nike or social media, wants the next generation to be customers too. And so I actually do see myself as an advocate for the safety researchers that are working in companies today, because I have been there, and a lot of us want these things to happen. This is my personal opinion, obviously, but there are a lot of people with values, wanting to do the right thing, who aren't always empowered to do that work. And especially now, a lot of safety researchers have been laid off, so there's even less of a feeling of being protected enough to speak out and to really advocate for change internally, which is one of the reasons I'm doing this work outside right now.
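To make the classification point above concrete: below is a minimal, hypothetical sketch, not any company's actual system, of how a platform could use a large language model to label a post before it reaches a minor's feed. It assumes the OpenAI Python SDK and an API key in the environment; the category list, prompt, and model choice are illustrative assumptions, not anything described by the panelists.

```python
# Minimal illustrative sketch only; NOT any platform's real system.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical harm labels a youth-safety classifier might screen for.
CATEGORIES = ["self_harm", "eating_disorder", "bullying", "none"]

def classify_for_minor(post_text: str) -> str:
    """Ask the model to label a post with one harm category (or 'none')."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,  # deterministic labeling
        messages=[
            {
                "role": "system",
                "content": (
                    "You label social media posts for youth safety. "
                    f"Reply with exactly one of: {', '.join(CATEGORIES)}."
                ),
            },
            {"role": "user", "content": post_text},
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in CATEGORIES else "none"

if __name__ == "__main__":
    # A post labeled this way could be withheld from a teen's feed.
    print(classify_for_minor("tips for hiding how little you eat"))
```

The point the sketch supports is the speaker's: the classification step itself is no longer the hard part; whether platforms run something like it, and act on the label for minors, is.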
Chair Bauer-Kahan started by asking: is it safe at all? And I think there is sort of this idea of Pleasure Island in Pinocchio, where the kids go and they get handed cigarettes and whatever. I do think that if you have apps that are optimized for engagement, optimized for content that is meant for adults, and then you tack on safety, in the end it may not be good enough. One thing that will need to happen is really thinking through how these apps should work, and also whether there is a responsibility to make the safe version of the app just as fun and entertaining as the adult version, because otherwise that, too, will drive circumvention.
Self-regulation is not working. This is my last point. We need independent standards. We really need to know: what are the base rates of kids with false ages on an app? Are the control features and the age assurance features that the companies are putting in actually reducing that rate? What is the harm base rate? What are the interventions, and is the harm going down? Every intervention and safety feature that isn't meeting that bar really isn't good enough, and we need the standards for that. My four takeaways: kids' and youth safety needs a lot of different layers, so we need multiple approaches; we need minimum standards; AI chatbots really need more regulation than there is right now; and we need independent standards, so that we have a real baseline for cause and effect and can make sure that kids no longer get harmed. So, yeah, that is my pitch.
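To illustrate the base-rate idea in that closing point: here is a minimal sketch, with made-up numbers rather than real platform data, of the before-and-after comparison an independent auditor might run to check whether an age-assurance change actually reduced the share of under-13 accounts. The sampling setup and all figures are assumptions for illustration only.

```python
# Illustrative only: made-up numbers, not real platform data.
# Two-proportion z-test on the estimated rate of under-13 accounts
# before and after an age-assurance intervention.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Return (z statistic, one-sided p-value) for H1: rate2 < rate1."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return z, NormalDist().cdf(z)  # small p-value => rate really went down

# Hypothetical audit samples: 10,000 accounts manually reviewed each period.
before_flagged, before_n = 820, 10_000   # 8.2% estimated under-13 before
after_flagged,  after_n  = 610, 10_000   # 6.1% after the intervention

z, p = two_proportion_z(before_flagged, before_n, after_flagged, after_n)
print(f"z = {z:.2f}, one-sided p = {p:.6f}")  # p < 0.05 suggests a real reduction
```

The design choice mirrors the testimony: the standard is not "the company shipped a feature" but "an agreed-upon base rate measurably went down."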
Thank you. Turning to questions. Assemblymember Lowenthal.
Okay. Before I say anything to the panel here, I just want to acknowledge the Hinks family. It is so important to have your voice in this conversation. And I know how... well, I don't know, I don't know how challenging it is for you to come up and relive this all the time, but I can tell you that your presence here is meaningful to all of us and helpful for this conversation, because we are able to make it real. So thank you for being here. I'm struggling, and let me tell you why: because I don't understand what "safe" is all about. Does safe mean that we're stopping harm, crisis-type harm, from taking place: interactions that can be deadly, suicidal ideation, things that are absolutely catastrophic? What about intellectual harm, academic harm? The empirical data that we're hearing right now, about our kids no longer surpassing the generation before them, I alluded to that earlier, is a grave concern, I think, to all of us. I want to ask an open question about that, and I'd like to hear how you answer it. And I also want to ask you about China and your feelings about what's happening in China. China, to me, is the only country that I know of that was ahead of this from a regulatory standpoint. I don't think of them as a beacon of civil and human rights whatsoever, and clearly they don't have a constitution with the Bill of Rights that we have here in the United States. And yet I wonder: do we have any empirical information about mental health disorders among youth in China right now as a result of those policies? I know that their efforts have been quite draconian. But to me, when it comes back to this issue of harm, they're very focused on STEM and STEAM. They're focused on making sure kids raise the bar on their goals and their dreams and their hopes. They're focused on teaching kids healthy lifestyles and healthy choices and so forth. To me, that's very attractive. I just wanted to ask for your comments and thoughts on these things.
Yeah, thanks so much for those two questions. I think those are really the same question. Fundamentally, it's about how we can support children to have healthy development. Our research related to technology is limited, but we know a lot about what makes kids thrive. There are fundamental needs: physical, psychological, mental, emotional. Those parts we know; the psychological literature has that. So I will answer those questions in three ways. First, on how to think about harms: I think that reducing harms is one part of supporting kids' development. Kids cannot thrive when they are bullied, when they see online hate, when they encounter harmful content and content risks. So that's one aspect. But the absence of harm does not equal benefit. We not only don't want harms; we also want kids to develop their identities in a healthy way, to know who they are, and to fulfill their potential, whether intellectual, social, or emotional. Reducing harm is one aspect; making our environments, both online and offline, support kids' development and their fundamental needs is the other part of the picture.

And then, about China, a little bit of background: a few years ago, China introduced a regulation specific to video games. Kids could only play video games at set times; I don't remember the details, but maybe half an hour on Fridays and one hour on Sundays. It's less than two hours per week for all kids. And they actually implemented it: all the platforms had to cut those kids off, and the families had to take responsibility; you could not have kids playing video games outside those windows. There was one piece of high-quality evidence, research that came out a few years ago; I'm happy to share that article. It shows that kids' time playing video games actually did not decrease. So the policy as implemented did not decrease kids' time online. But I think we do need more research to understand: do kids under those regulations develop better, have more time to play with friends, more intellectual development? Happy to do more research and figure that out.
Thank you. And I assume Ms. Francis would like to weigh in as well. Yeah.
So I'll say first: I don't know much about China, so I can't answer that question. What I will say about how we've operated in the US, unfortunately, is that corporations and businesses seem to believe they have more rights than individual children and families, and they will sue to prove it. And that tells me everything I need to know about how we are engaging with corporations here, and who is really trying to set the bar and the parameters for safety. What I'll also say, in the mental health context, in terms of what is even healthy and safe and thriving: we know healthy face-to-face interactions are the best. That is the gold standard, not online interactions. The gold standard is face-to-face, in-person interaction: the ability to read microexpressions, the ability to hug someone, the ability to put your hand over someone's and show comfort. That's the gold standard. We have begun talking as if the gold standard were online spaces with interaction. When it comes to mental health, whether it's through how we provide therapy or how we find community, it's not the gold standard. It's what we've done because we have a workforce shortage, but it's not the gold standard. So I just want to say that it's actually something that we should be thinking about as secondary, not as primary.
So I think my answer is that there absolutely are good things that kids can do online. One of my favorite examples actually was Meta's Portal, which I don't know if anyone besides me remembers, but it had a story time feature where kids were able to talk to their grandparents. A grandparent would turn into the big bad wolf, and it actually did something remarkable: it let kids talk to their grandparents and actually want to keep talking to them. The grandparents thought it was weird at first, because you don't look super attractive as grandma playing the big bad wolf, but it sort of worked. And there are lots of equivalents of that in online spaces. So I think it absolutely can benefit kids to connect around interests that they may not have a local community for, to like-minded kids that maybe have special needs in the same way that they do. But I think these benefits can't really be reached in a safe way unless we have the right minimum safety standards and the right controls. So I think it absolutely is possible. But when I review what's out there in terms of what the safety protections are, it's just not where we need it to be. And so I think there really is a lot of research needed to see how we can make sure we create the right spaces for kids so that they can claim the benefits. Obviously, I do agree that in-person experiences are the best.

Then the second piece, I think, is that in terms of the oversight model, there's also something really broken about how oversight was ever created, because it actually is a process where the tech company ends up winning: it pits kids against their parents. Right? The parents are the police officers in a system that's not even working, with false alerts and all these different things. You accuse your kid of something they didn't actually do, and there isn't any education, like what you were saying, Ms. Francis. And I think there really needs to be some accountability of companies as well, to be part of educating kids about what the dangers are. How can you tell that something is upsetting you? What are the controls you can use? How can you report? And I think there needs to be accountability on what happens to all these reports. How many reports that kids send in, about eating disorders, about this and that, actually get actioned? We have no idea. I've never seen data on what percentage of reports are just getting dismissed. So I think there's just a lot of accountability that we can ask for, and I think then we can make real progress on answering that question.
Thank you.
Yeah. Just one thing that I think Ms. Francis said that's really interesting, about the First Amendment case law that is coming out on this: the First Amendment does not allow for all speech with no exceptions. Right? When there is a public health risk or another risk, there are First Amendment exceptions. Hate speech, for example, is not protected under the First Amendment. And yet the case law on the social media companies appears to be protecting everything they do under the First Amendment, with no exception. And I find that fascinating, because that is not, as I understood it as a young law student and practitioner, the way the First Amendment works. Hopefully we will get to a place where we are weighing both sides of that debate evenly in the courts. So I just thought that was a really interesting point.
With that, Ms. McKinnon.
Thank you all so much for coming and testifying today.
I have one question.
Is online harm today more a technology problem, a business model problem, or a regulatory gap?
Good question.
All of the above.
Yeah.
Wow.
That was a short answer.
Oh, I'm sorry.
One last thing. What does success look like? How should we measure whether platforms are actually safer and reducing harm?
So I think, I mean, this is my big point, right? The only way we can is if we know what the base rates are. For example, what is the estimated percentage of children under the age of 13 on these different apps? And if we change our age predictions and improve them, does that rate go down? If we see certain harms emerging, and we have better safety systems, standards, and controls, do those harms go down? Without research to really see the data on cause and effect, it will be really difficult. As much experimental data as we can get, but also seeing if the interventions are actually working, because I think that's one of the big problems. If the mandate is "do this thing," and then the implementation of that doesn't actually fix the problem, then really it's just lip service. Right? And so it is accountability that I think benefits everyone. It benefits the tech companies, it benefits business, it benefits regulators, and it benefits the families that we're trying to serve as well.
And are we... One more thing, I'm so sorry.
Well, I was just going to say that an interesting point on that is that Assemblymember Gabriel had a bill a while ago, when I was here, on disclosures of hate crimes on social media platforms. It was challenged by industry and struck down by the courts under the First Amendment, saying that they did not have to disclose those materials, which makes it harder to track all of what she's saying. So I just thought I would point that out.
Thank you.
That answers it.
And do you mind if I add one thing really quickly? Every young person I've talked to has used one of the reporting features to report content or something happening online. Every single one of them said they've never heard back. It just kind of goes off into the ether.
Yes. Yes.
Thank you. Assemblymember Wicks.
Thank you for the testimony. I don't know a single parent that feels great about their tech situation with their children. No one says, "this is awesome." Every time, it's like a war, it's a fight. It's the parents trying to navigate something very complicated. I mean, I can't even navigate my own phone; I can't keep up with all of it myself, and then to manage your children's as well. So parents are just at their wits' end. In the most generous terms and in the most horrifying terms, we hear testimony from Victoria and Paul, and I also want to recognize their testimony; this is obviously the worst-case scenario as a parent. So thank you for testifying. And that is like every parent I've ever talked to about this. When I do pick-up and drop-off, this is what parents are talking about. Taking your kids to birthday parties, going to soccer practice: it's all-consuming, and everyone's looking for a solution, and they need help. And they're eager for government to take action, because it feels like, if it's a parent against a tech company, it's just an unfair fight, especially when the kids are often aligned with the tech company, because they want the product more and more and more. And so that's why a holistic approach, I think, is critical. Ms. Francis, I'd love to ask you a question. Is there any benefit to social media access for kids? And if so, at what age does that benefit outweigh the risk? The answer might be no, but I'm just curious, because I don't know the answer to that question. I'd love to know your thoughts.
So, is this a personal question? You know, there's how I feel, and there's what young people tell me, so I want to be clear about that. What young people say is that they see a benefit: it's how they engage politically, it's how they find jobs, it's how they interact with their school, and there's a social benefit. I remember a time before social media, so I'm not as convinced that we need it. My personal feeling is that it's probably not that great of a product, and we probably shouldn't expose children of any age to it. I know there's been a lot of conversation around 16. I think 16 is an arbitrary number. The science and data tell us that your brain doesn't really fully develop until 25, so I'm not really even thrilled about that, and I know we would never get something through that banned social media for 25 and under. So I get that desire. But to me, that train has left the station, unfortunately. And one of the things that I am concerned about is creating an environment where young people feel like they have to sneak around and use social media. That's what I'm also trying to avoid when I talk about this: I don't want to create an environment where they're hiding social media use, because that's even more harmful and more problematic. So, no, I don't love it. I'm barely on social media these days. Social media didn't come out until I was already an adult, so the impact was completely different. But they want to be engaged with the world differently, and I think we should make sure that it's safe for them to do so.
Right.
On that note, and I'm happy to entertain your response, but others as well on this: are all social media platforms created equal? Are you seeing any of the companies actually put forth meaningful guardrails? And again, the answer could be "no" or "I don't know," but I'd be curious about your thoughts.
I think my website on this just went live today, so I will share that with you. But I think my answer is that a lot of them have areas where they're better than others, but I don't think there is one that is better than all the others. For example, I think Instagram's teen accounts were an important step forward. TikTok has certain minimal safety standards that are quite good. So it really depends on the area. But the problem is that not one of the platforms right now is doing the right things across the board, across all the different controls. And I think that's where legislation is needed. That's where mandated standards, minimum standards, and also parental controls are going to come in as really, really important.
And to your last question, very quickly, I think one problem that I see is that today kids go from activity to activity. They're so busy all day, all afternoon, they have no free minute. And so technology ends up becoming the solution to that: you can only talk to your friends for five minutes between soccer practice and tutoring. And I think that's in some ways a societal problem, where kids are expected to be in all these different activities and have no unstructured time to play, to just be free and to be with each other. And so in some ways these technology companies have picked up on a need for kids to socialize as teens and to be independent. And I think we have to understand the ecosystem that they're operating in. This doesn't mean I think tech or social media is good or bad, but it does mean that if we take certain things away and there isn't space for that to be filled with real life interactions, that's a problem too. So this is not really answering the question of should it be. But as a social psychologist it's important for me to point out why we have this system, and maybe why it is that kids even want to be in these apps as opposed to being in person. So I thought that was important to mention.
Thank you.
Thank you, Assemblymember Pellerin.
Yeah, this is hard stuff. It's taken me a while to really digest everything, and I want to thank Victoria and Paul Hinks for being here. Your story is so powerful, and I know how hard it is to tell it. Thank you for being here and sharing Alexandra with us, and thank you all for your testimony. You've given us lots to think about. I mean, shouldn't we be designing safety systems from the very start? It seems like we're putting a lot of it on the parents to control. Is that happening with any level of speed and urgency?
Yes, we should be. No, we are not.
Okay.
I love how succinct Ms. Francis is. Okay.
I feel like I want to scream. Mental health is something that's very concerning to me, and the connections that we're seeing between social media use and youth anxiety, depression, self harm. Are there platform features that are most harmful to a healthy kid? Or, I mean, have we identified them?
Yeah, I think that usually those harms are not equally distributed. They mostly target extremely vulnerable populations, kids who have at-risk factors in their daily lives and who don't have supporting systems in their offline world. And the online setting doesn't have guardrails for them either. Kids who have, for example, an eating disorder find the algorithms driving them toward more of that kind of content. So those algorithms amplify their offline vulnerabilities and make them even more vulnerable. I'm not sure if that answers the question.
I think, very concretely, end to end encryption is a big problem. I think that's extremely unsafe for children in these chats. If a predator talks to them, it's hard even for law enforcement to track those conversations. So there are certainly features. I think private versus public posting, whether kids expose themselves publicly. I think there's group chats, so for bullying and harassment I'm quite concerned about those. Even in iMessage, you don't even have to go to social media and tech; there are school-wide text message threads going on in my own kids' school. So yes, there are definitely some features that are particularly concerning. I agree also about the way the algorithm is designed. And on your question on apps, there was famously the example of Instagram Kids that got shut down. And I think that tech companies probably do need more guidance on, when such apps are designed, how they should be designed so that the attempts to do so can actually be successful. And of course, my personal opinion is that those apps should not be optimized for engagement, because I don't think that is ultimately safe; it will easily lead to these rabbit holes, to unsafe trajectories. But I do agree with you, and I think that is where regulatory support can really come in on what that should look like.
I'm grateful my kids are 28 and soon to be 31. I can't imagine raising young children in this environment right now. And quite frankly, I feel like we should just ban, you know, smartphones for kids age 16 and under. And I know you raised a good point, and that was good, because I need to hear that, because that's just how I feel. I feel like this is an evil device for them and this is hurting them and it's causing kids to take their own lives, and I can't stand by and watch that. I just want to take it all away.
Oh, I feel the same way.
Okay.
Yeah.
It's just not realistic, but I feel the same way.
I know. So I guess I'm just struggling. I mean, other countries are taking, I think, bolder, more aggressive actions. Are those successful, and should we be thinking about those here? I know we're all trying to navigate this to the end path where everyone's happy and thriving and no one's having mental health crises.
I think it depends. I mean, the honest answer is we don't know yet, because we would need a lot more data, and these interventions are all so new. I think there is a good chance that it will reduce the number of kids on these apps. But there are also kids that are moving to newer, less safe apps that aren't as well regulated, that aren't affected by the regulation. I can't really weigh in on whether it is right or not right. But there are definitely concerns about whether it keeps the child with less supervision safe as well. Right. Because if there are parents that are willing to help the kids circumvent, to give the kid a phone and tell the phone the kid is 18, at that moment that child is less safe. And I think it's very hard to say this is the right versus the wrong way. I think we should have a lot of data behind it when we make those kinds of decisions. But those are the trade-offs that I think about: where are kids wandering to, which kids are least protected, and so how do we keep them safe in the end? And I think those are all very tough questions that need a lot of data support that we just don't have yet. I think it will be about monitoring those countries and what happens there.
Okay, so since I can't ban social media and phones for 16 year olds, what does the research tell us about which safety tools actually work and which ones are largely ineffective?
Yeah, so I would share a little bit more about the solutions in the third panel, but I think that the safety.
Yeah, briefly answer.
I think that safety tools work when, first, there is a report button, but the report has to really connect; kids have to understand that action is taken. Kids have to feel that they're empowered, that they have efficacy, and that those tools actually work. So the first part is that the tools actually work. The second part is education. Kids have to know the tools are there. Sometimes they don't want to report because they don't want to get their friends into trouble, so they never report, just as offline. So we do need to educate our kids and families that these functions exist and how they work, and make those tools actually work. I think those are the two things that are really important to keeping kids safe online as well.
What's currently missing? I think that in the offline world, we've all figured it out: we have schools, we have people, we have communities, we have coaches. We built those circles of care, circles of support, circles of safety for the offline world. But in the online world we don't have that yet. That's why we very often cannot protect kids and hold them up, because we don't have those circles of protection online. And I think that's why we have those safety fears.
My answer is screen time controls work. Taking the phone away works too. So with my own kids, that's what I do. They have screen time controls, and the rule is they charge the phone in my room at night, and I constantly just take it and put it in my pocket. And I think that is about the extent of the parental controls on the device. The only one that I really trust, actually, is the screen time one; all the other ones, I think, have holes right now.
I've actually implemented that for myself. So thank you.
Me too.
But then you can just press that button that says ignore. Yes, I know.
Nine out of ten times.
Thank you all.
And I think that some important points were made. I will say that with my own children, my mother-in-law, who was a second grade teacher her entire career and lives far away, would FaceTime with them for hours when they were little. She had puppets, and she read to them. It was honestly amazing, really positive connecting time that was happening through a device. And so I am a huge believer that there actually is a way to do it that is real connection with a real person.
And I will say that I love what you said about circles of trust online, because, and I come from a very large southern family, one of our safety mechanisms is that aunties get to follow their nieces and nephews online. And so I watch all my nieces' and nephews' Instagrams, and they know I'm watching, but it's different than their parents. So I do think it is about building circles of love even in these spaces, of people who care about you and who you trust. Again, these are not technological solutions, but they're important things to think about as we navigate the future in a way that really centers public health. We keep trying to talk about this in a public-health-centered way, because it's so important that we remember that at the bottom of this problem, it's not technology, it's the health and safety and well-being of California's kids.
And I really appreciate all of you being here. Another thing that I wanted to point out was something you said, Ms. Francis, about kids reporting content and not getting a response. One of the things we are working on this year, which I hope will be successful as a bipartisan coalition, is a consumer-facing regulatory regime that will allow customers to come to California and say: this isn't working for me, I need your help, regulator. We do this for so many other industries, and yet we have not done it for technology. I think that would be game changing. And so I hope that in the future, when kids do face that, they have the state to turn to. And I really appreciate us continuing to have this conversation in a way that helps our kids, because their lives matter. I want to close, lastly, by reiterating my immense gratitude for both Victoria and Paul. I know, from our many conversations, that you say coming here and telling your story is part of what empowers you every day. But I am just so grateful for it, because as a mother, as an auntie, as someone who cares deeply about California's children, at the end of the day, we want to make sure that no parent experiences what you experienced. It will take hard work to do that. And you remind us that that work is worth doing and showing up for, every day. So thank you.
And with that, I will turn to the next panel. Thank you, guys. So the next panel is our industry panel. We're going to hear about these online tools, some of which, I know from my own experience, have been updated, so we may get some updates on what is new and exciting online. First, we have Nicole Lopez, who's the director of Global Litigation Strategy at Meta. You guys can sit wherever you'd like. We have Emily Cashman Kirstein, and I apologize if I'm butchering Kirstein. Thank you. Child safety manager at Google. Lauren Haber Jonas, head of Youth Well-Being and Families at OpenAI. And Eliza Jacobs, senior director of product policy at Roblox.
And I will say that I didn't plan to have an all-female panel. We didn't choose who's here, but I'm not, not mad at it. So with that, we will turn it over to who is supposed to be first: Nicole Lopez from Meta.
First, do I need to press anything here or can you hear me? Okay. All right.
Okay.
So I also want to thank Victoria and Paul. I appreciated it, and it meant a lot that you shared your story today. Madam Chair, as well as assembly members: I'm Nicole. I'm here testifying on behalf of Meta. But first and foremost, like you, I'm a parent. I have two tweens who are online quite a bit. Screen time is the battle that we fight often in our household. I'm also here as a California resident, born and raised in Oakland, where I live five minutes from my parents today. I joined Meta roughly three and a half years ago, and I've continued to work on both the policy and the legal side of the house on what I care deeply about, which is the safety and well-being of young people. I have done this for the bulk of my career, both in the private and public sectors, including eight and a half, almost nine years as a prosecutor in California, where I did two stints in the domestic violence unit. I worked on child endangerment, child abuse and child exploitation cases, and then I worked in the community violence reduction unit, where I focused on violence impacting teens and their families. I care deeply about protecting young people online, as well as supporting their parents, which we've touched on today: parents are supporting their teens as they navigate these online spaces. I want to talk first about Meta's approach to teen safety, because I think it's really important as a backdrop for how we build these features and experiences for teens. At Meta, our teams work together to build safe, positive and age-appropriate experiences for teens and their families. But in order to design products with the right mitigations that support the users who are actually using them, and we've been talking about this today, it's critical, and it's complicated, which has also come up today, to bring the right voices into the room. And there are a lot of voices that matter. Teens. You have regulators and policymakers like yourselves, internal experts at Meta, as well as external experts who are going to have different focus areas and who come with a blank slate, because they're not actually working at Meta; they have their own experiences to bring to bear. But importantly, and relevant to the question that you posed at the beginning, we need to listen to parents. No kid is the same, no teen is the same. I say this from personal experience, having two very different boys who are 10 and 12, and parents know their teens best. In terms of the approach that we take to building, it is not a one-and-done, static experience. Technology changes, and Assemblymember Wicks talked about this, it's evolving really quickly. It is complex. We have moved into a different era, not just from the AOL chats, but even from four years ago. It's constantly shifting, and so we need to continue to listen, to build and to improve. It's not static. And as I'll discuss, we have to get parents' feedback, and it's not just about parental controls. I want to make sure this is not a dichotomy. Parental controls are important, very important. But so are the baseline experiences that need to be protective of all teens who are using the apps. In terms of how we get parents' feedback, we do it in a number of ways. One way that I've been deeply involved in includes listening to parents live, in person. Meta's hosted Screen Smart events in California; I hosted one in San Francisco.
We've had them in LA, we've had them in San Diego, where we provide hands-on workshops for parents so that they actually understand how the tools and experiences work. We want parents to feel confident about raising their teens in an increasingly digital age. And we also want to make sure that they have boundaries and protections that are going to work for each family. Because again, it's not just that every teen is different; every family is different in what they want. So I want to take a step back and share some of the work that we've done to address parents' concerns, some of which actually predates my joining Meta. Before I joined, we started building out a number of parent supervision tools. And I'm not going to spend a lot of time on every tool that we've built, because there are a lot. I just want to highlight some that I think give you an understanding of how things have shifted over time. We've given parents the ability to view how much time their teens spend on Instagram, set time limits, get notified when a teen reports an account or content, view what accounts their teens follow and the accounts that are following their teens, and see who their teen has been speaking to in the last seven days. Again, we're hoping that parents feel empowered to have conversations with their teens. These conversations, as I said, are ongoing, and they're continuing to shape and improve how we design experiences for teens. More recently, in the last two years, and again, this is a trajectory that continues to develop, parents said they wanted to feel more confident around their teen's social media use without having to worry about their top three concerns, which, again, shift over time: what content their teen is seeing, who their teen is talking to, and how their teen is spending their time. And that's why we launched Teen Accounts, which were talked about earlier today, in September 2024, for Instagram, Facebook and Messenger. And I think this is really important: all teens are defaulted into protective settings that address those three concerns. Who talks to their teens? We limit messaging. We limit the content that teens see. And we make sure that time is well spent by putting teens into sleep mode at night. And again, any teen under 16 cannot wiggle out of these defaults, these strict settings, without a parent allowing them to do so. We also heard from parents more recently that they have different views on what's appropriate for their teens. Think about this as a parent. Thousands of parents looked at millions of pieces of content on Instagram, and they all had different views on what was age appropriate. We took that feedback and we distilled it into how we draw lines across content that teens can see, and that expanded, again iterating and improving, the Teen Accounts experience. We revamped our content policies, inspired by PG-13 movie criteria and, more specifically, parent feedback. That means that teens under 18 are now automatically placed into these 13+ experiences, and they'll see content similar to what they'd see in an age-appropriate movie. They also can't see 18+ content anywhere, whether it's recommended, posted by a friend, or something they're searching for. We also listened to parents who told us they may not want their teens to see 13+ experience content, because again, not every teen is the same. A 13 year old may not be as mature as another 13 year old.
So we created an even more restrictive setting that parents can put their teens in. Again, every family's different. We took in that feedback and we implemented it. We've also taken a similar approach to providing age-appropriate interactions for teens who use our AI. Teens can access information and educational opportunities through Meta's AI assistant, again with default age-appropriate protections in place. And we're continuing our work to give parents insights into those conversations. We're again using content guidelines inspired by PG-13 movie ratings, meaning that the AI should not give responses that would feel out of place in an age-appropriate movie. The other recent announcement we made, which was highlighted earlier today, is that Instagram will start notifying parents in supervision if their teen repeatedly tries to search for terms related to suicide or self harm within a short period of time. The vast majority of teens are not looking for this content, but when they do, we already have a policy in place to block those searches and to direct them to resources. These new alerts, though, are designed to make sure that parents are aware if their teen is repeatedly trying to search for this content, and to give them the resources they need to support their teen. And again, we worked with experts on this, but we heard directly from parents that they wanted to know, and we incorporated that feedback. I want to revisit something that's been raised today, because it's been said so many times in a variety of conversations: parents, myself included, are feeling overwhelmed. Teens, and I'm sure Australia will come up at some point during the conversation, are fleeing to apps that we've never heard of. Teens are on, on average, according to a University of Michigan study, 40 apps per week, and parents have no idea what they're doing. And again, we supported Assemblymember Wicks' bill to require operating system providers and app stores to implement an age assurance signal. That's important because, in order to get teens into age-appropriate experiences, you absolutely need to know how old they are. And everybody here at the table will tell you it is complicated and it is hard to know how old somebody is. So we applaud that bill for passing; we supported it. But I think what we're getting at here today is that parents want visibility into what their teens are doing online. They want to be able to decide whether their teen is ready for an app or not. And that's why we've supported OS and app store legislation that requires app stores to get a parent's approval before their teen downloads an app. Under this approach, if a teen attempts to download an app, the parent would get a notification on their phone, and it's a one-stop shop: they approve it or they don't. It addresses parents' concerns that they don't know what's going on, and it puts them in the seat. It still requires all of the apps to do the work to create age-appropriate experiences. That work is not done; it's work that we're still going to be doing. I want to close with this: I actually know the people at this table, and I think, industry wide and at Meta, we all care. We're all parents. We care about creating safe experiences. And we've been told by an expert today that teens want to be online. My experience of the AOL chat room, which I did get on when I was 16, is not the experience of my kids today.
It is here to stay. We need to support them, and we need to do so in a way where we're part of the solution: empowering apps to continue doing the work that they're doing, but also making sure that parents are in the loop, that parents have visibility and can support their teens, while continuing to require that we develop protective experiences for teens as a baseline. Thank you.
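To make the default-and-lock pattern just described concrete, here is a minimal sketch in Python, assuming a simplified settings model. The names, defaults and values are illustrative assumptions, not Meta's actual code or API.

```python
# Hypothetical sketch: teens are placed into protective settings by default,
# and users under 16 cannot relax them without a supervising parent's approval.
from dataclasses import dataclass, field

PROTECTIVE_DEFAULTS = {
    "messaging": "known_contacts_only",  # limit who can message the teen
    "content_rating": "PG-13",           # age-appropriate content ceiling
    "sleep_mode": "22:00-07:00",         # nighttime wind-down window
}

@dataclass
class TeenAccount:
    age: int
    parent_approved_changes: bool = False
    settings: dict = field(default_factory=lambda: dict(PROTECTIVE_DEFAULTS))

    def relax_setting(self, key: str, value: str) -> bool:
        """Teens under 16 need parental approval to weaken a default."""
        if self.age < 16 and not self.parent_approved_changes:
            return False  # change rejected; the strict default stays in place
        self.settings[key] = value
        return True

account = TeenAccount(age=14)
assert not account.relax_setting("content_rating", "18+")  # blocked by default
```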
Thank you.
And I'll say, as a kid that was in those AOL chat rooms, there was filth in there too. So, a safe experience? As I shared with another person here: not safe.
Yes.
No, I would agree with that. Through lived experience.
Okay.
And now we will turn to Emily Cashman. Kirstin. Kirstein. Kirstein.
But I got it.
Nope, not at all.
Yes.
So I'm Emily Cashman Kirstein. I lead child safety public policy at Google. And I'd also like to thank Mr. and Mrs. Hinks for being here, for sharing your story and for your advocacy. I come to this job from industry today, but I've also worked on the NGO side. I led public policy.
Oh, sorry.
I led public policy work at Thorn, the nonprofit that combats child sexual abuse material online, and on the government side, working in the US Senate. I appreciate the opportunity to be with you all today to talk through parental tools, but also how Google frames them in the larger context: how we're thinking about building for kids and families overall. I think you all have slides, and we have them up here. As Assemblymember Wicks said, there have been a lot of updates, and we wanted to put those in front of you all today. So our overarching mission at Google is to organize the world's information and make it universally accessible and useful. And when it comes to youth, we want to be doing that in a way that offers them the benefits and the utility of the online world with the appropriate safeguards in place. And that last part: bolded, underlined, underscored, all of that. Meaning, of course, we want to protect kids in, not from, the digital world. And how we're doing that is based on these three pillars here. The first is protect. This refers to everything from baseline protections for all users, including our industry-leading efforts to combat child sexual abuse material and exploitation online, to default settings that we have for under-18 users that are backed by age assurance. Respect is the core of what we're talking about today, which is parental tools: knowing that each family has a different relationship with technology, how do we respect that? And third, the empower pillar is how we're building enriching, not just okay, experiences: enriching, educational experiences for youth online, building the digital skills of the future, learning to use the latest technologies, again in that safeguarded environment. So starting with protect: we have default settings for under-18 users even before we get to parental tools, and I'm going to go through a few of these here. On Search, for example, we have SafeSearch on by default, which helps filter explicit content. Location sharing is off by default. 18+ apps are blocked on Play. We'll get into YouTube and Gemini in a bit more depth, but I do want to emphasize here that Google does not serve personalized ads to minors. And on YouTube, regardless of parental tools, again for all under-18 users, we've built protections into our personalized recommendation systems to ensure that teens aren't overly exposed to specific kinds of content that, while not violating our policy guidelines, may be innocuous in a single view but could potentially become problematic if recommended repeatedly. We worked with independent experts, YouTube's Youth and Family Advisory Council, to develop these content categories, and we continue updating them. We also have take-a-break and bedtime reminders on by default. The take-a-break reminder is a full-screen takeover; the default setting is for an hour, but parents can also adjust that as needed. And to properly ensure that those under-18 default settings are getting to the right users, we have rolled out age assurance on our own first-party platforms, and we're also working toward compliance, of course, with AB 1043, the approach to responsibly share signals across the broader app ecosystem. Excuse me. So how do we do that? First, of course, we start with declared age, starting from somewhere. Then we run an inference model. So without taking more information from the user, we're looking at things like: has this account been around for 20 years? Probably not a minor. If they're searching for mortgage rates and tax assistance, again, probably not a minor. That goes into how that inference model works. If the model is unsure that this is an adult, and that user tries to access, say, a music video on YouTube that has explicit lyrics, something that would otherwise be age gated, they will be prompted to confirm their age, whether that's through an ID, and we know not everyone wants to offer an ID, so we also offer a selfie, email lookup, credit card verification, things like that. So getting into the parental tools themselves: all the protections I was speaking about before are defaults for under-18 users, before we even get to parental tools, the premise being that no one family and no one child is the same. Of course, we've talked about this, we've heard about it, and we have to build with that reality. We've had Family Link since 2017; that's our flagship parental tool for Google. But we have of course heard, as we've heard today, that parents are overwhelmed. They want quick and easy setup; they want options that fit their families best. So in addition to Family Link, this past year we announced parental device controls right on the device. At the point of a parent having the device, they can set up things like screen time, web filters, and approving and blocking apps. That exists now, and it's all backed by a PIN that the parent knows, right there on the phone. If the parent would like a more robust experience with parental tools, that's where Family Link comes in; the controls before were just on the device. Family Link is an app that parents can have on their own phone for a more robust experience, remotely. As it stands now, they can block apps, approve apps right through Family Link, block or approve websites, set screen time settings, and set school time, which will make the phone not work during the day at school. All of those exist right now through Family Link.
Is that all free?
I heard a question.
Yeah.
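As a rough illustration of the layered age-assurance flow Ms. Cashman Kirstein described, declared age first, then an inference model over existing account signals, then a verification prompt only when the model is unsure and the content is age gated, here is a minimal sketch. The signal names, thresholds and return values are assumptions for illustration, not Google's actual model.

```python
# Hypothetical sketch of a layered age-assurance decision flow.

def assess_age(declared_adult: bool, signals: dict) -> str:
    """Return 'adult', 'minor', or 'unsure' using only existing signals."""
    if not declared_adult:
        return "minor"  # under-18 defaults apply immediately
    # Inference over signals the service already has, not new data collection:
    if signals.get("account_age_years", 0) >= 20:
        return "adult"  # account predates plausible minor users
    if signals.get("searches_mortgages_or_taxes"):
        return "adult"
    return "unsure"

def gate_request(assessment: str, content_is_age_gated: bool) -> str:
    if assessment == "adult" or not content_is_age_gated:
        return "allow"
    if assessment == "minor":
        return "apply_under_18_defaults"
    return "prompt_verification"  # e.g. ID, selfie, email lookup, credit card

signals = {"account_age_years": 2, "searches_mortgages_or_taxes": False}
print(gate_request(assess_age(declared_adult=True, signals=signals), True))
# -> "prompt_verification": the model is unsure, so the user must confirm age
```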
And child accounts, I should say, remain in a supervised state after they turn 13 unless the parent approves removing that supervision. This helps make sure that those decisions are made as a family. And again, in talking through all of the ways we're incorporating parents' feedback: we heard that it took too long to set up YouTube accounts and things like that in the YouTube app. So we rolled out, within the past couple of months, an easier way for parents to set up YouTube accounts and, just as important, to toggle back and forth between a parent's account and a kid's account. It's incredibly important, as we know, for minors to be on their own account to be able to take advantage of the default settings we talked about and of the parental tools that their parent has set up. And another piece to this is that we just rolled out a YouTube Shorts timer. This allows the parent to decide how much time may be appropriate for their child to watch YouTube Shorts. An important piece to note is that the timer can go down to zero: parents can decide if they don't want Shorts on at all for their child. And the last pillar is empower.
I'll wrap up.
This is about using technology to help young people learn and create and explore. One of the most important things, top of everyone's mind of course, is generative AI. We want youth to have access to the benefits and the opportunities that come with it, but again, as we said before, with those appropriate safeguards in place. A bit on those safeguards themselves: before rolling out the youth experience on Gemini in 2023, we worked with our in-house team of researchers, cognitive psychologists and child development experts, in addition to an independent youth advisory council that we have at Google, to develop policies and protections for youth. And recognizing that youth could be more vulnerable to developing an emotional connection with AI, we built persona protections for youth into Gemini from day one, since 2023. So for younger users, Gemini is designed not to say "I love you," not to say "I need you," and not to make any explicit claims of humanness or of feeling emotions. We've additionally built protections against sexually explicit content, dangerous activities, age-restricted substances, violence and gore, medical advice and unhealthy behaviors. Again, those are all baseline protections in Gemini, many of them for all users, but especially for under-18 users. And our suicide and self-harm protocols refer users to crisis service providers and encourage them to seek real-world support and help from someone they trust. Oh, excuse me. So we're committed, again, to empowering both parents and youth to explore Gemini responsibly. We've heard a lot about parents wanting more resources, and I should say, for Gemini, parents are in control to decide whether it's right for their child or not. But if they would like more information, we offer AI literacy guides, some designed specifically for teens and their developmental stage, and family conversation guides for having the conversation as a family about how to use AI. Both of these help reinforce the importance of knowing the limitations of AI, thinking critically about responses and double-checking answers as needed. We also offer things like podcasts for parents and a video series on how to use AI with your children. We're always looking for new ways to make Gemini usable and useful for youth. As an example, we recently announced a partnership with The Princeton Review to make free on-demand SAT prep available within Gemini. So I know I've gone on a little bit.
You have a lot of products.
Yes, there's a lot to go through, and this is really complex. But I hope we were able to show the many different layers in which we're thinking about this, of which parental tools are just one layer, all under the umbrella premise of wanting youth to have the benefits of this technology with the appropriate safeguards in place. Thanks.
Thank you. And I will say the only one of these products that my kid has is YouTube, and I didn't know about a lot of this, so I learned something myself. So I think the education piece is really important. I know now that I can turn Shorts off. He will not be happy when I get home, and that's the next thing I do. And then, is Family Link available even if you're on an Apple device, or do you have to be?
Okay. Yep, I was curious about that. Okay.
We will obviously have more questions, but I just wanted the baseline.
Now we will turn it over to Lauren Haber Jonas, head of Youth Well-Being and Families at OpenAI.
Thank you so much. First, like my colleagues, I want to thank Victoria and Paul for their time and for the testimony here today. As a parent, I cannot imagine the experience that you've had. Good afternoon, Chair Bauer-Kahan and members of the committee. Thank you for the opportunity to be here and to testify, and for your leadership on youth safety. My name is Lauren Haber Jonas. I lead Youth Well-Being and Families at OpenAI. In particular, I come at this as a builder: I lead product and engineering, not only policy, for OpenAI. My teams are the ones building these things. We build parental controls, we build age assurance technologies, we build age verification. So we understand deeply the technical requirements, how difficult it is to do this well, and what the opportunities and limitations might be. I have been doing this for 10 years, so this is very much my life's work. I have been building on both the product and the engineering side in youth safety, at large companies, at small companies, and at my own companies as an entrepreneur, for 10 years. So our goal, when I got to OpenAI two years ago, from nearly the moment that ChatGPT launched, was to build this with youth safety in from the start, from the moment that this was in the hands of teens. Again, this is my life's work and core to the mission of the company.
I'm also the mother of three young children; I have three, all seven and under. I don't sleep a lot, if you see the bags under the eyes, as many have said. So I think about this both professionally and personally. We appreciate the committee's focus on parental controls as AI becomes more integrated into how young people learn, create and explore information. The companies that are developing these technologies have a responsibility to build the protections in from the start and also to give families meaningful tools. At the same time, it's important to recognize that generative AI systems like ChatGPT operate differently than social media platforms. ChatGPT does not have feeds. We do not have engagement algorithms or public posting. We have only been available since November of 2022. But precisely because this technology is new and so powerful, we have focused on building strong protections and learning from the lessons of the platforms that have come before us.
I'll talk a little bit today about the approach we're taking, the partnerships that guide our work, and our multi-layered approach, which, as some of my peers have stated, does not rely on parents and parental controls alone, and about how we guide families on how best to make sure that their teens are using these tools responsibly.
So fundamentally, at OpenAI, our belief is that young people should be able to benefit from these tools, whether that means learning, exploring ideas or developing new skills. Learning is one of the most common use cases on ChatGPT today. One in three US students use it to study. Many use it as a learning support tool: they create practice quizzes, study plans, review drafts of assignments. It is a tool that helps them test their knowledge and clarify difficult concepts. And for many students, this kind of personalized support was previously only available through one-on-one tutoring. These benefits are immense, but they must be paired with intentional safeguards and responsible design, as we've said.
One of the things that we have said publicly from the start is that our approach prioritizes safety ahead of privacy and freedom for teens, full stop. This is a new technology, it is a powerful technology, and we believe minors need significant protection. We have said this, and our CEO has said this, a number of times before. This is a very serious responsibility that we take, both to our teen users and to their parents, to have a layered set of protections.
I want to talk a little bit about how we partner with experts. One of the things that we have learned from companies that have come before us is that we cannot solve youth safety challenges on our own. We have built two external, third-party bodies that we partner with, the first being an Expert Council on Well-Being and AI. These are researchers who study youth development, mental health and the effects of technology. They come from Boston Children's Hospital, Georgia Tech, Northwestern, the University of Oxford. We have also built a global physician network: a network of 250 clinicians and physicians across 60 countries. The goal there is a global lens, not purely a domestic one. They guide and help evaluate how our systems respond, and they help shape our policies, our principles and the content restrictions we have in place. Beyond that, we work closely with organizations that have long been leaders in the space: Common Sense Media, the American Psychological Association, AFT, ConnectSafely. Today, in fact, I'm here and not there, but we're hosting a convening of a cross-sector group of leaders, CEOs of the nation's leading mental health organizations, the American Psychological Association and others, at our San Francisco headquarters, to help guide our work in mental health for youth and for adults. It's a particularly unique convening. There we go.
Building on this input that we get from third parties, we introduced what we call our Teen Safety Blueprint. The blueprint is meant to serve both as an internal framework for every team building within OpenAI and as a starting point for broader policy conversations about responsible AI and young people. And it has a number of pillars. The first, as we've talked about, is identifying users under the age of 18, and age estimation is the initial approach we've taken. The second is a default safety layer of protections once those teens are identified. The third is a layer on top of that, which offers parents the ability to have control, as we've talked about here quite a bit today. The fourth is designing systems that are not just a safety floor but that support well-being. What does that mean? How do we support the well-being of teens, not just baseline safety for teens? And then the last is transparency. The goal here is to be as transparent as possible about our approach. The moral of the story is that no single safeguard is sufficient on its own. We have taken a multi-layered approach, all working together: product design, behavioral policies, parental tools, consultation with experts. And most importantly, we work in the open.
So we have published what we call our Model Spec, the principles that guide how our AI systems behave. This guides how the model is built and how the model should be steered when interacting with teens. There is a specific section of the Model Spec dedicated to teens and teen safety, which has been published and which we're happy to share with the committee.
Ooh, backwards one.
I want to talk a little bit about the content restrictions that we have in place for teens. Again, these are default on when a teen is identified. Our systems should not romanticize self harm or suicide. They should not engage in immersive role play with minors. They should avoid reinforcing harmful body ideals. They should encourage young people to seek support from trusted adults outside of the technology when facing difficult situations. Again, these are behavioral guardrails, content guardrails that are a foundation. They are not the only mitigation, but they are the foundation on which everything is built. Now I want to turn to parental controls. We introduced a set of parental controls in the fall, and our overarching goal as a product and engineering team was not just to build a new settings page; it was to lead the industry and to pull the industry with us. And we'll talk a little bit about how we did that and how we feel we've done that. I want to talk about our parental controls and how we feel they are empowering families and educators. The protections reduce exposure to the types of content described, content that research shows may be harmful for adolescents.
So this is based on teen developmental psychology. Parents link their account to their teen's account and manage settings from a single dashboard. It allows parents to tailor the experience. In particular, the setup process is very straightforward and works in both directions: a teen can invite their parent to parental controls, and a parent can invite their teen. It goes both ways. If a teen later unlinks their account, the parent is notified. If a teen asks to change a setting, the parent is notified; teens cannot do that on their own. That is only available to parents. So the goal here was to design a system that encourages communication between parents and teens and is transparent on both sides. A teen can't do anything in terms of editing these controls that their parents don't know about, and vice versa. Once accounts are linked, there are a number of different controls that a parent has. The goal here is to get as granular as possible. A parent should be able to turn image generation on and off, voice mode on and off, and the sensitive content restrictions on and off.
Maybe for their family, they're comfortable with their child seeing more adult content. Parents can receive alerts if the system detects possible signs of suicidal ideation and distress, and we'll talk about that in a little more detail, and they can opt out of model training. The goal is to give parents options that are as granular and flexible as possible, in as simple a way as possible. All of these parental controls are default on; a parent does not have to opt in. And I want to talk a little bit about safety notifications and how we built them. This launched last fall. We were the first in the industry to build this, and we're heartened to see some of our peers follow us in that regard. What this is is the following: it is a safety notification system, an industry first, and it doesn't require an opt-in. If you are in parental controls, you do not have to raise your hand as a parent and say, I want to receive safety notifications. It is on by default. And we will notify you in three ways: in ChatGPT, via text and via email. We could do it via carrier pigeon if we had any ability to; we would.
But the goal is to get to a parent and to share that a teen is prompting for distressing content. The content that a teen is prompting for is never shared with the parent; we understand and value the privacy of teens. We are not sharing the specific prompt and generation text. The goal is to encourage a parent to take action, and to give them enough information to do so.
One thing that is important to note is that when a teen is prompting for distressing content, before a parent notification is triggered, that content goes to human beings, trained full-time employees inside OpenAI, for review, to make sure that we haven't had a false positive, that we haven't done this in an incorrect way, before we send a notification to parents. We love that this has become an industry norm, and it's one of the ways that we hoped to pull the industry along in the parental controls space. As some of my colleagues have noted, our work is not done here. We are continuing to learn, and we are continuing to improve. We partner with some of our friends over at Common Sense Media. We believe these are first steps; this is not the end. Additionally, because we know that parents and teens need additional support and guidance on how to use our tools, we have family guides on how to use AI responsibly and a set of conversation starters for parents. These resources were developed with input from safety experts and organizations like ConnectSafely and Common Sense Media.
I want to end by recognizing that protecting young people online is an ongoing responsibility. No single company, product, feature or law will solve these challenges on its own. We believe that progress comes from thoughtful guardrails, transparency, collaboration with experts, and empowering families. In fact, today we joined a group of kids' safety advocates, community groups and other organizations as part of the Parents and Kids Safe AI Coalition to pass what we hope will be the nation's strongest child safety AI law. We appreciate the committee's work in this area. We look forward to continuing to partner with you, and thank you for the opportunity to testify.
Thank you.
I just want to clarify one question: you said that when parents get that notification, it doesn't say what the prompt was, it just gives them a category.
Yes.
It says.
Yeah, so it would say suicidality, for example. It'll say your teen is prompting for suicidal content.
Okay.
Yes.
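To make that flow concrete, here is a hedged sketch of a category-only notification pipeline of the kind described: a trained human reviewer confirms the flag before any parent is contacted, and only a category label, never the prompt text, goes out over the three channels mentioned. All names here are hypothetical, not OpenAI's internals.

```python
# Hypothetical sketch of a category-only parent notification pipeline.

def notify_parent_if_confirmed(prompt_text: str, category: str,
                               human_review_confirms, send) -> None:
    # Human review happens before any parent notification is triggered.
    if not human_review_confirms(prompt_text, category):
        return  # false positive: no notification is sent
    # Redaction: the parent sees only the category, never the prompt itself.
    message = f"Your teen is prompting for content related to: {category}."
    for channel in ("in_app", "sms", "email"):  # the three channels described
        send(channel, message)

# Usage with stub callbacks standing in for a review queue and a messenger:
notify_parent_if_confirmed(
    "<redacted prompt>", "suicidality",
    human_review_confirms=lambda _prompt, _cat: True,
    send=lambda channel, msg: print(channel, msg),
)
```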
Turn my mic off. Now we're going to turn to Eliza Jacobs, who is not sitting here, but her assistant and very talented government relations colleague is. So Eliza should be online. Eliza, do we have you?
Hi, everyone.
Perfect.
Can you hear me?
Great. Yep.
Hi.
Thank you so much for having us today. And thank you to all the previous speakers. I think it's just a testament to how much this needs to be a group effort, for all these different components to come together and talk about this important issue. And also, thank you so much for letting me testify remotely. It lets me be home with my kiddo for dinner tonight, so I really, really appreciate it.
As Chair Bauer-Kahan said, my name is Eliza Jacobs, and I lead product policy at Roblox. First of all, I don't know how many people know what Roblox is, but Roblox is an immersive gaming platform. People can connect with their friends and family and play and explore. Molly, you can go to the next slide. We have over 150 million daily active users all across the world. About 66% of them are over 13, but that means there's a significant portion of our users that are under 13. And we have always been an all-ages platform, which has really informed our approach to safety over our 20-year history. Next slide.
Did we miss, did we miss a slide there? No.
Okay.
Yeah. So Roblox has been around for a while. We've always been an all-ages platform, and as a result we've always built with safety at our core. We have a multi-tier, multi-level approach to safety. As many people have noted today, there is no one tool that is the silver bullet for safety. You have to have many layers and many tools to keep your community safe, and that's what we do at Roblox.
So we start with robust policies. Can we go back, Molly? Yeah, we start with robust policies. Our policies are purposefully more restrictive than most of the Internet, again, because we're an all-ages platform. We don't allow profanity on the platform, for example. We don't allow any references to drugs or alcohol on the platform. We are optimizing for the safety of our youngest users in our policies. We also have robust automated moderation systems; at our scale, you need to have AI working in partnership with humans to moderate the content on the platform. We then have teams of human experts doing human moderation for more complex cases. We have a team of deep subject matter experts on all manner of child safety issues: grooming, suicide and self harm, terrorist content, all of that. And we have a team of internal investigators that work on those more complex issues.
We also have a wide variety of safety partnerships with NGOs, with Common Sense Media; you know, we work with all the organizations that people have spoken about earlier today. And I also want to highlight that we have a Teen Council and a Global Parent Council. Those are groups of users and parents who engage with the platform, and we're constantly talking to them about what they want to see and what would be helpful for them. We think it's really important to value the teen voice and the parent voice in all of these conversations.
So there are, as I said, many layers of safety on the platform. To start with communication safety: we do not encrypt any of our communication, so all of our communication can be monitored. We have AI models running in the background constantly to monitor for grooming and other critical-harm behaviors. We have internal experts who are looking at that communication and reaching out to law enforcement where necessary. We think it's really important, when we're talking about kids, that we're not encrypting communication.
We also have a text filter that operates on communication on the platform, so we're filtering inappropriate communication before it can be sent to other users. And specifically, it's designed to block the sharing of personally identifying information. So kids can't share phone numbers, addresses, Instagram handles, anything that would make it easier for people to meet up with them offline, or online on another platform.
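As a toy illustration of the kind of pre-send PII filter just described, here is a minimal sketch. Real production filters are far more sophisticated, and the patterns below are illustrative assumptions only, not Roblox's actual system.

```python
# Toy sketch: scan outgoing chat text for PII before it is delivered.
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),               # US-style phone number
    re.compile(r"@\w{2,30}"),                                       # off-platform handle
    re.compile(r"\b\d+\s+\w+\s+(street|st|ave|road|rd)\b", re.I),   # street address
]

def filter_message(text: str) -> str:
    """Mask any matched PII span with '#' before the message is sent."""
    for pattern in PII_PATTERNS:
        text = pattern.sub(lambda m: "#" * len(m.group()), text)
    return text

print(filter_message("text me at 555-123-4567 or find me @coolkid99"))
# the phone number and handle come out masked with '#' characters
```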
Next slide. And we know that it's important to design, again, with kids and teens in mind and to have additional protections for our younger users. There are real challenges here. As many people have noted, as kids grow up and become teenagers, they have growing independence. They often have their own devices; maybe they're alone in their bedrooms on those devices. They're moving between apps. You know, everyone who has spoken today, our users are on their platforms as well. And we can only control what they do on our platform; once they leave the platform, we just don't have visibility into that. So, a few things we've built into the product as safety features. First of all, there's no image or video sharing in chat, so you cannot share a photo from your camera roll in chat on Roblox, and you can't forward a video. As I said, we don't encrypt communication, so we're constantly monitoring all communication between users for potential harms.
We also, and I'll talk about this a little bit more later, require age checks to access any communication features on the platform. That is a facial age estimation process that we rolled out starting in the fall, and it is globally required as of January. And we've open-sourced many of our safety models. You know, the companies that are testifying today are some of the bigger players, but there are lots of apps that just don't have the resources to build the kinds of systems we're talking about today. And so we think it's really important to share this technology in an open source way with the whole industry, to keep everybody safe. We want kids to be safe not just on Roblox, but everywhere.
And we're constantly engaging with policymakers like yourselves and with child safety and child development experts to understand what is necessary and what we need to build in the next generation.
Next slide. So, specifically talking about parental controls, and just to reiterate, all of those things that I just talked about come as a factory setting, out of the box. You don't need to engage with parental controls for any of that to be true on the platform. And we think it's really important that you're starting from a place of default safety, and that parental controls are just another layer in the arsenal, another tool so that parents and families can personalize their Roblox experience. By all means, we don't think that they're the end-all, be-all, and we don't think that they should be necessary for kids to be safe on our platform.
That being said, our parental controls were the result of extensive partnership and consultation with experts. We work with a variety of ratings boards. In the gaming space, similar to movies, there are lots of different international ratings boards that rate content, and some of them are here. We are working on integrating with IARC, the International Age Rating Coalition, so that sometime in the next year our users will get localized ratings. Right now, and I'll talk about this a little later, you get our standard Roblox platform ratings, but in the future, kids in the US will get ESRB ratings. For those who have gamers in your life, you'll recognize those as things like E for Everyone and T for Teen. In Germany, for example, there are USK ratings, and in the UK there are PEGI ratings. So those will be familiar to parents and will be displayed for their kids when they're accessing Roblox games.
Next slide. So how do our parental controls work? Similar to what other people have spoken about, we have a parent-link approach, where parents create their own Roblox account and link it to their child's account, and then their phone becomes sort of a remote control for their kid's Roblox experience. As a parent myself, I know that you're often making these choices late at night, when you finally sit down after doing the dishes. And so we think it's really important that you have the ability to have asynchronous control over these things.
You will also get notifications if your kids request a settings change. So if they are at a friend's house and they want to play a game that you haven't allowed in their settings, they will send a request, and you will get that request on your phone and be able to approve or deny it from there. You don't have to be on their device to make that choice.
In order to link your account as a parent, you have to verify your age. You can do that either with a credit card or with an ID, and once you've done that, you'll have access to the full suite of parental controls.
One thing I want to note here is that another advantage of the parent-link approach is that it encourages parents to get in the game themselves. We really believe that the more you're opening a dialogue with your kids and talking to them about their Roblox experience, or any online experience, the easier it will be to hear from them about their honest experience. If they believe that you care about what they're doing on Roblox and that your instinct isn't just to ban it, the more likely they are to be open with you about what's happening. So, you know, create a fun avatar, play a game with your kids. We think that's a big component of parental controls and parental involvement.
Next slide. So as I said, parent accounts must be age verified with a government-issued ID or credit card. We only use this information to verify your age, so it's not an ongoing identity marker. Next slide.
And then once you do that, you'll have access to this user-friendly dashboard with the controls we've heard from teens and parents that parents most want. Something that we've heard a lot today is that parents are overwhelmed.
And I can totally understand that as a parent myself. I think what's most important is that we're giving parents the tools that they most want, and not a million controls and a million radio buttons that are overwhelming and that sort of become like an eye chart for parents to have to review. So we really focus on the things that parents have told us they want the most.
And in general, those fall into a couple of categories: content restrictions, what your kid can play; communication, who your kid can talk to; spending, what they can purchase; and screen time, as well as a few other key controls.
Next slide.
So parents can see who their kids' friends are. They can block individual connections, which means that your kids won't be able to talk to those users. And once that connection is blocked, kids can't go in and change that setting. Parents can also set daily screen time limits within the app.
One thing to note: this might change day to day. You know, my daughter was home on Friday sick, and she got a lot more screen time that day than she would normally get. And so again, we want this to be really easy for parents to do from their phones, to be able to quickly make adjustments if it's, you know, a sick day or a snow day, we've had many of those here this year, and they want to let their kids have a little more screen time that day. Next slide.
Parents can also set spending restrictions.
It should be noted that parents are setting spending restrictions by sort of loading
the money into their account in the first place.
But they can also set additional restrictions and also notifications.
So if you want to get a
notification every time your kid buys something on Roblox, you can do that. You can also get a notification just when the spend hits a certain limit and set an overall limit as well.
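A rough sketch of the spending controls as described: per-purchase notifications, a notify-at threshold, and an overall cap. The class shape and the numbers are illustrative assumptions, not Roblox's implementation.

    # Illustrative sketch of the spending controls described above.
    class SpendControls:
        def __init__(self, notify_each=False, notify_at=None, overall_cap=None):
            self.notify_each = notify_each
            self.notify_at = notify_at      # e.g. alert once spend passes $20
            self.overall_cap = overall_cap  # e.g. block purchases past $50
            self.spent = 0.0

        def purchase(self, amount, notify):
            if self.overall_cap is not None and self.spent + amount > self.overall_cap:
                return False  # blocked: would exceed the overall limit
            self.spent += amount
            if self.notify_each:
                notify(f"Purchase of ${amount:.2f}")
            if self.notify_at is not None and self.spent >= self.notify_at:
                notify(f"Spend has reached ${self.spent:.2f}")
            return True

    controls = SpendControls(notify_each=True, notify_at=20, overall_cap=50)
    controls.purchase(15, print)          # parent is notified of the purchase
    controls.purchase(10, print)          # notified; threshold alert fires at $25
    print(controls.purchase(40, print))   # False: would exceed the $50 cap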
Next slide. Content maturity limit. So this is where the ratings come in. We currently maintain a sort of universal Roblox standard of content maturity limits. Think of this like movie ratings. By default, users under nine only have access to minimal or mild content.
Users over the age of nine will
have access to moderate content.
Restricted content requires that users be 18 plus.
But again, just to reiterate, our content policies are just much more restrictive than the rest of the Internet.
So again, no profanity, no drugs and
alcohol, no sexual content.
All of those things are just flat
out prohibited on the platform.
And so these buckets are actually much more restrictive than traditional G, PG, PG-13 kinds of ratings. Next slide.
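The age thresholds here come straight from the testimony (under 9: minimal or mild; 9 and up: moderate; 18-plus: restricted); the code shape is a hypothetical illustration of that mapping.

    # Sketch of the age-to-maturity mapping as described in the testimony.
    MATURITY_LEVELS = ["minimal", "mild", "moderate", "restricted"]

    def max_maturity_for_age(age: int) -> str:
        if age >= 18:
            return "restricted"
        if age >= 9:
            return "moderate"
        return "mild"  # under nine: minimal or mild content only

    def can_play(age: int, experience_rating: str) -> bool:
        allowed = MATURITY_LEVELS.index(max_maturity_for_age(age))
        return MATURITY_LEVELS.index(experience_rating) <= allowed

    assert can_play(8, "mild") and not can_play(8, "moderate")
    assert can_play(10, "moderate") and not can_play(17, "restricted")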
In terms of content restrictions, parents can block individual experiences that they don't want
their kids to play.
And something that we took directly from the research: first, we show parents what their kids' 20 most-played experiences are, so that they know where their kids are actually spending time. And then they can choose to go into those, explore them, and decide whether those are appropriate, on top of the ratings-level restrictions. And this, we think, really surfaces the information parents need to make choices about what they want their kids to be able to play. Next slide.
So, as I said, in November we started rolling out, and in January required globally, that all users who wish to access communication features on the platform complete a facial age estimation process.
Once they do so, they will be able to access communication features.
They'll only be able to chat with other kids in their peer group.
We're very optimistic that this step, though
not required by anybody, will become a gold standard for age verification on the
Internet and for child safety. For a long time, knowing how old kids were was just incredibly difficult, right? For adults we have IDs, but for kids it was very difficult to know. So we're very excited to launch this globally. And we also have continual age estimation running in the background. I think Google talked about this as well. If we have any reason to believe that the age estimated on your account is not the age of the person using that account, for example because of the nature of the games you are playing or the types of folks you're friends with on the platform, and there seems to be a mismatch, we will introduce additional friction and ask you to verify again.
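A hedged sketch of that continual age-estimation loop: when behavioral signals disagree with the declared age, add friction and require re-verification. The function, signal names, and tolerance are invented for illustration, not Roblox's actual logic.

    # Sketch of the background age recheck described above (hypothetical).
    def recheck_age(declared_age: int, behavioral_estimate: int,
                    tolerance: int = 3) -> str:
        """Compare the account's declared age with a background estimate
        derived from signals like the games played and the friend graph."""
        if abs(declared_age - behavioral_estimate) <= tolerance:
            return "ok"
        # Mismatch: introduce friction and require re-verification,
        # e.g. repeating the facial age estimation check.
        return "reverify_required"

    print(recheck_age(declared_age=16, behavioral_estimate=15))  # ok
    print(recheck_age(declared_age=25, behavioral_estimate=11))  # reverify_required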
Next slide.
That's it from us. But I look forward to hearing your questions.
We're very passionate about safety at Roblox and appreciate California's leadership on this issue.
Thank you so much. We are going to open questions with Assemblymember McKinnon. Thank you guys so much.
And I'm rushing, and I'm sorry, but I have another meeting. This has been such an important topic today, and I thank you, Chairwoman, for bringing this forward. I want to start with the settings. Is there any way we can make these settings, to protect ourselves and kids, better and more user friendly? I just started trying to protect myself from, you know, allowing people to know my location, and, you know, just privacy things on my own iPhone, and it's been taking hours to go through there and try to figure out what to turn off, what to turn on, what to keep on, because I'm nervous about being followed and stuff myself, for privacy. And so is there any way you guys can make these settings more user friendly?
Well, I think from our perspective, from the Google perspective, we're always looking to improve. This is an ongoing process. For parents in particular, we have a variety of resources, whether that's, you know, families.google, where parents can go to get instructions and more information on the different settings, in addition to, you know, the setup in Family Link. But I think, you know, what we believe is this is a process that is going to evolve, right? As different technological tools evolve, so will protections, so will the settings. And it's also why we prioritize working with our independent advisory groups, which we have both on the Google and the YouTube side, and also with civil society, NGOs, and government, having that back and forth. And this will be an ongoing discussion.
I think the single biggest thing that we're doing at OpenAI, and I think the easiest thing to do, would be to have all the default settings on so you don't have to figure out which ones are right, but the baseline safe, private experience is on. And that's the approach that we've taken for our parental controls, at the very least for parents and teens. We know that parents often don't know what they are. As my Google colleague has mentioned, we have literacy resources and in-person consultation, and we can get better at education, and I think we should, as we've noted. But by default the controls should be on, and a parent shouldn't have to turn them on and figure out what they are.
So when we purchase the phone and first get it, it should just be on the default already. And then you go from there.
Speaking to the OpenAI, you know ChatGPT experience in particular.
That's the approach we've taken. And remind me, because now I've probably conflated all of these different safety programs.
I apologize. So OpenAI, it's on by default for under 18. And then is that self-attestation? How are you determining a ChatGPT user's age?
Similar to our Google colleagues, three ways. Self-declaration of age first; then age estimation that runs in the background, which will determine whether a user is over or under the age of 18; and then, if we are not certain of a user's age, over or under 18, using age estimation, we default that user down to the under-18 experience. If we get it wrong and we have defaulted you down to the under-18 experience, you can use age verification, either via selfie or government-issued ID, to rectify.
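A small sketch of that three-signal flow: self-declaration, background estimation, default-down when uncertain, and ID or selfie verification to rectify. The function and its parameters are hypothetical, not OpenAI's actual logic.

    # Sketch of the age-resolution flow described above (hypothetical).
    def resolve_experience(declared_under_18: bool,
                           estimator_confident: bool,
                           estimator_says_adult: bool,
                           verified_adult: bool = False) -> str:
        if verified_adult:                # selfie / government ID check passed
            return "adult"
        if declared_under_18:             # self-declaration is honored first
            return "under_18"
        if estimator_confident and estimator_says_adult:
            return "adult"
        # Not certain either way: default down to the safer experience.
        return "under_18"

    print(resolve_experience(False, estimator_confident=False,
                             estimator_says_adult=False))   # under_18
    print(resolve_experience(False, True, True))             # adult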
Got it.
And then, I assume, you will be complying with Assemblymember Wicks's bill when the time comes, which I know is not yet.
Although I will say, before I turn it back over to the Assemblymember, my device manufacturer has now turned on age signaling by their own choice.
This is not legally required yet and
I downloaded an app that was choosing to limit it to 18 plus.
My device then warned me I was downloading an app that was 18 plus, asked me if I wanted to change
my age prior to sending the age signal to get the app.
So even the device manufacturers are doing this, and it's not technically against the law; we didn't think of that. We didn't think the device manufacturers would be inviting people to change their age. So we'll be cleaning that up. But I just feel like every time we try to do these things, somewhere there's an end run around it. But we're going to keep fighting the fight and closing the loopholes.
Keep pushing.
Given the subject matter of this hearing, I would like the panelists to comment on how we protect vulnerable youth who may not have active caregivers but rather may be neglected or experience trauma at home, given that research shows that children who have experienced abuse or maltreatment are at heightened risk for suicidal ideation. I can start.
We've been talking about defaults, and in a lot of the conversations I've had with policymakers, this has come up. And again, not every parent is going to be involved. A lot of parents can't be involved; they're working multiple jobs. I used to do domestic violence cases. There may be home situations where teens don't want their parents involved. But again, as we said earlier today, it's an outlet for teens to connect, to get educated, to find their passions, to communicate with their friends. And that's why, again, we were the first to roll out the teen defaults with teen accounts. We understood how important it was that even if a parent can't get involved, we need to have the strictest settings in place. And again, we default all teens under 18 into them. I will say separately, we work with three or four different expert advisory councils, and they drew a differentiating line between under-16-year-olds and over-16-year-olds. And so if you're under 16, so between 13 and 16, you cannot get out of those protective defaults without a parent relaxing them. Older teens can drive, they may be studying, have jobs; executive-functioning-wise, and again not every teen is the same, there is a line between them. But we still default everybody into it. I think what's really important is that it's not just the default experience itself, it's substantively what we are protecting against. And we want to make sure that, again, the content teens are seeing is age appropriate. That goes to your question: sometimes vulnerable teens are looking for content that maybe they shouldn't see. So it's really important not only to have policies on it, but to enforce on it and to make sure that we are keeping that content away from vulnerable teens, especially if their parents aren't involved and cannot have conversations with them. I think who you talk to is really important. You want to make sure that teens are not getting randomly messaged by people and that they are in a protected experience when it comes to messaging restrictions. So we default them into that. And so I think the point here is everything should be automatic, without the teen even having to hit anything and
try to get out of it.
And if they do want to get out of it, that's when they go to a parent or guardian.
Thank you.
One last question, please. Did you want to.
Well, I think, you know, just when we're talking through how complex it can be, building for every type of child, every type of family, their unique experiences, is why this can be hard and why we want to have the ability to have different settings. And from the YouTube perspective in particular, we're talking about access to a video library and the way that that can help teens or users in vulnerable situations: finding authoritative content, finding content that is validating some of what they may be feeling in a certain family situation, or what have you. I think this is also about not cutting off access for some of those teens who may need that information. From a YouTube perspective, teens are using this to listen to music while they're doing homework. For younger users, this is the largest video library of Sesame Street, for example. So this is a video-sharing platform, and on top of that there are digital well-being pieces built in. If someone is searching for suicide, self-harm, or disordered eating, there are going to be protections defaulted in place: screen takeovers encouraging them to seek authoritative content and to take a beat, and, you know, elevating content about self-compassion, about grounding exercises, things like that. So there's a variety of different ways that it could be supportive as
well. And are yours on by default? Sorry, these teen protections you're mentioning.
Yes, yes.
And then how do you do age?
So we have age assurance. We rolled that out on our first-party platforms, and we have an inference model that will say whether or not we think this user is above or below the age of 18, taking into account things like, again, how long the account has been in place, and whether they are looking for different kinds of content.
Okay, that's fascinating.
I just will say again, I don't want to put you in the hot seat, because my kids are on YouTube, and part of the reason they're on YouTube is because my son has learned to play chess on YouTube. He became a magician on YouTube. I actually think YouTube has really good content that my kids have grown from. And at the same time, I will say my son does have his computer in the kitchen, so I see what he's seeing. He's also getting fed incredibly disturbing content every single day. And so I'm surprised by some of these answers, because it's all great, but it's not playing out in my household.
So very last question. And it is good to see you guys coming up with great ideas.
So that's good to see because this
is my second year in privacy. With no visible representation of people of color among your leadership here today, why should black communities trust that your platforms are safe for their youth? For our youth? What measurable actions have you taken to eliminate systemic racism in your systems? And how are you being held accountable for those outcomes?
I know, mic drop there.
I mean, I can address that.
I saw you doing that.
I can address it and say, I
think we need to do better. I mean, the fact that you pointed out that, you know, there aren't enough black leaders at companies across the board, not just our companies here today, I think it's something that we need to all work on. It's important. I will say, I can only speak to my experience. I will say that when I used to lead youth safety policy, which I did for two and a half years, we brought a lot of different perspectives into the people who were advising on how we built the products. And it was across race, it was across gender, it was across socioeconomic status, it was across lots of different countries and also different kinds of families and different types of teens and parenting. And I do think, and I believe wholeheartedly that the way that you best design these experiences is making sure that you're getting all sorts of viewpoints in the room and that you're accounting for them and that if you don't feel like you have enough diversity in the room, you have to try better. So that I can speak to in terms of how our team worked, both with experts, parents, policymakers, it was a very, very diverse group of voices.
A couple things to note here. I agree; I think we can all do better here.
I am Latina, of Mexican descent, and I don't think that there is enough representation, just writ large, in the technology industry. And so I'm in full support of that more broadly. As we work with third parties in the mental health space in particular, one of the things we note is that the CEOs of the major mental health organizations are people of color.
We ensure that on our Wellbeing Advisory Council there are people of color, and that the Global Physicians Network is globally representative, so that we are not taking a very particular approach in the decisions that we're making.
I also want to address sort of the prior question.
It sort of dovetails together, which is to say as we are building some of these systems, for example, this parental notification piece that we've talked about,
we
understand that even when a parent is involved, that parent might not always have the best intentions.
And this is something that has come through in some of the third-party organizations that we've worked with on mental health, which is to say: before we send a notification, we are assessing for risk at home, meaning, what else is that teen prompting for? To be sure that they're not prompting for suicidal content because there is risk at home. Right.
So I think a lot of this dovetails together, and representative viewpoints from, you know, our Wellbeing Council on AI and our Global Physicians Network, and this sort of broader representative data set, have really guided our approach here.
And thank you for that. And to the companies that you guys work for: in leadership and decision making, we need to see a more diverse group of people so that they can give their input, because this is affecting all of our children. And it's great to see women sitting here; that is very good to see. But we do need a more diverse perspective.
So in the coming years, that's what I'll be looking at. Like, where are you guys with AI, with online tools? How are you guys making sure that all kids are going to be safe? Because this affects all of our children.
Thank you.
Thank you, Assemblymember. Assemblymember Ward. Thank you for the presentations.
Obviously this is a key interest of the committee, given some of the work that's coming before us, and certainly discussion out in the community: parents, schools, and anybody that cares about our kids, myself included, with my 11-year-old and 7-year-old. And I sympathize as well. The 7-year-old, you know, is loving YouTube, but maybe a little too much. And it raises a question, because, you know, I'm still educating myself on how to set things up well, and maybe we don't have enough education, right, when you're creating a new account. And I would want to ask two things for any companies that, you know, are creating accounts or pointing you in the direction where you're trying to make sure the good controls are in place. You know, how do I even know to access these controls, or what options are available to me? We're learning things here just today that we never even knew. And, you know, if you are creating accounts, you said the technology is sort of screening that, you know, this viewer might be a youth, might be a teen. Are there proactive ways to prompt the teen, or any other viewers there, hopefully a parent in the room, to know about the options that are there, so they can start to avail themselves of parental controls or other systems?
Yeah, I'll speak to OpenAI in particular,
at every possible point we are attempting
to surface the concept of parental controls.
It's available in our settings pages. We point users constantly to our help center, to our notification systems. The goal is to drive as many parents to this as possible. I think industry-wide we can do better at education, as we've said here today, but the goal is, in the product, to surface as many notification moments as possible, both to parents and to teens, pointing to our literacy resources, to our help centers, to the settings page, to engage in these parental controls.
Yeah, I think that could certainly be a takeaway that we need to, you know, more immediately work on in this moment: we want to make sure that there is a lot more opportunity for all of the software and product that you're developing to help be a part of the solution here, that that information is getting out there so it can be availed of. And maybe related to this: take a youth, say in my case, you know, a seven-year-old. We sort of get him on there and he wants to watch a little bit, and I literally am typing in the search bar, you know, educational videos for 7-year-olds, and there's a lot of great options out there, right? And so he starts going on those and he's kind of clicking around, and I'm out of the room for 10 minutes, and next thing I come back in there and he's watching, you know, hyper-graphic content, like, you know, war scenes and gunplay. And it's like, how did I get from here to here, right? If I was typing in educational videos for seven-year-olds, well, one, hopefully you're realizing that a seven-year-old is watching, and so it sort of would have self-corrected, but that wasn't happening in this case. And two, why would algorithms even sort of, you know, link these two? So kind of an open question there. And I really raise this because we're having that challenge right now. Next thing I know, I literally this week got a call from the principal about gunplay at school. Gunplay at school, you know. And it's like, okay, well, yeah, I guess he can't watch YouTube, but I don't want that prohibition on it, because I recognize the positive benefit of it. But something is just not actively working in practice right now, or there's not enough check-in there. Fortunately there wasn't a real problem, right, like it didn't really have a serious outcome, but left unchecked I could see more and more real problems surfacing.
Well, I'm happy to take that. I think, to fuse the two questions, if I'm understanding them correctly: when a user, for example, starts a Google account, if they're telling us they're under 13, they're automatically routed into a flow saying you need a parent, and getting that parent involved. They can't access anything until they connect with the parent. And so they would go through that Family Link flow, which would have all of those settings that we showed. But if a user says they're above 18, that's when our age assurance comes in, and if it isn't sure, then before they try to access any age-restricted material, they'd have to confirm their age. In that default setting, if we're seeing that it is indeed a seven-year-old, we would send them to Family Link; but say it's a teen, we're putting those default settings in place. For the parts of YouTube, I think one of the things that's really important, and I certainly can't speak to any specific incident, is that what we're trying to do is elevate high-quality content and limit low-quality content. And so for the teen experience, we have principles that we've worked through with third-party experts, for both kids under 13 and teens, to figure out what does high quality look like, what does low quality look like, and how do we adjust those personalization recommendations accordingly. And I think the other thing, and not to say that this is the case, but one of the things we hear a lot is the importance of children being on their own accounts. We actually made it much easier for parents to toggle between accounts. If a parent's on their account, the child is not necessarily going to have those default settings with those high-quality principles in the feed, and we want to make sure kids can take advantage of not only the under-18 default settings that we talked about but also whatever parental tools are in place. And so we're making it easier for parents to go back and forth, and, you know, wanting to show the importance of kids being on their own account.
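A rough sketch of the routing logic just described (under 13 to Family Link, teens to protective defaults, uncertain adults to an age-confirmation step before restricted material); the names and structure are illustrative assumptions, not Google's implementation.

    # Hypothetical sketch of the account-setup routing described above.
    def route_new_account(declared_age: int, assurance_confident_adult: bool) -> str:
        if declared_age < 13:
            return "family_link_required"  # no access until a parent links
        if declared_age < 18:
            return "teen_defaults"         # protective defaults applied
        if not assurance_confident_adult:
            # Declared adult, but age assurance isn't sure: confirm age
            # before any age-restricted material is reachable.
            return "age_confirmation_before_restricted_content"
        return "adult_experience"

    print(route_new_account(7, False))    # family_link_required
    print(route_new_account(20, False))   # age_confirmation_before_restricted_content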
Thank you for that. I wanted to switch, because I'm overdue for a 4 o'clock meeting, Madam Chair, but I did want to make sure that we at least, you know, were able to work on another sort of community issue. I'm the chair of our LGBT caucus, and that comes up often as we're thinking about, you know, how to manage social media and whatever kinds of constraints we're putting on there. We do have concerns sometimes, because we recognize both the positive benefits and the negative challenges around social media use. You can imagine a number of scenarios where a youth might be identifying or questioning themselves, but they might not be in a supportive environment, or they really just want to go to more kind of constructive, proactive things. Think the Trevor Project. Think your local teen LGBT center, a support group, you know, just sort of positive information. And with parental controls, with the ability to sort of manage all that, things get a little dicey, right? Because, you know, parents are watching what they are accessing, and that might be, you know, kind of getting into their space of privacy a little too much when they're not ready to come out, or they may not be coming out in a very, you know, supportive environment, or worse, right, a very, very hostile environment. And so that's something that comes up in this committee conversation as well as we're thinking about these regulations. And I guess, what do you see? I know that this has been studied; the Surgeon General is looking at, you know, studies of both the positive and negative effects. What do you see as sort of, you know, the kind of lens that you're thinking through when it comes to LGBTQ youth, to make sure that they're protected overall, but that privacy considerations are embedded as well, and positive benefits are directed?
I think that's incredibly important. When we do talk about how parental tools should work, you know, the different levels of it, that's why, you know, this is a difficult conversation. We need to be balancing the fact that teens do have an increased developmental capacity for autonomy, wanting to make sure they have, of course, all of those default settings, but there are really good reasons why they should be having a more autonomous experience. And it's really important to think through those exact kinds of examples as we're thinking through what public policy looks like, and why it's important not to completely cut off access, but to allow access within a safeguarded environment.
On the ChatGPT side, and again, we're not social media, so it's a little bit of a different game here. But on the parental control side, one of the core tenets and principles in the way that we built this is that a parent will never have access to and will never see the exact prompt and generation text that a teen is putting into ChatGPT. And it's why, as we built parental notifications and all of this, the general topic of the distressing content, being suicide specific, is shared; the exact prompt and generation text is not. Because why that teen is suicidal, and everything that surrounds that, is their privacy. But we want to give parents enough to have the ability to take an action. So the goal is to preserve the privacy of the teen and allow a parent to have enough information to do something about it. But we recognize and have thought extensively and worked with our third-party experts and councils, the APA and what have you, on this exact question. And so I appreciate it.
And is there a difference in... and I appreciate, I think it was Instagram that you mentioned, that there's a difference for your programs between under 16 and 16 to 18, for example. Do you see any distinction between age groups, or is everything under 18 privacy protected like a 7-year-old?
Can I actually correct that?
Sorry if I misspoke. You're right
in everything that you said, but I think it depends on what the experience is. So just to elaborate: when we rolled out the new expanded version of teen accounts, we took a different approach when it came to content, and we identified that teens should not see adult content. Whether you're 13, 14, 16, or 17, you shouldn't see content that's 18-plus. So depending on the type of experience, we actually sometimes delineate at under 16 versus over 16, and then there are other experiences that squarely fit into "this is an experience that teens should have," and they should not be accessing adult, inappropriate content. So I just wanted to clarify that.
Okay, no, I appreciate that clarification.
Didn't want to misspeak for you. So do you have any distinction within under 18, or is everyone under 18 privacy protected?
As you just mentioned, today everything under 18 is privacy protected.
Interesting.
We at Roblox, just to jump in there.
Go ahead.
Well, you know, kids grow up in a variety of ages and stages, right? So as you age on the platform, you have access to a sort of expanded set of products, features, content, and all of that. And we think of that as sort
of a training wheels approach.
We want to teach kids good digital habits.
And we know that at Roblox, for
many kids, we're the first account they
ever have on the Internet.
And we take that really seriously. So we make distinctions. Under nine, for example, there is no access to direct messaging on the platform. As you age up, you have more and expanded access to communication features after that age check, and to different kinds of content. And then at 18-plus, you have access to restricted content on the platform. But I do think one thing that
would be incredibly helpful, and this is from a couple of questions ago: we're all talking about safe by default and then layering on parental controls, but none of us necessarily use identical language or identical terms for settings and buttons and tools.
And that makes it really hard for
parents to be able to navigate across. You know, I think the stat is most kids are on upwards of 40 different apps. And so to the extent that regulation,
that legislation can standardize some of that
language to make the cognitive load easier on parents, I think we would welcome that as an industry. To say, like, this is what this word means. Everyone use this word when you're talking about this control. That would be incredibly helpful.
We're all engaging with experts and teens
and parents and NGOs and, you know,
pediatricians and all of that.
But we're all landing in slightly different places, even though we're all trying to get to the same outcome. The more we can standardize that language, the better and safer everyone is.
Yeah.
And I think that leads me to my next question, which is, you know, Assemblymember Wicks, who had to leave, passed the Age-Appropriate Design Code, which was really intended to get at how do we design these to be safe for children. And some of what I'm hearing today, I think, is unclear to me.
Are you changing the algorithms or the recommendation engines?
Are you just shielding content?
I don't know.
That's a little unclear.
If you want to answer that. The Age-Appropriate Design Code was then sued over, and according to the Ninth Circuit is now very minimally lawful, but mostly not lawful. So I guess I'm a little bit lost.
Okay, great, we're here. You're talking about all these things.
We had an assembly member who's led in the space for a long time.
We tried to put that forward. It was then sued by industry.
So is that the gold standard?
Like, is this the gold standard?
Should we be saying what's safe for kids online?
Is that something the industry will ever allow to happen? I guess is the question. I don't know if I said that
well, but I would say Roblox supported the California age appropriate design code for
precisely the reason that I just discussed.
And, you know, I can't speak to the legality necessarily and what those arguments were, but I do think industry standards that people can align on would be incredibly valuable.
Anyone else want to weigh in on age appropriate design in concept?
Well, I think, speaking to the purpose of age-appropriate design, we are in favor and have been. We had a legislative framework to protect children and teens; I think we released it back in 2023, with things like requiring companies to take the best interests of the child into account and to have offerings related to prioritizing mental health and well-being, things like that. And with regard to age-appropriate design, there are a lot of things in there: age assurance was part of that, privacy by design is part of that, and we have those in place. And I think, just more broadly, as we've talked about, and maybe unsatisfying in some ways, I think my colleague from Meta said this isn't static. This is, you know, an ongoing conversation, an ongoing way that we want to be meeting the moment for both parents and for minors.
And I was just gonna jump in to say that I agree with Eliza. I think standards are good. And I think you've heard we all have different versions of default settings, different versions of parental controls, different versions of content ratings, I guess, if that's what you're going to call it. So we're all solving for the same root issues, and we're trying to put in mitigations, and we're all working with experts and parents. I mean, we're all facing the same things. I think, though, what we heard earlier today, and I know you're going to have another panel on this too, is, you know, as a parent, and the fact that Eliza cited the same University of Michigan and Common Sense Media research showing that teens are on an average of 40 apps per week, it's a lot for parents to be jumping through those hurdles. And frankly, you know, I think parents have said time and time again, and teens have said, that the digital world is not going away. And there's a lot of good across everything that everybody has said today; it's not going away. But parents should be able to support their teens when they're online. And if a parent doesn't want their teen on 40 apps per week, they should be able to pick the apps and approve them. If you like YouTube, if you want your teenager or kid to be on YouTube, that's your choice. It doesn't remove the obligation on all of our companies to build those age-appropriate experiences; they still have to happen. But I think we need to make it easier on parents, because every person here has described a different version of what hoops parents are jumping through, whether it's streamlined or not, to support their teens. And we've pushed for federal legislation and state legislation to get parental consent at the OS and app store level. I think if you can make it easy on parents, and the apps continue to build these safeguards as technology changes, you're supporting not only teens but also their parents. So I think it's everything that we've been discussing, and then some.
And it's funny you say "you like YouTube"; I actually have a love-hate relationship with it. I think, fair enough, as we do with most tech companies, with technology frankly, so not to pick on YouTube again. But, you know, it's so complicated. And look, this is my life's work, and I didn't know about the parental controls on YouTube. So if I don't know about it, then that really says something.
But that's the point.
If it can be easier for everybody at the OS App Store level where it's like the same thing, the same standards, and then parents can decide, I'm okay with this app. Maybe my 12 year old, 13 year old is fine with YouTube, but maybe I have a kid with ADHD who's not okay with it. You as a parent should be able to decide and if you change your mind, you change your mind. But that's a parent's decision.
I also think, look, I love the training-wheels analogy, because I actually truly believe, and this is why the computer's in the kitchen, that in my family, my kid will
leave home and he will have these devices and he will have access to these things.
And it's my job while he is in my home and living under my
roof to help him learn to navigate these spaces.
And so, you know, we all... again, I went to college long before these spaces existed. But we knew the kids who were sheltered a little bit too much and got to college and, with other things, went a little bit, you know, wild, because they hadn't been taught how to manage things that are exciting.
And so I struggle, because I think kids should be in these spaces with their parents, learning how to navigate them. How do we think critically about content on YouTube? When you're being fed something that is maybe toxic or problematic or not factually based, how do you ask questions and look up sources? That is something people have to learn. But at the same time, when I sit and watch my daughter be fed content, frankly, that is different than my son's, and that is incredibly disturbing from a body-image perspective, I'm like, should I be allowing this at all?
And so I think that if we can create spaces where they can learn
and grow and start to get these
critical thinking skills, we are better for it.
And the problem is I think we're
not there right now.
So Assemblymember Wicks wanted me to ask you questions.
I think we've answered the first one. She said, for under 18, are the default settings the strictest? I think the answer was yes for everybody.
Correct me if I'm wrong.
Yes.
Okay. And then who can change them?
Can kids override them?
I think I heard you say only at 16 to 18.
Kids can override them in some contexts unless they're in parent supervision. So some 16- and 17-year-olds may want to be in parent supervision. If they're not in parent supervision, they can undo some of the settings. Not all.
Okay.
And then for Google? So for Google, a supervised user would remain on supervision after the age of 13. With YouTube, there's a voluntary teen experience that I'm happy to get more information for you on. But I also want to go back to the point that was made earlier and just clarify that parents right now, both on Android and through Family Link, have the ability to approve or block apps. And I just want to make sure
that that's very clear.
And that's true on Apple too, I think. Yeah, I have Apple devices in our house. So, yeah, I think we answered this.
You said nobody can override it.
Right.
Okay.
Yeah. And then, Roblox, I heard you say it depends on the age? It does depend on the age. It's sort of, as I said, a training-wheels approach. We have parental visibility through, I think it's 18, but it might be 16; I will double-check.
And then we also have youth mental
health tools, again, to the sort of digital literacy point that you were making.
We worked with our teen council to
ask them what would be most valuable to them. And so at 13, they have a series of youth mental health tools that are available to them in their own dashboard to make choices for themselves.
Got it.
Okay.
And then her next question was, would our prior panel.
And this is kind of a tough one. I'm giving you her question.
She asked tough questions. Would our prior panel believe that the strictest settings, presuming parents keep them on the strictest settings and don't make different choices, as they can in some of these programs, are good enough?
And she gave an example.
I'm Going to read her example of what she meant.
She said, for example, Google said you
have a bedtime reminder.
Can the kids just close that window and keep scrolling?
Well, I think there's bedtime reminders, and I want to make sure I get it right, so let me follow up after. But I think there's bedtime reminders and then there's downtime, and those are all available through Family Link for parents to completely shut down the phone, whether it's, you know, the reminder itself, or having the phone itself be off.
Okay, so, I mean, I think it's a challenging question: would they think that these are sufficient? The answer I heard them say themselves was no. So I don't know if you want to speak for them; that feels unfair. But I guess the last question, which is her question but one I actually share with her, is: you're all sitting here saying you're trying, yet kids are dying. Right? I mean, kids are being harmed. Kids are having eating-disorder behavior because they're being fed too much content of that nature.
I think that the lived experience of me and my peers, and it sounds like every single one of us is a mom, I think, right? We're all the same vintage. All the same vintage moms. So you're probably getting the same questions and comments we get at the soccer games.
I am. It's not working, right.
We see our teens and our younger children, I mean, addicted, wanting that device so badly, not wanting to go out and play because the iPad is sitting there, even if it's turned off. And so I guess the question is, why, if you're doing all of these things and you think they're best in class, are we continuing to see harms?
I'll answer from an OpenAI ChatGPT perspective. I think we've all said this: this is a marathon, not a sprint. The way that teens engage with ChatGPT in particular changes over time as they grow, as they age. It's a learning source; it's a teach-me, quizzes source. It evolves. The product is so new and so early, at least for us; it's been around only since November 2022. The mitigations and the controls and the content restrictions are constantly changing, and we're evolving them because of the way that teens are using the tool. In the ChatGPT case in particular, it is such new technology, and the technology changes over time. So for us, the approach of iterative deployment is how we think about this, which is to say we restrict and then learn and evaluate.
To one of our prior panelists' points: we learn, we look at metrics, we have dashboards at the individual user level and at the aggregate level to understand how our mitigations are working. And so I think, at least for ChatGPT, this is such a new technology that this will be a process. It's a marathon, not a sprint.
And have you pulled back models because they were harmful?
Yeah, so GPT-4o in particular was deprecated, and that was, as I understood it, mostly a sycophancy problem. Is that right, or am I missing something?
There were a number of reasons that model was deprecated, but it's no longer in production, available to users.
Okay, I think.
Oh, sorry, go ahead.
Sorry, jump in on that.
No, you're good.
I totally agree. Look, I think it is a marathon, not a sprint.
And the technology is constantly changing. We only launched facial age estimation a couple of months ago, when we felt the models were accurate enough to give us an accurate age signal. We did not have that tool before. As the technology improves and becomes available, we will use it. And as our platforms grow and change, we will need to add more tools on top of them.
I think the other thing about this
is that all of these platforms are a little bit different. They have a little bit of a
different offering and all of our kids
are a little bit different. And what they need is a little bit different.
It's not one size fits all at the platform level or at the user level. We're not talking about like car safety. Right.
A seat belt protects all of us; the same airbag protects all of us. But when you're talking about different populations,
as was talked about earlier, for some kids, parental controls are incredibly important.
And for some, that same parental control might actually expose them to harm, because their parent now knows something about their private internal life that might cause the parent to harm them. So it's just so complex and so multilayered that there isn't one solution, because every kid is different and every platform is different. And that's why it's a never-ending problem to solve.
No, I appreciate that.
And I think what I struggle with... I've told this story before. When my kids were born, I had a vibrating chair. It was the only place my babies would sleep. It was my favorite thing in the world because it got me a nap and a shower most days. I believe it was five babies flipped over and suffocated in the chair. The chair was recalled because the United States of America wouldn't accept five babies dying. Was it the Rock 'n Play? And so I get that this is hard, but we have accepted far too many deaths of children through online harms.
And so I just... I hear you. I think it's hard. I actually understand that, but I just get to a point where I