“I guess that pressure is kind of what pushes us to get to those answers faster, try to get to those results faster, because we feel like all that we are is a number. I guess AI kind of supports that in helping us get those answers,” Lyons continued.
He called for reformed grading that felt more individualized to the student and for allowing students to express themselves, “rather than being more infatuated with fitting into a materialistic ideal.”
Another student said his peers seemed to be relying on it to do their thinking, adding that there should be guidelines because “a lot of people rely on it way too much.”
“I think it’s a good tool. You should know how to use it, but it’s a matter of being too dependent on it,” said Zeev Mallak-Yaron, of Central Catholic High School. “On social media, some people will say, ‘When I have to text someone back, but I ran out of free chats.’ So it’s like you can become too emotionally dependent on it, and you can’t even function.”
At Shapiro’s prompting, Mallak-Yaron said it was hard to say what the specific guidelines should be, “but there’s definitely something that needs to be done.” The student agreed that, specifically, AI companions shouldn’t be able to present themselves as trained medical or psychiatric professionals.
Another student suggested prohibiting its use for all minors, but King — the student from Pittsburgh CAPA — also pointed to the need for guidelines around AI use.
“There’s no rules, no restrictions. It’s just kind of thrown to us and we’re allowed to use it, essentially, however we want,” said King. “We don’t even have any real evidence on the long-term effects of consistent AI use.”
King mentioned one MIT study analyzing the impact of using an AI assistant for essay writing, but observed that its effect hasn’t been chronicled in younger students at all.
“We don’t know everything about how the brain works, right? So now we’re having something unnaturally come in and essentially take the information that it is able to know about how the brain works and kind of skew that,” King said.
Proposed state regulations
As an example of a potential violation, Shapiro said he and his staff downloaded an AI chatbot that told them it was a licensed mental health professional in Pennsylvania.
“Let’s be clear: they’re not licensed in Pennsylvania. They’re not qualified to tell you what you should or shouldn’t do as it relates to your mental health, and I think that poses a real risk to students and others across Pennsylvania,” said Shapiro.
He proposed requiring age verification and parental consent to use AI companions, and forcing the companies behind them to periodically remind their users “that there is not another human being on the other side of the screen.
“These companies will be held accountable. They do not have immunity, and if they’re going to play here in Pennsylvania, if they’re going to put their products in the app stores for our students, they damn well better know we’re going to hold them accountable,” said Shapiro.
Additional regulations could include requiring companies that detect children discussing self-harm or violence to immediately direct them to appropriate authorities, and prohibiting bots from producing sexually explicit or violent content featuring kids.
“We recognize that AI is here. We recognize that it can be transformational in so many good ways, but we also understand it has a lot of risks,” Shapiro told reporters after the roundtable. “Right now, our children are bearing a lot of those risks.”