Pa. high school students weigh in on state policies for AI use following new tools

By Whitney Downard, Pennsylvania Capital-Star

March 2, 2026

Students talked about peers who were “overly reliant” on AI for either their mental health, homework or basic skills.

Shortly after the statewide launch on Friday of new literacy tools and an enforcement task force geared toward artificial intelligence, several high school students in the Pittsburgh area shared their concerns about AI in schools directly with Gov. Josh Shapiro.

The young adults, who gathered at the Carnegie Clubhouse of the Boys & Girls Clubs of Western Pennsylvania in Carnegie, spoke about the pressures they faced and ways they’d used the technology, at times calling for guardrails on its use.

Laila King, a senior at Pittsburgh Creative and Performing Arts (CAPA) School, said she’d recently learned that most people use generative AI for help with their homework or companionship, “and I feel like it kind of speaks to a lot of the issues that are going on with young people today: loneliness, isolation (and) stress.”

King said she’d pushed for excused mental health days from school as part of the PA Youth Advocacy Network to help improve student well-being.

Her classmate, Jenea Tomblin, said the assumption that young people used AI on their homework also hurt student confidence. She shared an anecdote about a student writing an essay who “stayed up all night … and her teacher flagged it as AI, but it was actually all of her thinking.”

“I just feel like that tries to downplay you as a student and make you seem like you’re not smart enough to speak that way. You’re not smart enough to use that type of grammar in your papers,” said Tomblin. “It’s affecting not only our classwork, but just us as individuals as well.”

Positive uses of AI, Tomblin shared, could include getting a bot to create flashcards for a test.

The AI Literacy Toolkit and AI Enforcement Task Force come as lawmakers continue to grapple with AI regulations at the state level.

A dedicated state website (pa.gov/reportabot) is geared toward evaluating whether AI companions or chatbots are overstepping by posing as licensed professionals like therapists. Those claims will be investigated by state Attorney General Dave Sunday, who reviews claims under Pennsylvania’s Unfair Trade Practices and Consumer Protection Law.

‘Can’t even function’ without AI

Students reported peers who were “overly reliant” on AI for either their mental health, homework or basic skills — or were trying to juggle too many duties, like sports, an after-school job or caring for younger siblings.

Tayshawn Lyons, a junior at Shady Side Academy in Pittsburgh, said teenagers didn’t feel like they had resources or safe spaces.

“And I feel like those chatbots and those AI companions provide the resource quickly,” said Lyons. “My school is very rigorous. We really define ourselves by our grades, by the results that we get on paper (and) what our GPA is.”

“I guess that pressure is kind of what pushes us to get to those answers faster, try to get to those results faster, because we feel like all that we are is a number. I guess AI kind of supports that in helping us get those answers,” Lyons continued.

He called for reformed grading that felt more individualized to the student and allowing students to express themselves, “rather than being more infatuated with fitting into a materialistic ideal.”

Another student said his peers seemed to be using it just to avoid thinking, adding that there should be guidelines because “a lot of people rely on it way too much.”

“I think it’s a good tool. You should know how to use it, but it’s a matter of being too dependent on it,” said Zeev Mallak-Yaron, of Central Catholic High School. “On social media, some people will say, ‘When I have to text someone back, but I ran out of free chats.’ So it’s like you can become too emotionally dependent on it, and you can’t even function.”

At Shapiro’s prompting, Mallak-Yaron said it was hard to say what the specific guidelines should be, “but there’s definitely something that needs to be done.” The student agreed that, specifically, AI companions shouldn’t be able to present themselves as trained medical or psychiatric professionals.

Another student suggested prohibiting its use for all minors, but King — the student from Pittsburgh CAPA — also pointed to the need for guidelines around AI uses.

“There’s no rules, no restrictions. It’s just kind of thrown to us and we’re allowed to use it, essentially, however we want,” said King. “We don’t even have any real evidence on the long-term effects of consistent AI use.”

King mentioned one MIT study analyzing the impact of using an AI assistant for essay writing, but observed that its effect hasn’t been chronicled in younger students at all.

“We don’t know everything about how the brain works, right? So now we’re having something unnaturally come in and essentially take the information that it is able to know about how the brain works and kind of skew that,” King said.

Proposed state regulations

As an example of a potential violation, Shapiro said he and his staff downloaded an AI chatbot that told them it was a licensed mental health professional in Pennsylvania.

“Let’s be clear: they’re not licensed in Pennsylvania. They’re not qualified to tell you what you should or shouldn’t do as it relates to your mental health, and I think that poses a real risk to students and others across Pennsylvania,” said Shapiro.

He proposed requiring age verification and parental consent to use AI companions, and forcing the companies behind them to periodically remind their users “that there is not another human being on the other side of the screen.

“These companies will be held accountable. They do not have immunity, and if they’re going to play here in Pennsylvania, if they’re going to put their products in the app stores for our students, they damn well better know we’re going to hold them accountable,” said Shapiro.

Additional regulations could include requiring companies that detect children discussing self-harm or violence to immediately direct those children to appropriate authorities, and prohibiting bots from producing sexually explicit or violent content featuring kids.

“We recognize that AI is here. We recognize that it can be transformational in so many good ways, but we also understand it has a lot of risks,” Shapiro told reporters after the roundtable. “Right now, our children are bearing a lot of those risks.”
