Between Usage and Comprehension: Italian Youth’s Relationship with Artificial Intelligence

In a rapidly digitizing world, young people in Italy are encountering artificial intelligence (AI) with increasing frequency—on their phones, in their classrooms, and even in their homework. But how well do they actually understand what they’re using?

Between March and May 2025, ALFA Liguria—Italy’s regional agency for vocational training and guidance—launched a national survey as part of the Erasmus+ project YouthGovAI. The survey gathered responses from 281 young people aged between 16 and 21, offering a unique snapshot of how Italian youth perceive and interact with AI.

A striking 87.2% of respondents said they were familiar with the term “artificial intelligence” and could explain its meaning. Many recognized tools like ChatGPT, Spotify’s recommendation engine, or TikTok’s algorithm as part of their everyday digital lives. Over 70% of participants reported using AI at least twice a week, with 35.9% saying they use it daily and 36.7% two to four times per week, particularly for studying or schoolwork.

But scratch the surface, and the picture becomes more complex. When asked how confident they felt in understanding how AI works, only 11% said they were very confident, while 29.9% felt confident, and a much larger 39.9% reported only moderate confidence. A notable 16.4% expressed low or no confidence at all.

When it came to identifying whether a technology is powered by AI, the results were similar: 49.1% said they felt moderately confident, 22.4% confident, and 19.3% either slightly or not at all confident. This suggests that while digital familiarity is high, critical literacy still lags behind.

Perhaps even more telling were the responses about trust in AI-generated content. When asked how confident they were in the accuracy of information produced by large language models like ChatGPT, 44.8% were only moderately confident, and nearly a quarter (24.2%) expressed low confidence. Confidence in detecting AI-generated disinformation was similarly shaky: only 8.5% felt very confident, and a worrying 28.8% felt only slightly confident or not confident at all.

What emerges is a portrait of a generation that is immersed in AI-driven tools, but often without the conceptual tools to fully grasp what those systems are, how they function, or why they matter. They use AI regularly, but don’t necessarily understand its risks, biases, or power structures.

The findings from the YouthGovAI survey are a clear wake-up call for educators, policymakers, and youth organizations. It is not enough to provide access to digital tools. We must also foster AI literacy, critical thinking, and ethical reflection—skills that are essential to navigate a world increasingly shaped by algorithms and automation. Ultimately, this is about more than coding or prompt engineering. It’s about empowering a generation not just to use AI, but to interrogate it, govern it, and shape it in ways that are just, inclusive, and aligned with democratic values.
