Schools are at a crossroads: the Australian federal government’s $6.5 million age verification trial is set to begin in late 2024, with a social media ban on the horizon. IT leaders must work out how to uphold the ban for young students while managing the operational challenges and preparing students for the digital world they’ll eventually enter.
The independent trial, led by the Department of Infrastructure, Transport, Regional Development, Communications, and the Arts (DITRDCA), will assess various age verification technologies, including biometric estimation, email verification, and parental consent mechanisms. The trial will provide crucial data to inform future national policies and industry standards, though it could take years to fully implement these measures nationwide.
This leaves state-based education bodies facing the immediate challenge of integrating these technologies into diverse school networks.
How can a social media ban be enforced in schools?
Enforcing a social media ban in schools is already raising questions: how can it be done when 73% of Australian teens regularly access platforms like TikTok and Instagram, often via personal mobile devices? Schools already use content filters, as seen in Victoria’s Department of Education, but these don’t block social media accessed through mobile data.
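To make the limitation concrete: a school web filter is, at its core, a domain blocklist applied to traffic passing through the school network. The sketch below is a minimal illustration of that idea — the domain list and helper name are assumptions, not any vendor's actual product — and it also shows why the approach fails for mobile data, since traffic that never transits the school network is never checked.

```python
# Minimal sketch of a domain-blocklist check of the kind a school web
# filter applies. The domains and function name are illustrative only.
BLOCKED_DOMAINS = {"tiktok.com", "instagram.com", "facebook.com"}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is blocklisted."""
    parts = hostname.lower().split(".")
    # Check every suffix so "www.tiktok.com" matches "tiktok.com".
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

print(is_blocked("www.tiktok.com"))   # True
print(is_blocked("example.edu.au"))   # False
```

A check like this only ever sees requests routed through the school's infrastructure; a student on a personal device using mobile data bypasses it entirely.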
Technologies like biometric age estimation and Digital ID tokens are being considered to manage the ban across a wide variety of ICT touchpoints.
Gartner’s 2024 Education Technology Outlook predicts that AI-driven safety tools will become a critical component of school networks over the next decade, helping to bridge the gap between security and digital empowerment. AI-driven age verification systems, such as AgeID and Yoti, are already being trialled internationally for their ability to estimate user age accurately. These tools use facial recognition, voice analysis, and digital behaviour tracking to determine age and could be integrated into school networks.
Yet, integrating these systems into existing infrastructure and ensuring they work across school-operated and personal devices will require careful planning. As Gartner recommends, a phased rollout may help IT managers address operational and security challenges before scaling across schools.
The reality of a digital divide
A significant obstacle to age verification is the disparity in ICT infrastructure across Australian states. Western Australia, for instance, has made substantial upgrades in rural areas. But nationwide, 1 in 5 rural students still lack reliable high-speed internet—critical for implementing advanced technologies.
While the federal government’s School-Based Broadband Initiative aims to close this gap, remote schools may still struggle with the costs of deploying age verification systems, which Gartner estimates to range between $30,000 and $60,000 per school.
Piloting these technologies in well-resourced urban schools first could address early-stage technical challenges. However, this raises concerns about equity.
As Paul Fletcher, former Minister for Communications, pointed out, “a lack of digital infrastructure in certain regions could exacerbate existing inequalities, potentially leaving rural students more exposed to online risks.”
Students finding workarounds: the VPN challenge
Even with robust age verification in place, more tech-savvy teens may find ways to bypass restrictions. A 2023 eSafety survey revealed that 34% of Australian teens already use VPNs to access restricted content, a number expected to grow.
AI-driven monitoring tools, such as those available on Microsoft’s School Data Sync, could help track student behaviour and flag attempts to use VPNs or access unauthorised content.
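One common heuristic such monitoring tools apply is matching DNS lookups from school-managed devices against known VPN provider domains. The sketch below is an assumption-laden illustration of that idea — the provider list, log format, and function name are invented for this example, not taken from any real product.

```python
# Hedged sketch: flag DNS lookups that resolve to known VPN providers.
# The provider list and log schema are illustrative assumptions.
KNOWN_VPN_DOMAINS = {"nordvpn.com", "expressvpn.com", "protonvpn.com"}

def flag_vpn_lookups(dns_log: list[dict]) -> list[dict]:
    """Return log entries whose queried domain ends with a known VPN domain."""
    return [
        entry for entry in dns_log
        if any(entry["query"].endswith(d) for d in KNOWN_VPN_DOMAINS)
    ]

log = [
    {"device": "tablet-12", "query": "api.nordvpn.com"},
    {"device": "tablet-07", "query": "classroom.google.com"},
]
print(flag_vpn_lookups(log))  # [{'device': 'tablet-12', 'query': 'api.nordvpn.com'}]
```

A static list like this needs constant upkeep, which is why the article's point stands: determined students tend to stay a step ahead of blocklists.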
Minister for Communications Michelle Rowland has stated that “taking steps to prevent access to age-inappropriate content like pornography is one way to protect young minds from damaging and misogynistic behaviours.”
Creating safe exposure to social media
Schools must also consider how to bridge the digital literacy gap this ban could create.
Yet, as Lizzie O’Shea, chair of Digital Rights Watch, warns, “a heavy-handed ban may push students toward more dangerous, unregulated parts of the internet.”
It’s an age-old adage that the more we’re denied something, the more we want it. With this in mind, students will need structured, safe exposure to social media to build the skills needed to navigate mainstream platforms responsibly.
Building a safe space for digital social media literacy
Schools will play a critical role in preparing students for future digital challenges. Some programs, like Be Internet Awesome, Be Internet Citizens and the Social Switch Project, already provide models for teaching social media literacy.
Schools can take this further by co-designing an edtech platform focused on safe, moderated social interactions. Features like AI-powered content moderation can filter inappropriate content in real-time while educators guide students on responsible engagement.
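The real-time moderation gate described above can be sketched in miniature: each post is scored before it reaches other students, and borderline content is routed to an educator for review. The keyword scorer below is a deliberately simple stand-in for whatever AI moderation model a real platform would use; the terms and function name are illustrative assumptions.

```python
# Illustrative sketch of a real-time moderation gate. The keyword check
# stands in for an AI moderation model; the vocabulary is a placeholder.
FLAGGED_TERMS = {"insult", "threat"}

def moderate(post: str) -> str:
    """Return 'publish', or 'hold_for_review' so an educator can decide."""
    words = set(post.lower().split())
    return "hold_for_review" if words & FLAGGED_TERMS else "publish"

print(moderate("great project today"))       # publish
print(moderate("that was an insult to me"))  # hold_for_review
```

The design point is the routing, not the scorer: flagged posts go to a human, which keeps educators in the loop rather than delegating judgment entirely to the filter.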
Data privacy and security are essential, and compliance with Australia’s Privacy Act 1988 and the Online Safety Act 2021 ensures the platform protects students’ personal information. A “privacy by design” approach, alongside role-based access controls for teachers, would allow students to build critical thinking skills while engaging in moderated discussions.
State-based education departments must ensure the platform integrates with existing school networks and scales across diverse environments, from metro to rural schools. Upgrading infrastructure in under-resourced areas and using tools like ICT dashboards can identify schools needing additional support.
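An ICT dashboard of the kind mentioned above typically reduces to ranking schools against minimum connectivity thresholds. The sketch below illustrates that triage; the field names, sample schools, and thresholds are assumptions for the example, not published benchmarks.

```python
# Hedged sketch: flag schools whose connectivity falls below assumed
# minimums for age-verification workloads. All figures are illustrative.
schools = [
    {"name": "Metro High", "bandwidth_mbps": 500, "uptime_pct": 99.5},
    {"name": "Remote Area School", "bandwidth_mbps": 20, "uptime_pct": 91.0},
]

MIN_BANDWIDTH_MBPS = 100  # assumed threshold, not an official figure
MIN_UPTIME_PCT = 98.0

needs_support = [
    s["name"] for s in schools
    if s["bandwidth_mbps"] < MIN_BANDWIDTH_MBPS or s["uptime_pct"] < MIN_UPTIME_PCT
]
print(needs_support)  # ['Remote Area School']
```

Even this crude version makes the equity issue measurable: under-resourced schools surface on the list long before a rollout reaches them.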
Preparing for the future
So, while we wait for the age verification trials to begin, education IT leaders must start planning for the impact this technology will have on school networks, privacy, and student learning. As these technologies evolve, schools will need to balance safeguarding students with teaching them how to engage responsibly in the digital world.