News / Blog

Safer Internet Day 2026: Proactive AI technology is unsafe ‘SafetyTech’

UK policy debates about banning children from digital platforms are poorly evidenced and internally inconsistent, especially when contrasted with what is already happening in schools. While lawmakers propose sweeping restrictions on children’s online access, a far more intrusive system of AI-driven surveillance is already normalised in UK education, with little scrutiny, weak evidence of benefit, and serious risks to children’s rights. Despite these harms, could we use it to reveal evidence and inform debate before the government takes the wrong fork in the road?


February 10, 2026 — Safer Internet Day

In the heated media debates about bans for children in the digital environment, following the Australian restrictions placed on a narrow set of social media platforms to limit children’s access online, we rarely hear evidence-based, informed voices. Here are eight points summarising the status quo of the ‘safety tech’ already used to monitor children’s activity and content in the digital environment, in and beyond educational settings.

(1) Policymakers are ignoring relevant existing evidence and practices.

After the Lords made amendments to the Children’s Wellbeing and Schools Bill, we see no evidence to suggest that an additional UK ban on all user-to-user regulated services would be beneficial for children, or why an approach far stricter than Australia’s is necessary beyond the Online Safety Act criminal offences that already came into effect on 31 January 2024.

But in fact, there should already be evidence from technology in the UK that prevents children’s access to content online at scale, and policymakers should be asking to see it.

(2) We already have large-scale, intrusive digital control of children—this should be central to policy discussions.

AI-enabled “SafetyTech”, commonly used in educational settings, monitors millions of UK school pupils in different ways, but most tools share common core characteristics: keystroke logging, screen capture and keyword-based classifiers, often running 24/7 and beyond school grounds, at home and during school holidays. Marketed as safeguarding technologies, these tools normalise pervasive automated surveillance of children by private companies, with little public debate, weak evidence of effectiveness, and significant risks (Defend Digital Me, 2025; Laird et al., 2022).
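The core of keyword-based classification can be sketched in a few lines. This hypothetical example (the keyword list and sample texts are invented for illustration, not taken from any vendor’s product) shows why matching without context flags innocuous schoolwork as readily as genuine risk:

```python
# Illustrative sketch only: a naive keyword-based flagger of the broad kind
# such monitoring tools are built around. Keywords and texts are hypothetical.
FLAGGED_KEYWORDS = {"kill", "drugs", "suicide"}

def flag(text: str) -> list[str]:
    """Return whichever flagged keywords appear, ignoring context entirely."""
    words = text.lower().split()
    return sorted(k for k in FLAGGED_KEYWORDS if k in words)

# Innocuous schoolwork trips the same wires as genuine risk:
print(flag("my essay on macbeth: why does he kill duncan"))  # ['kill']
print(flag("biology homework about prescription drugs"))     # ['drugs']
```

Real products layer “contextual AI” on top of this, but the underlying problem remains: a classifier that cannot distinguish an English essay from a threat produces exactly the inaccurate, over-broad flags described below.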

(3) Current practices are incompatible with children’s rights standards.

From a children’s rights perspective, this infrastructure conflicts with international norms. Article 8 of the ECHR and Article 16 of the UNCRC protect children’s right to privacy and their protection from arbitrary interference with their correspondence and family life. The UN Committee on the Rights of the Child has stressed that automated filtering and profiling must not “supplant, manipulate or interfere with children’s ability to form and express their opinions in the digital environment” (OHCHR, 2021, p. 10). Yet school SafetyTech, through contextual AI risk analysis, routinely records intimate content, even scanning and retaining unsent or deleted text and offline documents, building and distributing child-level longitudinal behavioural profiles in ways that are not rights-respecting, transparent, or demonstrably necessary and proportionate. What would rights-respecting practice look like for such tools in the risk-averse environments that schools are?

(4) Fundamental data governance responsibilities are being bypassed.

Inferences are retained by companies for inconsistent retention periods, and may even be passed to police on request, despite the fact that a company may have deemed them irrelevant and never passed them on to schools. Determining the nature of processing, retention, distribution, and reuse makes these companies in part “sole” and in part “joint” data controllers, not only data processors. Since these companies are making these kinds of decisions, they must also take responsibility for failures of governance. And we can do better. We must not design law that further entrenches this power imbalance by giving more uncontrolled power to technology companies while taking it away from school staff, parents and pupils.

(5) Surveillance likely causes social and democratic harm, especially for children and their relationship to authority, but remains under-researched.

U.S. civil society research suggests that such systems are used more for discipline than safety, that they chill speech, and that they disproportionately harm minoritised children (Thakur and Laird, 2023). UK practice appears to follow similar patterns, including categories that claim to “identify” extremism and radicalisation, marketed as necessary to meet public authorities’ duties under the anti-radicalisation and extremism programme Prevent, where education is already the largest source of referrals. Treating every child as a potential threat, based on opaque AI risk profiling that likely feeds highly inaccurate data into onward referrals, corrodes trust in authority and the conditions necessary for democratic society. The harm this is doing to children’s relationships with staff, authority and technology urgently needs to be researched, evidenced and acted upon. What better would look like needs to take all of this into account.

(6) Surveillance dressed up as ‘safety’ is rapidly expanding through political and economic agendas, not democratic consent.

In the absence of democratic debate, these developments have been shaped by political agendas. The UK government actively promotes SafetyTech as an export sector, positioning the UK as a “world-leader.” At the same time, ministers have said that manual inspection of private communications would be “a clear breach of privacy” (Hansard, 2020), even as de facto “man-in-the-middle” architectures are deployed without legislative guardrails or parliamentary oversight, and without any warrant, to decrypt and inspect children’s communications and those of anyone who interacts with them on school devices, online or off. The Online Safety Act 2023 and its Codes of Practice further entrench “proactive technology” like this, along much the same lines of categories to monitor, while ignoring these tensions and this unlawfulness (OFCOM, 2025).

Few understand what this “proactive technology” will mean for all online content likely to be accessed by a child, and for all adult users who will have to prove they are not one in an age-gated Internet. But the recently announced government consultation has sketched out what it might include: “the option of banning social media for children under 16 and raising ‘the digital age of consent’ (which does not exist, so that’s an interesting place to start), to stop companies using children’s data without their or their parents’ consent; whether there should be curfews overnight or breaks to stop excessive use or doomscrolling; how we ensure more rigorous enforcement of existing laws around age verification; and action to address concerns about the use of virtual private networks.” It could mean the end of respect for the ban on general monitoring (protections we lost in law as a result of Brexit), likely mandatory on-device scanning and upload filters (at minimum for children, maybe for all), and Cinderella laws that do not work.

(7) Core state responsibilities are being privatised without accountability.

SafetyTech often depends on cloud providers, resulting in vendor lock-in and concentrating control of educational infrastructure and highly sensitive data in a few private corporations. Undemocratically, this outsources the core public function of safeguarding in schools to opaque contractual arrangements with intermediaries and cloud platforms, without oversight, reducing school staff agency and authority, and without controls on national costs (Patel et al., 2019). Worryingly, it has led to arbitrary, company-led decision-making on significant questions of law and public policy, such as when a company should or should not retain or destroy children’s intimate images seen in the course of monitoring, images that may be innocuous bikini shopping shared with friends, or may be grooming or pictures traded under duress or blackmail. Accountability for what has traditionally been a core public function in education is left in corporate hands.

(8) Children’s online safety policy must move away from tech-solutionism toward rights-based governance, and away from data protection alone determining the boundaries of how children may exercise their right to privacy of communications and family life.

We argue that parts of some AI-based SafetyTech in schools fail key tests of legality, necessity, proportionality and non-discrimination, and are therefore incompatible with democratic commitments to children’s rights. But we are most deeply concerned about the surveillance infrastructure this has built, normalised and even exported, without due process, oversight, or checks and balances, without any thought for who might benefit from that infrastructure, and contrary to the best interests of any given child.

Policy on children’s online safety must shift from technological solutionism towards:
(1) strict limits and sector standards on routine, bulk child-level surveillance;
(2) mandatory independent auditing and error-rate reporting, including tracking how many nude images or other types of content staff have seen;
(3) enforceable routes for pupils and families to exercise data protection rights, including rights of access, correction, minimisation, objection and deletion;
(4) reapplication of fundamental first principles of the law, whether on communications, privacy, RIPA, or opt-outs from data processing on the basis of legitimate interests; and
(5) meaningful participation of children, parents, educators and civil society in enforcement of rights-respecting governance of AI in education.

Finally, there is an opportunity here we could benefit from in the public interest, if the companies would give up their aggregated data for research access. Surveillance is not safety. But that data might give us some clues as to which platforms and searches children use, and at what magnitude, that are considered legal but harmful and are blocked or flagged by tools sold as harm prevention in schools.



References

Defend Digital Me. (2025). SafetyTech in schools: A timeline. https://defenddigitalme.org/safetytech-in-schools-a-timeline/

Hansard. (2020, 15 December). Online harms consultation, Vol. 686. https://hansard.parliament.uk/Commons/2020-12-15/debates/1B8FD703-21A5-4E85-B888-FFCC5705D456/OnlineHarmsConsultation#contribution-C6B532EC-E744-4084-BE3B-FBEB0386B354

La Rue, F. (2013). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression (A/HRC/23/40). UN Human Rights Council. https://www.refworld.org/reference/themreport/unhrc/2013/en/96147

Laird, E., et al. (2022). Hidden harms: The misleading promise of monitoring students online. Center for Democracy and Technology. https://cdt.org/insights/report-hidden-harms-the-misleading-promise-of-monitoring-students-online/

OFCOM. (2025). Proactive Technology Draft Guidance Additional Safety Measures—Annex 6. https://www.ofcom.org.uk/siteassets/resources/documents/consultations/category-1-10-weeks/consultation-online-safety—additional-safety-measures/annexes/annex-6—draft-guidance-to-proactive-technology-measures.pdf?v=399379

OHCHR. (2021). General comment No. 25 (2021) on children’s rights in relation to the digital environment. https://www.ohchr.org/en/documents/general-comments-and-recommendations/general-comment-no-25-2021-childrens-rights-relation

Online Safety Act 2023, c. 50. (2023). UK Public General Acts. https://www.legislation.gov.uk/ukpga/2023/50/contents/enacted

Patel, F., et al. (2019). School surveillance zone. Brennan Center for Justice. https://www.brennancenter.org/our-work/research-reports/school-surveillance-zone

Thakur, D., and Laird, E. (2023). Beyond the screen: Parents’ experiences with student activity monitoring in K–12 schools. Center for Democracy and Technology. https://cdt.org/insights/report-beyond-the-screen-parents-experiences-with-student-activity-monitoring-in-k-12-schools/

Defend Digital Me
January 2026