Digital monitoring

(Last updated: August 24, 2021)

Tell us your story of surveillance

Have you or a child in your family been referred to the school or to outside services for your digital activity, in school or at home? You might have been asked about your Internet searches, chats, webcam and video activity, or use of counselling services. Have you or a child in your family been referred to the Prevent programme because of something done on a computer or phone? Did the system or school take a photograph of you using your webcam?

We are interested in talking to families or young people who believe their online activity was surveilled through school. It doesn’t matter whether you are still at school or it happened in the past. We want to build case studies in confidence, and we’ll talk to you on your terms.

When any device is used to access school networks, on or off the school or college site, your activity may be monitored until you disconnect from the school system. Activity such as Internet searches, webcam content, video, emails, chats or social media — anything on the screen or the attached camera — may trigger a report that is escalated. Triggers may come from content that systems suggest means a child is at risk from others, at risk to themselves, or a risk to others, based among other things on matching content against vast keyword libraries. You don’t have to be in school for the systems to be on.

One company CEO told the House of Lords 2016 Communications Committee inquiry on Children and the Internet: “Bearing in mind we are doing this throughout the year, the behaviours we detect are not confined to the school bell starting in the morning and ringing in the afternoon, clearly; it is 24/7 and it is every day of the year. Lots of our incidents are escalated through activity on evenings, weekends and school holidays.”

We know that families are not told properly how this works, and there are many different systems that operate in different ways. Although the more than 400 school Acceptable Use Policies we have viewed may mention monitoring applied to the use of a school or college digital environment, we have yet to see any that really explains that this includes offline documents and out-of-school hours. Some state only a single line — ‘I understand my use of the Internet may be monitored’ — which is not enough to explain, for example, that systems have been monitoring all remote learning at home during the COVID-19 pandemic. It shouldn’t be difficult to find this information, or for children to understand the foreseeable consequences of their actions.

Our concerns include that

  • information might be passed on to the school, or by school staff to others including police referrals or Further or Higher Education institutions, and misinterpreted out of context — for example, without knowing who was actually using the device at the time of the incident. These outcomes may have very serious and significant consequences for a child.
  • the multi-lingual aspects of monitoring systems. The topic areas that the system has been programmed to deem significant may build in racism, cultural discrimination and bias that is not obvious to the staff in schools or the children it affects. Since we are not able to see the lists of content that trigger ‘actions’ or ‘flags’ from any of the suppliers, we cannot make a judgement here; it would be a good topic for independent, academic research.
  • no one should be able to take a webcam photograph of someone without their knowledge or control.
  • we believe that it is hard for a child to understand how these algorithms work. If a child uses a search term, text or pictures on the screen that the system has decided the company needs to flag, when is the child told?

We would like all of the companies that operate in this field to have a duty of transparency: to publish error rates and how errors are defined; the keywords that would trigger action, for example from a past year; the full sources of input; and to offer a standard subject access report template that would enable a school to print, annually and on demand, the information created about a child from monitoring. This would also need to ensure that any shorter data retention periods still have adequate audit functions to be able to meet requests.

In our opinion there should also be a review of lawfulness across all of the sector suppliers, from the perspectives of human rights, privacy, and communications law.

Please get in touch



The UN Special Rapporteur’s 2014 report on children’s rights and freedom of expression stated: “The result of vague and broad definitions of harmful information, for example in determining how to set Internet filters, can prevent children from gaining access to information that can support them to make informed choices, including honest, objective and age-appropriate information about issues such as sex education and drug use. This may exacerbate rather than diminish children’s vulnerability to risk.”

84% of 1,004 parents we polled through Survation in 2018 agreed that children and guardians should be informed how this monitoring activity works. (We didn’t ask whether it should happen at all.)

[Source: Survation poll on behalf of defenddigitalme, March 2018, of 1,004 parents with children in state schools.]

Manchester colleges agreed to share data of students referred to counter-terror scheme (2020)

Lower Merion School District, Philadelphia settles lawsuit for using laptop webcams to look in on students at home. (2010)



Teachers, Safeguarding Leads, IT staff, pupils, students, families — Have you got experience of the Prevent programme in education?

The education sector made the most referrals (2,426) to the Prevent programme in 2017/18, accounting for 33% of the total.

In 2017/18, of 7,318 individuals referred, the majority (4,144; 57%) were aged 20 years or under. Those aged 20 years or under also made up the majority of the 1,314 individuals discussed at a Channel panel (818; 62%) and of the 394 individuals who received Channel support (259; 66%).

[Source: Home Office statistical bulletin December 2018.]

Some are barely in school before being referred. One four-year-old “was referred to the UK’s anti-extremism watchdog over Fortnite comments”.

We know how hard it can be to get errors and records removed, and people are concerned that the information could be used against them in the future, “even though they were based on a mistake”.

Prevent’s efficacy and harms are long overdue a review: its underlying assumptions and evidence base; its human rights implications for all fundamental freedoms — freedom of speech, freedom of association, freedom of thought, conscience and religion; and, ultimately, whether it is fit for purpose. We are however deeply disappointed that the government will not address this with the respect and independence it requires, as signified by two successive appointments to the Review process.

We have been made aware of aspects of the implementation of the Programme that negatively affect children and need to change. In particular, we hear stories about the creation of records about children from Internet monitoring and classroom management software, which can contribute to profile building behind the scenes.

We welcome additional stories of your own experience. We are researching this topic across every age group in education, for the purposes of building evidence, and we will be as confidential as participants want us to be.

Whether you are a teacher, school safeguarding lead, IT network manager, a pupil or their families, or have any other experience, we would welcome your input, anonymous or otherwise.

We will treat all your information in complete confidence, and will only disclose anything as a case study or in evidence with your full agreement.

Please get in touch.