Mapping a child’s digital footprint in the state education landscape in England
Children have lost control of their digital footprint by their fifth birthday simply by going to school.
The State of Data 2020 report
Data protection law alone is inadequate to protect children’s rights and freedoms across the state education sector in England. Research trials are carried out routinely in classrooms without explicit parental consent and with no opt-out from the intervention. Products marketed for pupils are increasingly invasive.[1] Students are forced to use remote invigilation tools that treat everyone as suspicious, with automated operations that fail to account for human differences, or that harm human dignity.[2]
This report asks government, policy and decision makers to recognise the harms that poor practice inflicts on young people’s lives, and to take action to build the infrastructure needed to realise the vision of a rights-respecting environment in the digital landscape of state education in England.
We make recommendations on ten topics
Now is a critical moment for national decision makers if they are serious about the aims of the National Data Strategy[3] to empower individuals to control how their data is used. After the damning 2020 ICO audit[4] of national pupil data handling at the Department for Education, will you make the changes needed to build better: safe, trustworthy public datasets with the mechanisms that enable children and families to realise their rights? Or will you stick with more of the same: data breaches,[5] boycotts[6] and bust opportunity?
Will you act to safeguard the infrastructure and delivery of state education and our future sovereign ability to afford, control, and shape it, or drive the UK state into ever greater dependence on Silicon Valley and Chinese edTech, the proprietary infrastructure on which the delivery of education relies today?
The 2020 exam awarding process demonstrated the potential for discrimination in data, across historical datasets and in algorithmic decision making. While some may consider that the middle of a global pandemic is not the best time to restructure school assessment, progress and accountability measures, change is inevitable since some collections were cancelled under COVID-19. Now is the time to pause and get it right. We are also at a decisive moment for many schools in deciding whether, or in which, new technology to invest, now that most of the COVID-19 free-trial offers are over.
Privacy isn’t only a tool to protect children’s lives, their human dignity and their future selves. The controls on companies’ access to children's data are what control the knowledge companies get about the UK delivery of education and the state education sector. That business intelligence is produced today by the public sector teachers and children who spend time administering and working in the digital systems. So while many companies offer their systems for free or at low cost to schools, schools bear intangible costs in staff workload and support time, and donate those labour costs to companies for free. Our children are creating a resource that for-profit companies gain from.
Exclusive Department for Education funding to support schools’ adoption of tech giants’[7] products in lockdown further established their market dominance, without any transparency about their future business plans or intentions, or assurances over service provision and long-term sustainability.
The lasting effects of the COVID-19 crisis on children’s education and the future of our communities will be as diverse as their experiences across different schools, staff, and family life. Worry over the attainment gap as a result of lost classroom hours often ignores the damaging effects on some children of the digital divide, deprivation and discrimination, and the lack of school places for children with SEND, which also affected children unfairly before the coronavirus crisis. Solutions for these systemic social problems should not be short-term COVID-19 reactions but long-term responses, and must include the political will to solve child poverty. Children’s digital rights are quickly forgotten in a rapid response to remote learning needs, but the effects on their digital footprint and lived experience might last a lifetime.
We call for urgent government action in response to the COVID-19 crisis and rapid digital expansion:
Sector-wide attention and longer-term action are needed to address
This year marks 150 years since the Elementary Education Act 1870 received royal assent. It was responsible for setting the framework for schooling of all children between the ages of 5 and 13 in England and Wales.
Today’s legislation, the Education Act 1996, is the primary legislation upon which most statutory instruments are hung to expand pupil data collections, and to start new ones for millions of children, generally as negative statutory instruments without public consultation or parliamentary scrutiny. It is no longer fit for purpose and lacks the necessary framework when it comes to data processing and related activity in the digital environment in education. It is therefore the first of our ten areas of recommended actions on the changes our children need.
In 2020, as the world’s children continue to be affected by school closures in the COVID-19 pandemic, technology plays a vital role in education. Some tools enable the delivery of essential information, connecting school communities outside the classroom. Others provide national platforms for sharing educational materials, or offer alternative means and modes of Assistive Technology and augmented communications, supporting the rights of those with disabilities.
But many families across the UK still don’t have the necessary hardware or Internet access to support online remote learning at home. In addition, much of the critical infrastructure that delivers administrative access to education is enabled by Silicon Valley big tech — companies originally set up by and for business, not educators. The Department for Education’s (DfE) rapid response to the need for remote learning in the COVID-19 pandemic bolstered the near duopoly in England’s school system by offering schools funding through the Platform Provisioning Programme to get started with only two providers’ systems: either Google or Microsoft.[8] Is that lack of sovereignty in the state sector sustainable? What is the current business model? What happens when freeware business models change? There is inadequate capability and capacity in schools to understand much of the technology marketed at them. Staff are expected to make quick and effective procurement choices for which they often have little training and can lack access to the necessary expertise.
Some of the greatest ongoing debates in the education sector, on assessment and accountability, funding, curriculum and governance, all have implications for children’s digital records. And we are at an acute point of heightened awareness of disadvantage and distance learning. Understanding how technology should support these needs was, until now, part of the regular delivery of education. A large part of the products offered to schools has been for administrative support, but tools supporting learning have to date mainly been stand-alone and closed commercial product offerings. The exceptional demands of remote learning now demand more focussed attention on what is desirable, not only on what is currently available.
Creating better public sector infrastructure and local systems
Today, schools overstretched by austerity routinely push costs back to equally cash-strapped parents. Lack of investment in school infrastructure means parents are increasingly asked to pay upwards of £400 in lease-to-buy hardware schemes and to take on ever more back-office pupil admin through linked pupil-parental apps. Freeware products may choose to make money through data mining or ads instead of charging an upfront fee that schools can’t afford. Children using the product may not know that their data and behavioural activity are used as a free resource by companies in product development and research, practice that can fail to comply with the law.[9]
Imagine instead a fair and open market that supported safe tools which are effective, equitable, and proven to meet high standards. To support better accessibility and pedagogy, and to provide trustworthy emerging technologies, we must raise standards and hold businesses and the state accountable for their designs and decision making.
Imagine if the government invested a flat rate in COVID-19 teacher training support, and open funding to build the tools that schools need to support a blended approach beyond autumn 2020.
Imagine moving away from systems that siphon off personal data and all the knowledge about the state education system—using for their own benefit the time and work teachers invest in the product—and instead adopting technology that focusses on children’s needs and transparently benefits the public interest. Imagine decentralised digital tools that work together across a child’s school day, centred on the child’s education, rather than a series of administrative tools that are rarely interoperable and most often siloed.
Despite the best intentions of peer-to-peer demonstrator schools to share best practice and selected digital products, there is no joined-up vision for a whole-curriculum approach, underpinned by pedagogy and proven child outcomes. Promotion encourages adoption of product ABC because it can help you with XYZ, as a bolt-on to current practice, rather than looking at a child-centric and teacher-centric experience of teaching and learning and asking what is needed. While many products look and sound appealing, many of the learning outcomes are contentious and unproven, and are rarely compared with, for example, giving every secondary school child a full set of subject textbooks.
Government must work to safeguard the national infrastructure behind the delivery of state education and our future state ability to afford, control, and shape it. But it must also provide a high-standards framework for educational settings to be able to address the lack of equity and access at local level, with due diligence in procurement in technical, company integrity and ethical terms.
There is rarely a route for families' involvement in decisions that affect their child: from high-level democratic discussion of the corporate reform of education, through the introduction of technology in education, down to the lack of consultation on the installation of CCTV in school bathrooms. Without new infrastructure, the sector has no route to move forward and develop a consistent social contract to enable and enforce expectations between schools and families.
Creating safe national data systems
Learners have also found themselves at the sharp end of damaging algorithms and flawed human choices this summer across the UK, as the 2020 exam awarding processes left thousands of students without their expected grades and stepping stone to university. People suddenly saw that a cap on aspiration[10] was a political choice, not a reflection of ability.
The historic data used in such data models are largely opaque to the people they are about. The majority of parents we polled in 2018 did not know the National Pupil Database exists at all. We have campaigned since 2015 for changes to its management: transparency, security and reuse.
In the wake of the national Learning Records Service breach,[11] the Department for Education tightened the approval process for new users of the 28 million individuals’ records in Spring 2020. The Department now requires firms to provide details of their registration with both the Information Commissioner’s Office and Companies House, as well as evidence of their being a going concern. Access will also depend on firms providing “a detailed description of why they need access”—all of which one would have expected to be in place already, and which routine audit processes should have identified before the Sunday Times[12] drew it to national attention. But this is just one of over 50 such databases the Department for Education controls, and what about the rest? The ICO findings from its 2020 audit should be applied to all national pupil data.
These databases are created from data collected in the attainment tests and school censuses, some of which didn’t happen this year. So what needs to happen next?
After the 2020 exams fiasco, and the pause in attainment testing for the accountability system, we propose a moratorium on league tables, accountability, and Progress 8 measures until at least 2025. Delay the national central collection of children’s records and scores in the new Reception Baseline Assessment and Multiplication Tables Check. Data should work first and foremost to support the staff who create it, in the direct care of the children in front of them. The Department for Education should receive sampled data from Early Years, Phonics and Key Stage Two testing, and enable a decentralised model for the minimum necessary information transfers in the Year 6 to Year 7 transition, which may mean adjusting the Common Transfer File.
Building a rights-respecting digital environment in education
Few families would let an unlimited number of strangers walk into their home, watch what they do on-screen, hear what they say or scan every Internet search and label it with risk factors. No one would let strangers or even school staff take a webcam photo of their child without their knowledge or permission. We would not expect outsiders who were not qualified educators to stand in the classroom and nudge a child’s behaviour or affect their learning without DBS checks, safety and ethical oversight and parents being informed. Yet this is what happens through current technology in use today, across UK schools.
Imagine England’s school system as a giant organisational chart. What do you see? Which institutions does a child physically pass through? How do the organisations relate to one another, and who reports to whom? Where are regulation and oversight, and where do you go for redress if things go wrong? It is nearly impossible for parents to navigate this real-world complexity after the last decade of restructuring of the state school system. Now add to that the world we cannot see. It is hard to grasp how many third parties a child’s digital footprint passes through in just one day. Now imagine that 24/7, 365 days a year, every year of a child’s schooling and long after they leave school.
Learners’ rights are rarely prioritised, and the direction of travel is towards ever more centralised surveillance in edTech, more automated decision making and reduced human dignity, practice that may breach data protection, equality and consumer law.[13] The need for protection goes beyond the scope of data protection law, to the protection of children’s fundamental rights and freedoms: privacy, reputation, and a full and free development.
“The world in thirty years is going to be unrecognizably datamined and it’s going to be really fun to watch,” said Jose Ferreira, then CEO of Knewton, at the White House US Datapalooza in 2012.[14] “Education happens to be the most data mineable industry by far.”
We must build a system fit to manage that safely and move forwards to meet the social, cultural and economic challenges young people face in a world scarred by COVID-19 and as we exit the European Union. We must not model our future aspirations for the economy and education on flawed, historic data.[15]
We must also enable children to go to school without being subject to commercial or state interference. “Children do not lose their human rights by virtue of passing through the school gates… Education must be provided in a way that respects the inherent dignity of the child and enables the child to express his or her views freely...”[16]
The Convention on the Rights of the Child makes clear that children have a specific right to privacy. Tracking the language of the UDHR and ICCPR, Article 16 of the Convention states that “no child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, or correspondence, nor to unlawful attacks on his or her honour and reputation,” and reaffirms that “the child has the right to the protection of the law against such interference or attacks.” “These standards imply that children should be given the same levels of protection for their right to privacy as adults. When contextualising children’s right to privacy in the full range of their other rights, best interests, and evolving capacities however, it becomes evident that children’s privacy differs both in scope and application from adults’ privacy.” (UNICEF, 2017)[17]
By placing this background work into the public domain (in parts two to five of this report) we intend for others to use it and to help keep it up to date with current information and case studies in the constantly evolving areas of statutory data collections and technology, to collectively build better.
We set out to map a snapshot of the current state of data processing in 2020 for children in education in England, age 2-19. In Parts 2-4 we describe a selection of common data processing: what systems do and why, how they share data, and what their risks are.
This report is about how systems create, use and exploit information collected about children, as well as content created by them, and how that data is processed by third parties, often for profit, generally at public sector cost in terms of school staff time and school budgets.
We include applied case studies in the online report content (Part 3), brought to our attention by a wide range of stakeholders including young people, parents, state school, private school and public authority staff with the aim of drawing out more concrete discussion of common issues in a rapidly changing field. We are grateful to the companies that contributed to our understanding of their products and reviewed the case studies in advance of publication.
We do not attempt to present this as a comprehensive view of an entire education landscape that is constantly evolving. We need to do further research to map data flows for children with special educational needs who leave mainstream schooling, and ‘managed moves’. We do not cover secure children’s homes or secure training centres. But there are consistent gaps with regard to lack of respect for child rights highlighted across Ofsted reports of all settings where children receive education, so that children in the Oakhill Secure Training Centre[18] may have much in common with those in edTech demonstrator schools.
We have sought views through discussion with a wide range of others: academics, benchmarking companies, data protection officers, data consultancies, researchers, school network managers, suppliers and vendors. In 2019 we also ran workshops with young people.
We include the opinions of over 1,000 parents in a poll we commissioned through Survation in 2018, and the views from 35 school IT network managers and staff on the online forum Edugeek, polled just before the GDPR came into enforceable effect in May 2018. The latter was too small to be a representative sample of opinions, but is an interesting snapshot of views in time.
This report is not about how children access or use the Internet in their personal lives. There is already a lot of discussion about child protection with regard to online stranger-danger, or restricting their access to harmful content.
We aim to map what personal data is collected by whom, for what purposes and where it goes and why. This report is only a sample of the everyday data collection from children in the course of their education and tells only some of the story that we can see. The fact that so much is hidden or hard to find is of itself a key concern. Gaps that readers familiar with the sector may identify, may highlight how hard it is for families to understand the whole system. We intend to update this knowledge base in an online repository and maintain it with current examples as time goes on. We welcome case studies and contributions to this end.
This report falls into five parts.
Part 1: a summary report of recommendations and main findings
Part 2: national statutory data collections including a CV at-a-glance age 0-25
Part 3: local data processing including edTech case studies and a day-in-the-life of an eleven-year-old
Part 4: highlights from the transition from compulsory school to Higher Education
Part 5: an annex of data, source materials, research and references.
This is Part 1 and consists of this introduction and summary report to highlight our ten areas of recommended actions. Parts 2-5 are online only.
Part 2 starts by identifying the core infrastructure behind national statutory data collections in the state education system, affecting children typically from birth to age 25. We mapped the most common statutory data collections for the purposes of the national accountability system, recording a child’s attainment and testing, and the seven types of census collected by the Department for Education on a termly or annual basis. A subset applies to every child in mainstream education, with additional collections for each child who attends state-funded Early Years settings, is a child at risk, or leaves mainstream education and is counted in Alternative Provision. We added the most common data collections from local-level progress and attainment testing for schools’ own purposes, and the additional testing applied to a sample of children nationally every year for national and international purposes. And we address where all this data goes when it leaves a school, and how it is used.
Finally we look at samples of other significant pupil data collected through schools about children nationally, such as health data and the vital role of the school vaccination programme as well as the interactions with school settings by other national institutions for youth work, careers or school regulation by Ofsted.
In Part 3 we address local data processing. We map common aspects of the local data landscape and address the data processing from the daily systems and edTech interactions that affect children across primary, secondary and further education, to help readers understand the volume of data flows between different people and other organisations outside the state education sector. We include a range of case studies picking out the types of edTech most common in schools today.
In Part 4 we address, in brief, the transition between school and Higher Education, from childhood to adulthood. We look at some of the most common data processing about applicants and students as they transfer from state education to Higher Education at age 18. We cover both national data collections and local institutional choices in processing data for student analytics, as well as national policies such as the Prevent programme.
Part 5 contains an annex of tables and figures, including comparisons of national data collections and use across the UK’s four nations, to serve as a comparison with England’s policy and practice. While many of the same questions around edTech apply across all of the UK, we do not attempt to map the landscape outside England.
A future and further stage of this project would look to map the Department for Education funding flows across the sector, to see where there are differences between who provides data about a child, where the child learns, and who gets the money for providing their education. In researching the Alternative Provision sector in particular, the discrepancies indicate a lack of accountability when where a child goes and where the money goes are different places.
Later guidance will be created from this to help advise teachers and parents on what they can do to protect children’s human rights as we continue to move into an ever-more machine-led world.
Three futures need to be championed and must be made compatible in a long-term vision for a rights-respecting environment in state education: 1) the rights of every child to education and the promotion of their fullest development and human flourishing,[19] 2) the purpose and value of learning and education for society and its delivery, and 3) the edTech sector’s aspirations and its place in the UK economy and in export. If the UK government is to promote all three, we must build the legal and practical mechanisms to realise rights across the child’s lifetime and beyond the school gate. It is against this background that we have undertaken this report at defenddigitalme, and we recommend founding that framework in legislation upon which that vision for the future can flourish.
For national governments
For the Department for Education
For Local Authorities and Multi-Academy Trusts and educational settings
“a State should not engage in, support or condone abuses of children’s rights when it has a business role itself or conducts business with private enterprises. For example, States must take steps to ensure that public procurement contracts are awarded to bidders that are committed to respecting children’s rights. State agencies and institutions, including security forces, should not collaborate with or condone the infringement of the rights of the child by third parties. States should not invest public finances and other resources in business activities that violate children’s rights.”
“In the U.S. between 2013 and 2018, 40 states passed 125 laws that relate to student privacy. In general, these have coincided with states moving to online statewide testing (which has increased the quantity of data created and shared) and as states have built integrated data systems that combine data from multiple state agencies. Some common goals of these laws are
While many may consider the middle of a pandemic not the best time to restructure school assessment, progress, and accountability measures, change is inevitable since some of the mainstays of the system do not exist for some year groups after their cancellation under COVID-19. The Department for Education Data Management Review Group 2016 findings are yet to be realised, so that schools can have greater freedom to balance professional autonomy and agency against the demands of the accountability system. Nor has the recommendation from the 2017 Primary Assessment inquiry been realised, to ensure the risks “of schools purchasing low-quality assessment systems from commercial providers” are mitigated through standards obligations. This year’s awarding process and its failure of fitness for purpose also demonstrate a need for better risk assessment and understanding of the potential for discrimination in data, across all of these systems at all levels.
For the Department for Education
Fig. 1 overleaf is an infographic demonstrating the range of sources of data that may become part of a child’s National Pupil Database record in England over the course of their education, age 5-19. The records for a child that attends state-funded Early Years educational settings will start earlier, any time from the rising 2s. A child at risk may be captured in data from before birth, if they are the child of a child, whose personal records will be sent to the Department for Education in the Children in Need (CIN) Census. Not every child will experience Alternative Provision or transition to Higher Education, but those who do will have a larger named pupil record at national level. Personal data are sent to the Department for Education from every statutory test a child takes, from the Early Years Foundation Stage Profile to Phonics Tests, SATs, GCSEs, A-Levels and more. The core data about a child are extracted in nearly every termly school census, annual census, and statutory test. Where this deviates is noted. Some items have multiple sub-categories of detail but we do not list them all in the chart, including SEND types that may be Autism Spectrum Condition (ASC), Speech, Language and Communication Needs (SLCN), Specific Learning Difficulties (SLD), Moderate Learning Difficulties (MLD), Social, Emotional and Mental Health Difficulties (SEMH), Attention Deficit and Hyperactivity Disorder (ADHD) and Sensory and/or Physical Difficulties.
We believe it is unnecessary and disproportionate for many of these details to be retained by the Department for Education indefinitely at named pupil level, and instead data could be extracted in anonymised, aggregated groups of data, or through statistical sampling.
NB. *Selected CIN data are not added to the NPD and some are restricted to Department for Education staff only.[48]
If you want to help us change this, please write to your MP and tell friends and family. You can see more information and steps to ask to see what is held in your own or your child’s record (since 1996) at: https://defenddigitalme.org/my-records-my-rights/
The ICO summary of its compulsory audit[49] of the Department for Education’s data handling is damning: lack of oversight, accountability and lawfulness. National data collections of highly sensitive data have been rushed through successive parliaments as negative secondary legislation that far outstrips the data collection intentions of the original Education Act 1996, or squeezed into surprising places in non-education legislation. Changes are needed in the making of legislation, risk assessment, re-use and repurposing of national pupil datasets, research access, and the recognition of rights. Some national practice is currently unlawful, unsafe, and unaccountable. This needs substantial work to be a fit and proper foundation on which to build a national data strategy “to drive the collective vision that will support the UK to build a world-leading data economy.” To be of greater value to users, and to reduce tangible and intangible costs to the state at national, local authority and school levels, national datasets should be reduced in size and increased in accuracy. The current direction of travel is ever more data and ‘mutant’ algorithms, when it should be towards more accurate and usable data within a trusted regime with standards, quality assurance and accountability. This needs action if the national data strategy is to become more than an aspiration.
For Government at national level
For the Department for Education
For Local Authorities
We are yet to see the articulation of a future-thinking vision for education that moves away from centralised data surveillance and intra-school competition, and instead prioritises local needs and collaboration. The best of what COVID-19 highlighted in communities across the UK was in systems of human support networks, with the continuity of learning and children’s welfare and rights at their centre. In June 2019, the High Level Expert Group on Artificial Intelligence (HLEG-AI), in its Policy and Investment Recommendations for Trustworthy Artificial Intelligence, proposed that children must be better protected when using emerging technologies. That needs to be extended in England to protection from excessive or non-consensual research trials of emerging products and practice. (See case studies, part three.)
For policy makers at all levels
Policy makers should recognise that the hype of ‘edTech’ achievement in the classroom so far, far outweighs the evidence of most delivery. More than three quarters (79%) of teachers and school leaders surveyed want to see clear proof that edTech works in the classroom. To build a trusted relationship in edTech efficacy and intentions, a sea change is needed in the current approach and attitudes of industry, research, think tank and policy-making bodies, which assume an entitlement to access state school children, trial products on them, and use their data as a free resource. Normalised poor practice should be reset, enabling safe and ethical product development that is not exploitative and does not encroach on educational time, supported by common standards developed in conjunction with children and families, their representatives, civil society, industry, teachers, and regulatory bodies.
Evidence
Efficacy
Ethics
Exports
Engagement between companies and in educational settings
Fig 2. This illustration is intended to show a high-level digital day-in-the-life of an eleven-year-old at state secondary school. At its centre are the common core administrative functions in educational settings that are supported by the Management Information System, the digital heart of a school’s records system. The outer ring shows the activities in which children’s data most commonly leave the school and are processed or controlled by third parties in daily interactions. It also includes the out-of-hours processing that children do not see: data transfers to Local Authorities and the Department for Education as part of regional decisions on risk scoring and predictive analytics programmes, or for the national censuses, termly for the school census and annually for some others. These functions may entail the processing of a child’s personal data by dozens of companies in one day, every day, across their entire education.
Evidence
Efficacy
Ethics
Exports
Engagement
The devolved nations have made greater efforts than England to establish a national child rights framework across public services delivery (in Wales, the Rights of Children and Young Persons (Wales) Measure 2011, and in Scotland, the Children and Young People (Scotland) Act 2014). Strengthening a unified and outward-looking approach to children’s rights would bring consistency across the Union for business and the public sector, with a more sustainable and explainable standardised set of expectations.
For policy makers at national level
For the Department for Education
For educational settings and companies they use
Children’s rights appear to have low priority when it comes to procurement or research projects. Many products influence behaviours, choices and habits, and there is little transparency for families about how these tools shape their child’s educational experience or what and how they learn. To restore a rights-respecting relationship between families and schools when it comes to edTech, the controller/processor boundaries need to be redrawn between edTech companies and educational settings, and this needs oversight at a higher level than the single school, for example by MAT or Local Authority.
For Local Authorities and equivalent MAT level service provisions
For educational settings and providers
Capability and capacity
Lack of transparency and information
Design functionality and lack of suitable tools for families and children
Lack of democratic engagement and public consultation
The data industry that now operates in the Higher Education sector is vast, and includes 24/7 student surveillance across campuses that increasingly strive to be smart, across the whole student experience: from academic attainment and predictive analytics assessing ‘risk factors’ for drop-out, to a lifetime of alumni fundraiser calls. Everyday data collections sit behind the infrastructure for building access, attendance registration and administration, using fingerprint readers as the interface with printer and cashless payment systems. Universities surveil social media for reputational risk and online activity for the Prevent programme, and the Student Loans Company may scan social media for signs of fraud. Learning and attendance data are repurposed in predictive analytics, and even as mental health risk indicators, for tasks the data were never designed for. Voice-activated routine support services that process biometric data excessively risk trivialising high-value biometric data.[103] The young applicant on the cusp of adulthood remains deeply disempowered.
There is no single definition of research, or reference point for the volume and location of the trials, testbeds and industry-led projects going on in schools in England. What the application of these various testbeds, projects and trials can mean for a single child across their educational life is not clear. Organisations carrying out research trials that are not subject to Freedom of Information law decline requests for transparency statistics. It is impossible for us to research this at school level due to capacity, and it would be costly for schools to be asked at scale how many trials are typically going on at any one time across the sector. But there appears to be a glaring gap in understanding of how many and which children are subjected to what kinds of interventions in their classroom or school-led activity at home. No single organisation appears to have oversight of, or be accountable for, these activities.
Good data management in the sector today is inconsistent and limited. To move forwards, children need adults to fix what is currently broken and to be accountable to children, enabling them to understand and exercise their rights, rather than ignoring them as is usual today. It must be clarified quickly whether or not the ICO Age Appropriate Design Code of Practice applies to educational settings.[122]
In England, broadly, in education law as well as data protection law, a child is anyone aged under 18 except where stipulated otherwise. Although you can leave school on the last Friday in June if you turn 16 by the end of the summer holidays, you must then do one of the following until age 18[133]:
This has implications for what data are collected in education data collections such as the censuses, and for how long. School statutory data reporting requirements therefore apply to everyone up to age 18. Some aspects of children’s social care and education may also create statutory obligations for young people up to the age of 21 or 25, such as those leaving social care or those with special educational needs and disability.
As regards data protection law, children are children only up to age 18. There are therefore a number of years in which young people who may be regarded as having additional vulnerability, or as needing extra adult care, have data rights that are out of sync with their educational rights, and for whom data collection is inconsistent.
Compulsory education ages and the definition of “pupil” differ across the UK: within the meaning of the Education Act 1996, the Education (Scotland) Act 1980 and the Education and Libraries (Northern Ireland) Order 1986, and, for young people with special educational needs or disability, within the Children and Families Act 2014.
In all cases, the best interest of the child carries weight, so data controllers are required to make it clear to children and/or their legal guardians what their rights are at the outset of any data processing, in a way that is meaningful and appropriate for the individual’s circumstances.
Our research has shown that the complexity of roles and responsibilities means that very often no one explains to a child or their family what their data rights are, or provides any way for them to be realised. This is acute when the data processing is done by a different agent than where a child sits, for example in the Alternative Provision Census, where no LA had made those efforts in 2018.
It is difficult to imagine just how much information is created and collected about children throughout their education, or how that information is used, stored, and shared with others. Most data used in schools is ascribed by staff. While it may be fact-based, it can also be:
Core educational records were once the only data that schools would exchange as children moved across the sector throughout their education.
Now a child and parent may both contribute data to a multitude of software systems maintained by third party suppliers. School staff ascribe an often detailed set of opinions and adjectives that last a lifetime. Increasingly machine-assessed measures of behavioural profiling through points or keywords are used to measure and predict success or failure and databases store that prediction on a permanent record at local and national level.
Personal data from parents and the wider classroom or community can be part of a child’s school record. Families are encouraged to comment on a child’s profile from the Early Years Foundation Stage. At primary school, parents, carers and staff can comment on a child’s digital portfolio. This may be stored on a cloud-based platform, with comments and photos of school work and the child.
Personal data is first collected by a school for the purposes of administering a child’s admission, from legal guardians’ applications to the state school system for a place. Once the place has been assigned through the Local Authority, more information is collected by the school directly from the legal guardians. Schools store this personal data in their management information system (MIS). The brand name of the most commonly used system among some 15 common UK providers, Capita SIMS[134], which supplies over 80% of England’s schools, has taken on common use as shorthand for the MIS itself.
Having started school, a child’s individual named pupil record is managed and built up through the MIS over the course of the child’s time at that school. Most systems are cloud-based, and the data is therefore stored off site, with support from the provider companies. When a child moves between schools, the core of that record is transferred by the sending school to the new receiving school, in what is known as the Common Transfer File (the CTF).
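As a concrete, if simplified, picture of what such a transfer involves, the sketch below assembles a minimal pupil record as an XML tree in Python. It is an illustration only: the real CTF follows a Department for Education XML schema, and the element names and fields used here (UPN, name, date of birth, SEN status) are hypothetical stand-ins for the kind of core data such a file carries.

```python
# Illustrative sketch only: the real Common Transfer File follows a
# Department for Education XML schema. Element names here are hypothetical.
import xml.etree.ElementTree as ET

def build_transfer_record(upn, surname, forename, dob, sen_status):
    """Assemble a minimal, illustrative pupil transfer record as XML."""
    pupil = ET.Element("Pupil")
    ET.SubElement(pupil, "UPN").text = upn             # unique pupil number
    ET.SubElement(pupil, "Surname").text = surname
    ET.SubElement(pupil, "Forename").text = forename
    ET.SubElement(pupil, "DOB").text = dob             # e.g. "2009-04-01"
    ET.SubElement(pupil, "SENStatus").text = sen_status
    return pupil

record = build_transfer_record("A123456789012", "Smith", "Alex",
                               "2009-04-01", "EHCP")
print(ET.tostring(record, encoding="unicode"))
```

Even this toy record shows the point at issue: a handful of structured fields, copied from sending school to receiving school, accumulates into a portable file that follows the child for their whole school career.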
Schools manage their internal administration by accessing the core data held on the MIS through their chosen software and tools, for timetabling and planning for example. Schools then start to input their own data to a child’s named record. This is added by school staff in the classroom and through administration. It can include behavioural records, or individual needs such as health and special educational needs and disabilities, safeguarding data that may come from or go to children’s services, whether a child is entitled to free school meals, and whether the school can claim pupil premium (additional funding on a named-child basis, most commonly as the result of a child’s family low-income status).
All these types of data use and reuse are generally for the administration of a child’s education and direct care by a school while in loco parentis, but those boundaries of school responsibility, and its reach into private and family life, have become blurred as cloud-based processing allows surveillance and data processing 24/7, 365 days a year. Data collected by a school may now come from the child’s homework, or from all online activity at home.
The administration of the school can also involve outsourcing to private companies that provide the capability to supplement on-site administration through technology, so that the data are processed and controlled by the company. For example, cashless catering systems are now common in UK schools to enable families to manage the pre-payment of lunch money, as well as payment for materials in food tech, art or DT lessons, or optional school trips.
Personal data created about and through a child’s experience of learning are limited within the MIS. However, the number of actors involved in creating personal data about a child’s learning outside of the educational record can be vast, where a school supplements teaching, classroom and homework activities with digital tools.
This raises questions over what is and is not part of a child’s state educational record, and who has control of it. Where data are held by third parties, can schools meet all their lawful obligations? For instance, if a parent asks for a copy of the educational record, it must be available within 15 school days under the Education (Pupil Information) (England) Regulations 2005 (SI 2005/1437). This is shorter than the month allowed for a Subject Access Request, and some companies may fail to provide information about a pupil in this time period, will only fulfil a SAR via the school as a go-between, and decline to answer questions directly, even once parental identity has been established by the school. This four-way child-parent-school-company relationship makes understanding a child’s record cumbersome, and while schools should rightly play a role in confirming a legitimate SAR request, companies that are data controllers should meet requests and answer questions directly once that approved status is given.
The 2009 report by Dowty and Korff addressed the law and children’s consent to personal data sharing, and still holds true today.
It has long been the case that agencies can share information without consent about children whom they believe to be at risk of significant harm from neglect or abuse. It is also true that practitioners have always maintained case notes and discussed particular concerns with each other. However, what is relatively new is the question of whether children can consent to having sensitive data that they reveal to one person stored on a database and shared with others. In this instance, ‘sensitive’ means information about their mental or physical health, their beliefs and their private lives.
Legal guardians act on behalf of a child and where children have competency, children can act in their own best interests. Children can also exercise rights over their data to the extent of forbidding others – including their parents – from having access to their confidential records.
Government policy and children’s online activities raise all kinds of questions about confidentiality and the integrity of data, and they push the vital issue of who can or should consent to the collection, storage and sharing of children’s confidential information to the top of the agenda. (Dowty and Korff, 2009)[135]
It is common bad practice for schools to collect flawed ‘consent’ forms on admission, which are required acknowledgements rather than a valid consent process for data processing, i.e. ‘please consent here’. We have not seen data collection forms that discern between local and national purposes. These forms need improvement for school admissions and school census processing: to inform families what is collected for which purposes, to ask for consent for optional items, and to separate national from local needs.
Consent is often invalid. If a school or company does ask for consent as its lawful basis for data processing, it can only be valid where a pupil or family has an informed and freely given choice, without bundled purposes, with no pressure to agree, and with no detriment to refusing. For example, if a parent or child declines to use a consent-based AI platform that the school demands is used in the classroom, the school must be able to offer an equal educational alternative.
It is not valid to ask for consent to use a routine digital product in the classroom where children and parents cannot freely decline without detriment, and such processing must be careful to address the reasons why the terms and conditions require consent. Often it is because the company is processing pupil data beyond the school remit, i.e. for the company’s own purposes that it has chosen to carry out, not at the request of the school: product development, research purposes, or passing data on to third parties. The Information Commissioner’s Office guidance makes it clear that public authorities, which include educational settings, will have difficulty relying on a consent basis for data processing, given the power imbalance.[136] This is especially true for children.
Consent will rarely be a lawful basis for processing data in a school. However, for biometric data processing, where offering an alternative is obligatory, freely given consent is required, and schools must not process biometric data from a child if either a parent or the child objects. These protections are offered in England and Wales under the Protection of Freedoms Act 2012. This legal obligation is often ignored: 38% of families we polled where school biometric systems were in use in 2018 said they had not been offered a choice.
The idea that a ‘digital age of consent’ of thirteen applies to schools for all data processing, or even that it applies to all the apps and platforms that schools require children to use, is mistaken. Organisations that ask this of schools appear not to understand data protection law, specifically the requirements of the GDPR, or the legal basis for pupil data processing.
The Department for Education data protection toolkit for schools (2018)[137] sweepingly states that “Parental consent will always expire when the child reaches the age at which they can consent for themselves (13 years old).” This adds to misunderstanding in some schools about if, how and when consent is a lawful basis for processing children’s data, and whether ‘information society services’ applies to education apps’ data processing,[138] and fails to address questions of capacity versus age, or the power imbalance that often invalidates consent.
Consent is sometimes asked for where processing is excessive. Such processes also often make third parties data controllers, not processors, from a data protection perspective in law, although they will argue that “we are data processors as set out in our contract and terms and conditions.” If so, then a processor should be doing nothing more than, very narrowly, the processing which a school has expressly required of them. But third parties often go beyond this, and determine the nature and purpose of processing. Extensive adTech analytics, third-party re-use or repurposing for research distribution, or keeping data forever for their own product development even where de-identified, make processing that relies on the school’s public task likely unlawful, and make it more likely that third parties are [joint] data controllers.
In “Cookies That Give You Away: Evaluating the Surveillance Implications of Web Tracking,” released in 2014, Reisman et al. explained how web pages with embedded trackers can connect a user’s web page visits back to the specific user. Cookie consent is rarely valid.
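As a toy illustration of the mechanism the paper describes (not its methodology), the sketch below shows how one third-party cookie, sent back to the same tracker from every page that embeds it, lets the tracker join visits to unrelated sites into a single browsing profile. The site names, tracker and cookie value are invented.

```python
# Toy model of third-party tracking. Every page embedding the same tracker
# makes the browser send that tracker its cookie, so visits to unrelated
# sites can be joined into one profile. All names here are invented.
from collections import defaultdict

tracker_profile = defaultdict(list)   # cookie id -> pages seen with it

def visit(page, embedded_trackers, browser_cookies):
    for tracker in embedded_trackers:
        # Reuse the cookie set on a previous visit, or set one now.
        cookie_id = browser_cookies.setdefault(tracker, "uid-12345")
        tracker_profile[cookie_id].append(page)

cookies = {}
visit("homework-portal.example", ["adtracker.example"], cookies)
visit("revision-videos.example", ["adtracker.example"], cookies)
visit("games-site.example", ["adtracker.example"], cookies)

print(tracker_profile["uid-12345"])
# ['homework-portal.example', 'revision-videos.example', 'games-site.example']
```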
Consent is rarely informed. Like the Department for Education when it comes to national processing, suppliers often fail to properly pass on the information schools need to meet their fair processing responsibility, such as adTech third-party processor re-uses, repurposing for research distribution, or keeping data forever for their own product development. Companies fail to tell schools which child’s information was used for which third-party purposes. Schools are therefore unable to meet this obligation that the companies delegate, and the companies in turn fail to meet their own accountability obligations.
It is impossible for a school to really understand how many of these digital tools work, or to see that data processing goes beyond what the school requires, due to complexity and vague terms of service. Researchers at the Oxford University Department of Computer Science revealed the extent of hidden ad trackers in an assessment of nearly one million apps (Binns, Zhao 2018). If even the developers might not understand the full extent of what their code does when it comes to re-using third-party data analytics and cookies, for example (Ekambaranathan, Zhao and Van Kleek 2020),[139] then suppliers cannot explain it to schools, and schools cannot explain it to families or children.
In the same way that third-party data processing at national level from the Department for Education assumes fair processing is done, it falls through an accountability gap in local data processing too.
The House of Commons July 2016 briefing paper CBP-7657 considers parental responsibility and access to pupil records.[140]
Parents may be recognised differently under education law than under family law. For the purposes of education law, section 576 of the Education Act 1996 defines a 'parent' as:
all natural (biological) parents, whether they are married or not;
any person who, although not a natural parent, has parental responsibility for a child or young person (this could be a step-parent, guardian or other relative);
any person who, although not a natural parent, has care of a child or young person.
A person has care of a child or young person if they are the person with whom the child lives and who looks after the child, irrespective of what their relationship is with the child.
In family law ‘parental responsibility’ means all the rights, duties, powers, responsibilities and authority which by law a parent has in relation to the child. People other than a child’s natural parents can acquire parental responsibility, for example through being appointed a guardian or adopting a child. More than one person can hold and exercise parental responsibility for a child.
Education law gives parents the right to information about their child’s education. However, these rights differ depending on the type of school the child attends. In all cases a parent can access information about their child’s education where the child is below the age of capacity, or where the child agrees, through a Subject Access Request under UK data protection law.
In addition, the Education (Pupil Information) (England) Regulations 2005 (SI 2005/1437) give parents of pupils at Local Education Authority (LEA) maintained schools the right to access their child's educational records. Educational records may include information such as the records of the pupil’s academic achievements as well as correspondence from teachers, local education authority employees and educational psychologists. Parents have a right to access their child's educational record, even if their child does not wish them to access it. This applies until the child reaches 18.
The Education (Pupil Information) (England) Regulations 2005 do not apply to non-maintained schools (e.g. academies, free schools and independent schools). This means that these parents have fewer rights to access their child's educational records than parents of pupils at Local Education Authority maintained schools. Instead, the Education (Independent School Standards) Regulations 2014,[141] which came into force on 5 January 2015, set out certain minimum standards that all independent schools (including academies and free schools) must meet.
The standards on information provision require that an annual written report of each registered pupil’s progress and attainment in the main subject areas taught is provided to the parents of that registered pupil. This could offer an ideal place for an expanded duty to include a list of all data processing and third parties engaged by the school. Where this happens today, it is the exception.
Schools also have the right to refuse a parent’s request for information in some circumstances; for example, where the information might cause serious harm to the physical or mental health of the pupil or another individual.
New legislation should address these discrepancies and introduce fairness across all kinds of educational settings.
The summer of 2020 exams drew attention to the problems children face when their needs as individuals are given lower priority and public bodies instead prioritise the protection of the system.
Families fighting for EHC plans or school places for their child with additional needs under austerity[142] have felt this for a long time. Many in the Traveller community or other marginalised groups have felt the stigma of data labels applied as part of a cohort. The harmful effects of standardisation are not new for the third of children held back by a failed system of comparable outcomes, who leave school without any good exam grades in a system that demands one third fail no matter how clever they all are. But we may be on the cusp of greater societal understanding after those harms became mainstream for a wider range of children, including white middle-class children, in A-level and other results, who objected to decisions about their lives being based on historical data over which they have no control.[143]
If you are profiled and targeted for interventions as a child in a Troubled Family again and again across public services, you experience negative feedback loops. The moral and political values embedded in those data are not neutral. That dataset tends to be the lead data for other linked datasets in children’s social care predictive analytics, and again, the same factors are reinforced. There appears to be little appetite to tackle this at regional or national level as long as the data continue to give the answers that the policy seeks to find. But when “on average, if the model identifies a child is at risk, it is wrong six out of ten times. The model misses four out of every five children at risk. None of the models’ performances exceeded our pre-specified threshold for ‘success’,” it is overdue to end the use of those bad datasets, and of tools based on them that don’t work and that have “dangerous blind spots”[144] in life and death situations for children.
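To put those quoted figures in standard terms: a model that is “wrong six out of ten times” when it flags a child has a precision of about 40%, and one that “misses four out of every five children at risk” has a recall of about 20%. A minimal worked check, using illustrative counts consistent with those statements:

```python
# Worked check of the quoted model performance, with illustrative counts:
# suppose 100 children are truly at risk and the model flags 50 in total.
true_positives = 20    # at-risk children correctly flagged (1 in 5 found)
false_negatives = 80   # at-risk children missed (4 in 5 missed)
false_positives = 30   # flagged children not at risk (6 in 10 flags wrong)

precision = true_positives / (true_positives + false_positives)  # 20/50 = 0.4
recall = true_positives / (true_positives + false_negatives)     # 20/100 = 0.2

print(f"precision = {precision:.0%}, recall = {recall:.0%}")
```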
Statistical research generally implies that the result of processing for statistical purposes is not used as personal data about individuals, but national uses of administrative pupil data for loosely defined research purposes have been allowed to overstep this boundary for so long, at local and national levels, that such data is now used for individual interventions. If unchecked, this will be the death knell of longitudinal public interest research datasets.
The key takeaway from this report should not be how much data is collected about a child, although it may be the most striking. It is not our aim to highlight theoretical risks or abstract concepts of privacy. Instead, it is to demonstrate what has happened in England’s education system as we have enabled the datafication[145] of children as individuals, and the exclusion and “managed moves” of outliers (the disabled, the lower achieving and those who won’t get good grades), as, to quote Michael Rosen, “the children only learnt what could be turned into data. Then the children became data.”[146]
Children’s outcomes and everyday lives have become mechanistic targets as part of a cohort (behaviour points, absence scores and reading for pleasure at school level) and in turn schools are forced to turn the complexity of children’s lives into simplified progress scores or attendance ratings without context, to be ranked and spanked in league tables by the national Regulator. Children’s lifetime educational achievement is now measured through the lens of the Treasury. What kind of world will they grow up in, if all of education and aspiration is only given the value measured by what LEO (Longitudinal Education Outcomes) data says?
The developing child must be permitted to make mistakes and not have them permanently recorded and distributed indefinitely to others simply because the data systems make it possible. Their historic data must not be held against them. Historic data can cause harm.
To move forward and level up the edTech playing field we need a model of education that prioritises access, inclusion, safety, privacy, and young people’s views[147] in how their own data is used, underpinned by a public interest that safeguards the delivery of trustworthy systems.
If a child is denied entrance into the university of their choice, parents may wonder if their children’s Prevent profiles[148] were passed on to institutions and used to screen and reject their application.
If their children are turned down for jobs, did the employer’s screening app check them out using an online profile of their social media or browsing history gathered by their school-issued device and bought from data brokers?
If children’s identities are stolen, was it the result of an app data breach many years ago?
If children are denied public services as adults, could it be because of their records held by the Department for Education or other agencies?
These are the kinds of questions East German residents found had been very much a reality for years, but only discovered once the secret pupil records and other personal files held by the State Security Service of the former GDR were made accessible to the public after 1989.[149]
When police start to repurpose school records for criminal investigations, when the Department for Work and Pensions seeks fraudulent benefit claimants in every child that went to school, or when the Home Office has free rein to repurpose national school records to deny Early Years children a free school meal, there are three things seriously wrong.[150]
The authorities’ attitude towards children’s data reveals how they treat people.
It is our aim for this research to contribute towards change.
This concludes Part 1 of this work: Summary report of recommendations and main findings.
The further parts can be found online at: https://defenddigitalme.org/the-state-of-data-2020/
Part 2: National statutory data collections age 2-25
Part 3: Local data processing with case studies of commonly used products
Part 4: The transition year from compulsory school to Higher Education
Part 5: Annex of data, source materials, research and references.
Fig 3. The legislation and data items expanded in the national pupil database
Just like the census carried out for the Domesday Book, the great survey of 1086 that gathered a minimum core of centralised data about much of England and parts of Wales, the original purposes of pupil data collections were about accounting.
Accountability for funding, not pupil educational standards, is the origin of the central government data gathering model we have today.
According to the National Archives, little is recorded about the early history of the School Census.
Education statistics based on annual returns submitted by schools have been published at a national level since at least 1905. Form 7 returns for LEA-maintained schools are thought to have been introduced in 1945-1946. Independent schools were required to provide information for the Schools' Census as a result of provisions in the Education Act 1944 and the Independent Schools Registration Regulations 1957, which required independent schools to furnish annual returns of changes to their pupils, boarders and teaching staff. In more recent years, the statutory basis of the Schools' Census was provided by section 537 of the Education Act 1996. This required the governing bodies of LEA-maintained schools and the proprietors of independent schools to provide such information as might be required by the Secretary of State for Education. Data from the Schools' Census was used to calculate schools' per capita allocation of funding, leading to a high response rate (around 95 percent in 1998). (National Archives, 2020)[151]
Money today is allocated to a school on an individual pupil basis, which means if you have a primary school designed to hold 180 pupils and have fewer than 30 pupils in a classroom you may have less funding available than a similar neighbouring school that is full, depending on the types of pupils you have in the school and any premiums they draw in. Competition between schools is therefore encouraged, not only to get the best reputation in an area, but to do so at capacity and achieve an optimal economy of scale. The school costs will be largely the same whether you have 160 or 180 pupils, but you’ll have less money in your budget. Special Educational Needs and other funding variables throw more complications into the sums.
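As a purely illustrative sketch of this arithmetic (the per-pupil rate here is hypothetical; real allocations follow the national funding formula, premiums and the other factors described above):

```python
# Illustrative only: funding follows the individual pupil count, not the
# school's capacity. The per-pupil rate is an assumed flat figure.
PER_PUPIL_RATE = 4_000  # GBP, hypothetical for illustration

def core_allocation(pupils_on_roll: int) -> int:
    """Core budget driven purely by the number of pupils on roll."""
    return pupils_on_roll * PER_PUPIL_RATE

# A full school of 180 versus a similar school with 20 empty places:
# costs are broadly fixed, but the emptier school has 80,000 GBP less.
print(core_allocation(180) - core_allocation(160))  # 80000
```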
But in effect the Department for Education has made it a requirement to collect pupil data at individual level and some of those data items are marked as ‘used for funding.’ Named pupil data is gathered on an individual pupil basis today through the school census, and provides a measure of evidence of spend. Beyond this, those individual pupils’ results are used as a measure of school performance, and therefore accountability for its costs and outcomes.
When certain criteria are met for certain pupils, additional funding is allocated on a per pupil basis, such as Pupil Premium for pupils with extra needs. Where additional funding is less than the actual cost of a pupil’s extra needs, tangible and intangible, the incentive to accept those pupils is reduced. This is why, despite statutory obligations to offer places, many families of children with additional learning needs feel schools are not accepting their children.
This micro-management of funding allocations for over eight million children in statutory provision simultaneously has created a constantly growing and changing set of national pupil databases of unimaginable size.
The data management infrastructure loosely, but not fully, fits the follow-the-money model in so far as central government directly allocates the funding, and directly funds only registered providers. Unregistered providers may still receive funding, but indirectly, as a contracted service to a school or Local Authority.
A notably complex area of both data and funding accountability is Alternative Provision — the education provision of children removed from mainstream education and put into another form of education paid for by the State — where the buck stops with the Local Authority, which allocates funding to the setting; the Authority, rather than the setting, therefore has the duty to provide data in the Alternative Provision Census.
If a provider is registered it may be inspected. If it is unregistered, it is not inspected. If the service is directly managed by the Local Authority, it is registered and known as a Pupil Referral Unit. A provider of AP should be registered if it caters for five or more full time pupils or one who is a Looked After Child or has statemented Special Educational Needs.[152]
A child may attend an alternative setting part-time, for only a few hours or half a day a week, and spend the rest of the week in their usual mainstream school. But when we asked 150 Local Authorities in 2018 how they counted those children, the FOI responses we received showed a chaotic lack of consistent understanding of who was to be counted and how. Some Authorities counted each child who was in AP on the day of the annual census. Some counted each child who had been in AP on any day in the year preceding the census. Others counted only the total number of hours spent by children in AP in the year preceding the census, converted into full-time-equivalent numbers.
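To illustrate how divergent these three counting methods are, here is a minimal sketch with invented attendance records and an assumed full-time-equivalent divisor; the same underlying data produces three different headline figures:

```python
# Illustrative only: the three counting methods reported by Local
# Authorities, applied to the same invented records (child, date, hours).
from datetime import date

FTE_HOURS_PER_YEAR = 1_000      # assumed divisor for illustration
CENSUS_DAY = date(2018, 1, 18)  # hypothetical census day

attendance = [
    ("child_a", date(2018, 1, 18), 3),  # in AP on census day
    ("child_b", date(2017, 11, 2), 3),  # in AP earlier in the year only
    ("child_b", date(2017, 11, 9), 3),
    ("child_c", date(2017, 6, 1), 6),
]

on_census_day = len({c for c, d, h in attendance if d == CENSUS_DAY})  # 1
any_day_in_year = len({c for c, d, h in attendance})                   # 3
fte = sum(h for c, d, h in attendance) / FTE_HOURS_PER_YEAR            # 0.015

print(on_census_day, any_day_in_year, fte)
```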
Much of a pupil’s data is ascribed, and never seen by the student or parent. Parents told us how difficult it can be to understand the complexity of who is responsible for what in their children’s care and service provision by multiple bodies, and that they avoid asking any questions seen as superfluous, for fear of being seen as ‘problem parents’. Asking providers and the authority about data confidentiality is actively avoided, even by those who have brought concerns to us.
In these messy models, it is unclear in everyday reality who is responsible for the duty to communicate to the child and family what data is collected, which organisations will access that data, for what purposes, and for how long; a far cry from the neat term ‘data controller’ on the privacy notice.
A child is of compulsory education age on the 1st January, 1st April or 1st September following their 5th birthday. Legal guardians can choose to send the child to state school in a range of educational settings. This report focuses on the most common types of personal data collection that apply to mainstream and Alternative Provision settings, rather than where settings may have their own practice or less mainstream needs, such as religious schools.
Long before five, any child in state funded education provision from age one approaching their second birthday (rising 2s) becomes recorded in centralised data collections. For Children in Need, those considered at risk, it can begin when a Unique Pupil Number is assigned even before birth to include the unborn child in the Children in Need Census. Personal data in a pupil record may no longer be only personal data, but interpersonal, and about a whole household.
There are more than twenty key providers of school information management systems to UK schools. They facilitate the administration of pupil records, the creation of behavioural data and benchmarking, data analysis and reporting, school marketing, assessments, the production of management reports, and communication with parents via contact details stored as part of a child’s record, vital for class lists, parental contacts and emergency contact numbers.
Management Information Systems (MIS) are commonly 100% cloud based and offer users a single sign-on. Users of iSAMS, for example, are also offered the ability to use existing social login information from multiple providers such as Facebook, Twitter, Google+ and others.
It is usual practice for them not only to store and display the data as entered, but to create further data and information automatically from the pupil data, and to use automated systems to create profiles, dashboards and display patterns in data about a child over time. School Bacs, for example, advertises that “statistics and charts are instantly calculated by Plus! for Schools, so teachers now have the tools to quickly and more routinely identify progress of individual children both in their class groups and each subject area.”
In order to distribute data from the MIS to third parties at speed and scale, many schools use data integrators such as Groupcall,[153] Wonde, and Go4Schools. It is therefore disappointing that, in our overall assessment, that part of the sector’s understanding of data protection law is weak, in particular in its guidance for schools on the critical differences between the roles of data controller and data processor, and in its misunderstanding of children’s consent.[154]
These integrators process standard pupil data and may also include contextual information and the results from other tests such as CATS tests and GL-Assessment’s PASS survey.[155]
Between 2018 and 2020 our research failed to find a provider whose API enabled a sufficiently granular level of control over individual data items for schools to meet all of their requirements in data protection law, for example the ability to leave an individual child’s selected data items out of a whole-school data transfer.
This is true to the best of our knowledge at the time of writing, we acknowledge however that functionality changes frequently.
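For clarity, the capability we looked for and could not find might look something like the following sketch, with invented field and record names, run school-side before any whole-school transfer:

```python
# Illustrative only: item-level exclusions applied before a whole-school
# data transfer. All names are invented; no integrator offered this.
exclusions = {
    "pupil_0042": {"ethnicity", "fsm_eligibility"},  # per-child opt-outs
}

def filter_transfer(records: list[dict]) -> list[dict]:
    """Drop each excluded data item for each child, keeping the rest."""
    filtered = []
    for record in records:
        banned = exclusions.get(record["pupil_id"], set())
        filtered.append({k: v for k, v in record.items() if k not in banned})
    return filtered

roll = [{"pupil_id": "pupil_0042", "name": "A", "ethnicity": "X", "fsm_eligibility": True}]
print(filter_transfer(roll))  # ethnicity and fsm_eligibility removed
```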
The unique pupil number (UPN) is a 13-character code that identifies each pupil in the state school system in England,[156] as well as half a million children (as at 22 November 2019) who are, or have at some time been, in private education.
Allocated on first entry to a state funded place in nursery school, the UPN is generated using the nationally specified formula and is expected to remain with a pupil throughout their school career regardless of any change in school or local authority. It is permissible for a new UPN to be issued in situations where a child has been adopted or is at risk, but there is little public information available on how well or consistently this is managed, especially given the use of the UPN in the Children in Need census.
While the UPN is 13 characters long, most of those characters are predetermined codes associated with the Local Authority and school, leaving a serial number with only 999 possible combinations to distinguish any one child within a school's allocation for a year.
UPNs are also used by the Department for Education in the National Pupil Database (NPD).
The UPN is comprised of 13 characters: a check letter; the three-digit Local Authority number of the allocating school; the four-digit DfE establishment number of the allocating school; two digits for the year of allocation; and a three-digit serial number.
The combination of these characters is intended to be a simple means of guaranteeing uniqueness as no two schools can allocate the same UPN to two different pupils. Once a pupil’s school career is complete, their UPN must be archived as part of their school record. A UPN must never be re-issued to another pupil.
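A minimal sketch, assuming the 13-character structure described above; it validates only the shape of a UPN and does not reproduce the official check letter algorithm set out in DfE guidance, and the example value is invented:

```python
# Illustrative only: structural validation of a UPN (check letter,
# 3-digit LA number, 4-digit establishment number, 2-digit year,
# 3-digit serial).
import re

UPN_SHAPE = re.compile(r"^[A-Z](\d{3})(\d{4})(\d{2})(\d{3})$")

def parse_upn(upn: str):
    match = UPN_SHAPE.match(upn)
    if not match:
        return None
    la, school, year, serial = match.groups()
    # Only the 3-digit serial (999 values) distinguishes pupils given
    # a UPN by the same school in the same year.
    return {"la": la, "school": school, "year": year, "serial": serial}

print(parse_upn("A801220017042"))  # hypothetical example value
```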
Section 537A of the Education Act 1996 enables the Secretary of State to require state funded schools to provide certain information, including UPNs, to local authorities and the department including: key stage assessment information and pupil level census returns. Section 408 of the same Act enables the Secretary of State to require the transfer from school to school of pupil records, including UPNs, using CTF. Schools are legally entitled to transfer such data and do not need to seek pupils’ or parents’ consent to do so.
National guidance on the use of the UPN[157] was updated in December 2017 and 2019.
Both state that the UPN must lapse when pupils leave state funded schooling, at the age of sixteen or older. What this should mean in practice for providers is unclear, because by age sixteen the UPN has been widely distributed and linked with other data, including the Unique Learner Number which takes over identification post-16.
Previous Guidance suggested that the UPN is a ‘blind number’, not an automatic adjunct to a pupil’s name and only transferred to those with a genuine right and requirement for its receipt.
However, this new phrasing appears to have watered down the stricter guidance issued in 2013 on distribution of the UPN: “The data protection restrictions associated with UPNs mean that it is only possible for UPN data to be shared by CTF between schools/academies, LAs, DfE and other prescribed government departments. Under the Data Protection Act 1998, the UPN is designated as a ‘general identifier’ making its use for any purpose unrelated to education illegal. A pupil’s admission number, rather than the UPN, should be used as the general pupil reference number on the admission register or paper files.”
Due to the sheer number of third parties using a child’s data via school information management systems, there is no oversight of how many users receive this number. It is convenient for software companies to use. Often these third-party companies also receive the SIMS ID number.
National guidance does state explicitly that UPNs must not be printed or written on any reports or similar documents. A pupil’s admission number, rather than the UPN, must be used as the general pupil reference number on the admission register or paper files, and schools are encouraged not to give this number to children or parents.
National guidance also states explicitly how schools should handle UPN retention or changes in the case of adoptions, and how to handle UPNs where schools receive pupils who, for their own safety, have changed their identity.
“This will be the case for children in the witness protection programme and for those fleeing from abusive family members. As part of their new identities, these pupils must be issued with new permanent UPNs and their previous UPN must be deleted and not recorded under ‘former UPN’. As part of this process, it is important that there is no link retained between the original and new pupil records.”
These links are maintained in national data collections. This process should be reviewed.
There is, as far as we know, no recall process offering similar safeguards for children whose UPN and personal data have already been distributed to third parties or at regional level.
The IRMS toolkit[158] offers some excellent guidance on general data management and particularly on retention. However, in the area of UPNs we think it now needs updating to reflect the national DfE guidance. The IRMS toolkit suggests (page 10 of 57) that the UPN is among “the following information [that] should appear on the front of the paper file”. The DfE guidance says, “pupil’s admission number, rather than the UPN, should be used as the general pupil reference number on the admission register or paper files.”
Required to allocate a Unique Pupil Number?

| Setting | YES | NO |
| --- | --- | --- |
| Early Years private / voluntary / independent provider including childminders and agencies | | ✓ |
| Nursery Schools | ✓ | |
| State Funded primary | ✓ | |
| State Funded Secondary | ✓ | |
| State Funded Special School | ✓ | |
| State Funded all-through School | ✓ | |
| Non-maintained special school | ✓ | |
| City Technology College | ✓ | |
| State Funded Pupil Referral Unit | ✓ | |
| State funded Alternative Provision Academy / Free School | ✓ | |
| Non-state funded Alternative Provision provider | | ✓ |
| Independent Schools | | ✓ |
There is no statutory duty placed on independent schools to allocate, or maintain, UPNs but it is permissible for an independent school to allocate UPNs where they have the ability and desire to do this.
As independent schools do not participate in the school census, the UPNs held in databases controlled by the Department relate either to attainment data provided to the Department for pupils in independent schools which choose to maintain UPNs (i.e. pupils who, at the point of the relevant examination, were in an independent school which maintains UPNs), or to school census or attainment data for pupils in state funded education where the UPN was originally allocated by an independent school (i.e. former independent school pupils, allocated a UPN by their independent school, who at the point of data collection were in state funded education).
As such, the total number of distinct pupils with a UPN falling into the above independent schools categories in the DfE National Pupil Database (as at 22 November 2019) was 533,544. Each year more are added and the number of records with a UPN received by the Department for Education in the 2018/19 academic year was 51,003.[159]
Nearly half of the records that the Department holds for pupils who attend independent schools do not have a UPN, since independent schools do not complete school census returns. The numbers of UPNs and pupils in the NPD from pupils’ pre-autumn 2016 census (1 September 2016) in the “non-state” / private schools sector were as follows[160]:
1. Early Years: the Department does not collect UPN in Early Years Census (EYC) data but there are 32,379 distinct pupils in independent settings in the 2016 EYC data
2. Primary up to age 13: 2,600 UPNs in 2016, but there are 6,402 distinct pupils in independent settings in total
3. Age 13+: 95,624 UPNs in 2015 (UPN not held in 2016 data), but there are 160,002 distinct pupils in independent settings in total.
The UPN has become a national ID number by the back door, used as a common persistent identifier across the education sector and wider national administrative datasets.
Researchers in the 2017 report, Towards a Family Justice Observatory,[161] suggested assigning a UPN at birth. This is because they are most interested in research uses of data about children in need and looked after children, whose data may be collected even pre-birth but cannot easily be linked to a national pupil database record, because pre-school children have not yet been assigned a unique pupil number.
The vital need to protect the most at-risk children, including those adopted from care, from the potential harms of loss, misuse, or the stigma of permanent longitudinal records, or of records created by using the pupil ID to enable linkage, is given inadequate consideration.[162]
Persistent identifiers have been recognised as needing extra protection in the past, but this approach was abandoned in 2013. In Department for Education research in 2016, it was further suggested that the NHS number should become a cross-sector persistent identifier for children.
Researchers are already linking Cafcass data with the Hospital Episode Statistics and the Clinical Records Interactive Search, or National Pupil Data with data from the Police National Computer and MOJ data. The Home Office’s secret, punitive uses of national pupil data since 2015, and the scope creep since names were first collected in 2002, should be warnings of what lies ahead if the UK permits pupil records or any other unique identifier to become a permanent identifier for life for cross-government use: uses far beyond individuals’ reasonable expectations when data is collected or submitted by schools for the purposes of education, under education law.
In the US, permanent record fears fueled the backlash that brought down inBloom, a nonprofit data repository designed to store states’ comprehensive student records. (Zeide, 2014)
The UPN is routinely included on children’s personal FFT target benchmarking profile sheets despite current DfE guidance saying that UPNs are not to be printed or written on any reports or similar documents and that the UPN must be a ‘blind number’ not an automatic adjunct to a pupil’s name. (UPN Guide, 2019)
In the opinion of defenddigitalme, the guidance is clear. The UPN is for use in pupil transfers across the education system, to ensure the integrity of transfers of pupil information when a pupil moves school or similar, in fulfilment of the statutory obligations of their education, between “LA, central government or another school/academy”.
It is not designated for general identifier purposes, or for sharing with a broad range of third parties. In many ways this is similar to the National Insurance number: even where the UPN would be easy to use for other data processing purposes, a “pupil’s admission number, rather than the UPN, should be used”. [Source page 7: Available guidance 2019]
The IT community, the edTech community and the stringency of previous guidance want different things, but we believe practice requires stricter control over the release of this unique personal identifier.
This open use is far too broad, and runs against the 2019 UPN DfE Guidance that states:
"The UPN must be a ‘blind number’ not an automatic adjunct to a pupil’s name. It must be held electronically and only output when required to provide information to the LA, central government or another school/academy to which the pupil is transferring. Under the Data Protection Act 1998, the UPN is designated as a ‘general identifier’ making its use for any purpose unrelated to education illegal. A pupil’s admission number, rather than the UPN, should be used as the general pupil reference number on the admission register or paper files."
The challenge: schools send this number out to a wide range of third parties, automatically using it to identify pupils, when distributing SIMS data to third parties for apps that track pupil health, absence reporting, buying uniform and school trips, and other admin, or, as in the FFT Aspire question, as a required field for third parties which then obtain and link further pupil data from the National Pupil Database.
2019 guidance states that the UPN should lapse when the child leaves education at the age of sixteen or older.[163] This is not respected today.
Schools may find the current guidance inconvenient or in conflict with current practice, and there is a common expectation from third-party commercial companies that the UPN be widely used, but that does not mean statutory guidance should be ignored at local, regional or national level.
UPN use and sharing appears to have exploded beyond the intent published in the Department for Education guidance recommended only a year ago.
Regulation of data distribution out of schools’ core management systems has not kept pace with the expansion of APIs that send data out of the school information management system. For example:
“The iSAMs Marketplace builds on the power of your iSAMS modules by allowing integration with all sorts of third-party apps and systems. From curriculum management to facilities maintenance, school security to sports and social, the Marketplace provides your school with the tools to integrate disparate systems and streamline processes. Visit Marketplace to find a list of over 80 integration partners.”[164]
The same security and child safety issues that the Department sets out in its August 2020 guidance on the Common Transfer File[165] are not sufficiently considered in the use of the UPN.
“Circumstances when it is not appropriate to pass on information about a pupil via a CTF might include:
In the first two examples above it may not be desirable for the “old” school to know where the pupil has gone to ensure this cannot be accidentally divulged. If a family is in a witness protection programme the “new” school should also not know where the pupil has come from as this could enable the pupil to be linked back to their previous identity. It is important that an adopted child cannot be identified through their school history and so a new school should not know the previous school an adopted child attended and vice versa.”
The ILR (Individualised Learner Record) from age 14+ also contains a Unique Learner Number (ULN). The principles of data minimisation, and the indefinite retention of unique identifiers, need to be re-addressed across the sector.
A Unique Learner Number system began in 2013. It is used as the key identifier for the Personal Learning Record (PLR) and includes the full personal data and verified qualifications, and a record of what individuals have studied and achieved since age 14. This includes anything funded by the Skills Service.
Unique learner numbers (ULNs) are assigned to all students over the age of 14 in publicly funded education and training. The use of ULNs is mandatory within English schools. ULNs are allocated and managed by the Learning Records Service (LRS) and remain with an individual for their lifetime, including in state supported adult education.
Middle-deemed secondary, secondary, all-through, special and PRU/AP schools need to send this information to the Department for Education in the termly census for both on and off roll pupils.
ULNs are mandatory for all pupils on roll aged 14 and over on census day and for pupils no longer on roll who were aged 14 as at their leaving date. ULNs are assigned to pupils aged 14 or over in publicly funded education and training.
The Learning Records Service (LRS), part of the Education and Skills Funding Agency,[166] allocates this number.
Awarding organisations are required to collect and validate ULNs and provide candidate achievement data to the LRS for all government funded learners.
The LRS Organisation Portal uses the ULN to index each learner’s identity details, education and training qualifications within the Personal Learning Record (PLR).
The data in the PLR is accessible to the same Prescribed Persons as in the Education Act 1996, the organisations “legally authorised to have access to the data”. In 2012 the Prescribed Persons list was broadened, and has since enabled access to pupil level data by journalists, charities, think tanks, data analytics companies and other researchers.
In 2014, the Department for Education reported that 95% of schools used the ULN.
Unique learner numbers (ULNs) are not only submitted along with data to the Department for Education (DfE) but also by any exam awarding organisations “to enable learners access their achievement and exam results online. This follows the full implementation and use of the ULN and individual learning record (ILR) system across the post-14 schools, FE and awarding organisation sector.”
The Learning Records Service collects data from the Department for Education, schools and training/learning providers: information provided by students when enrolling with the training/learning provider; qualification/unit data held by Ofqual and Qualifications in Wales (QiW); participation and achievement data held by the Welsh Government; and accredited achievement data supplied by awarding organisations.
By default, individuals’ achievement records are shared with schools, colleges, further education, training providers, universities, employers or careers advisors, and until 2019 an individual could opt out of the use of their data from the UK Register of Learning Providers. This changed without public consultation or announcement.[167]
Schools, colleges, higher education institutions, careers services, employers and training providers can all use the ILR system to inform further education, training and employment. “The UK Register of Learning Providers is a 'one-stop' portal to be used by government departments, agencies, learners, and employers to share key information about learning providers. The UKRLP allows providers to update their information in one place and share this across agencies such as the Skills Funding Agency, the Higher Education Statistics Agency (HESA), the Higher Education Funding Council for England (HEFCE) and UCAS.
“Since provider registration opened on 1st August 2005, the UKRLP has grown to over 30,000 providers. Each of these has been verified against a recognised external source and has been allocated a UK Provider Reference Number (UKPRN). This is the unique identifier used to share information with the UKRLP partner agencies.”
“During 2014, the ILR will contain the majority of publicly funded achievements awarded by awarding organisations. It will also include the summer exam results, which will be available 10 days following the general release of results, when we expect well over 90% of GCSEs and A Level results to be made available online.”
Access for secondary uses operated on an opt-out basis until 2019: learners could choose not to share their PLR data with some third parties.
The Education and Skills Funding Agency (ESFA) an executive agency of the Department for Education (DfE) published a privacy notice in 2019 to explain how it uses and distributes learners’ personal information.[168] Access granted to Individualised Learner Record data is routinely published by the DfE on its website for DfE external data shares alongside the distribution of data from the National Pupil Database.
We asked in early 2019 for more details of how these data are processed.[169] Subject Access Requests can be made by submitting a form to the Skills Funding Agency.[170]
In January 2020 the Sunday Times reported[171] that the LRS had been accessed by data intelligence firm GB Group – whose clients include 32Red and Betfair among other gambling companies. The data contains names, ages and addresses of young people aged 14 and over in schools and colleges across England, Wales and Northern Ireland.
FE Week[172] reported that the “education training provider” which “wrongly provided access” to the LRS was Trustopia, a firm co-founded by Ronan Smith in August 2018. Prior to co-founding Trustopia, Smith ran a private provider called Edudo, which was investigated by the ESFA in 2017. The agency subsequently terminated the firm’s contracts, which were used to deliver courses funded through advanced learner loans, according to FE Week.
Under the standard admissions process for maintained Primary and Secondary schools, parents submit an application form (the School Application Form) to their Local Authority, listing their preferred schools for their child’s education. Supporting reasons can be given to indicate why they feel their child’s attendance at a particular school would be more beneficial than their attendance elsewhere. In cooperation with other Admission Authorities, the LA is responsible for processing all the submitted preferences and for ensuring that each applicant is issued with an offer of a school place. Parents are then advised of this offer.
Each March, schools receive an Admissions Transfer File (ATF) from their Local Authority. This file contains a list of children who have been offered a place at the school to start in the following September. Children will have been sorted and ranked using a school’s specified admissions criteria. Importing this file of basic personal data enables schools to finish their admissions process and begin their curriculum planning for the forthcoming academic year.
At primary level this includes Name, Gender, Date Of Birth, Address, Ethnicity, SEND and Language details, Contact details, Date of Admission, Enrolment Status, their current Application Status and their Unique Pupil Number (UPN). If an applicant does not have an existing and permanent UPN, it is assigned at this stage.
Imports begin by validating a child’s address. House names and numbers are held in a single field of the XML file structure, called a PAON (Primary Addressable Object Name). Addresses that the system can reconcile without user intervention will not be shown; those that cannot be resolved automatically will be displayed, and a best guess option may be shown.
Application Reference Numbers (ARNs) play a key role in the matching of applications. An applicant already in the system after the import of an ADT file or previous ATF file or manual updates from perhaps an onsite nursery, might match with an applicant currently being imported. If their ARNs are the same, the system will automatically match them and their details will not be displayed.
The system also checks for any matching Contact details e.g. any applicant in the incoming ATF file who shares a contact who is already present in the system.
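As an illustrative sketch of this matching behaviour (element names and values are invented; the real ATF schema is not reproduced here):

```python
# Illustrative only: applicants in an incoming ATF whose ARN already
# exists in the system are matched automatically and not displayed.
import xml.etree.ElementTree as ET

existing = {"ARN-2021-0001": {"name": "Sam", "paon": "12"}}  # invented

atf_xml = """
<ATF>
  <Applicant>
    <ARN>ARN-2021-0001</ARN>
    <Name>Sam</Name>
    <Address><PAON>12</PAON></Address>
  </Applicant>
  <Applicant>
    <ARN>ARN-2021-0002</ARN>
    <Name>Alex</Name>
    <Address><PAON>Rose Cottage</PAON></Address>
  </Applicant>
</ATF>
"""

for applicant in ET.fromstring(atf_xml).iter("Applicant"):
    arn = applicant.findtext("ARN")
    if arn in existing:
        print("auto-matched, not displayed:", arn)
    else:
        print("needs manual resolution:", arn, applicant.findtext("Address/PAON"))
```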
During the admissions process, a school will require sight of one of the following types of documentation:
Full certificate; Birth certificate short form; Certificate of registry showing given names and family name; GRO copy; Adoption Order issued by the High Court, County Court or Juvenile Court; Certificate of adoption issued by the GRO; Foreign birth certificate issued by registration authority of the foreign country.
Or one or more of the following Secondary certificates: Certificate of Baptism; Marriage Certificate; National Health Service Medical Card; Child's Certificate of Vaccination; Child's Health Record Card; A certificate of Service in HM Forces or other employment under the Crown or in the Mercantile Marine; A certificate of membership of a Trade Union Friendly Society or any cards or papers relating to membership of an Approved Society or Unemployment Insurance Apprenticeship indentures; Early certificate or testimonial from employer; Aliens registration card, certificate of naturalisation, Home Office travel document or a passport; Life insurance policy; Certificate of confirmation; School certificate or report; A birthday book or old family record; Family Bible containing a record of birth.
If the admissions process is later than starting primary for the first time, more data will be pre-populated, such as Key Stage 1 teacher assessments, and transferred via the generic school-to-school file transfer facility (S2S).
Schools that accept international pupils/students requiring a visa need to manage these applications during admissions using the Tier 4 functionality, which handles applications from international pupils/students. If this functionality is enabled, an application’s CAS Reference (Confirmation of Acceptance for Studies) can be recorded in SIMS, to comply with the UK Border Agency’s traceability requirement to link a CAS Reference with a visa number; if it is not enabled, it is not possible to record this information.
The common transfer file (CTF) is the mechanism whereby schools transfer “common transfer information” to a subsequent school when a pupil leaves, as specified in the Education (Pupil Information) (England) Regulations 2005 Statutory Instrument (S.I.) and subsequent amendments in the Education (Pupil Information) (England) (Amendment) 2008. XML data transfers are also used for transferring pupil data within the education sector for pupils who are not moving school, for example when sending selected assessment records to the local authority or central government.
All schools maintained by a local authority in England are required to send a CTF to the new school when a pupil ceases to be registered at their school and becomes a registered pupil at another school in England, Wales, Scotland, or Northern Ireland. Schools maintained by a local authority include all phases (e.g. nursery, primary, secondary) and types of schools (e.g. special schools and pupil referral units (PRUs)). PRUs are legally defined as schools, and so references to schools regarding the CTF should be regarded as applying to PRUs as well as to other schools.
Academies (including free schools) on the other hand, are only strongly encouraged — but not obliged — to send CTFs when a pupil leaves to attend another school.
The Common Basic Data Set (CBDS) Excel workbook provides details of the data items included in each container and related code sets.
According to the 2020 Department for Education CTF documentation[173], a CTF will comprise a number of containers for data groupings. A CTF may include one or more pupils, for example if a year group transfers at a standard stage of education to a single local school. All of the containers should be included in a CTF where data is available in the system for the pupil(s) for whom the CTF is being generated.
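A minimal sketch of generating such a file follows; the container and element names are invented for illustration and are not the official CTF schema (the real data items are defined in the CBDS):

```python
# Illustrative only: a CTF-like XML file built as a set of containers,
# one or more pupils per file. Element names and values are invented.
import xml.etree.ElementTree as ET

ctf = ET.Element("CTFile")
header = ET.SubElement(ctf, "Header")
ET.SubElement(header, "SourceSchool").text = "8012200"  # hypothetical
ET.SubElement(header, "DestSchool").text = "8014321"    # hypothetical

pupil = ET.SubElement(ctf, "Pupil")
ET.SubElement(pupil, "UPN").text = "A801220017042"      # hypothetical
ET.SubElement(pupil, "Surname").text = "Example"
# Every container for which the system holds data on this pupil
# (attendance, assessment, SEN and so on) would be appended here.

print(ET.tostring(ctf, encoding="unicode"))
```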
To ensure continuity of records, each child is allocated a Unique Pupil Number by the school and an additional Unique Learner Number that is centrally assigned from age 14.
Both numbers last a lifetime, joined up right through Further Education, Adult Education and evening classes.
Some data items do not match across the different parts of the UK education sector, where education is a devolved matter. For example, where ‘SEN type’ or ‘Ethnicity’ codes are transferred between Welsh and English schools and the codes for the sending country do not match those for the receiving country (marked as ‘no match’ in the Welsh CBDS), the data should not be imported. Service Child and Disability are optional fields and do not apply to Wales.
Not every piece of information that is held in the CTF for transfers about a child between schools is also sent to the Department for Education.
The 2017 Education Select Committee Enquiry into Primary Assessment produced a detailed report on how children at primary school are assessed, a system that changed in 2016. New tests were introduced to reflect the new national curriculum, first taught in 2014, after national curriculum ‘levels’ were replaced with a new measurement of attainment. According to the report, the Standards and Testing Agency did not oversee the implementation of the new assessment system in 2016 effectively, with guidance delayed and test papers leaked online. This caused significant disruption in primary schools, as schools felt there was too little time to implement effective new assessment systems and prepare teachers and pupils for SATs.
“The design of the new tests was also criticised, particularly the reading and writing assessments. One issue with the writing assessment is the focus on technical aspects, like grammar and spelling, over creativity and composition. We are not convinced that this leads directly to improved writing and urge the Government to reconsider this balance and make spelling, punctuation and grammar tests non-statutory at Key Stage 2. There are also questions over the appropriate role of teacher assessment within the assessment and accountability system that the Government should explore.
“While the new assessments were being introduced there was little additional support offered to schools to implement new assessment systems to cope with ‘life after levels’. Primary school teachers only receive limited assessment training during initial teacher education and must have access to continuing professional development on assessment, as well as high quality advice and guidance on effective assessment systems.
“Assessment is closely linked to the accountability system in primary schools, with Key Stage 2 results used to hold schools and teachers to account on the progress and attainment of pupils. However, the high stakes system can negatively impact teaching and learning, leading to narrowing of the curriculum and ‘teaching to the test’, as well as affecting teacher and pupil wellbeing.
“The stakes should be lowered at primary school in order to combat some of these negative impacts. Performance tables should include a rolling three-year average of Key Stage 2 attainment and progress data to reduce the focus on an individual year’s results. Ofsted must also ensure that it inspects the whole curriculum, not just English and maths, and does not focus too heavily on Key Stage 2 data. We support the introduction of an improved progress measure, but the Government must be cautious if a baseline measurement is introduced. It should be designed as a diagnostic tool to help teachers identify pupils’ needs and must avoid shifting negative consequences of high stakes accountability to early years.”
The purpose of primary testing is said to be accountability in primary schools, with Key Stage 2 results used to hold schools and teachers to account for the progress and attainment of pupils.
But the impact on individual children is two-fold and largely ignored by central and local governments.
Firstly, there are the effects that high stakes testing has on a child in and of itself; secondly, the indirect effects on the child of what the testing does to teaching throughout each year with a test.
The Parliamentary inquiry[174] into Primary Assessment in September 2016-17 concluded that “the high stakes system can negatively impact teaching and learning, leading to narrowing of the curriculum and ‘teaching to the test’, as well as affecting teacher and pupil wellbeing, due to increased pressure and workload as a result of statutory assessment and accountability.”
Russell Hobby commented on the value of the data compared with the harm done in testing, saying, “Given the importance of loving reading as an outcome of primary school, to have that summed up with a test of reading, which you feel like a failure of, does more harm than all the value of the data that we could collect from that”.
Despite the understanding that these tests create a great deal of data and workload, and questions around the effect of the testing on children, the inquiry did not look beyond data collection to address the effects of reuse or distribution of that data, including any effects on children’s development and broader outcomes.
The reuse of SATs data as individual measures of attainment has been understated by Ministers and is overstated by the users of the data.
The exam standardisation process in 2020 fell apart when it became apparent how historic cohort data had been used in assigning exam grades for GCSE, A and AS Level results and, to a lesser extent, BTEC and vocational courses.
In a normal year, this standardisation usually includes use of cohort data from children’s Standard Assessment Tests (SATs) taken at age 10. This plays an opaque role in GCSE standardisation and any enquiry into the 2020 fiasco should also investigate whether it is suitable to repurpose these tests at all for this purpose five years later.
The use of SATs scores to support system integrity at GCSE is inextricably linked to the role of SATs in the accountability system all through secondary school.
Families are told not to prep children for SATs. They are told SATs are not a measure of individual attainment and do not matter. The Schools Minister, as recently as 27 May, told the Education Select Committee[175] that SATs are an accountability measure for the school system.
“SATs are not qualifications for young people. They’re not like GCSEs or A-Levels that really do affect young people for the rest of their lives. SATs are a form of accountability for the school system.”
But we believe that they do have a significant effect on young people because SATs scores are used to profile state educated children with progress measures, predetermine a child’s educational path, and predict GCSE grades, as well as being used in the results standardisation[176]. An Ofqual 2017 video demonstrates how it is done.[177]
Some families buy pre-SATs tutoring. If some children score better, it may be affluence not ability that weights the prediction of their GCSE results. Privately educated children do not sit SATs at all, so what is the effect of that missing data on results weighting?
The data from the testing last a lifetime for children. They are made accessible to a wide range of third parties beyond the child’s knowledge and school lifetime.
Given the significance of carrying out assessment, it is a big gap in teacher training that, as Dr Becky Allen told the Education Select Committee Enquiry on primary assessment in 2017, “we do not have a system of training for teachers that makes them in any way experts in assessment”. Some schools had resorted to buying commercial options of varying quality, as described by the Association of Teachers and Lecturers, concerned about several dubious “solutions” commercially available to schools which do not offer value for money or a high-quality assessment framework.
Without teacher training in statistics and understanding bias and data discrimination, teaching staff are likely to inadvertently perpetuate any historical bias in the data they have to interpret.
It was proposed in 2017 that the risks “of schools purchasing low-quality assessment systems from commercial providers” were to be mitigated by high quality advice and guidance, rather than a change of policy and practice. That recommendation from the Enquiry into Primary Assessment has not been realised.
The exams process has never been under as much media scrutiny and public criticism as in August 2020. But there is limited understanding outside of the teaching profession that standardised testing begins in the Early Years. And even within the teaching profession, our discussions with staff suggest that few are informed that every assessment and test ends up in a named national pupil data collection, or is collected by the Department for Education into a single, longitudinal record.
The personal data and outcomes from all these tests are added to data collected in various census collections, and are linked together to create the National Pupil Database.
The standardised assessments and testing and census collections start as soon as a child enters the state education system, routinely at age 5, or earlier aged 2-4 in the Early Years.
There is so much data collected in and through a school that any users at local level in terms of staff or school governors, or within Multi Academy Trusts or Local Authorities, looking at it for comparisons of school performance against other schools, first need to understand the language, abbreviations, codes and groupings to make any sense of most of it. It’s pretty much impenetrable for families.
When looking at school performance data in tables, in the Analyse School Performance (ASP) system, the Ofsted Inspection Data Summary Report (IDSR), or commercial benchmarking company charts and reports such as those from the Fischer Family Trust (FFT), figures are broken down by pupil characteristics grouped together.
The main pupil groups are characterised by: Disadvantaged (with breakdowns by pupils in receipt of Free School Meals, pupil premium, Looked After Children, and Service Families); Gender; SEN (special educational needs), further split into the two sub-categories SEN Support and EHCP (Education, health and care plan) (nationally, 12.2% of primary pupils have SEN Support whilst 1.3% have an EHCP/statement); EAL (English as an Additional Language); Mobile Pupils, meaning children who joined the school during primary years 5 or 6; Term of Birth, because weight is given to the assumption that summer born pupils tend to do less well; and Prior Attainment.
“Broadly speaking we are dealing with four key types of data: threshold measures (percentages achieving expected or higher standards), average scores, progress scores, and absence figures. Officially, we only have average scores and progress scores at KS2, although your school's internal data may have other measures you can track, including data from standardised tests. These categories may be further broken down, such as the 24 prior attainment groups used in the KS1-2 progress measure summarised into three groups: low, middle and high. Also note that Ofsted, in the IDSR, have a pseudo-progress measure for KS1 whereby attainment is broken down by start point based on Early Years (EYFSP) outcome.
“The default setting in FFT is VA (value added). This means that progress is measured in the same way as it is in ASP and IDSR. It compares each pupil's result to the national average result for pupils with the same start point, and scores should match other sources. When we look at group level progress data in FFT and focus on, say, disadvantaged pupils, the scores are value-add (VA) scores and will be the same as those calculated by the DfE. Using the VA measure in FFT, disadvantaged pupils' progress is not compared to disadvantaged pupils nationally; it is compared to any pupil nationally with the same prior attainment. Progress data is only contextualised if the CVA button is clicked, otherwise it is no different to progress data found elsewhere. The difference is in the attainment comparisons, where results are compared to those of the same group nationally.” (Pembroke, 2020)
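A minimal sketch of the VA calculation described above, with invented scores and start-point averages:

```python
# Illustrative only: value added (VA) compares each pupil's result with
# the national average for pupils with the same start point. Note the
# disadvantaged pupil is compared with ALL pupils at that start point,
# not with disadvantaged pupils nationally. All numbers are invented.
national_avg_by_start_point = {"low": 98.0, "middle": 103.0, "high": 108.0}

pupils = [
    {"name": "A", "start": "middle", "ks2_score": 105.0, "disadvantaged": True},
    {"name": "B", "start": "low", "ks2_score": 97.0, "disadvantaged": False},
]

for pupil in pupils:
    pupil["va"] = pupil["ks2_score"] - national_avg_by_start_point[pupil["start"]]

print([(p["name"], p["va"]) for p in pupils])  # [('A', 2.0), ('B', -1.0)]
```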
The Department for Education Pupil Premium funding is allocated on a named-child basis, to local authorities and schools to support disadvantaged pupils. There are premiums for deprivation, children of armed services’ families, and children under the care of the Local Authority, known as looked‑after children. Schools themselves determine how they will spend the Premium, which is a top up of the general per pupil budget, intended to support raising the attainment for the most vulnerable pupils or for additional pastoral support.
Another flag on a child’s record, ‘school dinners taken’, notes at pupil level the infants who take a school meal on school census day. This is used in the calculation of funding for universal infant free school meals. Schools are encouraged in Local Authority guidance to show ‘a sensible number’ even if it is not accurate to the pupil, given that circumstances on census day itself may not reflect a typical day.
To be eligible for free school meals, the maximum annual household income threshold is £31,500 within London or £28,200 outside London, applied after tax. This equates to £2,625 a month for London applicants and £2,350 a month for applicants from outside London; the threshold was significantly increased in 2020.
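The arithmetic, expressed as a sketch (this checks only the income thresholds quoted above; actual eligibility rules also depend on specific welfare benefits not modelled here):

```python
# Illustrative only: the quoted annual after-tax thresholds, converted
# to the monthly figures given in the text.
ANNUAL_THRESHOLD = {"london": 31_500, "outside_london": 28_200}  # GBP

def monthly_threshold(area: str) -> float:
    return ANNUAL_THRESHOLD[area] / 12

print(monthly_threshold("london"))          # 2625.0
print(monthly_threshold("outside_london"))  # 2350.0
```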
How to apply for pupil premium status varies across England. In some counties there is automatic enrolment when a legal guardian meets the necessary criteria in welfare support. In other places, the parent must apply via a local government portal. But the onus is generally on the parent to obtain confirmation and to notify schools of their eligibility for the Pupil Premium, even though schools may miss out on eligible funding where parents do not, whether through their circumstances or by choice, for example because they are concerned a child will be stigmatised by staff or peers as a result. One concerned parent spoke to us, for example, after finding their child’s exercise books labelled differently from other children’s, with PP on the cover.
Since the Eligibility Checking Service (ECS) is open only to Local Authorities, academy schools are unable to access it. Some academies have made arrangements with a local authority (often for a fee) for it to check eligibility via the ECS on their behalf.
Parents of pupils attending academies can use websites such as https://www.gov.uk/apply-free-school-meals to check their eligibility.
Pupils who have been eligible for free school meals (FSM) in the last 6 years, or have been in care at any point, or have been adopted from care, remain in what is called the Ever 6 category, indicating deprivation across the child’s school lifetime regardless of how long the child qualified as FSM eligible while the parent qualified for welfare support. Previously this group was referred to as pupil premium, and still is across many reports, in academia and in wider education community discussions. The ‘now eligible for FSM’ count will therefore differ from the cumulatively higher Ever 6 (or FSM6) count.
From the data held by the Department, in schools eligible for pupil premium funding, as at January 2019, there were 7,500,804 pupils of whom 1,850,305 were eligible for the FSM ever 6 element of the pupil premium and of those 659,871 were not currently eligible for FSM at that time.
In schools eligible for pupil premium funding, as at January 2020, there were 7,579,666 pupils, of whom 1,841,412 were eligible for the FSM ever 6 element of the pupil premium and of those 482,295 were not currently eligible for FSM at that time.
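Using the January 2020 figures above, the distinction matters arithmetically, as this sketch shows:

```python
# Illustrative only: January 2020 figures quoted above. The Ever 6
# count exceeds the currently-FSM-eligible count by those pupils who
# qualified at some point in the last six years but no longer do.
ever6 = 1_841_412
no_longer_fsm = 482_295

currently_fsm_eligible = ever6 - no_longer_fsm
print(currently_fsm_eligible)  # 1359117
```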
From April 2020 schools get £1,345 for every primary age pupil, or £955 for every secondary age pupil, who claims free school meals, or who has claimed free school meals in the last 6 years.
Schools should receive £2,345 annually for every pupil who has left local authority care through adoption, a special guardianship order or child arrangements order.
Local authorities get the same amount for each child they are looking after; they must work with the school to decide how the money is used to support the child’s Personal Education Plan.
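As an illustrative sketch of these rates applied to one school's hypothetical pupil counts:

```python
# Illustrative only: per-pupil premium rates from April 2020 as quoted
# above, applied to invented pupil counts for a single school.
RATES = {
    "primary_fsm_ever6": 1_345,  # GBP per primary FSM Ever 6 pupil
    "secondary_fsm_ever6": 955,  # GBP per secondary FSM Ever 6 pupil
    "left_care": 2_345,          # GBP per pupil who left care via adoption etc.
}

counts = {"primary_fsm_ever6": 40, "secondary_fsm_ever6": 0, "left_care": 2}

allocation = sum(rate * counts[key] for key, rate in RATES.items())
print(allocation)  # 40 * 1345 + 2 * 2345 = 58490
```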
The service premium is not part of the pupil premium as the rules to attract the service premium are different. The onus is on service personnel with children in state schools to notify schools of their eligibility for the Service Pupil Premium (SPP).
Schools get £310 for every pupil with a parent who:
This funding is intended to help with pastoral support.
Each of these categories is therefore important for a school to mark on a child’s record when it submits data in the school census, which is used to determine its funding allocations.
A child is defined[178] as a legitimate or legitimated child or step-child of either or both of the spouses/civil partners; a child statutorily adopted by either or both of the spouses/civil partners; a child of the family (a legal term meaning any other child who is being brought up in the household of the spouse/civil partners at their expense or was so being brought up immediately before the spouses/civil partners were estranged, separated by legal order, divorced or before their marriage or civil partnership was annulled or before the death of the husband, wife or civil partner). A child is deemed to be below the age of majority (18 years). If over that age, a son or daughter must be 24 years or under, unmarried or not in a civil partnership and in receipt of full-time education at a school, college or university (studying for a first degree only), or be out of full-time education for up to one year between secondary education and further education. The age limits do not apply to a son or daughter who is physically or mentally incapable of contributing to their own support.
Service children’s records may also include four extra pieces of data related to a child:
The Statutory Framework for the Early Years Foundation Stage explains how progress starts to be identified, ascribed and recorded:[179]
“When a child is aged between two and three, practitioners must review their progress, and provide parents and/or carers with a short written summary of their child’s development in the prime areas. This progress check must identify the child’s strengths, and any areas where the child’s progress is less than expected.”
Although the common core of personal data collected is the same throughout a child’s life, a child’s educational record soon becomes heavily based on opinion and on data ascribed by staff, school or Local Authority administrators. These are submitted to the Department for Education through the various national censuses and statutory testing multiple times a year. Pupils and parents are given no regular or standard access to these data, in any format or at any time, that would let them validate their accuracy or understand how they are used.
Schools, nurseries, childminders, local authorities and governing bodies are responsible for assessing the EYFS profile for children aged 5 and under.
This EYFS annual data collection process covers any early years provider, including maintained schools or private, voluntary and independent settings, unless the setting has received an exemption from the early years foundation stage framework.
The Foundation Stage Profile comprises seven Areas of Learning, containing a total of seventeen Early Learning Goals divided between them.
Children’s attainment data, which is collected through exams and tests later, starts in the Early Years through observation and teacher judgment. These opinions are collated and used to generate profiles benchmarking children against each other and used to compare schools’ achievement and accountability. Guardians are not asked for permission, and often do not get to see this information once returned.
The results are submitted as part of “statutory assessment and reporting arrangements” in the Early Years Foundation Stage statutory framework on a named individual basis at national level to be added to the National Pupil Database. Children under 5, rising 2s+, are not of compulsory school age. Guidance says, “EYFS providers may use any secure system to collect and submit profile data, as long as practitioners can record completed profile data for every child at the end of the EYFS.”
It acknowledges that assessment at such an early age may be difficult.
“The EYFS profile is an inclusive assessment. It can capture a wide range of children’s learning and development outcomes. Despite this, practitioners might find the observation and assessment of some children particularly challenging. If a child is given an outcome of ‘emerging’, it might not provide the full picture about that child’s learning and development at the end of the EYFS. In these cases, extra information should be included alongside EYFS profile judgements.”
“An EYFS profile should be completed once only, at the end of the year before the child moves into KS1.”
In the final term of the year in which the child reaches age five, and no later than 30 June in that term, the EYFS Profile must be completed and submitted for every child.
The assessment at this age is largely based on opinion, anecdote and observation.
“Moderation of the EYFS profile is a sampling process and not a method for checking each child’s attainment. LA moderation visits use discussion to check that teachers are making accurate assessments and applying them consistently. Evidence discussed includes: materials, knowledge of the child, anecdotal incidents, results of observations, information from additional sources that supports the overall picture of the child’s development.”
“Evidence doesn’t have to be formally recorded or documented. The extent to which the practitioner chooses to record information will depend on individual preference.”
All EYFS providers must give parents a written summary of their child’s attainment against the ELGs. For each ELG this must state whether the child is: not yet reaching expected levels (‘emerging’), meeting ‘expected’ levels, or ‘exceeding’ expected levels.
Although guidance says progress should be discussed with parents, it does not require the profile to be made available to them; it suggests showing it only where requested.
“Practitioners may wish to consider making the child’s profile available to parents as part of this discussion. If parents do ask to see a copy of their child’s profile, the provider must make this available.”[180]
The Profile must reflect: ongoing observation; all relevant records held by the setting; discussions with parents and carers, and any other adults whom the teacher, parent or carer judges can offer a useful contribution.
Each child’s level of development must be assessed against the early learning goals. Practitioners must indicate whether children are meeting expected levels of development, exceeding them, or not yet reaching them (‘emerging’). This is the EYFS Profile. This summary profile, alongside a collection of paper evidence (often stacks of post-it notes and photographs of behaviours, recorded social interactions, and positive and negative opinion), is shared with the receiving Year 1 primary school teacher.
Childcare providers are expected to carry out this check as a developmental step and it is closely tied to health. Local Authorities may encourage providers to put the written summary into a child’s red book so that health visitors can also access the information, assess measures of development, and design early interventions based on the pre-school staff assessment.
When a child transfers from one school to another, whether at a standard age of transfer to another stage of education (Early Years to Primary, Primary to Secondary) or to another school off-cycle, a core file of data is sent between institutions via the Admissions Transfer File (ATF) or Common Transfer File (CTF). The content included depends on stage and timing.
Under age 5, the Early Years Foundation Stage data is included in the transfer file for local use.
The summary EYFS Profile is also sent to the Local Authority and national Department for Education.
“Early years providers must report EYFS Profile results to local authorities, upon request. Local authorities are under a duty to return this data to the relevant Government department.”[181]
The national Reception Baseline Assessment (Baseline test) has, at the time of writing, been delayed until September 2021, but settings that wish to run it sooner are able to do so as early adopter schools.
The National Foundation for Educational Research (NFER) has been contracted by the DfE to develop, trial and pilot the new reception baseline assessment since May 2018.
It is a national test in maths and English, planned for the first six weeks of the autumn term. It is a task-based assessment, delivered in English, using physical materials that children can easily handle, such as plastic shapes and picture sequencing cards. The child is tested away from the class by the member of staff administering the test, who uses a computer tablet.
Scores are recorded out of 39; however, it is an adaptive test, so not every child is offered questions that would enable them to achieve all 39 possible marks.
As the assessment includes routing, the number of marks presented will vary from pupil to pupil. Routing, according to the Department for Education RBA framework 2020, is intended to prevent pupils from being presented with too many activities in which they are unlikely to be successful and to reduce the time required for the assessment and the possible loss of motivation that pupils may feel if they are unable to complete an activity. These routing rules are applied automatically by the online recording system. All pupils are presented with activities worth at least 22 marks.
At the end of the assessment, a raw score is recorded for each pupil. The data is not age standardised and there is no pass mark/threshold.
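The routing described can be illustrated with a sketch like the one below. The block structure, thresholds and scoring function are invented assumptions; only the 39-mark maximum and 22-mark minimum come from the published framework.

```python
# Illustrative adaptive routing, not the DfE's actual rules.
TOTAL_MARKS = 39    # maximum available across all activities
MIN_PRESENTED = 22  # every pupil sees activities worth at least this many marks

def run_check(blocks, pupil_score):
    """blocks: activities in rising difficulty as (marks_available, route_on) pairs.
    pupil_score(marks) -> marks the pupil achieves on that activity."""
    presented = score = 0
    for marks, route_on in blocks:
        got = pupil_score(marks)
        presented += marks
        score += got
        # Once the minimum has been presented, a pupil doing poorly is routed
        # past the remaining, harder activities.
        if presented >= MIN_PRESENTED and got < route_on:
            break
    return score  # raw score, not age-standardised; no pass mark applies
```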
Families will not be told the test scores or see the personal data before it is submitted to the NFER and the DfE, and schools will not get more than a summary statement. Results, and the personal data of each child, will be used to create or add to every child’s permanent national record in the National Pupil Database. This collection of personal data fails to meet basic data protection principles, including fair processing and explaining how the data will be used once in the NPD, and fails to tell families how their children’s rights can be exercised, including the Right to Object. It is highly likely to lead to a successful legal challenge.
When asked in a Parliamentary written question,[182] the Department failed to address this, but Ministers will not be able to claim they had not been warned. We have asked the NFER for their position and comment.
The criticism of Baseline made by Dr Guy Roberts-Holmes, Senior Lecturer in Early Childhood Education at the UCL Institute of Education, might be applied to the whole current accountability and benchmarking model: “It is a deficit model of what children cannot do and is disrespectful of young children’s (and teachers’) competencies, abilities and expertise.”
Alice Bradbury, of the UCL Institute of Education, gave us an interview in 2019, pointing to a growing body of research which suggests that children in primary schools and early years settings in England are increasingly ‘data subjects’, the source of large amounts of digital data (Bradbury and Roberts-Holmes 2016a; Roberts-Holmes and Bradbury 2017; Lupton and Williamson 2017; Williamson 2016).
“These data range from attainment information on all areas of the curriculum to biographical data about children and families, and are increasingly important to schools in a culture where ‘data is everything’ in terms of school inspection and the monitoring of performance. My research conducted with Guy Roberts-Holmes has found processes of ‘datafication’ in settings for children as young as two, and particularly in primary schools, an obsession with collecting data for the purpose of profiling and predicting attainment.
This ‘turn to data’ is spurred on by policy shifts which prioritise measures of progress, including the recent attempt to introduce a ‘baseline’ assessment into Reception classes for four- and five-year-olds, for the purpose of comparing the ‘output’ results of a primary school (Key Stage 2 Sats) with the ‘input’ as children begin school (Bradbury and Roberts-Holmes 2016b).
In the midst of this process, there are several questions which remain unanswered about where children’s data go, how they are stored, and who can access them and for what purpose. These questions are particularly relevant given the use of three private companies to produce and process the 2015 Baseline assessments. In the 2017 proposals to bring back baseline, the use of a single private organisation to process the data is included, making these questions about the journey of the data still relevant.
In the 2015 iteration of Baseline policy, three private organisations – Early Excellence, the Centre for Evaluation and Monitoring (CEM) and the National Foundation for Education Research (NFER) – became the approved providers of the assessment. Although it was non-statutory, 13,000 schools signed up to conduct a baseline assessment of their Reception pupils in their first six weeks of school, and chose from one of these three (Ward 2016).
The purpose of the assessment was not to aid teachers in their planning, but to define a starting point for each child in the form of a single number, so that when they came to be assessed at age 11, their progress could be measured.
These progress scores would then be aggregated and used to judge the performance of the primary school as a whole.
This assessment sat alongside the statutory Early Years Foundation Stage Profile, which is conducted throughout the Reception year.
As with the EYFS Profile, the Baseline assessments cover academic areas such as oral skills, reading, writing and basic mathematical skills, and personal and emotional development.
The CEM and NFER versions used tablet-based one-to-one assessments and some observation, while Early Excellence – by far the most popular choice - used teacher observation only.
The Early Excellence Baseline included teacher judgements on children’s readiness to learn, their physical development (such as their ability to use a toilet or undress themselves) and emotional competencies. Thus the data collected were quite personal in nature, in some cases. The data were then sent off to the providers and returned to the teachers with indications of how the children in their class compared to others across England. Early Excellence produced colour-coded tables of each child’s attainment across the assessment, labelling children as ‘exceeding expected levels’, ‘at expected levels’, ‘below expected levels’ or ‘well below expected levels’. In some cases this information was relayed to parents; CEM and NFER produced proformas for reporting to parents, while Early Excellence did not. The policy was scrapped in 2016 when a comparability report between the three providers found that the data were incompatible and could not be used to measure progress.
The current plans under consultation continue this policy of collecting data on children as they start school, but with two alternatives to the 2015 version. Firstly, only one company will be used to produce and process the data, to ensure consistency. Secondly, the data will be ‘black boxed’, until it is used for comparison seven years later. Given these plans and the experiences of the 2015 roll-out of baseline, there are a number of questions to be considered in relation to children’s data.
How did the private providers store and analyse the 2015 Baseline data? Do they still have it and can it be used by others?
How did schools store and use their own versions of the Baseline data they transferred to the provider? How will they do this in future?
Where will the data be stored when it is ‘black boxed’? How will it be made inaccessible, for example when children move school? This is important if the issue of engendering low expectations is to be avoided.
How will the data provided by teachers be processed and turned into a single number, and how open will this analysis be? Will it be automated?
Will schools need to ask parental permission for their children’s data to be sent outside of the school, given that many parents will assume their children’s attainment data is only shared with the local authority or central government?
Will baseline results be reported to parents and in what form?
How will baseline data be shared or compared within local authorities or multi-academy trusts?
Many of these questions could also be asked in relation to children’s data in other educational settings – in nurseries, children’s centres, later phases of primary schools – but the case of Baseline provides an interesting case study of how huge volumes of quite personal data flow from teacher to private provider and back to the school, with little parental awareness and processing practices which remain opaque.”
The late Professor Harvey Goldstein[183] was less than impressed with the standard of the RBA, and was generous enough to share his views and to encourage challenge.
“The response to your FOI about Baseline contains two incorrect statements. The first is that the baseline test is 'reliable'. Actually its reliability is likely to be low and the evidence that would allow us to assess that was one of the things redacted from the NFER report. It's a bit technical, but it is very clear that the way that the NFER will have calculated a reliability coefficient is flawed. I have been wondering whether to put an FOI to the DfE on this, but I will postpone until I have finished doing my own calculations on a reasonably comparable data set from Australia. Secondly they quote a paper by Burgess that, they suggest, shows that when Wales stopped publishing league tables performance went down (actually that whole paragraph is rather pathetic). This is simply untrue and we published a paper subsequently that showed why. They know about our paper[184] but choose to ignore it.”
The annual Year 1 Phonics screening week for pupils aged 5-6 takes place across England every June.
State-funded schools including academies, free schools and special schools submit pupil-level phonics screening check results to their geographical local authority before it is provided to the Department for Education.
The Year 1 phonics screening check is supposed to see if children have learnt phonic decoding to the government-mandated standard. Children are asked to read out 40 items with a teacher. These items are a mix of real words and pseudo-words. The teacher scores each item (1 mark for a correct answer) and totals the scores to make up the mark for the screening check. The threshold mark is only made available to schools after the end of the test window.
The phonics screening check threshold mark in 2017 was 32.[185]
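As a minimal sketch of the scoring mechanics described above (forty items marked 0 or 1 by the teacher, summed, and compared with a threshold released only after the test window; the 2017 threshold is assumed as the default):

```python
def phonics_outcome(item_marks: list[int], threshold: int = 32) -> str:
    """item_marks: forty 0/1 scores awarded by the teacher; 32 was the 2017 threshold."""
    assert len(item_marks) == 40 and all(m in (0, 1) for m in item_marks)
    total = sum(item_marks)
    # Pupils below the threshold are expected to retake the check in year 2.
    return "met the standard" if total >= threshold else "expected to retake in year 2"

print(phonics_outcome([1] * 33 + [0] * 7))  # met the standard (33 >= 32)
```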
The Department instructs schools[186] that pupils should not take the check if any of the following apply:
For those not excluded from the test, the same test is then sat by every child, regardless of their reading standard. Children who do not achieve the appropriate standard in year 1 are expected to retake the check the following year.
Teacher assessment data is also gathered for all year 1 pupils and for those in year 2 who did not meet the expected level in year 1. In addition to the basic pupil identifiers (as collected in other collections), two specific phonics test data items are collected: the child’s school year group and raw test score. ‘National Curriculum Year Group Actual’ and ‘Phonics Mark’ are the required components of the ‘Phonics Screening Check’ data collection. This differs from Key Stage One tests, in which the raw scores are first turned into scaled scores, and the actual raw test marks are not sent to the Department for Education.
The results of the Phonics Check are collected on a named individual basis, are sent by schools via Local Authorities to the Department for Education at the national level, and are added to the National Pupil Database. Local Authorities have access to these data as well. Schools use their management information system (MIS) to enter and export phonics data to their local authority in common transfer file (CTF) XML format, or as agreed with the local authority. The Department only accepts phonics screening check results created from software that can produce the information in the correct format to upload onto the COLLECT system.[187]
The following data items are submitted: pupil’s full name, Unique Pupil Number (UPN), date of birth, gender, national curriculum year group, phonics mark between 0 and 40, phonics outcome (used if the pupil has not taken the check for any reason, such as absence, leaving the school or not being able to take the test), Local Authority number, school establishment number, and academic year. These all become part of a child’s permanent record in the National Pupil Database. The data submitted are accessible via the COLLECT system to third parties within 6 weeks.
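For illustration, a pupil-level return containing the items listed above might look like the sketch below. The field names are our assumptions for readability; the real submission uses CTF XML (or a format agreed with the local authority) and the COLLECT system’s own schema.

```python
# Hypothetical record; names and placeholder values are illustrative only.
phonics_return = {
    "full_name": "…",
    "upn": "…",                  # Unique Pupil Number
    "date_of_birth": "…",
    "gender": "…",
    "nc_year_actual": "1",       # National Curriculum Year Group Actual
    "phonics_mark": 27,          # raw score, 0-40
    "phonics_outcome": None,     # only used where the check was not taken
    "la_number": "…",            # Local Authority number
    "establishment_number": "…",
    "academic_year": "…",
}
```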
In 2014 a group of academics wrote an open letter to the Secretary of State for Education (Michael Gove) asking that the Phonics Test should be scrapped, because “Officially, it is described as assessing text decoding skills. Actually, it is dangerously confused.” (TES, 2014)
“In addition, a “substantial minority” of pupils who did not achieve the expected standard on the phonics check went on to achieve the government’s expected level 2 in reading and writing in the Key Stage 1 tests a year later, the research found. (Walker et al, 2014, p10). This raises questions about the reliability of the phonics check as a measure of likely future literacy success.”
“Thus, the phonics check is methodologically flawed, undermines confidence of children, particularly some of the more able, is negatively impacting on how reading is taught and is an inefficient, not to mention expensive and time-consuming, way of assessing an aspect of children’s reading ability.
“It is time to abolish it.”
There is no evidence that the collection of Phonics Test data from around 800,000 children every year is necessary or proportionate at pupil level, or that the Department for Education’s use and distribution of the personal data outweighs a child’s rights and freedoms compared with alternatives such as collecting aggregated statistics or sampling data.
Phonics test data becomes part of a child’s permanent record in the National Pupil Database.
Further guidance on arrangements for the 2020/21 academic year will be published in September 2020. Following the Rochford Review, the government announced plans to introduce the engagement model. This is a new form of assessment for pupils working below the standard of the national curriculum tests and not engaged in subject-specific study. It will replace P scales 1-4 and will become statutory from the 2021/22 academic year.
Government guidance and a change of legislation published in August 2020, in light of the impact of COVID-19, show it intends the 2020/21 academic year to be a transitionary year to allow schools time to prepare for, and start embedding, the engagement model.[188]
Teacher assessment outcomes must be submitted at the end of key stages 1 and 2. Schools that have used the engagement model will be able to report against it, and schools that need more time to implement this change will continue to have the option to assess against P scales 1 to 4, for one final year.
How will pupils be assessed?
The engagement model has 5 areas of engagement:
The areas allow teachers to assess pupils’ engagement in developing new skills, knowledge and concepts in the school’s curriculum by demonstrating how pupils are achieving specific outcomes. They represent what is necessary for pupils to fully engage in their learning and reach their full potential.
How should schools use the engagement model?
The engagement model does not replace existing planning and assessment systems. The model encourages schools to measure each pupil’s progress independently, according to their individual profile of needs. It can also be used as a baseline tool to track and support ongoing progress.
The Key Stage One SATs test is due to be made non-compulsory in schools from 2023, when the government’s plans to test children with the Reception Baseline instead are fully rolled out.
In the Summer term of year 2, a child in state education must sit four Key Stage 1 SATs (Standard Assessment Tests) in English and maths. There are also two optional national tests in Spelling, Punctuation and Grammar, a total of six formal test papers.
The SATs are marked by teachers, then moderated by local authorities who sample 25% of schools each year. Currently, the results are then used to measure pupils’ progress between year 2 and year 6. Schools are then judged on the basis of these scores as an ‘accountability measure’. The Department also expects these to be used in teacher performance related pay.
Like the Key Stage Two SATs, these tests have acquired far more significance than originally intended: not only in terms of schools accused of trying to game results, teachers who feel compelled to teach to the test, or parents coaching children to improve results, but in the purposes the data are used for by third parties after the test data are submitted from schools to Local Authorities or to the Department for Education.
Key Stage SATs data becomes part of a child’s permanent record in the National Pupil Database.
The Standards and Testing Agency (STA) published information about the June 2019 national pilot[189] of the Multiplication Tables Check (MTC).
“It is a short timed, on screen, check of pupils’ ability to fluently recall their multiplication tables up to and including their 12×12 tables. Check form will comprise a maximum 25 questions with a 6 second time limit to recall and input answer with a 3 second gap until the next question.“
Pupils will be given 25 questions and will have only six seconds to answer each one.
The Data Protection Impact Assessment (DPIA) claims that the only personal data captured will be data that are already collected, which defeats any logic for a new data collection and is clearly a mistake. The guidance confirmed the check will capture scores, at pupil level, and more. Scores are not all that is missing from the data protection risk assessment:
New data will be created when details are linked to children marked as “not taking the check”, along with the reasons for that. Note that these reasons include “Just arrived EAL”.
The DPIA does not show that the processing is necessary and proportionate to achieve its aims, which are not clear; nor does it balance this pupil-level data retention at national level, which brings the individual no personal gain, against the individual’s interests, rights and freedoms.
The Statutory Instrument creating a new lawful basis for the processing came into effect on 1 May and 1 September 2019.[190]
Personal data collected at individual level in the Multiplication Tables Check (MTC) will become part of a child’s permanent record in the National Pupil Database.
The Key Stage Two SATs are far more significant for a child than parents are given to understand. Not only are the results retained indefinitely, used by Local Authorities and given away to third parties, but they are used to create reference scores for later examinations, the GCSEs.
The KS2 SATs English test was issued in a new format for the first time in 2016, based on the revised primary school curriculum issued in 2014. Children sit two papers with an increased focus on Spelling, Punctuation and Grammar (SPaG), and a third paper, a comprehension exercise (called a reading test).
The KS2 Maths SATs papers consist of three papers in total, covering arithmetic and number work.
Ready reckoners are used to calculate progress scores for pupils and schools, and transition matrices of GCSE subject grades and EBACC scores, by prior attainment.
If a pupil is missing a valid key stage 2 test outcome, then their key stage 2 teacher assessment outcome may be used instead.
When it comes to GCSE there is little awareness that KS2 scores play a role.
The whole premise of social mobility through better attainment is an impossibility given the current design of the exam grading system of comparable outcomes.
Because the system is designed to “align standards between exam boards in each subject and over time” it means that the distribution of the grades – the range of how many pupils achieve each of the available grades – is already determined from one year to the next.
For any gap in attainment to narrow, disadvantaged pupils would not only have to achieve higher grades than they did in previous years; an equivalent proportion of pupils elsewhere would simultaneously need to get lower grades than in past years.
Imagine that every child in cohort A in 2021 were 100% more intelligent than the 2020 cohort, due to a smart pill, or genetic selection in the birth year for intelligence, or simply because the quality of teaching was 100% better. The lowest-scoring pupils in 2021 would still have no more chance of getting more than the lowest grades, because the grades are spread across the range of achievement within the cohort. It would not matter that the same level of ‘intelligence’ earned a C (grade 4) in 2020; you could still only get a G, or grade 1.
The exam system is only a ranking of pupils’ attainment in relation to each other, not in relation to their intelligence, or even their attainment year on year.
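The point can be made concrete with a sketch. Assume, purely for illustration, fixed grade shares and two cohorts, one scoring twice as highly as the other; rank-based awarding hands out identical grades to both.

```python
# Invented grade shares; the mechanism, not the proportions, is the point.
GRADE_SHARES = [("9", 0.05), ("7", 0.15), ("5", 0.30), ("4", 0.25), ("1-3", 0.25)]

def award(scores):
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])  # best first
    n, grades, cut = len(scores), [None] * len(scores), 0
    for j, (grade, share) in enumerate(GRADE_SHARES):
        upper = n if j == len(GRADE_SHARES) - 1 else cut + round(share * n)
        for i in ranked[cut:upper]:
            grades[i] = grade
        cut = upper
    return grades

cohort_2020 = [40, 55, 60, 70, 85]
cohort_2021 = [s * 2 for s in cohort_2020]       # everyone doing twice as well
print(award(cohort_2020) == award(cohort_2021))  # True: identical grades
```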
Personal data collected at individual level in the Key Stage Two SATs become part of a child’s permanent record in the National Pupil Database.
England, Wales and Northern Ireland have several exam boards, with schools and colleges able to freely choose between them on a subject-by-subject basis. Currently, there are seven exam boards available to state schools:
The JCQ (Joint Council for Qualifications) is an organisation, independent of government, to provide a common voice and standards for UK exam boards. The JCQ is made up of AQA, CCEA, City & Guilds, Edexcel, OCR, SQA and WJEC. Among its roles, it devises standard rules for exams and publishes statistics.
According to the JCQ, “each year, over 25 million separate exam scripts and items of coursework are marked by nearly 60,000 examiners and moderators. Awarding Bodies will issue over 2 million A and AS level results and around 6 million GCSE results.
“Awarding Bodies have introduced online marking to greatly reduce the risk that a script could be lost or damaged in transit. The original script remains in secure storage at the Awarding Body during online marking. The marking of separate sections of a script may be done by multiple examiners simultaneously. Each examiner is able to focus solely upon the question(s) they have been allocated, rather than changing topics frequently, thereby improving the accuracy and consistency of marking. Removing the need to transport hard copy scripts during the marking process means senior examiners are now able to perform checks much more quickly, speeding up the marking process overall.”
The JCQ issues guidance for post-exam reviews and moderation.[191] Exam boards are not subject to Freedom of Information and some are obstructive in the level of transparency they are willing to provide on data processing.[192]
When asked for more information about their data processing practices, WJEC[193] provided web links to their Data Protection page on their website and Privacy Notice. Pearson EdExcel says that they have a process for subject access requests and that “a data subject can apply for one directly with us or can provide consent for us to disclose it to a third party on their behalf. (But) We do not have a policy to provide to you.” The Northern Ireland CCEA policy is in line with that of JCQ Post Results Services. Access to a candidate's script is available on request by their centre. Papers are not processed outside the UK. CCEA does not process data for machine learning purposes or training AI or other product development beyond responding to the candidate with a mark. While the CCEA pointed us to the JCQ website, it was not able to provide us with the policies as provided to candidates or students.
And while the boards agreed they have an obligation to notify data subjects of the processing activities “that we undertake and we already do this”, they referred us only to a link[194] in the JCQ General Regulations for Approved Centres that was defunct in 2018.
There is no transparency of exam paper processing as regards data processing:
a) in or outside the UK
b) for any secondary purposes
c) for use by whom
d) retention and destruction periods.
In 2018 EduGeek forum comments on exam boards and GDPR preparation highlighted the lack of preparedness apparent in the everyday school experience of some exam board interactions.
One forum user presented a case study: “In preparation for GDPR we are about to make USB pens read only for staff (bar the odd exception). Some feedback we have received is "what about when we send X materials to the exam board on a USB pen?"
I can't believe Pearson, AQA et al haven't started to think about this to ensure they are GDPR compliant. You would think they would have a secure method of digitally uploading materials to them rather than requesting things be sent in the post. Yes, we can encrypt the content on USB before it goes, but I would be interested to hear if anyone has had any communications from the Exam bodies on how they expect data to be transferred to them securely with the new GDPR rules?”
The lack of transparency and scrutiny framework for Exam Boards is a problem to be solved.
Not only do they award exam grades, but they can own the vertical market: from shaping GCSE course curricula, circumventing government policy[195] apparently without repercussion, through to delivering the courses themselves.
Because students have the option to take papers produced by different exam boards, and additionally because the difficulty of papers will naturally vary by small amounts from one year to the next, a process called grade ‘awarding’ is used on top of the marking, every year.
The key assumption in Ofqual’s comparable outcomes approach[196] is that if a group of students taking exams this year (a cohort) are of a similar ability to students taking the exams last year, the total number of students getting each overall grade should be roughly the same. The number of students achieving each grade is rationed.
To work out how the ability of this year’s cohort compares with last year’s students, the awarding organisations look at how well each group performed in previous national exams. For Years 12 and 13, this means looking at how well the group of students did in their GCSEs. For Year 11, they look back at how well the group of students performed in Key Stage 2 tests.
Prior attainment data is used to predict the total number of students expected to achieve each grade in that age group but is not used to predict what grade an individual student should get. If, after setting grade boundaries, the number of students getting a certain grade is significantly different from the predictions, exam boards must justify to Ofqual why they think the grade boundaries are valid. This means explaining why they think students have performed significantly better or worse than expected. This data isn't used the same way when setting grade boundaries for BTEC qualifications.
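A sketch of that cohort-level prediction step, with invented bands and historical shares, might look like this; nothing here predicts an individual’s grade.

```python
# Invented reference data: how pupils in each prior-attainment band fared historically.
REFERENCE = {
    "high":   {"7+": 0.60, "4-6": 0.35, "1-3": 0.05},
    "middle": {"7+": 0.20, "4-6": 0.60, "1-3": 0.20},
    "low":    {"7+": 0.05, "4-6": 0.45, "1-3": 0.50},
}

def predicted_counts(cohort: dict[str, int]) -> dict[str, float]:
    """cohort: entrants per prior-attainment band (e.g. from KS2 for Year 11)."""
    out = {"7+": 0.0, "4-6": 0.0, "1-3": 0.0}
    for band, n in cohort.items():
        for grade, share in REFERENCE[band].items():
            out[grade] += n * share
    return out  # expected pupils per grade, used to sanity-check grade boundaries

print(predicted_counts({"high": 100, "middle": 300, "low": 100}))
# {'7+': 125.0, '4-6': 260.0, '1-3': 115.0}
```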
GCSE data becomes part of a child’s permanent record in the National Pupil Database.
T Levels are technical alternatives equivalent to three A Levels and have been created in collaboration with industry. Providers must return data to the Department for Education for T Level students, which forms part of the ILR (Individualised Learner Record) and Autumn School Census data collections.
An industry placement – of at least 315 hours, or approximately 45 days – also forms part of the course, where students will build the knowledge and skills and develop the confidence they need in a workplace environment.
T Level programmes have a number of mandatory data components:
The International Baccalaureate (IB) Diploma Programme (DP) is accepted by universities and is described as a good preparation for university by the UK university and colleges admissions service (UCAS).
In summer 2020, the Norwegian Data Protection Authority stated its intention to order the IBO to redo the awarding of grades, on the grounds that personal data had been processed in an unfair manner and that that year’s IB grades were inaccurate. The case shared common problems with the UK awarding standardisation process, in that grades did not reflect students’ academic ability but drew on historical data and school data.[197]
It is worth noting that the English Baccalaureate (EBacc) is not a test, but an accountability measure in England. It measures the proportion of children who secure a grade 5 or above in English, Maths, Science, a humanity, and a language at GCSE. Arts subjects are not included.
The accountability system in England and, in the Local Authority maintained part of the system, performance-related pay both use individual pupils’ attainment to compare one school’s attainment with another’s.
Teachers may be expected to create predicted scores for each child for Key Stage tests and present these to Local Authority and similar Senior Leadership meetings ahead of SATs tests.
Pupil data from every statutory standardised test is sent to the Department for Education. From there, the pupils’ individual results are given to third party companies. The companies use the data for a range of purposes loosely within the topic of benchmarking. This profiling and benchmarking are used to identify children that can bring up or bring down a school’s overall attainment, and therefore performance measures, Ofsted rating, and public reputation.
According to James Pembroke, on his Sig+ blog, 5 things primary governors should know about data,[198]“Each pupil's KS1 result - their prior attainment or start point, is crucial to the value-add process. Each p-scale, level and sublevel in reading, writing and maths at KS1 has a point value, which enables the DfE to calculate a KS1 average point score (APS) across the three subjects for every child that has a KS1 result (note: pupils without a KS1 result are excluded from progress measures). Their KS1 APS is then used to place pupils into a prior attainment group (PAG), of which currently we have 24, ranging from pupils that were on p-scales at KS1 (pupils with SEND) up to pupils that were Level 3 in all subjects. There is even a PAG for pupils that were level 4 at KS1, but there aren't many pupils in that group.
“All pupils with KS1 results are therefore slotted into PAGs alongside thousands of other pupils nationally. The DfE then take in all the KS2 test scores and calculate the average KS2 score for each PAG.
“The result is a made-up metric distorted by the pressures of high stakes of accountability.” (Pembroke, 2020b)
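Under our reading of that description, the mechanics can be sketched as follows. The banding function and numbers are assumptions; the DfE uses 24 PAGs and national averages, not the within-sample average used here.

```python
from collections import defaultdict

def pag_for(ks1_aps: float) -> int:
    return int(ks1_aps // 1.5)   # illustrative banding, not the DfE's 24 PAGs

def progress_scores(pupils):
    """pupils: list of (ks1_aps, ks2_score) pairs. Returns per-pupil value-add."""
    totals = defaultdict(lambda: [0.0, 0])
    for ks1, ks2 in pupils:
        t = totals[pag_for(ks1)]
        t[0] += ks2
        t[1] += 1
    pag_avg = {p: s / n for p, (s, n) in totals.items()}
    # A pupil's progress is their KS2 score minus the average for their PAG.
    # Progress 8 at KS4 follows the same pattern, with KS2 as the start point.
    return [ks2 - pag_avg[pag_for(ks1)] for ks1, ks2 in pupils]

print(progress_scores([(15.0, 104), (15.5, 100), (9.0, 96)]))  # [2.0, -2.0, 0.0]
```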
The Secretary of State for Education, Damian Hinds, wrote in the i newspaper in February 2019 that “SATs are not public exams – they are tests, and there is a difference. These are not A Levels or GCSEs with results that count on an individual basis in the long term.”
At defenddigitalme we disagree. Some companies create predicted progress scores for each child. These machine-made predictions can count on an individual basis, shaping expectations of progress flight paths, and have direct effects on children as a result: both on those targeted for extra intervention, and on those left out because of the predictive scores that schools buy from companies.
“We blow them up to A1 and stick them on the staff room walls,” was how one teacher told us the benchmark profiles are used in their school, “so that staff can see the six children every year to put extra focus on,” and, implicit in that, whose outcomes will affect their Ofsted ratings.
Today there is no transparency for parents or pupils, as these progress score data are not generally given to families and there is no consistent guidance on doing so. Some schools and Local Authorities buy the analytics produced by companies and some do not. Whether this processing is necessary for the child, whether it benefits or discriminates against the child, and whether a child and/or parent could opt out of this profiling may need to be tested in the courts before a change is made; however, public and professional scepticism about the soundness and real value-add of Progress 8 at age 14-16 is loud.
Progress 8 was intended to measure the impact a secondary school has on a pupil’s performance across eight subjects. It uses the Key Stage Two results of pupils in their last year at primary school as a starting point.
Leckie and Goldstein (2017) concluded in their work on the evolution of school league tables in England 1992-2016 (‘Contextual value-added’, ‘expected progress’ and ‘progress 8’) that “all these progress measures and school league tables more generally should be viewed with far more scepticism and interpreted far more cautiously than they have often been to date.”[199]
Gaming of the system by primary schools or parents can affect the results for those pupils, and therefore the accountability measure of the secondary school’s “value-add”, where pupils do not go on to measure up to their expected attainment level between age 10-11 and the GCSEs taken at age 16. You cannot fairly judge secondaries by Progress 8, and it is likely that some primaries’ data are inflated through above-average test preparation by the school or parents, beyond what may be expected.
With regard to the effects this automated profiling has on a child as part of progress flight paths, we believe much stronger safeguards and transparency are required on their use. Teachers we know use these at face value and do not necessarily understand how the benchmark or predicted scores are reached. The likelihood of staff spotting errors in the analytics is therefore low. The lack of critical scrutiny before accepting the implications and the direct effects on children has become ingrained in the school system in England.
And the role of KS2 data in GCSE comparable outcomes is opaque to most people.
The Department for Education in fact explicitly tells schools in national guidance:[200] “There is no need for schools to share individual Progress 8 scores with their pupils”.
This instruction from the Department for Education to avoid fair processing, leads to unfair data processing practice by schools in breach of the first data protection principle.
Furthermore, when it comes to the lawfulness of processing, the 2017 Article 29 Working Party Guidelines on automated decision-making and profiling for the purposes of Regulation 2016/679 (GDPR) are clear that, while the recital 71 suggestion that such activity should not apply to children is not an absolute ban, controllers should not rely on the exceptions in Article 22(2) to justify it routinely.
The acceptance that gains in pupil test scores are the best available metric to measure teacher performance (Murphy, 2013) is premature without any assessment of primary Key Stage data quality: not only in terms of what is expected of the SATs data once delivered, but of whether the test design is fit for purpose, looking at automation bias and the self-reinforcement loops from parental tutoring and teaching to the test. All before one asks the simple question of whether a ten-year-old’s attainment, seen only on a single day and via a one-size-fits-all assessment, is appropriate to use as a system measure over years.
These SATs scores, when used as reference scores in GCSEs, then carry forward any in-built bias and discrimination into the data-driven expectations for GCSE. Worse still, they may also be linked to other datasets for interventions in individual lives by Local Authorities and their partner organisations; the data are increasingly given a weight they do not deserve and were never designed to carry.
Even at GCSE level, because of how exam courses have been designed to stratify some pupils into taking Higher and others into Foundation level and split exam entry, the system may be opaquely stacked against more people getting a good pass at a former C grade, now a 5.
If they are entered at Foundation level they must complete the paper and do well to get the top available grade (a candidate cannot be awarded higher than a 5 on the Foundation paper), whereas a weaker candidate entered at Higher level may complete well under 20% of the paper, taking more time over each answer, and still achieve the same grade outcome. It is a gameable design, albeit a gamble that relies on the teacher who enters candidates making a wise choice.
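A simplified sketch of this tiering, with invented grade boundaries, shows how the Foundation cap works; the real boundaries vary by board, subject and year.

```python
def gcse_grade(tier: str, fraction_scored: float) -> int:
    """fraction_scored: proportion of the paper's marks achieved. Boundaries invented."""
    if tier == "foundation":
        boundaries = [(0.85, 5), (0.70, 4), (0.55, 3), (0.35, 2), (0.15, 1)]
    else:  # higher
        boundaries = [(0.90, 9), (0.80, 8), (0.65, 7), (0.50, 6), (0.30, 5), (0.18, 4)]
    for cut, grade in boundaries:
        if fraction_scored >= cut:
            return grade
    return 0  # ungraded

print(gcse_grade("foundation", 0.88))  # 5: near-perfect Foundation paper, capped
print(gcse_grade("higher", 0.31))      # 5: under a third of the Higher paper
```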
Choice in the realm of American education, according to Moe and Chubb, was first proposed by conservative economist Milton Friedman and later promoted by Ronald Reagan. (Moe and Chubb, 2009).
But can the concepts of a free market system sit well within the framework of state education? If a family has only one catchment-area secondary school within their affordable housing and a reasonable driving distance, and that is the only one the school bus route will take the child to, how much choice does that family really have to choose the neighbouring school if the child cannot physically or safely get there?
This is a particular challenge for children with special educational needs whose choices are already limited where a school declines to offer the child a place.
Teachers can come and go, changing the quality or nature of a child’s experience and learning. Few families will choose to disrupt a child’s established routine and friendship groups by moving to a different school, even one not far away, because of a change of staff. The human social and emotional factors of change are not factored into the narrative of choice, which in education may be reserved for those already privileged with more choices in transport and parental time. The lag time in attainment changes for a school means accountability data do not affect families’ decisions in real time, and the variation between schools can be wildly misconstrued where numbers are small.
A case study in Groupcall’s guide on MIS data analytics[201] sums this up:
“Be careful how you tie in accountability and target setting to analytics. As soon as individual performance or Performance Related Pay is attached to simple data measures, they often become distorted and can drive unwanted behaviours and practices. Remember the old adage of Goodhart’s Law: ‘when a measure becomes a target, it stops being an effective measure’.”
In reality, managing the mechanism of the accountability system has itself become a target. Staff are obliged to meet its enormous appetite for data and for demonstrations for Ofsted, the regulator. Because its tasks take such an inordinate amount of time to create, gather and compare, and then to decide and assess their effectiveness over time, and whether to make any intervention based on the data at so many levels (classroom, school-wide, MAT-wide, Local Authority-wide and country-wide), the system measurement conflicts with delivering quality teaching and learning. The accountability system has taken on a life of its own, driven by the Department for Education and by the companies that benefit from the promotion of the data outputs they sell back to schools and authorities.
Teachers create the data, collate the data, clean the data, summarise, correct and submit the data, all as part of their taxpayer-funded role. Then the administrators at school, MAT, and Local Authority may do the same, before passing it on to the next stage. Then the same happens again once the data reach the Department for Education. Then the data are passed on, for free, to the data benchmarking and intermediary companies, who do it all again. And then they sell the ranked, rated and prettified data and information as data products back down the chain to schools, MATs, and Local Authorities, which pay to get their own data back again, in dashboards, and to compare with others across the country. Anyone exploring the practicalities of the process, leaving aside its politics, must see that it is vastly inefficient and costly to the State education system as a whole. We call on government to provide an assessment of the cost of the delivery of the accountability system at school, MAT, and national levels.
As Cathy O’Neil describes in her 2016 book, Weapons of Math Destruction, an entire industry thrives on this feedback loop in China and the US. But we should not for a moment think it does not exist in England. Families that seek tutoring to the test in effect widen the injustice between those who can afford coaching and tutoring outside of school and those who cannot. In a system in which cheating is the norm, following the rules amounts to a handicap. The only way to win in such a scenario is to gain an advantage and make sure others aren’t getting a bigger one. (O’Neil, 2016)
As Goldstein and Spiegelhalter pointed out as early as 1995, “this also raises the interesting issue of feed-back effects, namely that public knowledge of this kind which then is acted on may change the composition of institutions and hence directly affect their operation, so that the relationships formerly observed no longer hold.”[202]
“If students decide to choose A-level institutions (where a choice is realistic) on the basis of previous adjusted comparisons then those institutions with apparently 'better' results will have greater competition to enter and will therefore be able to exercise selection. For many this will change the characteristics of their students and the characteristics of the students of 'competing' institutions. If there are interactions between student characteristics and institutional policies then future adjusted comparisons will also change.”
Because the government mandates the measures, it is rarely asked whether an adequate legal basis for such data processing exists today — but the accountability of the organisation is built upon processing personal data of an individual child and this brings with it lawful obligations.
In theory, accountability makes teachers and schools that fail to reach expected standards suffer sanctions: performance-related pay, job loss, reconstitution, and reform through imposed academisation.
But Academies, Free Schools and independent schools do not have to abide by the School Teachers’ Pay and Conditions Document, the Conditions of Service for School Teachers in England and Wales - commonly known as the ‘Burgundy Book’, or other terms and conditions negotiated nationally for school teachers and support staff. They may make their own decisions concerning the level of teachers' pay. These outcomes as a result of academisation[203] are less frequently discussed than the ‘profits before pupils’ scandals.[204]
Academisation is the process by which maintained schools leave local authority control and oversight.
It is clear that if the pay and conditions negotiation framework is weaker in academy (non-maintained) schools, the collective power of teaching staff is undermined by this new model of educational system design. If pupils have not improved their attainment as a result of a new testing and accountability model, then one must ask whether the aims of the data-driven systems have been met. Were they designed in the best interests of pupils, or were they more about a political ideology designed to shift power away from teaching unions?
As Moe sets out in the essay ‘Has Ed Reform Failed?’ from his 2014 book, What Lies Ahead for America's Children and Their Schools, publishing school performance measures is part of the politics and reform of education. “There is heightened pressure on school districts and teachers to raise test scores and promote learning. State accountability systems regularly test students... But the reality is that the nation’s fifty-plus-one accountability systems do not actually hold anyone accountable.” (Moe, 2014)
The same appears true in England.
The intention of the GDPR to protect children from unexpected uses of their personal data includes a requirement not to make decisions about children based solely on automated processing, in most circumstances, where these will have a legal or similarly significant effect on them.
When profiles are posted on staff room walls to target certain children for teaching intervention, is that a significant effect? If it were not intended to have a significant effect, should schools bother doing the data analytics at all, given the monetary and intangible costs?
Children (or their families) must be told what will happen to their personal data, how and why it is processed and by whom, and what rights they have. Current Department for Education guidance on Progress 8 is at odds with the requirements of fair processing under current and future data protection law.
“The GDPR does not prescribe the format or modality by which such information should be provided to the data subject but does make it clear that it is the data controller’s responsibility to take “appropriate measures” in relation to the provision of the required information for transparency purposes.”[205]
By contrast, profiling of children is generally kept secret, and is not called profiling but ‘progress measures’.
“Transparency is a cornerstone of data protection law and crucial to help people not only understand how their information will be used to their benefit, but also the risks and safeguards, including any defaults that might impact on their privacy. Transparency is also key in helping people be aware of their rights and how to exercise them.”[206] (Walshe, 2019)
The author was able to obtain Early Years Foundation profiles and Key Stage profiles from both her Local Authority and directly from the commercial organisation that processed personal data about her own children for the purposes of data analytics, using Subject Access requests.
These were not made available by the children’s Early Years nursery or their primary school directly, nor by the Department for Education via Subject Access Request.
Analyse School Performance (ASP) is a secure system; everyone who has access must have a valid role within the delivery of education in a school (including governors), LA, MAT, Diocese, Ofsted or the Department for Education. Access is via Secure Access, the key secure single point of entry system used by the Department for Education to give external users in education institutions and Local Authorities access to various Departmental IT systems, and levels of access to reports and data are based upon the role and entitlement to see data. Organisations can only see pupil-level data about the pupils relevant to that organisation. The system is a replacement for RAISEonline, which provided similar functionality from 2005 to July 2017.
Under the Department for Education contract award, the accredited suppliers will receive performance data from the ASP service from 31 May 2017.
“The intention is that they will develop products that sit alongside ASP but provide value-add, innovative services targeted at schools and colleges to promote self-improvement and performance accountability.
“Take up of these services by schools is entirely optional, and other packages that support them to understand performance are available. It is for schools to choose how best to undertake their work on self-improvement and accountability, using whatever software or approach best meets their needs.”
The contract was awarded to 8 suppliers:[207]
Alkemygold Ltd
Arbor Education Ltd
Asset for Schools
Fischer Family Trust (FFT) Education Ltd
For Schools Support Ltd
Maze Education Ltd
National Consortium for Examination Results (NCER)
Software for Data Analysis Ltd (SDA)
The level of data accessible by each user group is documented in the attached ‘ASP user access model’.
ASP contains sensitive pupil level data, accessible to authorised users within that same school or organisation. The Department says that dynamic reporting does not allow users to create tables containing sensitive items such as FSM and CLA with other data items.
“Seeing small groups of pupils is however part of the intended functionality to support schools in ensuring all pupil groups make the right progress. It is for these reasons that the service has the terms of use, identity management and security levels that it does.
“ASP allows reports to be downloaded or printed. As part of the terms and conditions which a user signs during registration, they are made aware of their roles and responsibilities in using this data.”
Access to ASP is granted through a Secure Access ‘Approver’ – which means they are an authorised user within an organisation with additional responsibilities to create and delete user accounts. As of 16/02/2018 there were 62,195 approved users of the ASP with access to identifiable pupil level data. The table in the Annex details what kinds of users have what kind of access.
Because the purposes of national pupil database releases are not categorised or grouped in any way, we have had to make our own estimate of data re-use, using keywords in the summaries of the DfE third-party releases register. From 2012 to December 2019, our analysis estimates that around one quarter of data re-use was for benchmarking purposes.
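Our approach can be sketched as a simple keyword match over the register’s purpose summaries; the category keywords and example summaries below are illustrative assumptions, not our full method.

```python
CATEGORIES = {
    "benchmarking": ["benchmark", "comparison", "performance", "value-add"],
    "research": ["longitudinal", "study", "evaluation"],
    "commercial": ["product", "market", "customer"],
}

def categorise(purpose_summary: str) -> str:
    text = purpose_summary.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return category
    return "uncategorised"

releases = ["School performance benchmarking analytics",
            "Longitudinal study of pupil outcomes"]
share = sum(categorise(r) == "benchmarking" for r in releases) / len(releases)
print(f"{share:.0%} of releases estimated as benchmarking re-use")
```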
This chart is a case study example of what kinds of data are released in a typical Tier 2 data release, and of why the NPD dataset could be re-identified by combination with other knowledge.
This company applied for, and received, student- and exam-level data for Key Stages 2, 4 and 5. From the school name and exam results it may be possible to identify some individuals’ other data. For example, you know that in School A one child sat A-Level Biology. Perhaps several children sat A-Level Biology, but only one sat Biology *and* English. It is this combination of data factors that becomes easily identifiable when joined with other knowledge.
The national pupil dataset is a census, so you know that every child who was on roll at the school at a particular time will be in the data; you just need to find the one or two identifying features to find them in a targeted search.
Education data may include data that have been made public, such as exam results that are posted on Results Boards or in the press. Add into that, a rare subject, or rare combination, or child of a public figure at a known school, or perhaps MPs whose school name and year of leaving are on their Wikipedia page, and the entirety of the dataset released for an individual that included some simple institution identifiers may be increasingly easily exposed.
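To make that linkage risk concrete, the sketch below counts attribute combinations in a toy pupil-level extract: any combination that occurs once is unique, and outside knowledge such as a results board or a Wikipedia page then exposes that pupil's whole record. The column layout and records are our own hypothetical illustration, not the NPD schema.

```python
# Minimal re-identification sketch: count how many pupils share each
# combination of school and A-level subject set. A count of 1 means the
# combination is unique, so anyone who knows those facts from elsewhere
# can pinpoint that pupil's entire row. Records are hypothetical.
from collections import Counter

records = [  # (school, frozenset of A-level subjects sat)
    ("School A", frozenset({"Biology"})),
    ("School A", frozenset({"Biology"})),
    ("School A", frozenset({"Biology", "English"})),  # only one such pupil
    ("School B", frozenset({"Maths", "Physics"})),
]

combo_counts = Counter(records)

for (school, subjects), n in combo_counts.items():
    if n == 1:
        print(f"Unique at {school}: {sorted(subjects)} -- "
              "re-identifiable with outside knowledge")
```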
Repurposing SATs scores designed as a school accountability measure for any type of indicator, flag for inclusion in Troubled Families, or for risk-assessing an individual child for potential future interventions by Local Authorities is at best careless, may be dangerous, and should often be considered negligent: negligent because of the systemic unwillingness to examine the true nature of these data, their context and purpose at the time of collection from a child, and to understand that they are not fit for that purpose.
A vast amount of data about the same people can be extracted, and this is done with certain underlying assumptions and generalisations about reasons and outcomes related to absence and attendance, exclusions, and attainment. The final report of the project Data Scores as Governance: Investigating Uses of Citizen Scoring summarised one year of research by the Data Justice Lab at Cardiff University. It examined uses of data analytics in public services in the UK, with a focus on re-use of administrative data, and was concerned with the advent of data-driven scores that combine data from a variety of sources as a way to categorise citizens, allocate services, and predict behaviour, by Local Authorities and their partner agencies. (Dencik et al, 2018)
In its 144 pages, the word ‘child’ appears 108 times, and ‘school’ 37 times.
Its highlights include the case study of Camden Council, one of the first local authorities to have implemented a master data management system ‘to allow the single view of a citizen’. One of the key uses of the index is to enable fraud detection, such as validation of residency for accessing council services such as school places.
The key problem with much of this data is that it gives an invisible weight to a moral judgment or expectation about poverty. Parenting while poor is assumed to correlate with poor parenting.
“the disentanglement of data systems, and the breakdown of the type of data and variables used to make assessments, point to the necessarily abstracted and reduced constitution of social knowledge and lived experiences that data analysis depends on in order to algorithmically process information. This is significant in several respects. In the case of Bristol’s Integrated Data Analytics Hub, for example, it was recognised how data-driven risk assessments can only take account of risk factors such as school attendance, records of domestic abuse, etc. but cannot account for insulating ‘positive’ factors such as other types of social engagement or wider family networks that rely on contextual knowledge and unstructured information. Furthermore, whilst there are attempts to aggregate data to identify broader social issues that shape opportunities and challenges for families and individuals, household-level and individual-level data tends to personalise risk, privileging individualised responses over collective and structural responses. For example, we run the risk of measuring the impact of school absences but not the impact of school cuts. In other words, these systems, in their emphasis on correlation over causation, can individualise social problems by directing attention away from structural causes of social problems (Keddell, 2015; Andrejevic, 2017).”
The use of data collected in educational settings for algorithmic decision making and risk scoring in children’s social care is even more flawed where “models will potentially contain dangerous blind-spots”, as explained in greater depth in the section of this report on child protection and vulnerable children’s data.
The early years census is a statutory census that takes place every January. All local authorities must submit data about early years providers who receive direct government funding for 2-, 3- or 4-year-olds.
The individual child-level data collection from PVI settings is a statutory requirement on providers and local authorities through regulations under Section 99 of the Childcare Act 2006 and The Education (Provision of Information About Young Children) (England) Regulations 2009. This differs from the other collections, which are based on the Education Act 1996.
While Local Authorities validate and process the return through the electronic COLLECT system, many early years providers, childminders and nurseries are very small private businesses and can provide data to the Local Authority in all manner of formats, including paper. The business and technical specification outlines the requirements for a single data file from each EY setting within an authority. If there are, for example, 120 EY settings within an authority, then the Department will expect 120 files, each one containing a return from a setting. Each file must contain data as outlined in the specification in respect of content, structure, and format. (Early Years Census Guidance 2021)
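As a rough illustration of the one-file-per-setting rule, a Local Authority-side sanity check might look like the following sketch. The directory layout, file extension and helper function are our assumptions; only the expectation of exactly one return file per EY setting comes from the guidance.

```python
# Sketch of a Local Authority check before upload to COLLECT: the
# Department expects exactly one return file per early years setting.
# Paths and naming are hypothetical.
from pathlib import Path

def check_ey_returns(returns_dir: str, expected_settings: int) -> bool:
    files = sorted(Path(returns_dir).glob("*.xml"))
    if len(files) != expected_settings:
        print(f"Expected {expected_settings} return files, found {len(files)}")
        return False
    print(f"OK: one return per setting ({len(files)} files)")
    return True

# e.g. an authority with 120 EY settings should submit 120 files
check_ey_returns("ey_census_returns", expected_settings=120)
```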
The School Census as it is known today began in 2002. Its predecessor, the annual ‘Form 7’ census, involved only statistics for a school as a whole, in aggregate, not information about individuals. Between 2002 and 2007, the dataset was known as the Pupil Level Annual Schools Census (PLASC); comprehensive PLASC data was first collected in 2002, and since then the termly data collections have included individual pupil names.
The School Census is a statutory pupil-level data collection, collected each term, three times in any academic year, from all maintained (state-funded) mainstream schools in England. This includes nursery, primary, secondary, middle-deemed primary, middle-deemed secondary, and local authority maintained special schools. Pupil Referral Units (PRUs, for children who do not go to a mainstream school) also complete this school census, not the Alternative Provision census.
Further schools not maintained by the local authority also take part, and service children's schools abroad participate on a voluntary basis. Schools that are entirely privately funded are not included.
Under the fair processing obligations of data protection law, educational settings have a statutory duty to tell parents and pupils which data are optional, and what the data will be used for, before submitting them to Local Authorities or the Department for Education. In practice this does not happen.
Only four of the roughly 400 possible items submitted on each child across the censuses are optional: at the time of writing these are the only items which can be refused by a child or parent and must not be ascribed.
School Census data is used in part to allocate funding and to measure performance; the data is published online in various formats, and made available to third parties at pupil level from the National Pupil Database.
Some modules are collected for pupils no longer on roll, that is, those that left the school prior to census day:
• pupil identifiers
• pupil characteristics
• pupil status
• termly exclusions
• home information (for those pupils for whom post-16 learning aims are being submitted)
• termly attendance
• summer second half term attendance
• post-16 learning aims
• 16-19 bursary fund
The census dataset contains approximately eight million records per calendar year, and includes the pupil's personal data: name, home postcode, gender, age, ethnicity, special educational needs and free school meals eligibility, as well as educational history and attainment results.
The census also sends sensitive data to the Department for Education, such as absence, exclusions and their reasons, indicators of children in armed forces families, or indicators of children in care, or adopted from care. The data collected on children from age 2-19, three times a year, creates a "lifetime school record" of characteristics, testing and tracking, to form a single longitudinal record over time. This single central view of a child's personal confidential data and their educational achievement, behaviours and personal characteristics, is core to the National Pupil Database[208], a linked database controlled by the Department for Education.
The Department for Education also uses the school census to ask schools to provide establishment level information as well as pupil level information.
The data in this census are collected under section 83(3) of the Children Act 1989. The Unique Pupil Number is routinely processed by thousands of companies daily, since the protections around its distribution were loosened in 2010. We believe that there has never been any assessment of the risks created by the link between the CIN database and UPN distribution.
The children in need census is a highly sensitive child-level data collection. In 2018-19 there were four modules of data in the children in need census, each with its own set of data items.[209]
The census covers the financial year each year (1 April to 31 March).
It collects data on children referred to local authority social care services because their safety, health or development is at risk.
The number of records in a table does not necessarily correspond to the number of individual children. In the year 2018-19, the children counted in the annual period included 476,848 with a valid UPN and 231,993 without a valid UPN: a total of 708,841 records, but not necessarily that many individuals.
If a looked after child is adopted during the collection year and the child remains a child in need, the local authority must submit two separate unlinked records for that child. The second (post-adoption) record for that same child must contain new child identifiers:
• A new ‘LAchild ID’;
• A new UPN which should have been assigned to the child by the school. Do not record the child’s pre-adoption UPN under former UPN;
• Referral date will match the date of the court’s adoption order
Children going through adoption who have been adopted from care are automatically included in the children in need census. Step-parent adoptions are not in the scope of the children in need census.
Local Authority child IDs are assigned by the LA. Unborn children are allocated their own ‘LA child ID’. This must be a unique ID for each child (no longer than 10 characters), and it should be retained from year to year. The ‘LA child ID’ can only contain alphabetic or numeric characters. It must be the same ID that is used for other purposes, for example in the children looked after data collection return. Local authorities are free to choose their own child IDs according to the above format.
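Those format rules, at most 10 characters, alphabetic or numeric only, and stable from year to year, can be expressed as a simple validation check. The sketch below is our own illustration, not the Department's validation code.

```python
# Validate the 'LA child ID' format described above: 1-10 characters,
# alphanumeric only. Illustrative only.
import re

LA_CHILD_ID = re.compile(r"^[A-Za-z0-9]{1,10}$")

def is_valid_la_child_id(child_id: str) -> bool:
    return bool(LA_CHILD_ID.match(child_id))

assert is_valid_la_child_id("AB12345678")       # 10 alphanumeric chars: valid
assert not is_valid_la_child_id("AB-1234")      # hyphen not allowed
assert not is_valid_la_child_id("ABCDE123456")  # 11 characters: too long
```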
According to the Children-in-Need guidance a unique pupil number (UPN) is automatically allocated to each child in maintained schools in England and Wales. It is an identifier only for use in an educational context during a child’s school career and it is subject to data protection restrictions. However, there is no explanation of what those data protection restrictions are. This children in need census is also added to the National Pupil Database, with limitations on the transfer of assessment factors, such as opinions about drug misuse by other family members.
In addition, the dataset from each annual collection is retained by the Department as a discrete collection, both because of this subset of data that is not added and because unborn children, under-5s, and other children not in school may not have a Unique Pupil Number. As a result, the CiN census is not itself maintained as a longitudinal database, but a subset of its data becomes part of the National Pupil Database, a longitudinal database retained forever.
The total numbers of children held by the Department across each year are therefore not easy to collect, though we asked for these and the Department did their best to provide them.[210] We have concerns about which third-party organisations are approved users, about unclear accountability for the security of data after its release to third parties and for data destruction, and about whether such data should be entirely protected in 'sealed envelopes', restricted from distribution except in aggregate.
Information on the number of times the databases holding CiN data have been accessed is not held by the Department.
CiN data is held on a number of databases within the Department for Education, for the purposes of primary and secondary statistical analysis.
Ten people have access to the dataset that contains the most recently collected CiN data. This is restricted to the data collection team and management chain, including the IAO, who are directly involved in the data collection and statistical release and therefore need this level of access to carry out their roles.
After the main statistical release, the CiN data is opened up more widely, to colleagues who need to do secondary analysis and require access for the purpose of carrying out their role effectively. Twenty-three individuals have access to this dataset and a further 137 users have access to the CiN data that resides within the Pupil Data Repository (PDR) database, the primary means of accessing NPD data for internal DfE staff.
The Department says that access rights to the PDR and analytical databases are subject to continuous audit and review, with users who no longer require access being removed.
The figures above do not include those with access to the CiN collection on the department’s COLLECT data collection system, such as a small number of Data Collections Service Desk staff and the technical development staff who prepare and administer the collection. These individuals require access to the collection blade to carry out their roles, but are not users of the data.
In 2020-21 new data will be collected, about child-on-child and adult-on-child physical and sexual abuse factors. The data will be stored in the same way as the other assessment factors (to be collected by the Department between April and July 2021), and will be available to internal users through the datasets.
A record from the CiN census is matched into the NPD if there is enough identifiable or characteristic information (mainly UPN, but also gender and date of birth) available in that record which either:
a) identifies the child as already being in the NPD via its appearance in other data sources (which includes the school census and the early years census and 6 previous CiN-NPD datasets) and from which the existing pupil ID can then be attributed to the new record from the current CiN census year, or
b) involves creating a new pupil ID for the child when the child's characteristics cannot be accurately determined as currently existing in the NPD.
Hence, even though a record from the CiN census has been matched into the NPD (i.e., given a pupil ID), it does not necessarily mean that it can be linked to other data in the NPD if there is no other information attributed to that pupil ID from other data sources. This is particularly relevant for the CiN census as a new child can appear, and therefore be matched into the NPD for the first time, before they start school if enough characteristic information is available in their CiN record to create a new pupil.
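A rough sketch of the matching step described in (a) and (b): look up an existing pupil ID by UPN, fall back to characteristics (gender and date of birth), and otherwise create a new pupil ID. The field names and index structures are our simplification of the Department's description, not its actual implementation.

```python
# Simplified CiN-to-NPD matching: case (a) finds an existing pupil ID via
# UPN or via gender + date of birth; case (b) mints a new pupil ID.
# Field names and indexes are illustrative, not the DfE's implementation.
import itertools

_new_ids = itertools.count(start=1_000_000)

def match_cin_record(record: dict, by_upn: dict, by_chars: dict) -> int:
    """Return an NPD pupil ID for a CiN census record, creating one if needed."""
    upn = record.get("upn")
    if upn and upn in by_upn:            # case (a): known UPN
        return by_upn[upn]
    key = (record.get("gender"), record.get("dob"))
    if all(key) and key in by_chars:     # case (a): characteristics already seen
        return by_chars[key]
    pupil_id = next(_new_ids)            # case (b): new pupil ID
    if upn:
        by_upn[upn] = pupil_id
    if all(key):
        by_chars[key] = pupil_id
    return pupil_id

by_upn, by_chars = {}, {}
r = {"upn": "A123456789012", "gender": "F", "dob": "2012-03-04"}
pid = match_cin_record(r, by_upn, by_chars)
assert match_cin_record(r, by_upn, by_chars) == pid  # same child, same ID
```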
We believe that there is no necessity and proportionality for the data for Children in Need to be collected and processed by the Department for Education at national level on an individual child-level basis. This should instead be aggregated data.
There are privacy concerns too: those requiring local authority support are generally from poorer socioeconomic circumstances, so such interference becomes discriminatory, reinforcing which groups of people are routinely picked out as patterns in the data.
Taking a case study from the US: in December 2017, the Illinois Department of Children and Family Services scrapped a $366,000 program that used computer data mining to identify children at risk of serious injury or death. The agency's director called the predictive technology unreliable. DCFS Director Beverly Walker told the Chicago Tribune that her social care agency was suspending use of the Rapid Safety Feedback program because it "didn't seem to be predicting much."
"Predictive analytics (wasn't) predicting any of the bad cases," Walker said. This has been echoed in the UK in the research by the What Works Centre of algorithms and machine learning in children’s social care.
The risks include algorithms disproportionately selecting disadvantaged children of colour, or children with other profiled commonalities, for government intervention; automated decision-making replacing the judgment of experienced child social care professionals; and over-reliance on the system being right.
Before 1998, the SSDA903 collection covered all looked-after children. Between 1998 and 2003, the SSDA903 return covered only a one-third sample of children: those with a day of birth divisible by three. There was then an aggregate return, the CLA100 return, which was used to provide ‘fast track’ data and to gross up the sample data. In 2001, the AD1 (children adopted) returns were introduced, and the OC3 return (19th birthday) in 2002. From 2004, the CLA100 return was discontinued as the SSDA903 reverted to covering all looked-after children.
When it became a web-based data collection for the first time, the SSDA903 return also incorporated the AD1, OC1 and OC3 returns. The OC2 data (outcomes for looked-after children) was introduced in 2009, and the aggregate OC2 collection was discontinued the following year. Information on the timing of adoption decisions was added in 2010. The OC1 data was discontinued in 2012. In 2014, the OC3 cohort was expanded to include all former relevant young people and to collect information at their 19th, 20th and 21st birthdays. This was expanded further in 2016 to cover 17th and 18th birthdays.
Under the Children Act 1989, a child is looked after by a Local Authority if he or she is in their care or is provided with accommodation for more than 24 hours by the authority. They fall into four main groups, children who are:
(i) accommodated under a voluntary agreement with their parents (Section 20),
(ii) subject of a care order (Section 31) or interim care order (Section 38);
(iii) subject of emergency orders for the protection of the child (Section 44 and 46);
(iv) compulsorily accommodated. This includes children remanded to the Local Authority or subject to a criminal justice supervision order with a residence requirement (Section 21) or if they are likely to cause self-harm. ‘Section 25 children’ also includes children being remanded to local authority accommodation awaiting trial, or awaiting sentence following conviction (s23(1) CYPA 1969 and s21 CA 1989) or Children detained under PACE, who are helping the police with their enquiries, prior to either being charged or released. In all cases Social Services would be involved.
The ‘Updating Post Looked After Arrangements’ section of the census collects information regarding children on roll on census day who have left care through either adoption, a special guardianship order, a residence order or a child arrangement order. Children who have left care under one of the measures listed above are eligible for the post looked after element of the pupil premium. This funding is needed by the school, but the onus is on the parent/guardian to provide the information.
The DfE collects this information in each census, but administrators should only enter a pupil’s name if the persons with parental responsibility have self-declared to the school that their child was in care immediately before adoption, or is under a special guardianship order, a residence order or a child arrangement order. As part of the self-declaration process, parents must show the school evidence of the adoption/special guardianship/residence/child arrangement order.
The Alternative Provision (AP) Census is a Local Authority return. The data is at individual level and requires attendance patterns to be recorded, enabling full time equivalent calculations.
Our research indicated a great deal of inconsistency in how numbers were recorded in this collection. Some Local Authorities counted each child who spent any time in AP across the year only once, no matter how often they attended. Others counted the same child more than once if that child attended AP more than once in the year. Others counted the total full-time equivalent across the year. Some counted only children in the AP setting on the census day in January. We suggest this basic counting problem should be considered and assessed in any reviews of “missing children”,[211] as illustrated in the sketch below. We are not suggesting it is the cause of ‘missing children’ in the numbers, but we certainly think it is a contributory factor.
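The sketch below applies three of the counting conventions we observed to the same toy attendance records, showing how identical underlying data yields different headline totals. The records and field names are entirely hypothetical.

```python
# Three AP counting conventions applied to the same hypothetical records:
# (1) each child counted once, however often they attended;
# (2) each placement spell counted separately;
# (3) only children placed in the term containing the January census day.
placements = [  # (child_id, term)
    ("child1", "autumn"), ("child1", "spring"),  # same child, two AP spells
    ("child2", "spring"),
]

distinct_children = len({child for child, _ in placements})
per_placement = len(placements)
january_only = sum(1 for _, term in placements if term == "spring")

print(distinct_children, per_placement, january_only)  # 2, 3, 2
```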
An expansion of the Alternative Provision census starting in January 2018, adds further sensitive data to the National Pupil Database including pregnancy, physical and mental health, and a code for young offenders, as reasons for transfer out of mainstream education.
This trend is typical of the scope creep of data collection towards ever more individual and invasive data about children’s private and family lives. Until 2014, Pupil Referral Units were expected to enter the total number of teenage mothers expected to attend in census week; since 2018, however, the Alternative Provision census includes a reason for an individual child’s transfer from mainstream into Alternative Provision education, which includes ‘pregnancy’.
Following SI 807/2017, for the first time new labels including ‘young offender’, pregnancy and mental health will be added to named records. Parents and pupils are not offered any opportunity to object. Personal data in the National Pupil Database are given out for commercial use. Government appears to fail to understand the increased risks this puts children at.
The Department amended AP Guidance 2017-18 to lower the age group for which such data should be collected. "Within the AP census, pupils should be aged between 2 (as at 31 December 2017) and 18 (at 31 August 2017) - those pupils born between 01/09/1998 and 31/12/2015.”
As the instrument was subject to negative resolution procedure and did not amend primary legislation, no statement on human rights was included in the Statutory Instrument Explanatory Memorandum and no privacy impact assessment was carried out.
We are concerned about the privacy impact because, once collected, identifying data is added to the National Pupil Database and can be given out to third parties: 1,603 times as of December 2019, many for commercial re-use and even to journalists,[212] without a child’s or parent’s permission.
We appreciate the reasons why it is important [213] to understand these children’s life stories. We are concerned about how that data is collected, distributed, and never deleted. Every child has a right to privacy and confidentiality and this has been ignored.
Pupil Referral Units (PRUs) constitute one form of AP. Department for Education statistics indicate that there are 15,000 students in PRUs, an increase of 16% since 2011/12, but these organisations complete the School Census, not the AP census.
The duty of the Local Authority in Part 3 of the Children and Families Act 2014[214] is to have regard to supporting and involving children and young people and regard to the views, wishes and feelings of the child and his or her parent, or the young person; the importance of the child and his or her parent, or the young person, participating as fully as possible in decisions relating to the exercise of the function concerned; and the importance of the child and his or her parent, or the young person, being provided with the information and support necessary to enable participation in those decisions.
This duty is not met regarding their data management.
At the time of writing defenddigitalme is in the early stages of a judicial review of this expanded data collection and wider reuse of pupil data from the National Pupil Database.
Schools with a Sixth Form are required to provide details about Learning Aims once a year in addition to the School Census Autumn Return. The return collects information about Learning Aims that were undertaken during the previous academic year as well as those that were started or continued during the academic year.
Learning Aims are required for learners in National Curriculum Year 12 or above for whom the school claims post-16 funding from the Education Funding Agency (EFA). Learners are expected to be in National Curriculum Year 12 and above if the majority of their learning is at level 3 or above. Students in Actual National Curriculum Year Group 11 (or below) with the majority of their learning at level 2 are no longer included in the return.
Exam results are used to determine the QAN / discount code and are fed into the PLAMs data together with the individual-level data about a young person and their employment or course details.
The SLASC applies to registered independent schools only. SLASC is an annual, statutory census that takes place every January. Independent schools are expected to provide their contact details, the aggregate number of pupils on the attendance register, the number of teaching staff, the courses of study, the number of pupils with special educational needs (SEN), the number of pupils in local authority care, annual fees, details of accommodation, the number of staff members who have joined or left since the last census, and details of the school’s proprietors. In contrast with the state system, this data is not at pupil level.
In June 2016 the Department for Education expanded the school census to start collecting nationality and country of birth on every child in England.
Greg Mulholland MP on October 10, 2016 asked then Secretary of State for Education, the Rt Hon Justine Greening MP, whether the policy risked turning teachers into border guards.
In response, she said, “This is about making sure we have the right data and evidence to develop strong policy. That is a sensible approach, but it is important we respond to the concerns of schools that see additional numbers of pupils related to migration. We need to have a better sense of the stresses and strains, so that we can target resourcing effectively.”
The spokespeople from the Department, the School Standards Minister,[215] the Education Secretary,[216] the Lords spokesman[217] and press stories[218] all stuck to the same line since May 2016: “these data will not be passed to the Home Office. They are solely for internal DfE use for analysis, statistics and research.”
The School Standards Minister said in July 2016, “The collection of data on the country of birth and nationality of pupils will be used to improve our understanding of the scale and impact of pupil migration on the education sector, and provide the Department with a better evidence base for future policy decision making.”
At the same time, the age limit that had restricted collection of ethnicity data for under-5s was removed.
And the data collected about home addresses expanded, without any explanation, to include a Unique Property Reference Number (UPRN), which is generated ‘automatically’.
“This is a unique alphanumeric identifier for every address in Great Britain and provides a comprehensive, complete, consistent identifier throughout a property’s life cycle - from planning permission to demolition. For the 2016-17 school census the UPRN will be added as a voluntary data item to be returned alongside the full address. The collection of UPRN should be relatively “invisible” to data providers with the burden managed by the way addresses are processed within school systems. Schools are not expected to collect, or look up the UPRN and instead this would be generated automatically within the school system when entering addresses if using the OS AddressBase database as a reference.”
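In practice, the ‘invisible’ generation described above amounts to an address-matching lookup. The minimal sketch below is our invention: the normalisation rule and sample UPRN are hypothetical, and real MIS software would match against the OS AddressBase product itself.

```python
# Sketch of 'invisible' UPRN generation: normalise the address a school
# enters, then look it up in a local address register. The normalisation
# and sample UPRN are invented; real systems use OS AddressBase.
def normalise(address: str) -> str:
    return " ".join(address.upper().split())

ADDRESS_BASE = {  # hypothetical extract: normalised address -> UPRN
    normalise("1 High Street, Anytown, AB1 2CD"): "100012345678",
}

def lookup_uprn(entered_address: str) -> str | None:
    return ADDRESS_BASE.get(normalise(entered_address))

print(lookup_uprn("1 high street, Anytown, AB1 2CD"))  # 100012345678
```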
The failure to explain how this coding would be added to records, or that ethnicity would be sent to the national pupil database for the under-5s, was of less pressing concern compared with the explicit and more obvious risks of collecting country of birth and nationality.
Privacy notices issued by schools based on the Department for Education template failed to tell parents and children that the new data were optional, and parents reported being chased repeatedly to return nationality data. Common misunderstandings included that the data would be used to allocate funding (untrue) or that it was somehow connected to the allocation of resources for EAL children (also unfounded).
The data were optional, and parents and pupils could return a reply as ‘refused’, but schools were required to ask for the data. In effect it became an opt-out model, with children as young as 10, that we know of, put on the spot in classrooms to tell teachers if they were not British.
Schools Week reported in September 2016 that pupils who were not white British were told to send in birthplace data.[219]
Errors were compounded by poor forms and outdated software system design and practice, as well as what can at best be described as careless and insensitive classroom handling.
Common school information management systems used by thousands of schools across the country to record census information allowed staff to ‘ascribe’ a child's ethnicity where parents failed to provide it.
In our outreach with over 3,000 schools we mostly met surprise that data were being used for immigration enforcement at all, and often administrators were not aware that the data were optional. In only two instances did we meet school staff who agreed that school children’s data should be used for immigration enforcement, and in only one discussion with school teaching staff, at a research conference, did we hear the view that it should be used “to get them out of here as soon as possible.”
The vast majority of schools tried to meet their obligations to both tell parents and pupils data were optional and fulfil the data collection. However, the forms used were often poor. Parents and staff submitted examples on social media using the hashtag #FormFail in 2016-17 to highlight common failings to communicate choice, or misinformation on funding.
After 18 months of collection the data quality was poor. The Department for Education published figures based only on the January 2017 census data, which showed there were 8.1 million pupils registered at schools in England.
In April 2018 Schools Week broke the news that the Department for Education would no longer require collection of nationality and country-of-birth.
Monthly transfers of children’s data for immigration enforcement continue.
March 2013: Ministers plan an immigration crackdown on 'education tourists'.
August 2015: After the General Election, Nicky Morgan, Secretary of State for Education, announces a review of the impact of immigration on education, or 'education tourism'. There are concerns this is a reworking of the 2013 plans for a 'crackdown' on migrant families.
Nov-December 2015: The Star Chamber Scrutiny Board approved the change to expand the school census collection to include country-of-birth and nationality. The purpose and reason given is: "Whilst the department collects limited information on children who have moved into the English education system from abroad, the collected information is insufficient to ascertain the impact that such entrants have on the education system. The introduction of these data items will assist in the identification of such pupils and may facilitate the targeting of support to such pupils."
December 2015: FOI request to ascertain the size of the NPD. The total number of Unique Pupil Numbers (UPNs) in the NPD as at 28/12/2015 was 19,807,973 (ca. 20 million). The database holds pupil records since 1996, so many people in it are now adults who have never been told their personal data is given to third parties without consent.
December 2015: A revised Memorandum of Understanding (data sharing agreement), in place since July 2015, is finalised as v1.0 on December 18, to share the individual confidential data, including nationality "(once collected)", of up to 1,500 children a month from the school census with the Home Office. This did not become public knowledge until December 2016, and it was published on whatdotheyknow in February 2017. All references to the Home Office (HO) in the document refer to the work of UK Visas and Immigration, Immigration Enforcement and UK Border Agency / Border Force. The agreement signatories are from the DfE Data and Standards Analysis team and the HO Director for Interventions/Sanctions. The Border Force Removals Casework Team facilitates the data transfers. There is no public acknowledgement of this use of school census data by the DfE until late October 2016; its existence is not known in the public domain until our FOI requests are answered in August 2016, and its full extent is not known until December 2016.
2016
February 29: A later technical change notice includes language data expansion for all children.
May 4: The announcement was published "after a delay obtaining clearance". The Department instructs schools to collect the data for the 2016-17 school year census, and many schools begin before the school summer term ends, even before the Statutory Instrument is laid on July 21st.
June 23: Schools Week first reports the change and teacher concerns about the expansion to collect country-of-birth and nationality from every child age 2-19.
July 4: FOI request for the meeting minutes of the Star Chamber Scrutiny Board where the decision on census expansion was taken. The Secondary Legislation Scrutiny Committee was told by the DfE in a September letter that the board would have raised concerns before signing off had they had any. Release of these minutes is refused by the DfE (they are not released until November 2017, after appeal, shortly before the DfE withdraws from the Information Rights Tribunal).
July 6: defenddigitalme wrote to the Department raising concerns about forms failing fair processing and demanding data as if required for funding, about confusion, about duties under the Equality Act, and that the collection is not in the best interests of children but ‘to assess the education system’, with concern that it may be used for immigration purposes.
July 9: Schools Week reports that English as an Additional Language (EAL) professionals feel its implementation is rushed and poorly thought out: it was wrong to just “grab the [scales] from Wales”, without consultation.
July 25: Nick Gibb MP, Schools Minister, denies that the new data will be shared with other Government departments in written parliamentary question 42842, when asked whether the DfE consulted (a) schools and (b) parents on the new requirement to collect country of birth data in the 2016-17 school census; how parents are informed of their right to opt out of providing that information; for what purpose that information will be used; and whether that information will be shared with other Government departments. Nick Gibb said: "The data will be collected solely for the Department’s internal use for the analytical, statistical and research purposes described above. There are currently no plans to share the data with other government departments unless we are legally required to do so. The Department has not consulted directly with parents regarding the changes to the school census."
July 26: Nick Gibb says in another written parliamentary question, 42942, "The data will be collected solely for internal Departmental use for the analytical, statistical and research purposes described above. There are currently no plans to share the data with other government Departments", when asked what limitations will be placed on disclosure of such information to (a) other government departments and (b) private third parties, and whether her Department plans to change its protocols or processes for handling and disclosure of confidential information when country of birth data begins to be collected. Nick Gibb replied: "There are currently no plans for the Department to change the existing protocols and processes for the handling and disclosure of confidential information."
July 26: defenddigitalme wrote to the Minister for School Standards to ask how data would not be passed to other government departments following the claim made in the answer to Written Question PQ 42842.
July 27: Statutory Instrument 808/2016 laid before Parliament to expand the School Census collection via amendments to the Education (Information about Individual Pupils) (England) Regulations 2013 and the Education (Pupil Information) (England) Regulations 2005.
August 1st: In FOI request Ref: 2016-0032573 DfE reveals data sharing from the National Pupil Database data with the Home Office and Police: Since April 2012, the Police have submitted 31 requests for information to the National Pupil Database. All were granted, only 21 resulted in information being supplied. Since April 2012, the Home Office submitted 20 requests to the National Pupil Database. Of these 18 were granted. 2 were refused as the NPD did not contain it. (Note: The volume of Home Office requests, for over 2,500 individuals, will only be revealed on October 27, 2016).
September 3: defenddigitalme raised concerns on the Statutory Instrument to the Lords secondary legislation scrutiny committee. The Government responded to the Committee.
Sept-Oct: Schools scrambled to collect data in September before the on-roll census day on October 6. Parents shared dismay and confusion on social media. A grassroots campaign, Against Borders for Children (ABC), is launched by teaching staff, parents, civil society, and children’s, migrants’ and human rights supporters.
September 23: defenddigitalme writes to the Star Chamber Scrutiny Board to ask what they knew of the existing Home Office access when they approved country-of-birth and nationality data collection. After a reminder sent in November, an anonymous reply is received, which addresses none of the questions.
September 23: Schools Week reports: Pupils who were not white British told to send in birthplace data: “confusion could increase fears among ethnic minority families about how the data might be used, although education leaders say they have received assurances it will not be passed to the Home Office.”
September 26: over 20 rights' organisations under the ABC campaign umbrella write to the Secretary of State to call for the census expansion to be scrapped and urge a boycott by parents and schools of the optional collection.
September 26: The Department claimed in a BBC report that ‘these data will not be passed to the Home Office.’
September 29: defenddigitalme and Department meeting, we are told that no data sharing agreement is yet firm or in place, and it is under discussion. This was a lie. (See 2015)
October 6: School Census on roll day. Funded schools for children 5-19 start submitting 2016-17 census data including country-of-birth and nationality for the first time, among calls from parents for "this racist policy to be scrapped."
October 6: late afternoon, FOI Ref: 2016-0042333 confirms Home Office access to previously collected school census pupil data includes name, home and school address. The purposes of all cases within the Home Office requests "are a) dependant(s) of a parent/guardian who is suspected of an offence under section 24 or 24A of the Immigration Act 1971, or section 35 of the Asylum and Immigration (Treatment of Claimants) Act 2004 has been, or is being committed, or b) the child in question is an unaccompanied minor." "Each Home Office request includes details of the individuals concerned as held, to enable us to search for them in the NPD."
October 10: The Secretary of State, Justine Greening, insists in Education Questions that the collection is “about making sure we have the right data and evidence to develop strong policy.”
October 12: Lords questions on the census expansion. The Lords government spokesman, Viscount Younger of Leckie said, “I reassure the House that the information is kept within the Department for Education and is not passed on to the Home Office.” The government spokesperson confirms that there is no sanction for parents or schools who choose not to provide country of birth and nationality data in the census.
October 12: The Guardian reports a new data sharing agreement is “in place” with the Home Office.
October 14: defenddigitalme writes to the Secretary of State, Minister for School Standards and Department to object to the use of pupils’ home and school addresses divulged for deportation enforcement and absconder tracking of parents and adults through children’s data, and asks to see, by the end of the day, a copy of the new data sharing agreement that will restrict Home Office use of new census data items. It is not forthcoming; the press are told to use FOI.
October 18: Still in secret, version 2.1 of the original Memorandum of Understanding, in place since December 18, 2015, to share the individual confidential data including nationality "(once collected)" of up to 1,500 children a month from the school census with the Home Office, is signed off, amended effective October 7 to remove reference to 'nationality' data. This does not become public knowledge until December, and it is published on whatdotheyknow in February 2017.
October 23: Against Borders for Children campaign social media Thunderclap on Twitter reaches over 1 million.
October 24: The Independent reports that Brighton and Hove City Council tell parents that schools can override parents' and pupils' refusal of consent to provide optional ethnicity data, and that Heads can ascribe ethnicity.
October 27: The extent of Home Office access to pupils’ home addresses is published in PQ 48635, answered by Nick Gibb. This is the very first government admission outside of our FOIs and questions asked since May. “Between July 2015 and September 2016 [...] requests relating to a total of 2,462 individuals have been made by the Home Office to DfE and 520 records have been identified within DfE data and returned to the Home Office.”
October 27: The National Union of Teachers calls for this use of pupil data by the HO to end, emphasising that “schools are not part of policing immigration”.
October 28: Schools Week reports that Lord Nash wrote the new data will be kept separately due to its 'sensitivity'.
October 31: Home Office FOI 41221 confirms monthly access for immigration purposes. It appears that in addition to one request made in 2013 for 341 pupils, monthly access began in 2016. (see also PQ 48635 October 27)
October 31: The House of Lords agree to a motion of regret. Lords say, “this proposal has all the hallmarks of racism”, “Parents are upset, not just about how this information might be used but because these questions are asked at all,” and that “DfE denials of any ulterior motive do not sound convincing."
November 3: Schools Week reports former Secretary of State for Education Nicky Morgan “had to ‘fend off ideas’ from Downing Street”, including blocking plans to share the new census information, and that passing on the specific data to the Home Office was “not something we would want to see.” This calls into question the current Ministers’ honesty about the purposes of the collection.
November 3: Schools Week reports "MPs hope power of prayer will scrap pupil database changes".
November 16: Following the letter of September 26, representatives of nine human rights organisations (on behalf of all signatories) meet with civil servants and the Chief Data Officer at the DfE.
November 17: The Independent reports "Government scraps plans for controversial nationality census for 2-5 year-olds."
November 24: Right to retract data from autumn census. Lord Nash confirms that if parents have previously provided this information to schools and now wish to retract it, they should inform the school of this decision and the Department will remove any information collected on country of birth and nationality during the autumn census.
November 28: The UK Statistics Authority urges the Department for Education to set out plans for ongoing review, saying that maintaining public trust is essential, requiring an ongoing commitment to communications and transparency including “clear communication of purpose”.
December 1: The BBC reports Theresa May had a plan to 'deprioritise' illegal migrant pupils, and Laura Kuenssberg tweets that Cabinet Office letters reveal the collection of country-of-birth was agreed as part of this policy compromise.
December 15: The Department for Education released the Memorandum of Understanding with details of the agreement between the DfE and Home Office, via FOI. It reveals the intent was to give pupil nationality data to the Home Office. The original version, in place between 2015 and October 14, 2016, would share "(once collected) nationality" (15.2.6). This had been backdated and amended effective October 7, 2016 (the day after the first collection of the new data), apparently after campaign pressure from over 20 rights organisations and press scrutiny. The question remains unanswered whether this new census nationality data may be used for Home Office purposes within the DfE, even if the nationality data itself is no longer to be exchanged. Country-of-birth, language and nationality data could be accessed and used by the Department for Education's search algorithms in this bulk data analysis for immigration purposes, providing a higher matched rate of data to the Home Office as a result, and the MOU would still hold true.
2017
January 10: The Department for Education released updated guidance on the school census to schools with one week's notice before the Spring census on-roll date. By now, most schools had already collected the required data since the announcement in May 2016.
January 12: After the DfE had seemed unwilling since November to communicate the changes to schools, the campaign coalition under Against Borders for Children, supported by Liberty, writes to every school in England to tell them about the intent of the expanded census and encourage them to tell parents and pupils of their rights to refuse and retract data.
January 19: Spring census collects nationality and country of birth data for the second time, and ethnicity data from 2-5 year olds for the first time in the Early Years census.
February: The DfE declines to provide the numbers about nationality and country-of-birth data collected in the autumn census in response to parliamentary question 62925 citing future plans to publish it.
April: The NUT conference supports motions opposing nationality and country of birth collection and calling for more information to be given to schools and parents.
May 18: Summer School Census on roll day. Nationality and country-of-birth collected by DfE for the third time.
May 18: The Information Commissioner's Office issues a Decision Notice on FOI in favour of the release of the Star Chamber Scrutiny Board (SCSB) meeting minutes from November 2015. Parliament had been told in July 2016 that the SCSB "approved the collection of country of birth data via the school census in November 2015." The Government response in a letter to concerns raised at the Secondary Legislation Scrutiny Committee on SI 808/2016 also stated, "Should there have been concerns regarding these changes, SCSB would have raised these for discussion requesting clarification or amendment before providing their formal decision to accept or reject the changes." Our questions and letters remain unanswered as to whether the Board knew, when they made the decision on the school census expansion, that pupil data was already being passed on a monthly basis to the Home Office and that "Nationality (once collected)" would be given to the Border Force Removals Casework Team under the HO-DfE agreement in place between July 2015 and October 2016.
May 23: Ongoing monthly releases of pupil data to the Home Office Border Force Removals Casework Team continue, and the DfE releases the Q1 numbers via FOI. The DfE declines to provide an audit report to show if and how nationality data is being used.
June 13: The DfE appealed the Information Commissioner's Office decision on the Freedom of Information request for the Star Chamber Scrutiny Board (SCSB) 2015 meeting minutes. At a November 2015 meeting the SCSB had signed off the collection of nationality and country of birth.
We believe they did not know nationality data "Once collected" was intended to pass to the Home Office for immigration enforcement or that an agreement was already in place, handing over pupil data on a monthly basis. This board meeting was where Parliament had been told the decision had been scrutinised and that had they had any concerns, they would have been raised. A hearing is planned for November 13, 2017 at the First Tier Tribunal on information rights, in case EA/2017/0122.
October 5: Sky News reports, and a spokesperson confirms, that the Department for Education has become a Department for Deportation. “The Department for Education confirmed to Sky News that information about "illegal migrants" was obtained from the National Pupil Database and used to contact families to "regularise their stay or remove them".”
This is not what schools are told about how pupil data is used: data which schools collect from parents and pupils in trust, and with a duty to tell them fairly what purposes personal data may be used for. The DfE revised privacy notice template for schools is published on October 5, but it also fails to tell them explicitly that individual and identifying data is given to commercial companies, press, charities, think tanks and a range of information intermediaries, as well as researchers, without consent.
October 5: The DfE declines the Freedom of Information request to publish how School Governor personal data is used, how the pupil nationality data since collected are being used, or release the live data sharing Memorandum of Understanding.
October 10: The House of Lords debates the Data Protection Bill second reading.
Lord Lucas questioned new powers: “Paragraph 4 of Schedule 2, on immigration, takes away rights immigrants have at the moment under the Data Protection Act. Why? What is going on?” https://goo.gl/FUzaBo
October 13: The DfE withdraws its appeal at the First Tier Information Rights Tribunal, and sends a letter to defenddigitalme to accompany the release of some of the disputed materials from the FOI of July 4, 2016 [download .pdf 288 KB] requesting the Star Chamber Scrutiny Board minutes of November 2015. The minutes reveal little, and there is no recorded discussion or questions about the purposes of the collection of nationality data.
October 25: Government response to questions from the Secondary Legislation Scrutiny Committee (SLSC) about the purposes of the SI 808/2016 [response .pdf 424 kB]. The DfE final remark (Point 15) confirms the Star Chamber were not told about Home Office use of school census data that began in 2015, when they signed off the future nationality data collection in November 2015, or of the agreement which was 'updated with new purposes' on November 27, 2015 and included in 15.2.6 "(Once collected) Nationality". This is significant when Ministers relied on the assurance that the Star Chamber had approved the expansion and had raised no concern. [See PQs of July 2016, and the assurance given to the SLSC by the DfE in September 2016 [response .pdf 707kB].]
November 22: Against Borders for Children (Schools ABC) launch a crowdfunder to take legal action with the support of Liberty. They campaign to end the Department for Education’s policy to collect country of birth and nationality information on 8 million children in England every year, and the use of all school census data in immigration enforcement and its strategic aim to ‘create a hostile environment’ for migrant children in schools.
November 27: a ministerial correction (HCWS272) made by Nick Gibb, Minister of State for School Standards, on the numbers of pupils data released to the Home Office and police. “Information supplied by the Data Modernisation Division of the DfE has been identified as containing incorrect facts in the response provided to Parliamentary Questions concerning the volume of children’s records passed onto the police and the Home Office (PQ48634, PQ48635 and PQ52645) and in figures quoted during a House of Lords Debate on the 31 of October 2016 on the Education (Pupil Information) (England) (Miscellaneous Amendments) Regulations 2016. “
December 14: The use of pupil data by the Home Office and police was included for the first time in the quarterly third-party data published figures of data shared with "external organisations".
The Department published its statistics on the collection of nationality and country of birth data for the first time - as collected in the January 2017 census. In summary, in January 2017, there were 8.1 million pupils registered at schools in England.
The data quality is clearly very poor, with over a quarter not providing nationality, and a significant active refusal rate. The 'not yet obtained' rate is also highly significant, despite Local Authorities and schools being chased by the Department and School Information Management System providers to make sure they returned data or a refusal, and cut down the 'not yet obtained' rate.
2018
April 9: Schools Week reports, DfE ends divisive pupil nationality data collection "Schools will no longer be forced to collect data on their pupils’ nationality and country of birth following a major U-turn by the government, Schools Week can exclusively reveal. The Department for Education is expected to write to schools in the coming weeks to outline its plans to remove the controversial categories from the school census. It means schools will no longer have to ask parents and pupils for the sensitive information as part of the census, which takes place three times a year."
June 28: Nationality and Country-of-Birth and Language Proficiency (items under SI 808/2016) must no longer be collected for school census purposes, with immediate effect. This is confirmed by the Department for Education update to school census guidance 2018-19.
September 2018: FOI reveals the making of the original data sharing agreements, which would eventually include “(Once Collected) Nationality”. This includes comments showing the bulk data sharing agreement would be unlawful, since the purposes were not educational and therefore outside the remit of the legislation which permitted the data collection under section 537A of the Education Act 1996. [ref p5 of 16, version 0.2]
September 21: ‘Delete pupil nationality data or face legal action’, campaigners warn ministers. More than 20 human rights and privacy campaign groups have signed a letter to Damian Hinds, the education secretary, urging him to destroy the information collected under controversial changes to the school census introduced in 2016.
October 15, 2018: The DfE refuses in a written response.
November: Together with Against Borders for Children, we wrote to every MP and some members of the House of Lords to explain that children’s data were still being shared for the purposes of the Hostile Environment, and to ask for support for the three-point plan.
December 2018:
December 19: Minister confirms in PQ 201097 that school census data are used in immigration enforcement.
He also says that families no longer have any available process to retract nationality data submitted to the Department between October 2016 and June 2018: “The last opportunity for parents to retract this information was via the last collection in summer 2018.” Schools and families have not been informed. [PQ 201096]
In the background, the government has published the Immigration Bill 2018. It includes a blanket sweeping away of privacy rights, exposed by the 3 Million campaign: every EU citizen applying for Settled Status must accept a privacy policy that allows data to be shared with “public and private sector organisations in the UK and overseas”.
Against Borders for Children, supported by Liberty, submitted its next stage of formal complaint to the Office of the Information Commissioner, asking for school children’s nationality data to be deleted, for enforcement action, and for regulation to be enforced.
January 2019
The Guardian revealed that the Minister had confirmed national pupil data collected for educational purposes was still being used for immigration enforcement purposes.
The agreement between the Home Office and Department for Education was updated [download as a .pdf 267 kB].
No other related policy changes have been announced in the 2018-19 Guidance for schools, and parents have not been informed they can no longer retract data.
June 2019: defenddigitalme issued legal action in the form of a regulatory complaint to the Information Commissioner against the DfE regarding pupil data in the National Pupil Database.
October 2019: the ICO’s initial findings reported wide-ranging and serious data protection issues, but the ICO has made no public statement in the four-year course of the controversy.
“This investigation has demonstrated that many parents and pupils are either entirely unaware of the school census and the inclusion of that information in the NPD, or are not aware of the nuances within the data collection, such as which data is compulsory and which is optional. This has raised concerns about the adequacy of the DfE’s privacy notices and their accountability for the provision of such information to individuals regarding the processing of personal data for which they are ultimately data controllers.”
At the time of writing, we await a final ICO decision; although the ICO audit summary of the Department for Education was published in October 2020, no explicit actions were released.
The first version of the NPD, including information from the first pupil level School Census matched to attainment information, was produced in 2002 after the first named data collection.
The BBC reported in 2002[220] that what caused most alarm about the changes was “the requirement to include pupils' full names, along with their home postcodes. The Department for Education says this information is needed only by technical staff and anything that is passed on to other agencies will be anonymous.”
Ten years later, identifying data began to be given away routinely to journalists, businesses, charities, think tanks and researchers. In 2015 the Department began giving names and addresses to the Home Office to further the purposes of the Hostile Environment.
The NPD is now one of the richest education datasets in the world, holding a wide range of information about pupils and students dating back to 1996. Data is retained indefinitely.[221]
In May 2019, the National Pupil Database contained over 21 million individual pupil records on a named basis, and it grows every year by an annual cohort of around 700,000.[222]
The various census datasets are merged with Key Stage tests at the Department for Education and accumulate throughout a child’s life as part of an individual record in the National Pupil Database, a longitudinal record created from linking around 25 different datasets.
The possible data items collected for each individual pupil are listed in the National Pupil Database User Guide. There are about 400 possible variables to collect on an individual pupil. The full national code sets of all the items of data that can be collected on individual children within educational settings for national needs, past and current, are listed in the common basic data set (CBDS), which can be downloaded from the Department for Education.[223]
The pupil-level data is personal confidential data, which includes sensitive personal data, or special category data as defined by the Data Protection Act 2018. The National Pupil Database contains identifiers that come from the pupil’s personal data, as well as pupil numbers and school and local authority identifiers. With effect from 2010/11, the pupil’s UPN (unique pupil number) remained consistent throughout their time in school and remains permanently on the record.
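To illustrate how a longitudinal record accumulates, the sketch below shows record-level linkage on a persistent identifier in the style of the UPN. It is a minimal illustration only: the rows and column names are hypothetical, not the Department’s actual schema.

```python
# Illustrative only: record-level linkage on a persistent identifier,
# in the style of the UPN. All column names here are hypothetical.
import pandas as pd

census = pd.DataFrame({
    "upn": ["A123456789012", "B234567890123"],
    "fsm_eligible": [True, False],       # free school meals indicator
    "sen_provision": ["EHCP", None],     # special educational needs
})
key_stage_2 = pd.DataFrame({
    "upn": ["A123456789012", "B234567890123"],
    "ks2_reading": [104, 98],
})

# Each new collection joined on the same identifier extends the record;
# repeated over ~25 datasets and three censuses a year, this is how a
# single named record accumulates across a childhood.
record = census.merge(key_stage_2, on="upn", how="left")
print(record)
```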
In 2017 the Schools Minister claimed to Parliament that the National Pupil Database is exempt from Subject Access Requests.[224]
“A subject access request for personal confidential information may be refused if it falls within a statutory exemption. The Research, History and Statistics exemption (section 33(4) of the DPA), applies to personal data being processed for research, historical or statistical purposes. As the personal information that a pupil or parent has requested is held on the NPD for these purposes, it is exempt from release to the pupil or parent.”
Since the National Pupil Database is processed for operational purposes, not only research purposes, we have challenged this position. In May 2018 the author obtained an incomplete extract of her own child’s national pupil record. At the time of writing we await a decision from the Information Commissioner’s Office.
There is an obligation for schools to provide named data from the school register to the national government.
Taken from the Department for Education guidance on parental responsibility, 2016:[225]
“Regulation 5(1)(a) of the Education (Pupil Registration) Regulations 2006 requires a school to record the full name of every pupil in alphabetical order in the admissions register.
This is generally interpreted to be the child’s full legal name and not any other name that the child is known by.
However, there may be circumstances where a name change has already been effected by the school and it would not be in the best interests of the child, who might be known by a new name, to refer back to a different name. Ultimately it is a matter of policy for the school to decide but the best interests of the child must be the paramount consideration when making a decision.
Where a child is subject to a special guardianship order there are particular considerations in cases where a school receives a request to use a different surname for a pupil.
Section 14C(3) of the Children Act 1989 (CA 1989) states that:
While a special guardianship order is in force with respect to a child, no person may cause the child to be known by a new surname [...] without either the written consent of every person who has parental responsibility for the child or the leave of the court.
Schools must therefore decline requests from special guardians for a child to be known by a different surname unless the above criteria are met.”
MPs in the House of Commons were assured about the changes to the “Central Pupil Database” in 2002 by the then Minister of State for Education and Skills, Stephen Timms, that, "The Department has no interest in the identity of individual pupils as such, and will be using the database solely for statistical purposes, with only technical staff directly engaged in the data collation process having access to pupil names.”
Today names are commonly used in linking datasets across and outside government, as well as for direct interventions such as Home Office immigration enforcement, research-led postal surveys, and direct contact such as a test sent out by the Institute of Education for a research project.
(last updated Jan 1, 2021)
In 2011 David Cameron announced that the government would be “opening up access to anonymised data from the National Pupil Database […].” This was an expansion to other third parties, since academic public interest researchers already had access.
The stated intention in 2011 for the education sector was that Open Data would be enabling: from January 2012, parents would be able to see how effective their school is at teaching high, average and low attaining pupils across a range of subjects.
In reality, what happened in 2012 is that another purpose piggy-backed on these plans under Michael Gove as Secretary of State at the Department for Education. Personal data was given away to commercial users to create a private sector marketplace from children’s public administrative data. And the distributed data was not anonymous, but sensitive and identifying.
New legislation, under the 2012-13 changes to the Education Act 1996, was passed to distribute raw data from the National Pupil Database under terms and conditions to third parties who “for the purpose of promoting the education or well-being of children in England are conducting research or analysis, producing statistics, or providing information, advice or guidance”, and who meet the Approved Persons criteria of the 2009 Prescribed Persons Act as amended in 2012.
In a presentation to the NPD User group in September 2016, the Director of the DfE Data Modernisation group acknowledged the excessive release of sensitive data: "People are accessing sensitive data, but only to then aggregate. The access to sensitive data is a means to an end to produce higher level findings.”[226]
And the DfE 2018 Hive data dissemination discovery report found that, “Users are required to download the entire dataset, then remove and manipulate extraneous data reducing it to a specific subset. Many expressed a desire to be able to customise the data they downloaded.”[227]
In answer to a parliamentary question in 2018, the Schools Minister, Nick Gibb wrote:
"According to centrally held records at the time of writing, from August 2012 to 20 December 2017, 919 data shares containing sensitive, personal or confidential data at pupil level have been approved for release from the National Pupil Database. For the purpose of this answer, we have assumed the term sensitive, personal or confidential uses of information to be data shares classified as either Tier 1 or Tier 2 as set out in the National Pupil Database area on GOV.UK. [In addition] There were 95 data shares approved between March 2012 and this classification system being introduced." [228]
Each of those releases of data may have included millions of individual records. Since then, the number of releases has nearly doubled to just over 2,000 by June 2020.
A list of completed National Pupil Database Third Party Requests and those in the pipeline, are published on a quarterly retrospective basis.[229]
The data items for release were classed into four tiers by the Department for Education. Following the change of legislation, releases of the data since 2012 to third parties have not been anonymous, but have mostly been of identifiable and highly sensitive (Tier 1) and identifiable and sensitive (Tier 2) data. Fewer releases were of aggregated data that may be identifying due to small numbers (Tier 3) and identifying non-sensitive items (Tier 4).
According to our own analysis (since none is published) of the documented requests for identifiable data that went through the Data Management Advisory Panel (DMAP) request process between March 2012 and June 2020, about 43% of approved applications were for academic research through universities, and about 33% of the individual applications approved (as distinct from volume of data used) were for use by commercial companies and think tanks (note this is *separate* from additional releases via the ASP service). These calculations also exclude police and Home Office reuse.
About 14% of all approved applications (as distinct from volume of data used) for identifying and sensitive pupil-level data were from government and arm’s-length bodies, with 6% going to charities and non-profits, 3% to exam boards, and 1% to others.
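The method behind these percentages is a simple tally over the published release register. A minimal sketch follows, assuming hypothetical field names rather than the register’s actual layout:

```python
# Illustrative only: tallying approved applications by requester type.
# The rows and the "requester_type" field are hypothetical stand-ins
# for entries in the published third-party release register.
from collections import Counter

approved = [
    {"requester_type": "university"},
    {"requester_type": "commercial"},
    {"requester_type": "university"},
    {"requester_type": "government"},
    # ...one entry per approved application, March 2012 - June 2020
]

counts = Counter(row["requester_type"] for row in approved)
total = sum(counts.values())
for sector, n in counts.most_common():
    print(f"{sector}: {n}/{total} ({n / total:.0%})")
```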
There were 21 rejected applications between March 2012 and June 2020, including a request "by mistake" from the Ministry of Defence to target its messaging for recruitment marketing.[230]
The Department does not release data on how many records are included each time, but each release could contain millions of individual pupil records.
New data analytics products and services have been developed using taxpayer-funded data generation, collection, cleaning and distribution: analytics that schools and Local Authorities then buy back to meet the accountability measures driven by government.
Approved uses include identifying and sensitive data released to the media, “to pick interesting cases/groups of students".
The Telegraph newspaper was granted identifying and sensitive data in 2013, for all pupils in the KS2, KS4 and KS5 cohorts for the years 2008-2012.
A Freedom of Information request at whatdotheyknow.com (Ref: 2015-0054037) confirmed that the release of five years’ worth of data (2008-2012) from the National Pupil Database included children's sensitive personal data, including SEN and FSM indicators, ethnicity and language. These are not statistics but pupil-level records, and there was no suppression of small numbers.
These identifying and sensitive items, or identifying data items were matched at individual pupil level with census data for KS2, KS4 and KS5 datasets before release at individual level.
The Schools Minister Nick Gibb replied to a parliamentary question on the release of millions of children’s records to the Telegraph in 2013 that the release ”did not contain any data items that are highly sensitive or have potential to allow children to be identified.”[231]
But an email obtained via FOI in 2015 showed the Department sought "cast iron assurances" that the data the Telegraph would publish would not identify children. The data the journalists were given came from Tier 2, identifying and sensitive data, including special needs, free school meals, ethnicity and attainment: longitudinal records for about nine million children across multiple years of their education. If the data had not been identifying, the paper would not have needed to give such an assurance. There was no small numbers suppression.[232]
“The Daily Telegraph requested pupil-level data and so suppression was not applicable.”
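For readers unfamiliar with the term, small-number suppression masks low counts in published tables so that rare combinations of characteristics cannot single out a child. A minimal sketch, assuming a threshold of five (conventions vary by publisher; this is not the DfE’s stated rule):

```python
# Illustrative only: small-number suppression in a published table.
# The threshold of 5 is a common convention, assumed for illustration.
SUPPRESSION_THRESHOLD = 5

def suppress(count: int) -> str:
    """Mask counts below the threshold so small groups cannot be singled out."""
    return str(count) if count >= SUPPRESSION_THRESHOLD else "x"

# Hypothetical cell counts for one school's published breakdown.
cells = {"SEN and FSM": 2, "SEN, no FSM": 14, "no SEN, FSM": 47}
for label, n in cells.items():
    print(f"{label}: {suppress(n)}")

# Pupil-level extracts, by contrast, bypass suppression entirely:
# every row is one child.
```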
There is little detail on the precise uses of the data. We were able to get access to the original application, but were told that, “There is no further written business case for the approved Tier 2 variables other than what is included in the application form. After a telephone discussion with the Daily Telegraph, the Tier 2 variables requested were subsequently approved as they were required to differentiate between the different intakes that schools have. To effectively compare schools, the Daily Telegraph wished to factor in the “different types of pupil” who are present at different schools. Information on pupil characteristics related to prior attainment: gender, ethnic group, language group, FSM eligibility and SEN provision status were deemed by the Department to be appropriate as these are seen as important factors in levels of pupil attainment. The approved Ethnic Group Major and Language Group Major variables are the least sensitive versions available of this data.
“At the time of this request (February 2013) we did not require the requestor to inform us of the conditions for processing that they relied on. The requestor signs an agreement which confirms that they will process the data in accordance with the Data Protection Act (DPA) and it is the responsibility of the requestor to ensure this is the case. As such there is no written evidence available of the condition for processing under Schedule 3 of the Data Protection Act that the Daily Telegraph relied on.”
The raw data is sent to the requestor's own location. DfE does not charge for data (and has not since the NPD process began), nor does DfE charge for the processing and delivery of extracts to customers.
Nor is there any transparency over the volume of children’s data given away in approved uses, because “the Department does not maintain records of the number of children included in historic data extracts.” (PQ109065)
Public interest research use of pupil-level data through other routes of access increasingly includes projects linking individual data with other education and employment data from citizens' interactions with other government departments and public services, known as public administrative datasets.
For example, the LEO dataset is made up of individual level information from the National Pupil Database (NPD), the Individualised Learner Record (ILR), the Higher Education Statistics Agency (HESA), Her Majesty’s Revenue and Customs data (HMRC), The National Benefit Database (DWP), the Labour Market System and Juvos, the unemployment research database. Further work by DfE compares self-reported salaries from the 2008/09 DLHE survey with earnings data from the LEO dataset coming directly from HMRC tax records.[233]
The volume of police and Home Office use, first made public through Freedom of Information requests in 2016, was first officially published by the Department in the Third Party Release Register in December 2017, under "External Organisation Data Shares”.
Police requests were only documented going as far back as July 2015. This omits police access to records before this date, as noted in a ministerial correction (HCWS272) made in November 2017 by Nick Gibb, Minister of State for School Standards, on the numbers of pupils’ data released to the Home Office and police.[234]
Interdepartmental transfers of data include to the Cabinet Office for preparation of Electoral Registration Transformation work in 2013, to match participant data in the National Citizen Service, and for use in the Troubled Families programme, as well as to arm’s-length bodies such as NHS Digital for a survey, "What About Youth", mailed home to 300,000 15-year-olds in 2014.
The cost of the data collection burden is immense, its necessity unclear, and tangible value unmeasured.
Since legislation changed over time to permit new uses and access to personal data by new third parties, over 15 million people whose data was already in the National Pupil Database and who had already left school pre-2012, have not been informed how their personal data may be used, for what purposes, and by whom.
As observed in 2014 by Owen Boswarva, "the central concern is that parents and pupils themselves are not sufficiently aware of the way the data is being shared with third parties. There appears to have been no concerted effort to bring the consultation or the NPD initiative to the attention of parents or pupils."
In our analysis of DfE releases to third parties published between March 2012 and June 2020:
Children’s data rights are ignored in respect of National Pupil Data
Data is retained forever and is named. The records given away include special needs and sensitive behavioural labels such as theft, violence, alcohol and sex. These are never filtered out from research releases.
Commercial exploitation: Controllers and Processors
Pupils and holders of parental responsibility are not notified of the commercial exploitation of national pupil data.
The GDPR imposes direct compliance obligations on both controllers and processors [see Rec.22; Art.3(1)], and we believe the DfE is deficient in ensuring processor obligations under Art.28(3)(h), as there is no standard process of audit after data is released.
We asked for an estimation of the extraction costs to the Department via FOI, among other questions on audit and security in November 2017.
What are “research purposes”, and where do we draw the line with commercial exploitation? Private tutor companies? The Football Association? National newspapers?
The organisations using the data include a private tutoring company producing heat maps for its website and, very extensively, data analytics firms that resell their analysis back into the education sector and schools.
The Department for Education therefore passes out identifying pupil-level data to commercial organisations to use, and in effect outsources the management of pupil privacy to the third-party organisation. There is no notification to children or parents. "Privacy notices" buried several clicks down in documents many never read are ineffective by design.
The continuing challenge of commercial exploitation in 2019 and beyond is the enormous growth in the use of pupil data by third parties, data that schools themselves give out, without parental or pupil consent or any real understanding, for behavioural profiling, app-designed classroom seating plans, homework platforms and school cashless payment systems.
The market is vast and growing and we are reviewing only some of those in common use right now. They are unregulated and have no oversight.
For most of these, schools don’t offer any choice to parents or children at all.
The tools are designed to help schools run their administration, or manage homework, or behaviour and so on. But there is often zero understanding of how they work and what they mean for children’s personal data and privacy, and schools are generally uninterested in consent, which they feel is unnecessary as they are processing data as part of their public role.
Some of the most commonly used apps in schools in England are overt about their parallel commercial offering, like the app "Show my Homework", which also offers private tutor matching services on its website.
Others exploit children’s data in ways we cannot see, especially if apps are processing data abroad (commonly the US), often without privacy policies, and those that do exist, are never sent home.
Schools can automatically connect the third-party tool provider to the school information management system (SIMS), without parental knowledge, permission or choice.
Parents often first know that the school is going cashless when they receive an email with the parent password to log in to accounts that have already been created, where the pupil profile has been created by the company, often including the child’s school photo.
Profiling using children’s data is the norm in behavioural tracking, classroom management, even seating plan apps that claim to use “AI” to design the plan based on behaviour.
We believe school staff are ill equipped to judge the security and privacy implications of these uses of children’s confidential data, and the commercial market is growing at a pace that needs independent oversight and standard policies for apps used in education. Teacher training must introduce privacy and data protection into standard basic training.
In September 2018, the Department for Education changed their data access model.
Access is granted through an application process to the Department for Education’s Education Division and internal Data Management Advisory Panel (DMAP), and is subject to requesters complying with terms and conditions imposed under contractual licence arrangements. The DMAP Terms of Reference was first published in July 2016 by the Department for Education.
The Department for Education application procedures for handling requests for data from the National Pupil Database, from March 2012, enabled interested parties to request extracts of data from the National Pupil Database (NPD) using forms available on the Department for Education website. Data supply agreements, agreement schedules and individual declarations for researchers and third-party organisations who have received DfE approval for applications for data extracts are completed before users are sent the password protected data.
The sensitive and identifying items that require DMAP approval include name, date of birth, postcode, candidate numbers, Pupil Matching Reference (Non Anonymised), detailed types of disability, indicators of adoption from care, reasons for exclusions (theft, violence, alcohol etc).
There is no ethics committee review for the release of identifying or sensitive data directly from the National Pupil Database by the Data Management Advisory Panel or Education Division.
There was no privacy impact assessment of the National Pupil Database until April 2019, despite its ongoing expansions.
The risks posed to children’s confidentiality by the addition of names to the collection in 2002[236] have proven to have been well founded.
The legislation permitting pupil-level release of individuals’ identifiable data to third parties from the National Pupil Database was updated in 2013. Section 114 of the Education Act 2005 and section 537A of the Education Act 1996, together with the 2009 Prescribed Persons Act, were amended in 2010 and 2013 to allow the release of individual children’s data to third parties.
The Data Protection Act 1998, in particular Principle 1, and subsequently the UK Data Protection Act 2018, sets out a fairness obligation which cannot be set aside merely because of the presence of a legal basis such as a statutory duty. On October 1, 2015, this latter point was again made explicit for public bodies in the judgment of the Court of Justice of the European Union in the Bara case (C‑201/14), in which it ruled that “[the Directive] must be interpreted as precluding national measures…which allow a public administrative body of a Member State to transfer personal data to another public administrative body and their subsequent processing, without the data subjects having been informed of that transfer or processing,” i.e. individuals must be informed when public bodies share personal data and why.
For sensitive data (Tier 1 and Tier 2 of the National Pupil database include all the data items classified as ‘sensitive’) an additional condition from Schedule 3 of The Data Protection Act 1998 must also be met to justify a legal basis for disclosure. These conditions are a high bar, for example, in the interests of justice.
The Data Protection Act 1998 (s33) gives exemptions for statistical and historical research purposes, most significantly from the principles of indefinite retention and data minimisation, as well as from Subject Access rights, for as long as data is processed for the legitimate interests of the Data Controller. To qualify for the research exemption,[20] the research must be able to comply with the following ‘relevant conditions’:
(a) that the data is not processed to support measures or decisions with respect to particular individuals, and
(b) that the data is not processed in such a way that substantial damage or substantial distress is, or is likely to be, caused to any data subject.
It does not relieve researchers of all data protection obligations.[237]
The largest-scale linkage of pupil data across multiple datasets yet began in 2015. The Longitudinal Education Outcomes (LEO) data links individual educational flight paths from age 2 to 19 with further and higher education data, employment earnings through HMRC data, Student Loans data, and welfare data through access and linkage with the Department for Work and Pensions.
In June 2017, Wonkhe reported some of the concerns over this dataset in a beginner’s guide to Longitudinal Education Outcomes (LEO) data.[238] We support David Morris’s conclusions that it is a deeply flawed dataset as currently used by policy makers: LEO is amalgamated historical data that does not always belong together and is riddled with caveats.
It is being used as “a measure of the ‘value added’ by a university degree. LEO is not a performance indicator. It isn’t even a predictor of how much students at any university or on any course will earn in the future: it’s historical data, and future labour market outcomes are arguably too variable to be predicted based on past outcomes.”
For example, the data is skewed because it does not include whether a graduate is working full-time or part-time, or the details of the job they are doing, as these are not required by HMRC.
The tax data for part-time workers, for example, does not indicate that they are working part-time. This means certain types of workers may have a lower income recorded in LEO. As a result, if many people from a particular course worked part-time, the earnings may look much lower than for a course where everyone worked full-time. People with earnings from self-employment, or those undertaking voluntary work or unpaid internships, will not be included because they are not earning. There are caveats about people who work abroad and international students. Data from small numbers of people are less likely to represent typical earnings than data from larger numbers. Earnings data displayed on Discover Uni may relate to graduates from more than one course: for example, where there are fewer than 15 graduates who studied chemical engineering at a provider, their earnings data may be combined with graduates from other engineering courses. If these different courses have very different earnings, this will not be visible to the user. Also, past trends in earning potential are not always a reliable guide to the future. (OfS, 2020)
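The effect of the missing hours data is easy to demonstrate with invented numbers. In the toy example below, two cohorts in the same jobs at the same full-time rates produce different LEO-style medians simply because two workers in the second cohort are half-time:

```python
# Illustrative only, with invented salaries: HMRC data records earnings
# but not hours, so part-time workers drag a course's median down even
# when pay rates on the two courses are identical.
from statistics import median

course_a = [28_000, 29_000, 30_000, 31_000, 32_000]  # all full-time
course_b = [28_000, 29_000, 30_000, 15_500, 16_000]  # last two half-time

print(median(course_a))  # 30000
print(median(course_b))  # 28000 - looks worse, though the jobs pay the same
```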
The Small Business, Enterprise and Employment Act (2015), passed in the final days of the Coalition, authorised the permanent linking and ongoing publication of this data by DfE.
The Rt Hon Nick Boles MP, in evidence given to committee in January 2016 as Minister for Skills, Department for Business, Innovation and Skills, said, "We did a huge study of more than half a million data points, matching up people who had completed level 2 and 3 apprenticeships and their incomes three to five years later. It was the first time this had ever been available. For some reason, we were able to do it with our data in a way that was not yet—though it is now—possible with education data."[239]
A huge new linkage of personal datasets across a lifetime was made without public consultation, in legislation that no one could reasonably expect to concern a personal data extraction. Few of the people in the data know about it, and there is no way to correct errors in their record: for people with Chinese names, for example, the match rate was poorer than for others, assumed to be because first and last names may be switched in some datasets.[240]
But just because you can do it, does not mean you should. The LEO dataset is an attempt at alchemy, to turn data we already have into something valuable we wish it could be. It is a fiction, and yet despite this it is used at face value as a predictive indicator of future earnings, as Liz Truss[241] set out in the House of Commons in 2018, saying, “The LEO dataset shows you how much you can expect to earn by studying particular courses."
The data do not control for differences in the characteristics of graduates. This is a very important caveat when comparing graduate salaries across providers. Some subjects have a much wider range of institution outcomes than others. For example, a very high level of variation is seen in Business and Management where the median earnings ranged from £17,900 at the lowest institution to £75,900 at the highest.[242]
The assumed simplicity of LEO may appeal in policy making and politics, but it is not the accurate or predictive data asset it is often portrayed as. In particular as we come through COVID-19 and as we leave the EU anyone using this dataset to imagine the future labour market based on past outcomes is wilfully keeping their eyes wide shut on reality.
At best we believe it suggests little more than things people commonly assume. Medics earn more than artists. Jobs in London earn more than in Yorkshire and The Humber. The cost-benefit of creating, cleaning and maintaining this dataset as well as its compliance with data protection law should be subjects of further research.
In July 2015, the Department for Education and the Home Office Border Removals Team agreed a Memorandum of Understanding[243] to start using national pupil records for immigration enforcement. Shared data included first and last names, date of birth, gender, home address and school address for up to 1,500 children a month, from the last five years of their records, for various purposes of direct interventions.
Version 1.0 of the agreement was valid from July 2015 until October 7, 2016, a day after the first collection of nationality in the 2016 School Census. It was updated in v2.1 to remove the item 15.2.6 “(Once collected) Nationality”. Subsequent versions of the agreement have not been released.
In October 2017, the Department for Education confirmed in an interview with Sky News[244] that information obtained from the National Pupil Database was used to contact families to "regularise their stay or remove them.”
During the course of 2018, the DfE for the first time inserted a statement into the national privacy notice for schools that it is using national pupil data for Home Office purposes of immigration enforcement, nearly three years after it began processing pupil data monthly for that purpose.
The Home Office may provide personal data to the Department for Education to check against, and the Department returns matching, updated details of up to 1,500 children every month according to the terms of the agreement.
Upon receipt of a request from the Home Office, the Department will check the data held in the National Pupil Database (NPD) for a match based on name, date of birth and address. The data searched against in the NPD is obtained from the following sources:
In the Committee Stage discussion of the Data Protection Bill on March 13, 2018, speaking on behalf of the Government, Victoria Atkins MP explained explicitly that national pupil data is one area in which the exemption the government put into the Data Protection Act 2018 is intended to be used:
"If we know they have a child, we will seek assistance from the Department for Education assistance to find the whereabouts of the child. That child has not committed a criminal offence, so I would be very concerned to ensure that the Home Office, Border Force or whoever else acted lawfully when seeking that data in order to enable them to find the parent or whoever is the responsible adult, as part of the immigration system."
This exemption in Schedule 2, Part 1, Paragraph 4 of the Data Protection Act regularised what was a secret policy and practice, only made public through our exposure and the extensive campaigning of Against Borders for Children, founded to oppose the collection of nationality and country of birth and the use of all pupil data for Home Office purposes, uses which are punitive for children and their families and ignore children’s rights.
In December 2016, we were given to understand that the ICO shared our concerns over the Home Office use of pupil data, and in the failure of the Department to inform schools and parents about this new purpose of collection and use.
“It is apparent that the disclosures to the Home Office are not for research purposes, which could mean that the ability to rely on the exemption, at least for that data, is removed.
“It is apparent that many parents and pupils do not have a clear understanding of what data their school sends to the DfE, and what the DfE does with it. Schools therefore need more clarity about this from DfE if they are to be able to give appropriate fair processing information themselves. This is also needed so that schools understand what is mandatory and what is not, so that they can make sensible and proportionate decisions about what data they ask parents and pupils to provide.”
Despite this 2016 written statement, to date the ICO has not taken any enforcement action. Four years on, nothing has changed and the process continues monthly.
1,545 pupils’ details were given to the Home Office by the DfE between July 2015 and July 2020, out of the 5,998 that the Home Office asked for: a match rate of roughly one in four.[245]
In September 2020 we asked the Home Office for information about the outcomes, per month and per year, of the pupil data processing from the Department for Education after it was received by the Home Office; whether data was further shared onward beyond the Home Office and, if so, to whom (i.e. police, Border Force, local government); and the total number of interventions with a family or individual and the nature of that intervention, i.e. removal or visa renewal.
The Home Office declined to explain how they have used the monthly data once they get it.[246]
Which aspect of children’s rights and the National Pupil Database use will be upheld?
Data collected for educational purposes are being repurposed for incompatible purposes in breach of the second data protection principle.
Either the Department can claim research exemption status,[247] with its ongoing refusal to enable rights including Subject Access and its indefinite approach to data retention, or it is a national database of children’s data used for direct interventions. The research exemptions under data protection law do not permit both statuses to be true.
In November 2019, personal data from all 2,136 pupils who attended a specified school during a four year period was provided to police from the National Pupil Database.[248]
Schools process data about criminal offences, exchanging information with YOMS and police. But much more often they process data about behaviour which reads similarly but where there is no criminal conviction: reasons for exclusion such as theft, violence, and other misconduct.
Criminal records are expunged and restricted from distribution to third parties at some point under the Rehabilitation of Offenders Act 1974. Pupil data, however, is not subject to the same retention periods: it is retained indefinitely and, at national level, may be distributed to a wide range of third parties at pupil level, long after the child has left school.
Reasons for exclusion from school are not criminal convictions, but often a head teacher’s best opinion of which reason fits best. Yet children never see these data about them in their own record, or what has been sent to the National Pupil Database.
If a friend has ever taken the blame for another, without a right to correction, how accurate may their resulting Troubled Families flag be? What threats are there, where journalists have access? Commercial companies get these data in the name of “data analytics,” but should they and who measures or is accountable for benefits versus threats to the children?
In 2016 ten years’ worth of national pupil data across various years for each data set (2000-2015) were linked with police data, including children's highly sensitive personal data, looked after, exclusions, SEND and ethnicity and language.
These identifying and sensitive items, or identifying data items, were matched at individual pupil level with census data for the Early Years (age 2-5), KS2, KS4 and KS5 datasets, Alternative Provision 07/08-14/15 (plus T1 and T2), ten years of Children Looked After from 2005 until 2015, and Children in Pupil Referral Units 2009-2013.
This was done at individual pupil level, and without suppression of small numbers which was considered important only for published outputs.
Similarly, identifying and highly sensitive pupil data was used between September 2013 and 2019 by the Institute of Criminology at Cambridge University to explore the possibility of creating a brief predictive tool which schools can use to identify pupils at risk of exclusion, as an aid to informal teacher assessments.
The researchers asked for a lot of data. “As it’s not clear what the best predictors would be, we (necessarily) require as much information as possible on the pupils, short of knowing their names and addresses, hence the breadth of our request.” They said in their application, “We would also require as much information as possible about the schools in terms of school level measures of ethnicity, gender, academic performance, type of school (e.g. secondary, academy, specialist disability school) and so on. Once we have this model, we will then repeat the exercise with data from the 40 schools in the study. This will be a separate NPD application and/or may be collected directly from the participating schools.”
When we asked for a copy of the original request via FOI, the Department for Education was unable to find and provide this from the request they fulfilled in 2013.[249]
Outside the national standardised testing system, schools may choose to administer additional tests or have them imposed by the government. These do not form part of a child’s permanent national record but may be used to support local-level teacher-made assessment or to meet other national purposes.
In 2019, 348 schools were selected to take part in the 2020 National Reference Test (NRT).[250] In 2017 the Government introduced legislation which requires that the head teacher of a selected school ensures that pupils take the NRT.
Ofqual explains that its purpose is to provide additional information to support the awarding of GCSEs. The information from the test will be considered each year by Ofqual and the exam boards prior to GCSEs being awarded: this began in 2019. The NRT will provide information on changes in performance standards over time and is based on results from a national sample of students who take the test shortly before taking their GCSEs. The claim is that the “test will indicate if, over several years, there is a change in how students perform at the national level, which exam boards can then consider during awarding.”
The NRT provides additional evidence to support the setting of grade boundaries in 9 to 1 GCSEs in English language and maths. The NRT is a one-hour test, administered by NFER on behalf of Ofqual, which reflects the sorts of questions students can expect in exams for either GCSE English language and GCSE maths. It is taken by a representative sample of students who are taking their GCSEs in the same academic year. NFER organises and invigilates the test on behalf of schools, but relies on some help to prepare for the test from a nominated contact at each school.
This is another test designed to check the system, not directly to benefit or support the children who take it.
The Organisation for Economic Co-operation and Development (OECD) Programme for International Student Assessment (PISA) evaluates education systems worldwide by testing 15-year-olds in key subjects: reading, maths and science. It can also ask about wellbeing.
Tests are typically administered to between 4,500 and 10,000 students in each country. Students are also asked about themselves, their family and home; language learning in school; their views on reading; “How you think about your life”; and school, the school schedule and learning time.
Questions about family life include whether children have poetry books, classical literature and art at home. Questions about how many rooms with a bath or shower, TVs, cars, musical instruments and tablet computers probe at family wealth. Pupils are asked about their parents’ main job and level of education. Questions about the classroom include whether there is noise and disorder, and whether the teacher sets clear goals for their learning. The questions on wellbeing are detailed: children are asked to what extent they agree with set statements on loneliness and making friends, or opinions such as “when I am failing, I am afraid that I might not have enough talent”, and whether they have discovered a satisfactory meaning in life. The questions also appear to seek to measure empathy, with statements like “Before criticizing somebody, I try to imagine how I would feel if I were in their place”, and ask about decision making and values: “I boycott products or companies for political, ethical or environmental reasons”; “When I see the poor conditions that some people in the world live under, I feel a responsibility to do something about it.” The questions about their own experiences are very personal, including bullying and levels of parental support.
The PISA-based Test for Schools provides information about the learning environment and students’ attitudes gathered from student questionnaires, and estimates of school-level performance. The PISA database contains the full set of responses from individual students, school principals and parents. You can view examples of the questions asked, online.[251]
Additional tests such as CATs are often administered on the arrival of new Year 7 starters (age 11-12) at secondary school. This may be because the school does not trust Key Stage 2 SATs scores to be accurate or consistent, or because it wants a different kind of test for local purposes.
These are routinely administered in schools but not sent to the National Pupil Database. GL Assessment claims that its CAT4 is the UK’s most widely used test of reasoning abilities for children aged 6-17+ years. Other tests are available, including from CEM, Hodder, the NFER and Renaissance Learning. Parents tend not to know these tests happen because they are viewed as for internal-only purposes. Schools forget that the companies access the resulting data too, often for their own purposes, and that there is an obligation to tell families the tests happen and what data will be collected, by whom, for how long and why.
In a time of algorithmic techlash, adaptive tests should come under particular scrutiny. The summative data can be misinterpreted as assessments of progress and the resulting profiles acted on. Any subsequent significant effects are under-researched and poorly understood by families.
Secondary schools use Cognitive Abilities Tests (CATs) to test general intelligence levels and to stream or set for certain subjects. They are designed to assess a pupil in verbal, quantitative, spatial and non-verbal reasoning areas. The results are given as SAS (Standardised Age Scores), so they take into account a pupil’s age. The average SAS is 100. The scores are used to calculate predicted levels and are used by some to predict the outcome of GCSEs.
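For readers unfamiliar with standardised age scores, the usual convention (assumed here; GL Assessment does not publish its norming procedure in this detail) is a mean of 100 and a standard deviation of 15 within each age band:

\[ \mathrm{SAS} = 100 + 15 \times \frac{x - \mu_{\text{age}}}{\sigma_{\text{age}}} \]

where \(x\) is the pupil’s raw score and \(\mu_{\text{age}}\), \(\sigma_{\text{age}}\) are the mean and standard deviation of raw scores for pupils of the same age in the norming sample. On that convention, a SAS of 115 places a pupil about one standard deviation above the average for their age.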
According to GL Assessment[252], these are entirely automated decisions based on profiling.
“Following administration, a range of reports are automatically generated. No need to data-crunch. Data is presented in visually accessible tables, bar charts and scatter graphs that allow for easy comparison, as well as clear narrative explanations that explain the pupil profile and its implications for teaching and learning. This allows you to address patterns and adapt your teaching methods to suit individual (or group, where a similar learning profile is shared) needs, ensuring feedback is appropriate and targets are achievable.”
The Cognitive Abilities Test: Fourth Edition (CAT4) provides a robust, standardised measure of cognitive reasoning ability, without reference to curriculum-based material and regardless of previous achievements or first language.
The NFER sells tests that align with the Key Stage 1 and 2 tests, including KS1 reading and mathematics assessments.
Termly KS2 reading, mathematics, and grammar and punctuation tests are available for years 3, 4 and 5, and one set of three spelling tests is available for use at any time of year at the teacher’s discretion. New year 6 reading, mathematics, and grammar, punctuation and spelling tests are available to order for use in the autumn and spring terms.
They suggest[253] that termly tests produce valuable data to help inform ongoing teaching and learning. This data includes both standardised and age-standardised scores, which allow pupil attainment to be benchmarked nationally, as well as enabling meaningful comparisons between pupils and groups of pupils. Using NFER Tests at different points in the year enables progress to be monitored within a year and year-on-year by comparing pupils’ standardised scores over a period of time.
A standardised score of 100 is not the same as, nor equivalent to, the year 6 scaled score of 100. On NFER tests, a standardised score of 100 represents the average performance of the sample. The scaled score of 100 represents the ‘expected standard’, as defined by the DfE and is not the same as an average score. The measure of age-related expectations provided in the summer tests may be a better guide, but NFER states that they do not claim to provide any measure that explicitly predicts or mirrors the year 6 scaled score.
Research in 2018 considered some of the properties of a range of the commercial tests that the Education Endowment Foundation has used in its randomised controlled trials. The researchers looked at how well the tests predict Key Stage 2 scores, and compared this to the benchmark of the predictive validity of Key Stage 1 levels. Their results “highlight how the correlation between the commercial assessments and Key Stage 2 tests are somewhat disappointing, particularly in English.” The results of the study cast doubt over the predictive validity and reliability of some commercial assessments, and recommended that:
“Schools need far more rigorous and robust information on the usefulness of the myriad commercial assessments available to them. It is currently extremely difficult for schools to make sensible decisions around the choice and purchase of commercial tests for the learning and assessment of their pupils. Commercial assessments may well be useful and provide value to teachers and pupils but it is not yet easy for interested parties to compare and contrast the pros and cons of different tests. We also do not know which commercial tests are most predictive of future performance in national assessments, how well different commercial tests line up to contemporary school curricula, nor the extent to which different tests contribute to improved teaching and learning in schools.” (Allen et al, 2018)[254]
Before children even get to school, many families have to navigate the complex interactions between education and health if their child is to apply for an Education, Health and Care Plan (EHCP).[255] It sets out any special educational needs a child has, and the provision a local authority must put in place to help them. Talking to parents who have been through the process, the author has met many who are in a war of attrition. The systems to manage the data flows for this process are complex and inconsistent. Parents often speak of being left in the dark about who has accountability where in the process, or of disagreeing with data recorded in the plans.
The system appears designed as a barrier to access, in a world in which scarcity of resources has become so ingrained at Local Authority level that keeping above-average-cost children out of education, or their costs reduced to the bare minimum, is the default position. Children have become data on a balance sheet, prioritised above the needs and best interests of every child. This policy choice is costly in the longer term, not only to children and families during the process, but also in the appeals system that has grown up around its failure.
The National Deaf Children’s Society issued a warning after its new analysis of the Department for Education’s 2018 exam results for pupils up to the age of 19. It showed that just 44% of deaf pupils achieve two A-levels or equivalent, compared to 63% of hearing pupils. Deaf children arrive at school with amazing potential only to begin a lifetime of being left behind. (NDCS, 2019) The government must address all learning needs if we are to improve outcomes for all children.
An EHCP can cover a dependent child up to the age of 25, so can also ensure support or a specialist placement at further education colleges (but it does not cover universities).
One parent, Rachel Mewes, told us her story to explain why the process takes its toll on parents.
“My daughter has a diagnosis of Down's Syndrome. She was diagnosed with Trisomy 21 before birth. Down's Syndrome is a clinical diagnosis of learning disability, yet, despite this, we were still required to submit extensive information on why Betsy needed assessing for an EHCP.
Being constantly made to revisit the things your child struggles with and to frequently focus on the negatives about your child's condition takes its toll on parents. We were required to fill out a parental views form, detailing everything we felt Betsy needed support with and had to input into an assessment from her nursery. Reading reports of how your child is assessed as developmentally delayed is tough. But we do it, as this is what the government says we have to go through in order to get our child what they need to access education.
Alongside those reports we had to detail all of the professionals involved in Betsy's care. Speech and Language therapist, Portage worker, Paediatrician, Physiotherapist and Occupational therapist and a few others. The LA are required to get information from them all in order to inform the report requesting an assessment that the panel will then approve or deny. We trusted them to do this. They didn't. A couple of weeks before the panel were due to meet I discovered that physiotherapy and OT had not been approached. My daughter has a physical disability and a report sent to the panel to decide if she needed an EHCP was devoid of any information on her physical needs. It was left up to me to approach OT and physio to ask them to submit information on my daughter. This was a hugely stressful experience as I had only found out what the LA had done so close to the deadline. OT managed to get their information in with a day to spare. Physiotherapy didn't. My daughter's EHCP assessment request went to the panel without any information from her physiotherapist.
These experiences fit into a few paragraphs, but they represent 6 months of stress, worry and trauma. What I didn't mention was that our request to even submit an EHCP assessment request was denied twice, before the council finally allowed us to. As a result, my daughter did not start preschool with her peer group. She still hasn't.”
The UK is a signatory to the European Social Charter, a Council of Europe treaty[256] that guarantees fundamental social and economic rights as a counterpart to the European Convention on Human Rights, which refers to civil and political rights. It guarantees a broad range of everyday human rights related to employment, housing, health, education, social protection and welfare. Article 15 guarantees the right of persons with disabilities to education, independence, social integration and participation in the life of the community and Article 17 ensures states provide children and young persons a free primary and secondary education. The government must do more to see that it is realised.
Hospital schools, medical Pupil Referral Units, Alternative Provision and home tuition all have obligations to maintain records, share information, and submit children’s records in the relevant census like any other educational setting. Simply attending one of these settings will indicate need.
At local level, schools use a variety of third-party providers to track and store health data online, often both extracting the core records from, and duplicating information held in, the school information management system (MIS).
Records can include local incident recording such as bumps and scrapes, injury, accident, hospital visits and being sent home with illness, as well as medication and allergy details.
A range of commercial third-party software is commonly used in UK schools in addition to what is tracked in core information management systems. iSams’s Medical Manager, Evolve Accident Book, and apps and platforms such as StudyBugs, Operoo and Medical Tracker compete for market space. These may receive a nightly transfer of personal data about staff, parents and pupils, including student photos, from the MIS, and can also store documents such as GP letters, create reports of trends over time by incident type or per child, and link to child protection data.
Section 100 of the Children and Families Act 2014 places a duty on governing bodies of maintained schools, proprietors of academies and management committees of PRUs to make arrangements for supporting pupils at their school with medical conditions and includes guidance on administration of medications.
Software may provide a vaccinations section for storing every vaccination, booster and notes, and appointments can be recorded complete with event details and medical notes. Results are displayed in a customisable grid and contain medical notes, warnings, health flags and contact cards. Some also have a large number of subprocessors.[257]
At national level, there are four datasets in which child-level data collected from children in school is stored and used, all of which result in data being sent to NHS Digital.
NHS Digital is an arm’s-length body of the Department of Health, and is the national information and technology partner to the health and care system.
Data held on the Personal Demographics Service (PDS) includes personal details such as name and contact details. As such it does not hold any information relating to a child’s education, but is in effect their electronic NHS “registration ID card”. It is used as the core identification and linkage key for other datasets.
Record level demographic and health data are collected twice in a child’s primary school experience. The National Child Measurement Programme (NCMP) measures the height and weight of children in Reception class (aged 4 to 5) and year 6 (aged 10 to 11), to assess overweight and obesity levels in children within primary schools. Before the programme starts each school year, local authorities write to the parents and carers of all children eligible for measurement to inform them of the programme. Parents or carers can choose to withdraw their child from the process by responding to the letter.
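The measurement itself reduces to a simple calculation, sketched below for illustration only: BMI is weight over height squared, and the NCMP classifies children against age- and sex-specific centiles of the British 1990 growth reference. The centile lookup is not reproduced here, and the thresholds shown are the commonly used population-monitoring cut-offs, stated as an assumption rather than taken from the programme’s documentation.

```python
# Illustrative only: the calculation behind an NCMP record.
# BMI = weight (kg) / height (m) squared. Children are classified
# against age- and sex-specific BMI centiles of the British 1990
# growth reference; the thresholds below are the commonly used
# population-monitoring cut-offs, assumed here for illustration.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index from the recorded height and weight."""
    return weight_kg / (height_m ** 2)

def classify(bmi_centile: float) -> str:
    """Classify a pupil given their BMI centile for age and sex."""
    if bmi_centile >= 95:
        return "very overweight"
    if bmi_centile >= 85:
        return "overweight"
    if bmi_centile > 2:
        return "healthy weight"
    return "underweight"

print(round(bmi(35.0, 1.38), 1))   # e.g. 18.4 for a Year 6 pupil
print(classify(91.0))              # "overweight"
```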
NHS Digital holds the datasets and is the Data Controller and Data Processor for the NCMP.
Primary Care Trusts (PCTs) were responsible for the collection, holding and processing of NCMP data until legislation[258] in 2013 transferred the responsibility to local authorities.
Secondary legislation[259] granted permission to local authorities and NHS Digital to hold and process personal identifiable data for the purposes of the National Child Measurement Programme.
Despite this, there is a failure at local and regional level to understand that the NCMP is a named, individual-level dataset of children's personal data, passed on to NHS Digital for linkage with existing and future NCMP collections and with other longitudinal health datasets it holds, and for distribution to other third parties.[260]
Torbay and South Devon misinforms children and parents, in the same way that other local authority areas do, that the data submitted do not include identifiable information.
West Sussex similarly states explicitly that the information submitted does not identify individuals.
There is therefore no explanation of fair processing given to children and parents as to how data are used within NHS Digital or distributed to third parties, and there is a lack of clarity in the 20-page Statutory Instrument[261] as to the point of disclosure at which data are made anonymous.
Demographic, behaviours and attitudinal data at individual level are collected in a biennial survey of secondary school pupils in England in years 7 to 11 (mostly aged 11 to 15), focusing on smoking, drinking and drug use. It covers a range of topics including prevalence, habits, attitudes and, for the first time in 2018, wellbeing. The 2018 survey was conducted by Ipsos MORI and questioned 13,664 year 7 to 11 pupils from 193 schools across England, between September 2018 and February 2019.
The Children's Dental Health (CDH) Survey covers England, Wales and Northern Ireland, and was commissioned by the Health and Social Care Information Centre. The 2013 survey was the fifth in a series of national children's dental health surveys that have been carried out every ten years since 1973.
The 2013 survey provides statistical estimates on the dental health of 5, 8, 12 and 15 year old children in England, Wales and Northern Ireland, using data collected during dental examinations conducted in schools on a random sample of children by NHS dentists and nurses. The survey measures changes in oral health since the last survey in 2003, and provides information on the distribution and severity of oral diseases and conditions in 2013.
NHS Digital is the Data Controller for the CDHS and holds the latest CDHS 2013 dataset.
Providers of publicly-funded community services are legally mandated to collect and submit community health data, as set out by the Health and Social Care Act 2012.
The Community Services Data Set (CSDS) expanded the scope of the Children and Young People's Health Services Data Set (CYPHS) data set, in October 2017, by removing the 0-18 age restriction. The CSDS supersedes the CYPHS data set, to allow adult community data to be submitted. The structure and content of the CSDS remains the same as the existing CYPHS data set. The Community Information Data Set (CIDS) has been retired, to remove the need for a separate local collection and reduce burden on providers. Reports from the CSDS are available to download from the web page Community-Services-Data-Set/reports.
The CYPHS data set was mandated for central flow to NHS Digital from 15 October 2015 (for September 2015 data) for all providers of publicly-funded community services, who are legally mandated to collect and submit community health data, as set out by the Health and Social Care Act 2012.
The CYPHS data set describes national definitions for the extraction of data in relation to a number of key areas.
Data may be pseudonymised during analysis; when published, data are at aggregate level with small numbers suppressed.
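Small-number suppression is a standard statistical disclosure control. A minimal sketch, assuming an illustrative threshold of five (the real rules and thresholds vary by publication):

```python
# A minimal sketch of small-number suppression as described above:
# aggregate counts below a threshold are withheld before publication.
# The threshold of 5 is an assumption for illustration only.
SUPPRESSION_THRESHOLD = 5

def suppress_small_numbers(counts: dict[str, int]) -> dict[str, str]:
    """Replace any count below the threshold with a suppression marker."""
    return {
        group: (str(n) if n >= SUPPRESSION_THRESHOLD else "*")
        for group, n in counts.items()
    }

referrals_by_area = {"Area A": 124, "Area B": 3, "Area C": 57}
print(suppress_small_numbers(referrals_by_area))
# {'Area A': '124', 'Area B': '*', 'Area C': '57'}
```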
While consent is not required to include patient data within the CSDS, patients do have a right to object to their data being collected.
| | National Child Measurement Programme (NCMP) | Smoking, Drinking and Drug Use among Young People survey (SDD) | Children's Dental Health Survey (CDHS) |
| --- | --- | --- | --- |
| Data type held | Demographic and health data at individual level (height and weight), NHS number | Demographic, behaviours and attitudinal data at individual level | Record-level demographic and dental health data (including oral inspection) plus behaviours data from children and parents about dental health and services |
| Children | Reception year (age 4-5) and again in year 6 (age 10-11) | School children in England aged 11-15 | Children in England in survey year |
| Volume of database | 1 million plus measured each year | 12.5k sample every other year | Current sample collected 2013 |
| Age ranges | 4-5 and 10-11 | School years 7 to 11 (secondary school, age 11-12 to age 15-16) | 5, 8, 12 and 15 years old |
| Data controller | NHS Digital, Data Controller and Data Processor, jointly responsible with PHE (parliamentary regulation) | NHS Digital | NHS Digital |
| Retention period | Unlimited; data held since start of programme in 2006 | Unlimited; data held since start of survey in 1993 | Unlimited; survey started in 1973 by ONS but NHS Digital only holds the latest (2013) dataset |
| How data are collected | Collected by LA staff, often school nurses | Administered in schools by contractor's interview team working with teachers; fieldwork procedures and questionnaires included in each national statistics report | Administered in schools by contractor's interview team and clinicians working with teachers; fieldwork procedures and questionnaires included in each national statistics report |
| Link to more info | https://www.gov.uk/government/publications/national-child-measurement-programme-operational-guidance | | http://content.digital.nhs.uk/article/3740/Dental-Health-Survey-of-Children-and-Young-People |
The Vaccination and Immunisation service is a key responsibility of Public Health England to ensure the safe and effective delivery of all vaccine programmes for children in England. There are a variety of vaccinations given at specific points during childhood, routinely administered in school settings. At the time of writing in August 2020, its future responsibility is unclear following the announcement that Public Health England is to be disbanded and replaced by a new National Institute for Health Protection (NIHP).
Some vaccinations are delivered by NHS community nurses and others via outsourced contractors, such as Sirona Healthcare. Information about the vaccines and their purpose is available online.[263] There is a mixture of communication approaches: in some areas a broad approach involving SMS text, the school website and a consent form sent home is used to communicate the programme and collect the consent needed to carry out the immunisation.
Consent to administer the vaccine
Information collected via Freedom of Information requests from school settings shows that consent to carry out the immunisation is clearly understood to be a necessary requirement. But the next stage of consent, to data transfers and re-use, is poorly communicated and rarely properly administered.
Parental consent to administer the vaccination is gathered consistently on an individual basis by the body responsible for the vaccine administration or by the school on their behalf, using a variety of mixed communication methods, often on paper forms, sent home for the guardian’s signature.
“This is coordinated and vaccinations administered by Public Health School Nurses. School does not hold any information or data.”
“Obtained and collated by the Health service.”
“The consent form is supplied to the parents by Liverpool Community Health NHS.”
“Please contact Sirona Healthcare for the relevant information.”
“This is provided by the local NHS trust and copies are not kept by the Academy.”
Sometimes consent decisions between children and parents may conflict.
“Consent is required from parents but if it is refused a pupil can consent for themselves if a health professional deems that they understand.”
Although in the majority of cases written consent is required from legal guardians, settings also suggested that this is not always the case.
"Year 11s can consent themselves, but we do always prefer a sign[ed] consent."
Consent to collection and use of data is missing
In contrast with the permission sought to carry out the vaccination itself, no consent is asked for data collection, further retention, or distribution.
There is no permission sought for the associated onward sharing of personal and medical data, no explanation given of how these data are stored in the central NHS Digital database, and no explanation of how data may be distributed from there. No opt-out is offered.
The only leaflet we were sent in 2017 which mentions the medical records at all dates from 2014 and says, "Information about your vaccinations will be added to your NHS records."[264] Which "information" will be added, and where, is not explained. There is also no consent option offered for its collection, nor any choice offered about the indirect re-use of data for commercial or other purposes, administered through Local Authorities, Public Health England and/or NHS Digital on receipt of the electronic records.
The overlap between education and children's social care, for the purposes of this report, is in three areas. First, children's social care, as a government responsibility, sits under the Department for Education. Second, personal data from children in the education system are often core data that feed into other systems at Local Authority level, in children's social care. And third, a number of commercial third-party providers sell schools software for tracking sensitive data about children in a school's safeguarding capacity, and these also feed into other public sector data systems, in the NHS, social care and policing.
The use of algorithmic decision-making in children's social care is ethically flawed.
In the report on the ethics of machine learning in children's social care (Leslie et al., 2020), researchers found that "models will potentially contain dangerous blind-spots" and that "these issues related to the safe and ethical functioning of a predictive ML (machine learning) model are magnified in high impact and safety-critical domains such as children's social care, for system errors, unreliable performance, and lurking biases may have life and death consequences."
It is one thing for a child to miss a place at university through an unreliable statistical model; it is another for them to be missed as at risk of mistreatment, or to be wrongly removed from their family, because the computer said so. 2020 should be the year that starts the end of such predictive algorithms in the children's social care sector. They are not fit for purpose.
The foreseeable failings of such predictive systems in children’s justice and child protection were extensively documented by Anderson et al. in 2006. Their work documented the databases that are first necessary in order to use other tools on top of the data at speed and scale.
How data were being built and used in 2006 indicated the direction of travel towards the position we have today: excessive, unreliable and inaccurate data used to try to turn hindsight into foresight, using historical data to predict the future not only of populations but of individual children. The assumption that this information provides knowledge is dangerous. That knowledge is now acted on, with effects on the most vulnerable children, who are least able to understand those effects or to seek redress when they are detrimental: in crime prevention, in child protection, and beyond. As FIPR wrote in 2006, 'the notion that computerisation can be used to drive changes in public-service organisation and professional working practices' has only grown, enhanced by commercial providers who profit from that misconception.
Rosamunde van Brakel in her essay, The Rise of Pre-emptive Surveillance, (Surveillance Futures, 2017) draws attention to comments made by Simon Wessely, in the House of Commons Home Affairs Committee fifth report (2008) that the problem in predictive systems is poor accuracy. In the same session the then Information Commissioner warned that automatic compiling and searching across databases to detect patterns of behaviour and predict future behaviour could "build up images of people which may take you in the wrong direction".
“If you are trying to identify children who will commit crimes later in life—I understand that the Cabinet Office is doing a lot in this sort of area—I understand their motivations and I understand what they are trying to achieve, but if they get it wrong—if they label that youngster as someone who is going to be a criminal in 10 or 15 or 20 years' time or that family as a problem family—it needs our intervention. Technology can take you a long way but it is not going to be 100% effective. When we raised concerns about profiling we raised concerns about social sorting. It is to signal the risks involved without the human intervention. Machines can do a lot to gather and to help you inform your decisions but without the human intervention I think there are grave dangers.”
Risk factor research is characterised by numerous methodological flaws such as oversimplifications, imputation, determinism, psychosocial bias and a scant evidence base. As a consequence ShareCare, which is based on RYOGENS, which is based on ONSET, which is based on ASSET, has ‘updated broadened and further engrained the misapplication of risk factor research within the youth justice process (Case and Haines, 2013). (Van Brakel, 2017)
eCAF is a web-based IT system to support the CAF process. The Common Assessment Framework (CAF) is a shared assessment and planning framework for use across all children's services and all local areas in England.
The CAF process is not for a child or young person about whom staff have concerns that they might be suffering, or be at risk of suffering, harm. It is aimed at children and young people with additional needs that are not being met by their current service provision. The CAF can be used for unborn children and for children and young people up to the age of 18, and its use can be extended beyond 18 where appropriate to enable the young person a smooth transition to adult services.
It can be accessed from any work PC and allows users to store assessments, Team Around the Child (TAC) members' details, TAC meeting notes, actions and service provisions. Tools such as Holistix eCAF are further assessment tools that support the local early intervention agenda; that solution is a re-creation of the decommissioned National eCAF system, with additional functionality, accessed by multiple agencies.
CPOMS currently supports just under 12,000 schools across the UK and in 23 countries worldwide from its Skipton office, and children's data are securely held in tier 3 data centres offsite.[265]
The company securely processes some of the most sensitive data about children one can imagine.
Many schools use the system for recording behaviour and SEND, but it is also a go-to system of choice for many schools in England to record more serious child protection incidents. The potential for scope creep in who can access what information clearly demands strong local governance processes and contextual understanding, which should be overseen by Local Authority or school management policies. Not only might a child's record include an outline body diagram on which staff have recorded the sites of injury or suspected abuse; staff's subjective opinions can include 'Professional Disagreement/Reconsideration', which may be important to note when the staff reading the information did not create it.
Information on children's records can be provided, where appropriate within legislative requirements, to other agencies by a school designated safeguarding lead (DSL): for example, to the Forced Marriage Unit or the police.
The programme can transfer files between schools if a child moves, carrying across a child's life a record that is relevant for child protection, for example. CPOMS contains secure granular data transfer control and a data retention control. Such control is the responsibility of the relevant Head Teacher.
In order to facilitate an immediate response to an urgent incident, all children's details are transferred to the company in the initial set-up, which, as with much software in the education sector, means the company receives a relevant subset of data about every child at a school, including those who may never have data subsequently processed about them in CPOMS.
MyConcern is a safeguarding software used by schools, colleges and other organisations in the UK and worldwide. As a brand of the company One Team Logic, created by former Senior Police Officers, and part owned by the Octopus Investments Group, MyConcern is designed to capture and distribute information about child safeguarding and child protection to enable staff to record, report and manage all safeguarding, well-being and pastoral concerns. Linked profiles of siblings and other pupils can feature in one record.
MyConcern supports 'working together' with other safeguarding colleagues and trusted external partners such as social care and police.
During initial set-up at a school using MyConcern for the first time, a record for *every* child may be exported from the school management information system to the company. Similarly to other safeguarding systems, this means data about children who may never need a record processed through MyConcern are still sent to the company.
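Neither bulk transfer is a technical necessity. A minimal sketch of a data-minimising alternative, in which a vendor-side record is created only when a first concern is actually logged; all names and the send_to_provider call are hypothetical:

```python
# A sketch of a data-minimising alternative to bulk onboarding:
# a child's record is sent to the safeguarding provider only when
# a first concern is actually logged, not for the whole roll.
# All names and the send_to_provider function are hypothetical.
school_roll = {"p001": "Pupil A", "p002": "Pupil B"}  # full MIS roll
provider_records: dict[str, dict] = {}  # what the vendor actually holds

def send_to_provider(pupil_id: str, name: str) -> None:
    """Stand-in for the vendor API call that creates a record."""
    provider_records[pupil_id] = {"name": name, "concerns": []}

def log_concern(pupil_id: str, concern: str) -> None:
    # Create the vendor-side record lazily, on first use only
    if pupil_id not in provider_records:
        send_to_provider(pupil_id, school_roll[pupil_id])
    provider_records[pupil_id]["concerns"].append(concern)

log_concern("p001", "Disclosure recorded by DSL")
# Only p001's data have left the school's systems; p002's never do.
print(list(provider_records))  # ['p001']
```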
In 2017 there were around 150 local authority virtual schools,[266] each with a Virtual School Head.
Virtual school heads promote the education and well-being of children who are looked after by the local authority. The heads also manage Pupil Premium Plus funding, an extra payment that schools receive for each looked-after child, and previously looked-after child, on their roll.[267]
We interviewed a Virtual School Head about the role data plays in their job and where they think it should be improved in policy and practice:
We walk a very narrow tightrope with the data gathered, used, and held on looked-after learners: what do we need to know and keep vs when is it right to share? All data used for the sole purpose of providing targeted support to meet their exacting and often multitudinous, complex needs.
RP is a Yr 8 learner who has recently come into care. The Designated Safeguarding Lead (DSL) as well as key staff within her mainstream school were instrumental in gathering evidence to support their concerns that RP was a victim of severe abuse by her father. RP had built a strong, secure attachment to the DSL whom she disclosed her abuse to. The DSL actioned a social care (SC) referral.
The Local Authority sought and was awarded a full care order for RP. RP remained at her school. It swiftly became apparent to her however, that it was not just the trusted DSL that knew about her but also her teachers who began asking ‘are you okay?’. One went as far as to have an open conversation with RP about being in care within earshot of other students.
Next, her peers learn she’s in care too! She only told one friend she thought she could trust.
RP has to come out of lessons for her Personal Education Plan (PEP) meeting with her SW, carer and a whole team of other professionals (CAMHS, the SENCo, someone from the Virtual School who tracks and monitors the attainment of looked-after children, drives up standards in educational settings, and champions their voice), all in school to talk about her and her education: good and bad. She feels shame; she feels uncomfortable that she's made to feel different.
In the meeting, how RP behaves at school and in placement is spoken about. How well she is achieving… or not. SMART targets are set to drive up her attainment, and a discussion on how pupil premium would be used to assist RP in achieving her targets is held and agreed.
The following term, her PEP meeting is synchronised with the Annual Review of her Education, Health and Care Plan (EHCP). She looks sad and uneasy as she and others in the room read information about her life before and during care.
News that RP is in care, and speculation as to why, spreads far and wide through her year group. The school that was once a safe and stable sanctuary for her is no longer. She begins to get bullied, children talking about her dad and what he did to her. Her behaviour swiftly deteriorates; she gets into fights. She is referred to external services to manage her behaviour. She wants to leave the school, wants a fresh start. Now with nothing to lose she begins to abscond from placement; there is concern she may be at risk of CSE (child sexual exploitation). Fixed-term exclusions build and she is on the cusp of a permanent exclusion. She needs to be moved before this happens. However, professionals tread cautiously in handling her as she's already twice tried to commit suicide... in school. Application is made to another school. Information is shared with the new school that she is in care, along with her vulnerabilities and risks. They receive her school file.
Home was tricky too. Her carer has given notice and wants her out as soon as possible. RP receives this as yet another rejection: from parents, school and now her carer. She is fast becoming numb to change. Her ‘story’ is shared with all along the way only to provide the very best ways in which to support her. She is numb to everything, losing more and more control of her life: decisions made for her. Does she know or care to know who her information is being shared with and how it is being managed?
Potential carers read her profile (unsure if anonymised) and decide whether or not to show interest in her. One does, but it is a carer in Wales; at least no one will know about her there and she will have the fresh start she craves. Application is made to a new school and she is accepted, falling into the 'excepted group' as a looked-after child. First day, her class teacher, wanting to be helpful and supportive, reveals she is in care in front of others, and the whole cycle of shame begins again.
Issues:
School had to share her ‘story’ with SC to safeguard RP. At this point, RP lost all control over where her data goes, why it goes and how it is stored.
Four questions:
1) What should we know?
2) Who should know?
3) When is it right to share information?
4) How do we protect the highly sensitive data with the respect that it deserves?
It is true that children in care do not want to have to repeat their ‘story’ over and again to the multitude of professionals that are in their lives, supporting them on the frontline and behind the scenes.
Yet more transparency is needed with young people: an audit trail of where her data has been, how it is being held, and by whom. Information is shared securely via email, in both the written and oral word, and via referrals to agencies who offer alternative targeted support to make a difference to RP's life.
Once pupils reach the age of 13, schools pass pupil information to the Local Authority and/or provider of youth support services, described as required to meet responsibilities in relation to the education or training of 13-19 year olds under section 507B of the Education Act 1996. The Department for Education model privacy notice says nothing about how this information may be used at national level, so this is not explained to schools, pupils or parents.
There is no opt-out, but which data items are processed can be limited on request. A parent or guardian can ask that only their child's name, address and date of birth be passed to their local authority or provider of youth support services, by informing the school; this right rests with the child from age 16 and up.
Inspectors may analyse pupil-level and group-level data, collected, created and ascribed during the course of a child's education, both before an inspection and on-site.
Inspectors access parents’ personal data and opinions submitted for the purpose of inspections.
In March 2017, Schools Week reported that the education watchdog planned to track Facebook and Twitter to check on schools, in its "innovation and regulation plan". It said Ofsted was engaging with the Department for Education in a "data science project" to "explore the possibility of using near real-time data and information from social media and other sources to predict and prevent decline in school performance".
The School Inspection Handbook, updated March 2018,[268] in its section on Outcomes for pupils, emphasises the importance of progress, especially of pupils currently in the school:
“In judging achievement, inspectors will give most weight to pupils’ progress. They will take account of pupils’ starting points in terms of their prior attainment and age when evaluating progress. Within this, they will give most weight to the progress of pupils currently in the school, taking account of how this compares with the progress of recent cohorts, where there is any. Inspectors will consider the progress of pupils in all year groups, not just those who have taken or are about to take examinations or national tests. As part of pupils’ progress, inspectors will consider the growth in pupils’ security, breadth and depth of knowledge, understanding and skills.” (Para. 175)
There is a growing use of the Ofsted brand as a carrot-and-stick in marketing third-party software by private companies.
“Always be prepared for Ofsted with seating plans ready to print as beautiful PDFs in seconds and successfully demonstrate why students are seated together.”
When a family registers a child for school, some personal details are passed over from the Admissions Process to the school. Name, home address, and date of birth for example. This forms the core of the child’s school record.
Legal guardians then provide additional factual information directly to the school during the admissions process for local purposes, such as immunisation history or planned method of transport to school, and first language at home.
This data is also stored in the school Management Information Systems (MIS), a software decision made by the setting, Multi Academy Trust or Local Authority.
This core record linked to a child’s unique pupil number is then built on in the course of a lifetime of education by a wide range of administrators, staff, and the write-function of any permitted third-party software.
Information is created by the school staff and stored in the MIS for administrative purposes that parents may or may not see, such as behaviour and attendance data.
The vast majority of a child’s record that grows over time, is not seen by them or their families.
There is rarely differentiation in the MIS between data provided by the child, provided by their family, created by staff, or created by a system. Some fields that were problematic, such as ethnicity 'ascribed by school', have been shut down. But there is far more opinion-based data just like it, which is given undue weight not only at local level but in transfers between schools; as soon as the data leave the school, they lose the context in which they were created and become perceived fact. Whether it is information collected from a child's Facebook record of attending an environmental protest and added to their permanent school record, or machine-created suggestions of being at risk of suicide, these data can have discrete but lasting consequences.
The weight it is then given by researchers is compounded in policy making.
The feedback loops in which fake data become fact are the greatest argument for rethinking the entire data collection design process in England: to reduce it to the bare minimum, as required under the data minimisation principle of data protection law, and to get back to a stronger, if narrower, set of sound data. That could then be better managed by families and kept accurate, with tools to view what is held, where it has been processed, and for what purposes, better enabling families' own oversight rather than relying on entirely absent national mechanisms.
Certain fields of school information management systems’ data are only collected and used for local administrative purposes, and are not sent to the Department for Education in the School Census. For example, parents’ email addresses.
But where the same data item may be collected for both national and local purposes, there is a problem by design in an MIS that does not allow any record of the permissions attached to a data item.
School information management system (MIS) providers tell us it would be too expensive for them to manage granular data permissions. For example, the family may be content for a school to know and retain information about nationality or adopted-from-care status, but not want those same data sent to the Department for Education in the school census. Right now, nationality is either present in the MIS field, or it is refused; it cannot be both at the same time.
But MIS providers seem content to let children, rather than themselves, pay the cost of these system limitations. Schools cannot honour their data protection obligations to restrict distribution where the system design extracts all or nothing. Similarly, these core MIS records are the data lakes from which third-party software extracts records to create core records in the further apps and platforms that schools choose to use in a child's daily administration, learning or safeguarding. The result for the child is excessive data processing by too many people, with knock-on effects in terms of increased risk of exposure, loss and misuse. Its most basic problem is that it is a breach of a law that requires children's data to be protected by design and by default.
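The granular permissions the providers say are too expensive are not conceptually difficult. A minimal sketch, with invented field names and purposes, of per-field, per-purpose flags that would let a census export respect a family's 'local use only' choice:

```python
# A sketch of granular, per-purpose field permissions of the kind the
# MIS providers say are too expensive: each field carries flags for
# the purposes it may be used for. Names and purposes are illustrative.
from dataclasses import dataclass, field

@dataclass
class MISField:
    value: str
    allowed_purposes: set[str] = field(default_factory=set)

pupil_record = {
    "name": MISField("Pupil A", {"local_admin", "census"}),
    # The family is content for the school to hold nationality,
    # but not for it to flow onward in the census return:
    "nationality": MISField("British", {"local_admin"}),
}

def export_for(purpose: str, record: dict[str, MISField]) -> dict[str, str]:
    """Build an extract containing only fields permitted for this purpose."""
    return {name: f.value for name, f in record.items()
            if purpose in f.allowed_purposes}

print(export_for("census", pupil_record))       # {'name': 'Pupil A'}
print(export_for("local_admin", pupil_record))  # both fields
```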
A child has no choice over whether their name is provided to a school for the admissions process, but they should have a choice over whether it can be processed to facilitate the research linkage of various datasets. Each dataset on its own holds less data, but once joined up they create new knowledge and insights (otherwise why link them at all), and the purposes of that new dataset could be far less beneficial for the child than enjoying the right to their school place. Most children's data are handed over by adults who have no idea where the data go and who uses them, and who are given no way to manage the child's rights on their behalf.
Before looking at the wide range of third parties and their software that extract data about a child through their school, we’ll first consider the system infrastructure that enables this.
It is also important to consider another layer of the problem of multi-purposing the same data when that data is subjective and in different people’s different best interests.
“Using data for multiple purposes is a problem, and surely everyone can see the risks of using teacher assessment to measure school performance. Teacher assessment is a vital component of school data but it is essentially human opinion, which is unavoidably broad and subjective at best. And using human opinion to measure the performance of humans is probably not going to work out well. This is the paradox of accountability in education: the high stakes undermine the very thing the system relies on to function.
To put it bluntly, you can have reliable teacher assessment or you can use it to measure performance. That’s the choice. So, what should the DfE do?
The answer is not to increase the use of standardised tests in primary schools or even to reduce the number of statutory assessments. The answer is to reduce the amount of data that is collected and reduce what is done with it.” (Pembroke, 2020a)
Chrome Sync enables Google Account holders to log in to any Chromebook or Chrome browser and find all of their apps, extensions, bookmarks and frequently visited web pages. It is one of the reasons why Chromebooks have become popular in classrooms, especially for schools that can't afford a device for every child.
Syncing works with Chrome OS version 18 and later, and is on by default when users sign in. Personally identifiable Chrome Sync data in G Suite accounts is used to power features in Chrome for each account user, for example allowing students to access their own browsing data and settings across devices.
In addition, Google compiles data aggregated from millions of users of Chrome Sync and, after completely removing information about individual users, uses this data to improve its services holistically; for example, if data show that millions of people are visiting a broken web page, that site would be moved lower in the search results. Google states that this is not connected to any specific person, nor used to analyse student behaviours. If they choose to, administrative staff can disable Chrome Sync, and users can choose what information to sync.[269] Google says that G Suite users' Chrome Sync data is not used to target ads to individual students.
Just as commonly, social logins are being used by children as a single sign-on that employs a child’s login information from a social-networking service such as Facebook, Twitter, or Google+ to sign in to a third-party website instead of the user creating a new login specifically for that website. It is designed to simplify the login process.
Some educational settings ask pupils to use Facebook, for getting access to a shared project for example, but this cannot reasonably or lawfully be 'required' by staff, as there are easy alternatives that do not infringe on pupils' fundamental rights in the way that requiring a pupil to create a Facebook account they do not already have does.
Schools, in creating Google log-ins for children under the age of 13, also create social log-ins which are used on sites such as school-related apps. Schools we know of also require students to use YouTube, when recording team performance projects in drama for example. Such third-party processing is cheap or free and convenient, but schools cannot require children to use it, and should remember that 'consent' in these circumstances can only be considered freely given, and therefore lawfully obtained, if an alternative is on offer. Contrary to widespread media belief, not every child uses TikTok or is happy to post their face to social media platforms. If a less intrusive tool is available, it should be used for all students.
At university some ask students to use Facebook as an identity manager. Edwin, an AI-powered English tutor, requires students to sign into Facebook Messenger to use it for example.
An additional problem in asking school staff and children to use a social media platform for a school purpose is that the use conflates the personal account with 'professional' / educational purposes. It means that content you look at as a single user is all tracked and stored against the one account, which is especially problematic for school staff when it comes to Subject Access Requests.
For staff, as much as for pupils, use of single sign-on can be poorly understood, and Google encourages users to link accounts in order to see stored files or shared documents more easily. It can mean that staff who have merged accounts access content from a personal account that is now linked and recorded under their professional school Google ID, raising questions for school IT admins, such as when schools need to purchase more account space.
Unless we know what is collected, why it is collected, how it is used and we can understand how a company uses that knowledge about us to target ads or influence our actions on a page, we cannot properly mitigate harmful effects of discrimination, influence, or bias.
Lack of training often leads to less than optimal practice. When using YouTube on a Google domain, it is possible to restrict[270] videos to educational content, or to content pre-approved by teachers, who approve videos by logging into YouTube. But from the case studies we hear, this approach requires some knowledge of how to do so, and is applied inconsistently.
1-2-1 parent purchase and leasing schemes are increasingly routine, and yet we question whether these meet the lawful[271] or ethical requirements on charging for school activities, especially where the purchase is in instalments and devices remain the property of the school until all payments have been made and ownership has been handed over to the parent, while at the same time parents are denied complete control of the device and must subject its use to school filtering and monitoring.
While not all school iPad schemes are extortionate, there does seem to be a huge discrepancy in terms of how much parents are being asked to pay. This is perhaps linked to how much of the school budget has been allocated to subsidise the scheme, as well as which models and payment plans are on offer.
Some schools offer loan schemes for pupils attracting pupil premium, but still require "a small charge to support the idea of ownership of the programme", or may offer others 'financial help'. This is a significant move away from state education providing the minimum required provision. Some schools demand that families buy iPads, Chromebooks or laptops under Bring-Your-Own-Device schemes, across multiple year groups, which can have a cumulative financial effect on families with multiple children, and a stigmatising effect on parents who raise concerns and on those who would prefer not to use them but are not given a choice.
“Sandringham School operates a Bring Your Own Device (BYOD) policy where all students from Years 7 – 11 are required to have a tablet device in school for use in lessons and at home. Sixth Form students require a laptop dependent on their subject choices.”[272]
Many policies we have seen prohibit children from bringing an existing personal device from home. The hardware must be purchased fresh from schools, and commonly come with insurance, warranty and a case. If one of the more convincing arguments in favour of using these tools for learning is that it encourages more out of hours pupil practice, it seems unnecessary to require the exclusive use of newly purchased hardware, when the same apps could be installed on an existing personal device.
Some schools that do let students use an iPad they already own only do so if it is under a year old; others accept an existing iPad Air 2 or better, purchased within the last three years.
Due to safeguarding procedures schools may insist that these iPads are factory reset and set up fresh by the school.[273]
But generally, parents/carers are constrained in what may be downloaded by the filtering system that is imposed. Schools generally require that access to the Internet be via their filtered WiFi, some of which will require software installation. All iPads generally also have school tracking systems installed, using Mobile Device Management systems (such as Casper/Jamf), and 'Find my iPad' (Apple) functionality will be set to active.
Some case studies suggest that schools that try to bolt iPads onto a Windows infrastructure, “had never quite worked seamlessly” and that the lack of interoperability is problematic. (Jigsaw24, 2020)
Schemes may rely on the backbone of Google for Education suite of tools, and / or Apple apps. This again embeds the powerful market positions of two key players in the sector and supports the procurement models and distributors of both hardware and software, such as the Grids for Learning.
Whether there is a conflict of interest in any recommender schemes offering edTech demonstrator training to sell its benefits to prospective schools, and also delivering its infrastructure underpinned by Google, Microsoft and Apple,[274] may depend on your political or ethical standpoint.
The use of biometrics in schools began in 2001 with fingerprint technology as a means to identify children taking out library books from a school library. The company, MicroLibrarian Systems, approached the Information Commissioner’s Office (ICO) in 2001 to see if using biometric technology would breach the Data Protection Act. The ICO’s reply indicated it would not. In fact, their reply from 2001 suggests limited knowledge of the technology at that time and little thought given to the implications for long-term maturity and application of the technology.
"It is understandable that concerns will be raised over the use of such technology if it is believed that it involves the holding of a database of pupils' fingerprints."
Letter from ICO to MicroLibrarian Systems, July 2001.
The technology takes points from a fingerprint, known as loops, curves and swirls, and stores the 'map' of those salient points of the fingerprint as a number string, via an algorithm. Children may be asked to present the same finger to the machine three times, each scan potentially slightly different. Parameters are then set for matching that fingerprint template when it is next presented to the machine. The database stores, in effect, a number code of a child's fingerprint, referencing their name, photograph, and associated personal data.
“However, from what I understood of our discussions although theoretically possible to use the information obtained from this system to match fingerprints taken from the scene of a crime, the resources this would require make this highly impractical.”
Letter from ICO to MicroLibrarian Systems, July 2001
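To make the principle described above concrete, here is a minimal toy sketch of template matching, in which the system stores a numeric template rather than an image and accepts a fresh scan within set tolerances. Real vendors use proprietary minutiae algorithms; the numbers here are invented.

```python
# A toy illustration of the principle described above: the system
# stores not a fingerprint image but a numeric template of salient
# points, and matches a fresh scan against it within set tolerances.
# Real vendors use proprietary minutiae algorithms; this is a sketch.
import math

def match(stored_template: list[float],
          fresh_scan: list[float],
          tolerance: float = 2.0) -> bool:
    """Accept the scan if it is 'close enough' to the enrolled template."""
    distance = math.dist(stored_template, fresh_scan)
    return distance <= tolerance

# Enrolment: e.g. derived from three slightly different scans
enrolled = [12.1, 40.3, 7.8, 55.0]

print(match(enrolled, [12.4, 40.0, 8.1, 54.6]))  # True: same finger
print(match(enrolled, [30.2, 11.5, 60.3, 9.9]))  # False: different finger
```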
Pippa King reports that it is possible to transfer biometric (numeric) data from one database to another; interoperable programmes established by the National Institute of Standards and Technology (NIST) can do this. She therefore believes UK police could have access to school fingerprint biometric databases, a possibility apparently confirmed by David Smith, the Deputy Information Commissioner, giving evidence to the Home Affairs Committee inquiry 'A Surveillance Society' in May 2007 (see Q50 and Q51). David Smith's answers revealed that police could potentially access a school biometric [fingerprint] database without the child's or parent's knowledge.
In May 2013 King sent every police force in the UK a Freedom of Information request asking if that force had ever accessed a school biometric database.
4 forces failed to answer the request (8%)
2 forces answered stating they had no information (5%)
4 forces stated no they had not accessed a school biometric database (8%)
38 forces refused to answer citing cost (79%).
It is therefore unknown if school pupils’ biometric data has been transferred from a school database to another biometric database or whether data has been scanned from a school biometric database.
Britain was the first country in the world to introduce biometric technology into schools, in 2001. UK schools have since used a range of biometric technologies.
All of these were introduced in schools quite often without the consent, or even knowledge, of the pupils' parents/carers. The Protection of Freedoms Act 2012, with provisions enforceable from September 2013, requires schools to obtain the consent of parents and of children under 18 to process and store students' biometric data.
Unfortunately the same rights that are afforded to children in England and Wales with regards to consent to educational establishments taking and processing their biometrics do not apply to children in Scotland and Northern Ireland.
Since 2001 biometric systems in schools have been used to verify children’s identity for food, library books, attendance, locker entry and to make payments for school trips.
The biometric of choice that is prevalent in UK education is the fingerprint reader, mainly used in secondary schools and further education colleges for canteen use.
2002: Privacy International estimated that over 200,000 children from the age of 7 had been 'fingerprinted'.
2002 - 2010: Opposition Members of Parliament, from the Conservative and Liberal Democrat parties, frequently asked questions in Parliament about the practice of using children's biometrics in schools. The responses from the then Labour government were non-committal and consistently uninformative.
Non-statutory guidance was published in 2007 by the government quango BECTA (British Educational Communications and Technology Agency) and by the Information Commissioner's Office.
May 2010: The Conservative and Liberal Democrat parties formed a coalition after the UK general election.
February 2011: The Protection of Freedoms Bill was introduced. Included in the bill was a provision for schools to require written parental consent in order to process children's biometric data.
May 2012: The Protection of Freedoms Act was given Royal Assent and passed into law in the UK. Chapter 2 of the Act, sections 26-28, deals with consent when processing a child's biometric data. This became enforceable in September 2013.
December 2013: The following document was published by the Department for Education as guidance, detailing schools' responsibilities under the Protection of Freedoms Act 2012 with regard to parental consent and the processing of under-18-year-olds' biometric data: Protection of Biometric Information of Children in Schools - Advice for proprietors, governing bodies, head teachers, principals and school staff, September 2013.
September 2013: The Protection of Freedoms Act 2012, sections 26-28, became enforceable.
Tracking the use of biometrics in schools is problematic because neither local authorities, the Department for Education, nor the Information Commissioner's Office records, or requires schools to record, an explicit log of biometric data held on children. The only legislative requirement, since September 2013 under the Protection of Freedoms Act 2012, is that educational establishments obtain consent from parents and pupils to store and process the biometric data of pupils up to the age of 18.
Sections 26-28 of the Act cover the requirement to notify parents and obtain consent before processing a child's biometric information.
However, how this legislation is interpreted by schools varies widely.
In December 2016 Freedom of Information Requests (FOIRs) were sent out to 216 secondary schools in Northern Ireland, Wales and England.
Just under half, 103 schools (47%), answered.
Based on those responses, 59 schools (57%) used a biometric system, and 20 of those schools (37%) used biometrics for more than one application.
91% of the 59 schools used biometrics for the canteen
13.5% used biometrics for the school library
13.5% used biometrics for registration
22% used biometrics for other purposes (printer, copier, door access, school gate, laptop, school trips)
Whether these figures are an accurate indication of how many schools use biometric systems is unknown as the Freedom of Information request response rate was poor.
In Scotland, schools are not obliged to answer Freedom of Information requests directly under the Freedom of Information (Scotland) Act 2002, so five local authorities covering 83 secondary schools were sent FOIRs on the use of biometric technology in schools.
Four local authorities replied (covering 53 schools), among which only two schools were said to use biometric systems.
England: 8.8 million children in 24,000+ state schools;[275] 20,832 are primary schools, 4,188 are secondary schools. Approximately 70% of secondary schools use fingerprint readers.[276] There are no statistics for primary school use, but it has become common in school libraries since 2005.[277]
Wales[278]: 470,000 pupils in Wales. Use is widespread[279], but numbers unconfirmed. We estimate upwards of 70% across primary and secondary schools, similar to England.
Scotland: 700,000 pupils. In use by around one-third of Local Authorities; where used, close to 100% uptake is common.[280] (For regional distribution in Scottish Authorities see Fig. 3.)
Northern Ireland: 335,000 pupils. The number using biometric readers is unknown. (Fingerprint readers were introduced in N.I. in 2010.)[281]
We can provide numerous examples of where consent is invalid in practice, even setting aside the power imbalance alone. Of parents whose children were using biometrics in schools, polled by Survation in 2018,[282] 38% said they had not been offered any choice, and over 50% had not been informed how long the fingerprints or other biometric data would be retained, or when they would be destroyed, despite the Protection of Freedoms Act 2012 (which only applies in England and Wales) requiring that active parental consent be sought and that an alternative be on offer without detriment. In particular, children already at a disadvantage and in receipt of Free School Meals (FSM) can feel disempowered to decline the use of the fingerprint system, where they feel at risk of further disadvantage.
There is no benefit from the fingerprint per se as protection from discrimination by not revealing which child is or is not in receipt of FSM (an argument often promoted by vendors), because every child, whether using a fingerprint, a card, or giving their name, uses the same cashless payment system.
Andi, a young person at the Warren Youth Group in Hull, is just one example of a common story[283]. “The option for me was either you get the biometrics [taken] and am able to go to the school, and can participate and can get into the school, you had use your thumbprint to unlock the door, or you just couldn’t go to the school...and you could bring your own packed lunch but you’d be at more of a disadvantage from everyone else.”
Using biometric technology in schools opens up a wider debate on the proportional use of such technology in education when another form of identification, PIN or swipe card, would suffice.
It is primarily a form of verifying a student for canteen identification and cashless catering, library usage, etc. (and, in some countries, a form of verifying a pupil for an exam or a distance computer-based course).
More broadly, the growing use of educational technology demands verification of the student using the computer or tablet they are working on.
For example, in San Diego a form of facial recognition was trialled, scanning the student's face every 60 seconds to confirm that the student doing the work was the actual student, rather than a friend, sibling or parent. Parents and students had grave concerns about privacy (San Diego Union Tribune).
"Encinitas Union launched its iPad program in 2012, equipping each third-through-sixth-grade student with a device to use for school work and lessons. Since then, the district has put iPads into the hands of all of its 5,400 students, at a cost of $2.7 million. 'It's creepy to take a photograph of a kid every 60 seconds,' said Gil Saidy, who has a third grade son at Flora Vista Elementary School. 'I don't trust them. I don't want these roving cameras in my house.'"
Findbiometrics (the online biometric industry magazine) said, "At a time when biometric technology is increasingly finding applications in schools – from deployments as innocuous as cafeteria lunch administration to more serious security systems – the EUSD fiasco signals how precarious this area can be, given the privacy and security concerns that often seem to come up with public biometric deployments, and parents’ particularly strong interest in protecting their kids."
Research into the perception, peer pressure, and psychological effects of surveillance only occasionally looks at biometrics in children. This is an area where more research would be welcome.
One of the main reasons parents objected to the use of biometrics in schools a decade ago was concern about the security of the data.
A child's biometric data, 'secured' now on what will be an immature school system in the decades to come, could potentially be compromised quite easily. That is an unknown that has yet to be understood, even now. A child's biometric data needs to be secure and stay uncompromised for their lifetime; we have no knowledge of how their biometric data may be used in years to come.
If their biometric algorithmic data is compromised there is no knowing how this may affect their ability to (for example) obtain a bank account, mortgage, or life insurance.
Many computer industry experts have expressed concerns about the use of biometrics in education; there are too many examples to list here, but a comprehensive list of security and computer industry experts and their concerns about using biometrics in education has been compiled.
More recently, from edtechmagazine.com, 'Are Biometrics in K–12 Schools Really Necessary?' (September 2017):
“…if the biometric data is stolen, it is forever compromised.
Biometrics also can be used to collect data on student behaviour, which causes some to fear that young people will grow up thinking it’s normal to be tracked.
This is why informed parental consent is absolutely necessary, something that hasn't always been taken into account. Parents and students need to be well informed about how it all works and the consequences of a breach, with the choice to opt out.
Biometrics may be a great time-saver in the short term, but when it comes with the risk of serious long-term consequences for our students, is it worth it?"
Leaton Gray and Phippen (2017) found that "pupils were not inducted into biometric systems in the same way that they had been in 2006 when such systems were relatively novel. There were no talks on the purpose of the system and related data privacy issues (indeed we found that data privacy was not mentioned at all other than in the context of e-Safety)."
Frustration with the systems was summarised by the researchers in four areas: pupil resistance, pupil mistrust, hygiene, and parental surveillance. And while we agree with their assumption that school staff intentions, where these technologies are employed, are benign, we also support their finding that there was no reflection on the potential future impacts: "staff and pupils are persuaded by the convenience of such systems to a point that they do not reflect on the potential social harms, or related legal issues. Schools did not have effective data protection policy or practice in place to be able to manage data such as biometrics effectively and in a legally compliant manner."
When another, more proportionate method using personal data can be used to access the canteen or library, or to complete registration, i.e. a PIN or swipe card (schools must offer some form of alternative under the Protection of Freedoms Act 2012), then this must surely be considered a more prudent use of technology in schools, avoiding the potential onward transfer of data via the commercial companies supplying biometric capabilities and possible security services access and sharing.
iDStore integrates with all major 3rd party Library Management systems, which allows library staff to track which pupils have borrowed which books from within an Identity Management platform. Biometric recognition is part of that system.
Print and Copy systems[284] may also enable staff and pupils to print and retrieve documents upon verification at the printer; either by entering their individual PIN or via fingerprint recognition.
In addition to the canteen fingerprint readers themselves (suspended at UTC Leeds,[285] for example, in the COVID-19 response, but routine in everyday use before it), the software connected to the cashless catering systems in thousands of UK schools may process religion (ParentPay[286] did so directly until 2018) or can infer it from other personal data, by profiling meal purchasing patterns: a history of a pupil's meal selections and spending on school meals or non-meal-related items, including free school meals.
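The inference risk is easy to demonstrate. A deliberately crude sketch, with invented data and an invented rule, of how a purchase log with no 'religion' field can still be profiled towards a protected characteristic:

```python
# A deliberately crude sketch of the inference risk described above:
# even without a 'religion' field, a purchase history can be profiled
# to guess at protected characteristics. Data and rule are invented,
# to show why purchase logs are more sensitive than they appear.
purchases = {
    "p001": ["beef burger", "ham roll", "pizza"],
    "p002": ["vegetable curry", "fish fingers", "cheese salad"],
}

PORK_ITEMS = {"ham roll", "sausage roll", "bacon bap"}

def flag_never_buys_pork(history: list[str]) -> bool:
    """Flags pupils whose history contains no pork items; a profiler
    might (wrongly and unfairly) read religion or belief into this."""
    return not any(item in PORK_ITEMS for item in history)

for pupil, history in purchases.items():
    print(pupil, flag_never_buys_pork(history))
# p001 False, p002 True: an inference the family never consented to
```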
Where fingerprint readers are used, schools can offer alternatives such as a touch card, or pupils simply giving their name at the canteen till or to the librarian. Since the Protection of Freedoms Act 2012 made it a requirement for an alternative to be offered, and since where such an alternative is offered a higher standard of data protection and privacy practice is possible by design and default, we recommend that the routine use of fingerprint readers, and of facial scanning or recognition, in all educational settings be found unlawful by the ICO.
In February 2020, the President of the Personal Data Protection Office of Poland (PUODO) imposed a fine of PLN 20,000 (EUR 4,700) on a Primary School in Gdańsk for unlawful processing of children's biometric data when using the school canteen. In this case the UODO found that the consent given by the parents was not valid in particular because of the imbalance of power between the parties, hence the processing of biometric data did not have a valid legal basis. It also stressed that the identification of the students could have been achieved through less intrusive means. For the mentioned reasons, the UODO ordered the primary school to delete the biometric data concerned, and to cease the collection.
Based on evidence from other EU countries, and acknowledging the 2020 ruling in Poland on fingerprint use specifically, and the rulings on facial recognition in both Sweden and France, we believe that the UK is the only country in Europe that has propagated the widespread use of children’s biometric data for routine practices in schools, such as an ID for cashless catering systems, or to borrow library books. This UK expansion was enabled in part by the ICO and Department for Education response[287] to early UK commercial lobbying (2001).
We want the UK Regulator, the ICO, to better protect children’s rights and freedoms in a way that fits the current understanding of the sensitivity of biometric data as recognised under the GDPR and UK Data Protection Act 2018. The risks to the rights and freedoms of the child under UNCRC Article 16 may not be fully realised yet today, but the normalisation of such technology clearly underplays the lifelong importance of their biometric data to children.
Cashless payment systems require online infrastructure both at home, and in school. Families need to be able to go online to manage the financial account, to make payments or ‘top up’ the prepayment account from which a child’s spending in school is deducted.
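To illustrate the prepayment model in the simplest terms, the sketch below shows the flow described above: a parent tops up a balance, and a child’s spending at the till is deducted from it. This is our own minimal illustration; the class and amounts are hypothetical, not any vendor’s actual API.

```python
# A minimal sketch of the prepaid account model described above.
# Hypothetical illustration only: not any vendor's actual API.
class PrepaidAccount:
    def __init__(self, balance_pence: int = 0):
        self.balance_pence = balance_pence

    def top_up(self, amount_pence: int) -> None:
        """Parent adds credit, online or as cash via a PayPoint outlet."""
        self.balance_pence += amount_pence

    def spend(self, amount_pence: int) -> None:
        """Deduct a purchase, triggered by a card, fob or biometric ID at the till."""
        if amount_pence > self.balance_pence:
            raise ValueError("insufficient balance")
        self.balance_pence -= amount_pence

account = PrepaidAccount()
account.top_up(1000)          # £10.00 top-up
account.spend(230)            # £2.30 school lunch
print(account.balance_pence)  # 770, i.e. £7.70 remaining
```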
For families without any online access, the cashless system may not be the convenience it is marketed as, but an added burden. Instead of giving cash or cheques straight to the school office, they must take printed handouts to a PayPoint outlet in a local shop that will perform the computer interface part of the process on their behalf and process the cash.
Eduspot’s SchoolMoney system suggests ‘families without online banking can make school payments in supermarkets, newsagents and post offices.’
But as sQuid notes, “not all schools have PayPoint top up functionality, so if you would like to use PayPoint to top up your sQuid account, please contact the school directly.” Students use a variety of identifiers in school to make the payments; these include smart cards, biometric and picture identifiers, and key fobs, which are all linked to the online sQuid account.
Returning cash to parents has increased under COVID-19; ParentPay uses CashOut payments to refund PayPoint payments, for example when a school trip has been cancelled, or for a dinner money balance remaining at the end of term.
Pay+ is another system, designed by Atomwide, which claims to serve over 2 million UK users. It is owned by AdEPT, which in London is Chief Technology Partner to the London Grid for Learning, supplying over 3,000 schools and organisations across the public sector.
Biometric reader equipment in the UK is commonly used to accompany and interface with cashless systems. At the time of writing we believe that England is the world’s largest user of this kind of technology with school children in educational settings.
Concern over the hygiene of shared fingerprint touch points has meant some schools have suspended the use of fingerprint readers under COVID-19.[288] Alternatives can include a key card, fob, or simply using your name at the canteen till.
CRB Cunninghams is another leading provider of cashless catering solutions to the UK education market, specialising in cashless, online payment and identity management systems with a focus on primary and secondary schools. CRB Cunninghams’ solutions are used in over 2,500 schools across the UK. Bought out in 2013, it is part of the Jonas family of software companies, which operates 95+ independently managed software companies around the world in 25+ vertical markets.[289]
Cameras are becoming increasingly normalised in a child’s everyday environment. A child may pass CCTV multiple times in the playground, school library and corridors across the course of a single day.
When we polled 1,004 parents of children in state education aged 5-18 in 2018, 53% said that their school used CCTV or IPTV systems, but 25% replied that they did not know. Schools operate these systems in a variety of ways, but CCTV that is always on and running in an IT room or library freely accessible by visitors makes children’s images accessible to an unlimited number of people in passing. Not every school sets these systems up in a secure way. This was exposed on a magnified scale in Blackpool in February 2018,[290] when a US website was discovered streaming footage from internet-connected security cameras around the world, including sites in the UK. Such set-up and design errors must be avoided.
It is also increasingly common to see news reports of CCTV in school toilets[291] and parents upset at the intrusion and lack of consultation. There is no corresponding evidence of effective behaviour change, nor of the psychological effect on children, as a result of introducing cameras in cloakrooms or whole-school surveillance.
Any intrusion on privacy must be necessary and proportionate in each and every case, and if footage is ‘only examined once a year and two cases’ in five years[292], the complex and costly technology solutions do not appear to reach that high bar.
Ten years ago, Casella (2010) was already writing about the integration of cameras into other building infrastructure when used as human presence detection tools (automatic locks, controls for lights and thermostats) and, with it, how much less noticeable cameras could become. We explore this further in the day-in-the-life case study.
A further concern is the potential for scope creep in CCTV, including risks that come from the joined-up nature of combining facial recognition technology, video and physical tracking. Systems that are common in the US under the arguments for anti-shooter software are thankfully not applicable here. But we should not be complacent about the privacy risk, or assume UK schools don’t use tracker systems.
Body cameras worn by children’s road crossing patrol officers[293] and by school staff in and out of the classroom appear increasingly normalised.[294] There was limited sector reaction when Ofsted announced it would begin using body cameras in enforcement inspections in early 2020.[295]
Basingstoke College, a Further Education college, has a broad range of pupil ages. The groups that have used the cameras are all aged over 16, equivalent of year 12 upwards. 96 students have used the cameras to date. The Data Protection Impact Assessment only suggests they are using recordings in classes for teaching and learning purposes.[296]
We contacted the Head of Marketing for Reveal, the company behind the Calla brand, but were unable to get any response to our messages asking for more information about their uses in what appears to be different South Hampshire settings.
The Calla/Reveal website in 2019 spoke of a South Hampshire case study of staff using body cameras when school intruders were on the premises,[297] and a Headteacher who “didn’t understand why schools aren’t already using them. We were careful to say though that we were not bringing the cameras in to deal with behaviour from our own students; this is for the wellbeing and safety of our students.”
In 2017 the company wrote that they did not “underestimate the privacy concerns and ensure that all footage is recorded on encrypted cameras and accessed through secure video management software – similar to that being used by the police.”[298]
Dr Emmeline Taylor has written widely about the effects on children and unintended consequences of CCTV and other camera surveillance in schools, and addressed some of the key questions in her presentation, Life through a Lens,[299] prepared for the launch of this report.
Radio Frequency Identification (RFID) is the wireless non-contact use of radio frequency waves to transfer data between tags and sensors.
A West Cheshire school adopted technology from Zebra Technologies to tag and track people around the campus. In a three-year trial that ended in February 2013, pupils at West Cheshire College wore tags that allowed the college “to monitor and track the location of all staff and students” and triangulate[300] their location in detail throughout the college's three campuses. "The technology was introduced with the aim of assessing how it could be used for self-marking class attendance registers, safeguarding purposes, and to improve the physical management of the buildings."[301]
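As a rough, hypothetical sketch of how such a tag-tracking system can locate a person, the example below estimates a tag’s position from the signal strength seen by fixed readers. The reader layout, the log-distance path-loss model and all values are invented for illustration, and are not Zebra Technologies’ implementation.

```python
# Hypothetical sketch: estimating a tag's position from the signal strength
# (RSSI) observed at fixed readers. Coordinates and constants are invented.
READERS = {               # reader positions in metres (invented floor plan)
    "corridor": (0.0, 0.0),
    "library":  (30.0, 0.0),
    "canteen":  (15.0, 25.0),
}

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Approximate reader-to-tag distance via a log-distance path-loss model."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def estimate_position(readings):
    """Crude trilateration: centroid of readers, weighted by nearness."""
    total_w, x_acc, y_acc = 0.0, 0.0, 0.0
    for reader, rssi in readings.items():
        x, y = READERS[reader]
        w = 1.0 / max(rssi_to_distance(rssi), 0.1)  # closer readers weigh more
        total_w += w
        x_acc += w * x
        y_acc += w * y
    return (x_acc / total_w, y_acc / total_w)

# One sighting of a pupil's badge, as RSSI in dBm seen by each reader
print(estimate_position({"corridor": -60, "library": -75, "canteen": -70}))
```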
The College had a fixed total price for the full build that amounted to approximately £65 million for both campuses; this included integrated technology, including the wireless network, CCTV and RFID.[302]
Software apps and platforms that may be used by both the school staff and children directly, and sometimes also by parents, may enable a broad variety of functions, from administration of school-home interactions, to classroom management and learning.
Some platforms enable a whole range of features from school sms services and push notifications, classroom resource storage to homework management, data and document management and distribution tools to letter merges and video sharing.
Parents log in to an app or online platform to report absence and the reasons for sickness, or manage financial top ups to a child’s financial account with the school to pay for routine classroom materials such as Science Lab costs, Technology class fees, Art materials, Home Economics Food costs, or make payments for one off items such as school trips or to make school fundraiser donations.
The assumption by Moe in 2014 that some technologies are time or money savers for schools may be correct if one looks purely at the monetary cost of some administrative processing tasks: “By strategically substituting technology (which is cheap) for labor (which is expensive), moreover, schools can be far more cost-effective than they are now—which is crucial in a future of tight budgets.” But technology rarely enables a substitution of technology for labour, as Moe and Chubb suggested in 2009; rather, it more often displaces it, in this case redistributing some of the financial administration work from school support staff to parental workload instead. The assumption is also misplaced when it comes to learning tools. In educational tools it is the public sector worker, the teacher and the administrator, whose time is used to set up and administer the data that goes into these digital tools. That state-paid labour becomes a free resource that private companies use to develop their product further, or to profit from its repurposing: upselling services to parents, using data for marketing, or embedding market position for other hardware and service offerings.
Children go to school for a wide range of purposes, but their right to education, embodied in Article 29 of the UN Convention on the Rights of the Child, does not include a caveat that it should be part of a Faustian bargain with for-profit companies, or even social enterprises supported by private equity, to create their product.
To understand this in detail, listen to the then CEO of Knewton, Jose Ferreira, presenting at the 2012 White House Datapalooza, where he explained how the activities of the users of Knewton products do the work needed for the company to create its product.
“if you do 10 minutes of work in Google you produce a dozen data points for Google. Because everything that we do is tagged at such a granular level if you do 10 minutes of work for Knewton you cascade out lots and lots of other data.” (Ferreira, 2012)
By using the product you create a resource that the company can profit from. Users do work for the company. Children become child labourers. And the teaching staff’s time is unseen and unpaid labour that goes into supporting that learning, donated free for the company’s benefit.
When a product is free to the user, the company is often, although not always, making a profit by tracking how long each user spends on a web page, which page they go to next, and whether or not they clicked on the advert on the page. All of this user behavioural data is valuable to advertisers and is a revenue stream for the company whose online tool you are using; the company sells that behavioural data to data brokers, who can sell it on again and again to companies that want to place ads. Your behaviour is what is traded between companies. Your eyeball on the page is doing work for the company, because you have donated your time and attention, which the company can monetise. Even more simply: if the product is free, you are the product.
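A minimal sketch of the kind of behavioural event described above follows, assuming a hypothetical analytics schema; the field names are our invention, not any specific vendor’s.

```python
# A hypothetical behavioural event of the kind described above; field names
# are illustrative, not any specific vendor's schema.
import json
import time

event = {
    "user_id": "cookie-7f3a",          # pseudonymous ID set in a tracking cookie
    "page": "/homework/maths/fractions",
    "referrer": "/homework/maths",
    "dwell_seconds": 142,              # how long the user stayed on the page
    "next_page": "/homework/maths/quiz",
    "ad_slot": "banner_top",
    "ad_clicked": False,               # whether the advert was clicked
    "timestamp": int(time.time()),
}

# Events like this, sent to an analytics endpoint and aggregated over time,
# form the behavioural profile that can be sold on to data brokers.
print(json.dumps(event, indent=2))
```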
In the same way, pupils using online edTech produce data that gets passed on as raw data, or more often turned into analytics for the company to either pass on to others, or to use to develop its existing product further or develop new products. These products use children’s time and labour, as free labour.
There are two questions here. The first, in data protection terms, is whether this is lawful without a freely given consent process. The second is whether it is ethical.
Our detailed research into over 150 of the most popular apps and platforms operating in the UK education sector, and many more besides, finds a wide array of competence and compliance with data protection law. However, this is not the most striking finding. It is that parents have no control at all over which companies and third parties may initially process a child’s data, as a result of the choice of company that the school makes, and then no control over, or even sight of, the third parties to which those companies choose to pass re-processed and re-purposed data onwards.
Each company may only have sight of its own practice. Each teacher may only have sight of the front end of the few apps they adopt, and some of the data processing at school level. Each school business manager may see their own school-wide data submitted in the school census. But who has the view of the child at the centre of the vast data gathering industry, whose life is observed, recorded, tracked, profiled and surveilled not only by familiar school staff, but by cameras in the playground and classroom, and by hundreds of strangers and unknown data processors in public-private partnerships: for the national purposes of the accountability system, for-profit advertising, influencing their behaviours, research, risk scoring by Local Authorities, product development, policing and the Prevent programme?
This is fundamentally not a question of case-by-case good practice or poor practice, and in some cases unethical or unlawful practice, but a question of principle: whether it is right that families should lose control of their child’s digital footprint, simply by sending them to state school in England today.
Data protection law can do a very poor job of protecting privacy. It may be no help at all when it comes to ethics and offers no route of redress when a parent does not want a child to be subject to a product trial or research intervention delivered in routine classroom time. As our children have become increasingly datafied, practice has become normalised in the UK teaching profession as staff have grown accustomed to routine processing of sensitive data at scale when other countries do not permit it, whether in the national government census or global platforms or using biometrics.
There is widespread misunderstanding that deidentified or depersonalised data are the same as anonymous data. True anonymisation can be very hard to achieve, and the result is a failure to act in accordance with data protection law where the data remain classed as personal data.
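A small, invented example shows why this matters: stripping names does not anonymise a dataset if quasi-identifiers such as date of birth, partial postcode and sex remain, because these can be joined back to any identified list.

```python
# Invented data illustrating re-identification: names have been stripped, but
# the remaining quasi-identifiers can be joined back to an identified list.
deidentified_results = [
    {"dob": "2009-03-14", "postcode_prefix": "LS11", "sex": "F", "score": 43},
    {"dob": "2010-07-02", "postcode_prefix": "LS11", "sex": "M", "score": 88},
]

identified_list = [  # e.g. any other dataset that carries names
    {"name": "Jane Doe", "dob": "2009-03-14", "postcode_prefix": "LS11", "sex": "F"},
]

QUASI_IDENTIFIERS = ("dob", "postcode_prefix", "sex")

for row in deidentified_results:
    for person in identified_list:
        if all(row[k] == person[k] for k in QUASI_IDENTIFIERS):
            # The 'deidentified' record is personal data again
            print(person["name"], "scored", row["score"])
```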
Schools simply accept at face value what companies tell them about the lawfulness of processing, or about when it may not be necessary to ask for consent when it comes to biometric data.
Some companies bypass school oversight when they market product upgrades or premium content to parents’ own mobile phones directly, once the school has established the relationship between the parent, child and company through a free-to-school service and personal data extraction from the school information management system to set up school-pupil-home accounts.
So not only have children lost control of their digital footprint by their fifth birthday, but schools have too. That role of data controller must be restored in the oversight and accountability of power in data processing, if children are to have any chance of having their rights respected and having a place to go to, for support and redress across their education and lifetime.
A £4.6m partnership[303] between Nesta and the Department for Education to make more effective use of technology in education began in April 2019, planned to run to the end of 2021. The programme aimed to stimulate industry innovation, support the improvement of products, and build an evidence base. Following the COVID-19 outbreak in March 2020, the work did not continue as planned.[304]
The Nesta EdTech Innovation Testbed to trial ‘software, such as apps, websites or online programmes’ explicitly told participating schools there is no need for individual consent for children’s data processing. In doing so, they appear to conflate their own lawful basis for researchers' access to the data with the third-party processing. The only lawful basis for processing mentioned is, vaguely, public interest, which does not take into account the lawful basis needed for processing by the product company, or the potential additional legal requirements when processing children’s special category (sensitive) data.
“Since this project is generating evidence on products to help existing school and college objectives, and is in the public interest, there is no need for individual consent.”[305]
At the time of writing, Nesta is ‘working in partnership with the Department for Education to understand how remote learning EdTech tools can better meet the needs of teachers and students.’
We hope that includes how they better meet children’s rights. If the crisis of remote learning has shown anything, it is that the calls for technology to replace labour are far off the mark from what children need to support their learning journey and do not put the child at the centre.
Technology may support teachers but cannot adequately replace them.
On February 21, 2020 the Department for Education announced the promotion of six apps for Early Years children at home and in Early Years educational settings.
On the Hungry Minds website, the DfE[306] instructs adults to download the products[307] as part of the government’s drive to help parents make informed decisions about the use of technology in creating positive learning environments at home.
The expert panel that accredited the apps, appointed by the Department for Education, included children’s digital media consultants, early learning charities and researchers at universities. However, it was not in their remit to consider any data protection or privacy impact assessment, or to carry out due diligence on the companies, for example their data processing locations or financial structures. This approach may not be sufficient to support families to make informed choices, when accreditation through the Department for Education may come with an assumed level of safety and quality standards.
Notable gaps in transparency include the privacy policy for the Kaligo app from Learn&Go, a company whose data servers are hosted in France, in which there is no information at all about what data is used in the AI algorithms, how the child's data may be used to develop and train the product, what its training data set was, or how data is used on an ongoing basis.
This is despite the fact that Kaligo is, “Powered by Artificial Intelligence (AI) and machine learning, the application dynamically defines the learning path to the individual user. If an exercise is not understood, the app will propose new learning exercises until mastered. This customized learning path is also inclusive by design in that people of any age, level and skill can use it. The application also has an enhanced tracking and analysis of each exercise for each user built in. Currently no other existing ICT solution has developed an AI engine of this kind.”[308]
Kaligo is reportedly in the “development stage for a SEN version of the application with exercises focusing on children with dyslexia and dyspraxia.”[309]
The Lingumi app uses SoapBox technology, which is licensed to third-party developers and platforms to voice-enable any technology product for reading, robotics, language learning, smart toys and more. The SoapBox company may obtain voice data “through a third party service or application that uses our speech technology to provide voice interface functions (“Indirect Access”)”, may listen to a child’s recordings, and may transcribe and retain data for rather broad and vague purposes.
“We may use, store, process and transcribe Recordings in order to provide and maintain the Services, to perform, test or improve our own/proprietary speech recognition technology and artificial intelligence algorithms, or for other research and development and data analysis purposes. Additionally, these recordings may be listened to by human ears for the specific purpose of generating transcriptions or labelling data.”
SoapBox Labs has “built a dataset of 25,000 children’s voices in 150 countries. These are real children from real places,” and sums up why it has such a large dataset of biometric data by saying, “If you want to leverage deep learning and the improvements in accuracy, you need to leverage massive amounts of data.”[310]
The Lingumi app privacy policy suggests that parents should expect this, but fails to mention SoapBox Labs explicitly, stating only, “In the case of identifiable voice data, we gather explicit consent from you as the Parent to collect and handle such data, and remove all other identifiers from such data before processing through third parties.”
The Lingumi privacy policy is vague about how children’s other data may be used, saying, “We do provide your personal data to third-party services, but only do so for our business purposes.” The scope of data is broad, and can also include other people’s data where existing users use the ‘add friends’ function,[311] people who may not have consented to Lingumi processing it, which suggests there is no legal basis for such processing.
Phonics Hero offers over 850 fun, varied and motivating games that take a child step-by-step through the 44 sounds, the reading and spelling of words, and how to conquer sentences, as it “tracks every click” in individual child accounts, “giving you in-depth teacher reporting and real-time assessment data.” It is free for teachers, with paid options.[312] The limited company is registered in Hong Kong. We are grateful to the CEO, who explained the business model to us as follows:
“Our business model is not to extract and monetise data; we make all of our revenue from selling our online platform and worksheets to schools and parents in the UK and around the world. There are some free teacher resources and free trial periods (30 days for teachers, 7 days for parents) of our add ons but these are introductory offers so that potential customers can test the product, but, we only generate revenue when people choose to purchase our add-on products.”
What surprised us about the selection of these six apps was that, at the time of DfE accreditation and the government announcement, all bar one seemed to be in limited states of readiness for children to use, or still in development, often with limited website information.
Fonetti, part of Auris Tech Limited, with significant investment from venture capital companies and some big names, was only officially launched on February 26, 2020,[313] five days after the DfE press announcement, although the app had been available to purchase in the App Store from August 2018.
At the time of DfE accreditation, and still today, the Navigo Game is a research project-in-progress, with ongoing product development, and part of a consortium.
“Navigo Game focuses on developing skills that underpin reading, including phonics, letters and sounds, designed by UCL Institute of Education and Fish in a Bottle”.
How does it work?
“To facilitate our approach to progress, there is a user model of the child’s learning that is updated based on the child’s game performance, and interacts with an adaptive algorithm that determines the content/game to provide next as well as the rewards unlocked. [...]”
There was limited online information available on how its adaptive algorithm uses a child’s personal data, so we are grateful to a spokesperson who replied to our query.
“Based on appropriate language domain models, a profile for each child is created. The profile reflects an estimate regarding the child's reading capacity for several linguistic aspects present in the language domain model. Based on the child's performance on the presented games/tasks, the profile is continuously updated. For example, when a child successfully replies to the tasks presented to her during a game session, her level of mastering the relevant linguistic aspect improves and, eventually, lead to her exposure to more advanced linguistic aspect of the language. In contrast, if the user fails in some tasks, the game may decide that the learner has still to master some more primitive aspects of the language and present relevant (and simpler) material appropriate for earlier stages of learning. The navigation among the different linguistic aspects is based on the developed language domain models which form a map of the different language aspect that have to be mastered in a successful "learning to read" trip. Finally, based on the user's current status, appropriate learning content is selected for the games/tasks to follow.
“Navigo is part of an ongoing EU project iRead, and we therefore have to adhere to rigorous ethical procedures around data storage and processing. All parents of participating children receive an information sheet about the app and are required to give explicit consent for their child’s data to be processed within the game. We provide contact details to answer any additional questions parents may have around the use of their children’s data and all of our project deliverables which detail information about the app functionality are freely available on our website. We also have detailed GDPR policies which we provide to our participating schools.”
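On our reading of that description, the adaptive loop amounts to a per-child mastery profile that is nudged up or down after each task and used to choose the next one. The sketch below is our own hypothetical illustration of such a loop; the skill map, update rule and mastery threshold are invented, and are not iRead’s actual model.

```python
# Hypothetical sketch of an adaptive learning loop of the kind described.
# The skill map, update step and mastery threshold are invented values.
SKILL_ORDER = ["letter_sounds", "blends", "digraphs", "polysyllables"]

def update_profile(profile, skill, correct, step=0.1):
    """Nudge the estimated mastery of a skill up or down after each task."""
    p = profile.get(skill, 0.5)
    profile[skill] = min(1.0, p + step) if correct else max(0.0, p - step)
    return profile

def next_task(profile, mastery=0.8):
    """Present the earliest skill in the domain map not yet mastered."""
    for skill in SKILL_ORDER:
        if profile.get(skill, 0.5) < mastery:
            return skill
    return SKILL_ORDER[-1]  # everything mastered: keep practising the hardest

profile = {}
for outcome in (True, True, False, True):   # one child's results on four tasks
    skill = next_task(profile)
    update_profile(profile, skill, outcome)
print(profile, "->", next_task(profile))
```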
The spokesperson’s statements conflict with the privacy policy, and so it is unclear at the time of writing whether the UCL researchers are processing on the basis of consent, 6(1)(a), or public task, 6(1)(e).
(Source: iRead privacy policy)
If they are processing on the basis of 6(1)(e) ‘public task’, there is no mention online of how a setting should explain the Right to Object[314] to the UCL / project consortium data processing, which applies if Early Years settings use this app. Parents must be told that, as a result of use, their child is part of a research trial, and must be offered a Right to Object under Article 21 if processing is on the basis of 6(1)(e).
Taking part in trials is not necessary to an Early Years setting’s public task, so one would expect the project could only be processing children’s data on the basis of informed and freely given consent, 6(1)(a). That consent would need to be explicit and not bundled together: separate for each part of the processing, i.e. using the app at the setting, the research by UCL, or processing by partners.
“If consent is bundled up as a non-negotiable part of terms and conditions it is presumed not to have been freely given.” (EDPB Guidelines, 2020)
Can the parent/child agree to use the app if the Early Years setting wants them to, but decline to participate in the research trial? It appears not.
“Objecting to the processing would make NAVIGO non-functional. During registration the user (i.e. via the child’s parent) agrees to that processing. In addition, they have the right to delete their account and data.”
The Navigo game was being trialled in schools in the UK, Sweden, Germany, Spain, Romania and Greece. The trials aim to identify how the Navigo app is used by teachers in whole classes and small groups. Feedback from these trials will be used to recommend further enhancements.
Personal data is transferred by default indefinitely to UCL researchers, with anonymisation after 4 years, and may be transferred to the European Office of the Horizon 2020 project, according to the privacy policy.
Since the trials aim to identify how the Navigo app is ‘used by teachers in whole classes’, it suggests that consent may not be valid if in fact the expectation is that no one declines to use it. You cannot use the app and not be part of the trial. Will families who decline to use it in a class be at some detriment, or worry that their child will be treated differently from others? The usual problems apply to this question of freely given and informed consent, as outlined elsewhere: freely given consent will be more difficult to obtain in the context of a relationship where there is an imbalance of power, particularly for public authorities.
Certainly one might wonder if the DfE launch should have presented this app differently, making it explicitly clear that it was a research trial, rather than a ready-to-use product among its selected six.
While Teach Your Monster to Read appears by far the most competent and established app of the six, set up by the publisher Usborne, it processes data in similar ways to many children’s apps, for what is in our opinion excessive marketing and adTech purposes, using 10 cookies (8 first party and 2 third party) and 21 third-party requests to 13 unique hosts. We set out a comparison of the cookies and tracking across these DfE Early Years accredited apps in a table in the Annex.
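For readers who want to understand how tallies like those in the Annex can be produced, the sketch below shows the underlying arithmetic: classify every observed request host as first or third party relative to the app’s own domain, then count the unique third-party hosts. The domain and request URLs here are invented for illustration.

```python
# Hypothetical sketch: counting unique third-party hosts among observed
# requests. The app domain and URLs are invented for illustration.
from urllib.parse import urlparse

FIRST_PARTY = "readingapp.example"   # hypothetical app domain

requests = [
    "https://readingapp.example/assets/app.js",
    "https://analytics.tracker.example/collect?id=123",
    "https://cdn.adnetwork.example/pixel.gif",
    "https://analytics.tracker.example/session",
]

third_party_hosts = {
    urlparse(url).hostname
    for url in requests
    if not urlparse(url).hostname.endswith(FIRST_PARTY)
}
print(len(third_party_hosts), "unique third-party hosts:", sorted(third_party_hosts))
```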
We have not made any comprehensive study of edTech in nursery provision, partly due to the large number of private providers, and partly because we are focussed on the collection of children’s data rather than that of staff or parents. However, an Internet search for biometric fingerprints and nursery schools shows a noticeable trend: biometric readers at entrance doors appear to have become normalised.
From Edgbaston to Harrow, Nottingham to Wolverhampton, the first ten Local Authority maintained nursery schools with under 100 children aged 2-4 on roll that we researched all mentioned their use of CCTV outside and inside settings, as well as biometric door access, among their security features.
In early 2021, the BBC reported that a popular early years provider of camera technology had switched off its product after its problems were exposed by a security researcher. "NurseryCam said it did not believe the incident had involved any youngsters or staff being watched without their permission, but had shut down its server as a precautionary measure. The Guildford-based company told the BBC its service was used by about 40 nurseries across the UK. It said it had also notified the ICO."
Educators in the Early Years also commonly use cameras in the classroom to record video and audio of the children to evidence children’s activities. Photos and video recordings may be uploaded and stored on a setting’s hard drive which may in some cases be backed up at night by County or City Councils.
Many also use cloud-based platforms that collect biometric or special category data for reporting purposes to document a child’s time spent at the care provider.
Sometimes this is marketed by supporting technology providers as a time saving tool for staff. “Many childcare professionals struggle under the increasing pressure to complete mountains of paperwork to satisfy Ofsted, but it doesn’t have to be such a laborious process”, says iConnect on its website.
iConnect is not only used for recording children’s observations, “practitioners can carry out all of their admin tasks; from assessments and progress reports to planning and next steps, at the touch of a button on an easy to use childcare app.”
iConnect generates Assessment Reports, Child Progress Reports and Cohort Trackers, offers data analytics tools, and staff can log observations against multiple frameworks: EYFS, SEND, COEL, ECAT, Montessori and many other development frameworks.
Other products are more focussed on setting-home communications and family interactions. For example, Seesaw collects personal data from teachers, family members, and children. This information may be entered by a teacher or the pupil or populated from the pupil’s account with a third party sign-in service, such as their Google account.
There are two options on costs. If at the end of a 60-day free trial teachers decide not to upgrade to the premium model, Seesaw for Schools, they can still keep using Seesaw and its core features for free. Seesaw Premium features include classes “moving up” to the next teacher at the end of the academic year.
Seesaw collects content that is added to a class or child’s journal. This content may be photos, drawings, files, notes, hyperlinks, video and children’s or adults’ voice recordings.
Seesaw collects in-app messages that are sent and received in Seesaw by teachers and family members.
Blogs are not a default part of classes on Seesaw, but if settings choose to use them, the Seesaw website says[315] that by default a blog is public on the Internet. That means that anyone who visits the URL will be able to access its content, including children’s photos, until it is actively password protected.
There’s an option for parents to be able to link and comment on a child’s content. Families can view and respond to assignments using the Family app. On the Class app, they receive notifications when assignments are uploaded.
Other parents and adults in your child’s class may also be able to access your child’s image or work if your child is tagged in a journal entry with their child, such as when your child collaborates with other students on a group project, and vice versa.[316] Parents of children in photos alongside others, cannot directly control this themselves.
Teachers control what is shared and with whom. Teachers can choose to invite family members to see the work their child has added to Seesaw or post some items more publicly. Teachers control who can tag students on items and can choose to only tag one student per photo or piece of work.
Seesaw receives users’ log data such as IP address, browser type, operating system, device information, and your mobile carrier. In addition, they may receive or collect additional information such as the referring web page, referring search terms, and pages visited. Last names, email addresses and profile photos are optional for students, but not required. If the account used is that of a teacher, parent, or administrator, Seesaw may use your IP address to determine your approximate location for the purposes of sending you customised marketing and other information about its products. Such adult users can opt out of marketing communications from Seesaw.
Children’s data is not used for marketing but may include processing for product development.
To help the company analyse the personal data they process, they use a number of third-party services with a wide range of third-party policies.[317] It is impossible to truly understand where a child’s or parent’s personal data goes from there. ‘LaunchDarkly’[318] requires an opt-out, not an opt-in, from receiving third-party targeted advertising, for example. And on whether it is GDPR compliant it says, “GDPR isn't just a line in the sand, but a continual journey—LaunchDarkly is committed to adapting to the changing landscape of privacy and renewing its practices as the landscape evolves.”
To use some of Seesaw's tools, like the video upload tool, you must allow third-party cookies and site data in your browser settings. This consent process is not valid because it is bundled with use.
Educational settings that use Seesaw for Schools (a paid service) have the option to store all data associated with their school in the EU, UK, Canada, or Australia. Seesaw offers a Data Processing Agreement with Standard Contractual Clauses to all schools in the UK or EU to help them comply with GDPR if their data is stored in the US.
A spokesperson for Seesaw confirmed that their, “data processing agreement allows schools to certify that they have either collected consent or have a legitimate interest in performing a public task that permits the school to collect data and transfer it to Seesaw to process to provide the services to facilitate that public task.”
The problem here is that a public task is not ‘transferable’ in that sense. Seesaw needs its own basis in law for all aspects of its data processing where it goes beyond that which is necessary for the school’s narrow purposes; for example, processing that involves sub-processors, who also need to meet the condition for processing each child’s voice data and other personal data. And public task does not stretch to the development of “new features for our products”.[319]
Currently, “the Seesaw Terms of Service require that schools get parental consent before using Seesaw with students.”[320]
Educational settings cannot therefore impose this app on families, or use it lawfully without that consent. Yet it is a quandary, since schools operating as a public authority will find a valid consent basis difficult to apply lawfully for required data collections in educational settings.[321] In our opinion, apps like this are processing data in ways incompatible with data protection law in educational settings in England.
We use this day-in-the-life to begin to build a better understanding of the variety of activities that result in data processing about children in secondary school today; it is far from exhaustive.
Few schools seem to do as good a job as John Taylor Multi Academy Trust (JTMAT) in Staffordshire of explaining all its third-party processing on one webpage.[322] The JTMAT (a school trust) shares information with partners, trusted organisations, vendors and other affiliated organisations to process data on the school’s behalf. It lists them, including links to what data is shared and the services they provide for JTMAT, as well as links to their privacy policies.
From cashless payment providers to apps and platforms, the school photographer, and exam boards, it lists over 85 third parties, and sensibly says that although this is an extensive list it is not exhaustive. A range of additional third parties may include
Fig 2. This illustration is intended to show a high-level digital day-in-the-life of an eleven-year-old at state secondary school. At the centre are the common core administrative functions in educational settings supported by the Management Information System, the digital heart of a school’s records system. The outer ring shows the activities in which children’s data most commonly leave the school and are processed or controlled by third parties for daily interactions. It also includes processing that children do not see: data transfers to Local Authorities and the Department for Education as part of regional decisions on risk scoring and predictive analytics programmes, or for national collections, termly for the school census and annually for some others. These functions may entail processing of a child’s personal data by dozens of companies in one day, every day, across their entire education.
Schoolcomms has been part of the ParentPay group of companies since 2017, and provides a gateway into a wide range of products from cashless catering, to club management and parent evening bookings carried out by an additional third party.[323]
Schools can send out messages to parent groups at no charge. Parents receive these messages in the app. Any parents not using the app will still get a regular text message, so it remains inclusive of parents not using smartphones.
School staff can access Schoolcomms directly in Capita SIMS, but the company says access is normally carried out instead via a dedicated client or web service.
On its website it calls itself “a sophisticated parent app, integrating our features for messaging, reporting, club management and cashless payments, so that parents can engage from any place at any time. The solution is now trusted by over 3,000 schools across the UK.”
They use a range of service providers to perform various processing activities. Types of sub-processors and data processed are listed in the company privacy notice.[324]
ParentPay reserves the right to place promotional content within ParentPay group products e.g. banners within the ParentPay application or website, but as this app is designed for parents it does not target advertising at children.
Schoolcomms does not have a dedicated support channel for parents, and asks that parents raise queries, such as opting out of marketing, with the school, which can then contact the school support team.
They use “a range of trusted service providers to help deliver our services. All of our suppliers are subject to appropriate safeguards, operating in accordance with our specific instructions and limitations, and in full compliance with Data Protection Law.”
These service providers include:
As a parent who contacted us concerned about using the site said, “I expect most if not all the places the data ends up are legitimate and necessary (backup sites, etc), but it's the question of what we are told at the point of signing up to these services. Of course, each of those 15 in addition to ParentPay are likely to have their own terms, which may involve sharing - we have no idea about those, and finding out would be quite a task.”
In 2018, Schools Week reported that a university technical college in Birmingham was the first school in the country to install always-on, 360-degree cameras in all of its 28 classrooms. Aston University Engineering Academy, which caters to just over 600 14- to 19-year-olds, officially launched the equipment at an event at the UTC.[325]
ONVU’s Lessonvu[326] is described as a “non-intrusive classroom video lesson observation system that is controlled by teachers. Its unique technology allows for the complete recording of 360-degree video and audio, giving a comprehensive view of the entire lesson.”
As Lessonvu is set to be always on, every aspect of a lesson will be recorded. Lessons are captured using a high definition 360-degree fisheye camera and via high definition microphone. The recording is then converted using ONVU’s proprietary software into a more traditional view. By default all recorded material is stored locally on the school’s network, and schools can opt to store video clips to the cloud if required while controlling permissions.
In September 2020 the company reported that in lockdown teachers have been sharing footage from their lessons in internal CPD sessions in order to continue to provide quality effective support and development within COVID-19 restrictions. During an action based research project in 2019 experienced teachers in India securely shared short video clips from their lessons, following their agreed school policies, with an external coach in the UK.
Lessonvu’s FAQ states, “Most schools will already have parental approval to photograph students but a school should review its policy on video use. Approval from parents may be required.”
In our opinion, because this processing may be of biometric data, the law requires a different approach from static photographs. Processing this personal data from anyone (children, staff and visitors on film, including their voice data) falls not only under data protection and privacy laws, but under the Protection of Freedoms Act 2012, Chapter 2(26), ‘the requirement to notify and obtain consent before processing biometric information’. That requires active, explicit consent, and if either the child or either parent objects, processing must not go ahead.
(3) The relevant authority must ensure that a child's biometric information is not processed unless—
(a) at least one parent of the child consents to the information being processed, and
(b) no parent of the child has withdrawn his or her consent, or otherwise objected, to the information being processed.
(5) But if, at any time, the child—
(a) refuses to participate in, or the continuation of, anything that involves the processing of the child's biometric information, or
(b) otherwise objects to the processing of that information,
the relevant authority must ensure that the information is not processed, irrespective of any consent given by a parent of the child under subsection (3).
While ONVU may suggest that ‘their child is not the subject of the observation’, the child is the data subject of the data processing. The Data Protection Impact Assessments we have seen fail to address this,[327] and schools fail to ask for consent[328] or even to offer any right to object, which would be required under the basis they have (wrongly, in our opinion) chosen: processing on the basis of ‘public task’.[329] The DPIAs also fail to address the additional conditions for such data processing needed in Article 9 of the GDPR.
Given the power imbalance in a school setting, we would expect this processing not to be lawful regardless of the basis chosen, in line with the GDPR ruling in Sweden in August 2019.[330] Under Article 6(1)(e) of the General Data Protection Regulation, processing will be lawful if it is necessary in order to perform a task in the public interest or as part of the controller’s exercise of public authority. But in our opinion, and certainly since other schools perform the public task of teacher improvement without resorting to always-on cameras, this processing would fail to meet the high bar for sensitive data under the necessity test.
Furthermore, personal data that is processed must be adequate, relevant and limited to what is necessary in relation to the purposes for which it is processed (data minimisation), whereas these cameras can be on all the time and so collect excessive data. It follows from Recital 39 that personal data from children may only be processed if the purpose of the processing cannot be achieved in a satisfactory manner using other methods.
When children’s attendance is registered by staff, this is recorded in the school’s choice of Management Information System (see The central pupil record infrastructure). Each child has a unique record with a unique pupil number in the system, and it forms the core of the school’s storage of information for each pupil. Throughout the course of a day, other data may be linked into the core record, such as behavioural data. Third-party systems all work differently, and schools can make decisions about what is or is not recorded in the core MIS record. The child’s data may be extracted in order to create records with third-party apps in the course of a day, some of which children or parents may see. What they more rarely see is the data that then gets used as business information in helping run the school.
Data analytics tools will often combine with Microsoft technology in PS Financials, for example, and connect disparate data held across multiple schools or systems in one setting. These enable linkage between facilities systems to treat them as if they were a single source of information: budgeting, HR, payroll and finance, together with the core Management Information Systems holding pupil data.
These give the view of a school as a business, and will include pupil data processing as part of the overall business intelligence, to ‘calculate the true cost of individuals to your school’ or to analyse, for example, the difference made by experienced teachers over the newly qualified.
eSafe monitors approximately one million school children plus staff across the UK (as of August 2018)[331] in approximately 2000 schools and colleges, according to the Salford based company and also exports internationally.[332]
eSafe has provided a cloud based solution to the primary, secondary and further education sectors since 2009 to deliver a monitoring service of online and offline user behaviour and content. Keystrokes are scanned for character patterns to identify the words and phrases held in the Threat Library. The automated detection system identifies what is determined as potentially harmful content and retains data from video, streaming, the webcam and static files. It also monitors multi-lingual keyboard entry (what children type) and screen content from websites, documents and social media applications.
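The automated detection step, as described, amounts to matching a buffer of typed text against a large phrase library. The sketch below is our own simplified illustration of that idea; the patterns, categories and severities are invented, and eSafe’s actual Threat Library and matching logic are not public.

```python
# Simplified, hypothetical sketch of keystroke scanning against a phrase
# library. Entries are invented; real libraries run to hundreds of thousands
# of terms per language variant and are continuously updated.
import re

THREAT_LIBRARY = {
    r"\bbuy\s+vapes?\b": ("substance misuse", "medium"),
    r"\bmeet\s+me\s+after\s+school\b": ("peer-on-peer risk", "low"),
}

def scan(typed_text):
    """Return any library matches found in a buffer of typed text."""
    hits = []
    for pattern, (category, severity) in THREAT_LIBRARY.items():
        if re.search(pattern, typed_text, re.IGNORECASE):
            hits.append({"category": category, "severity": severity})
    return hits

# A match would trigger a screenshot capture for human review
print(scan("where can i buy vapes near school"))
```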
Following capture, and secure transfer to the servers, all incident data, including the screenshot, is reviewed and interpreted by eSafe’s team of specialist Behaviour Analysts, who assess them for safeguarding risks.
The purpose of the Service is to provide warning markers of risk affecting the welfare and wellbeing of pupils and staff, to enable school and college leaders to meet their statutory duty of care to safeguard and protect, with a focus on neglect, abuse and other safeguarding issues. They extend across a diverse range of behaviours including: criminal activity (gang crime, child sexual exploitation, trafficking, sexual grooming), anti-social behaviour, physical abuse, FGM, radicalisation and terrorism, substance abuse, mental health issues, self-harm and suicide risk, peer on peer bullying and harassment.
The word strings, euphemisms, slang, terms and code words etc. associated with the range of behaviours schools and colleges have a statutory duty to identify total tens of thousands per language variant, and it is a dynamically changing environment. A localised euphemism or slang term for suicide can disappear and be replaced in a relatively short period of time. The critical factor, eSafe told us, is to ensure accuracy rather than volume, but the range of behaviours dictates that the Threat Library is measured in hundreds of thousands.
Detection algorithms are continuously optimised. eSafe claims to have an “unrivalled knowledge bank, from which we are able to take unique advantage of machine learning (ML) and artificial intelligence (AI) techniques.”[333]
The following data is captured where the Service identifies a potential safeguarding incident:
When a device is used to access the school G-Suite environment, whether a school-owned or personal device, it may be physically outside the school or college site, but it is monitored regardless of the user’s physical location, i.e. sitting at home, because the application sits inside the G-Suite environment, and activity is monitored until the user disconnects from G-Suite.
The location of the user at the time they are using the school or college owned device, or accessing the school/college G-Suite environment, is not known to eSafe, says the company.
“We can speculate that a school device which is viewing pornography at 2am is not in school but we cannot be certain, and knowing the location of the user is not part of our safeguarding brief.”
Following the Behaviour Analyst’s review, interpretation, and assessment of an incident capture, any identified safeguarding concern requiring intervention is reported to nominated contacts at the school/college. The report is escalated either via direct integration with the school/college safeguarding records systems (CPOMS); email, containing an encrypted report; or direct telephone call, supported by an encrypted report, depending on the severity of the risk that has been identified.
This public case study is published in full in eSafe’s online marketing materials[334]. We are concerned it may risk exposing information that would cause distress to the individual or those known to her (or readers) so redacted the college name, incident, and full name of the school staff member:
"X Sixth Form College has relied on eSafe to protect users for the last 6 years. Mental Health & Deputy Safeguarding Officer X can still recall one of the first serious incidents the service detected. A female student had been writing an emotionally charged letter to her Mum using Microsoft Word, in which she revealed XXXXXXX. Despite the device used being offline, eSafe picked this up and alerted John and his care team who were able to quickly intervene."
We invited the CEO of e-Safe Systems Limited to take this case study out of his public marketing materials on data privacy and ethical grounds in May 2020. He declined.
Mark Donkersley told the House of Lords 2016 Communications Committee inquiry on Children and the Internet:[335]
“Bearing in mind we are doing this throughout the year, the behaviours we detect are not confined to the school bell starting in the morning and ringing in the afternoon, clearly; it is 24/7 and it is every day of the year. Lots of our incidents are escalated through activity on evenings, weekends and school holidays.”
Although the over 400 school Acceptable Use Policies we have viewed may mention monitoring applied to the use of a school or college digital environment, we have yet to see any that adequately explain that this includes offline documents and out-of-school hours, and the implications. Some only state a single line: ‘I understand my use of the Internet may be monitored.’ The templates at South West Grid for Learning (SWGfL) and the Safer Internet Centre do not mention monitoring while a child is at home, for example, which is important if one considers remote learning in the context of the current COVID-19 pandemic.
Case studies and anecdotal evidence from school staff and IT network staff that have approached us with questions or concerns, suggest that it has become routinely accepted that there is a high error rate in some products that create automated alerts about individuals.
eSafe’s own website says of alternative and software-based solutions, “The problem with software-based monitoring solutions is that the data they report often contains a significant volume of ‘false positives’, which means the underlying incident baseline can never be a true or accurate representation of the actual situation.”[336]
Errors are particularly problematic when schools pass data on to other people and other systems.
The CEO explained to us that “a high error rate is the case with competing products”, but not with eSafe.
“The markers we seek are dynamically updated - on a regular basis by eSafe staff - to take into account developments and trends in the way, for example, risks may be articulated. This includes input from specialist organisations and agencies involved in the welfare of young people, such as mental health, law enforcement, abuse, gangs, youth gambling etc. Our value would be eroded and our business would not survive if we generated high numbers of incidents reports to schools and colleges that proved false or worthless.”
Nearly every family indicated in our commissioned Survation 2018 poll[337] of 1,004 parents that they believe children and guardians should be informed how such monitoring works.
Children want to understand how their data are processed, and to redress power imbalances, as outlined for example in The Internet on our own Terms: how children and young people deliberated about their digital rights (Coleman, S. et al, 2017).
We believe that it is hard for a child to understand how these algorithms work. If a child uses a search term or text or pictures on screen, that the system has decided warrants action by the company, at what point is the child told?
eSafe only passes safeguarding incident information to school contacts nominated by the school headteacher e.g. Designated Safeguarding Lead, as per the Service Overview.
Our concerns include that information passed on to the school, or by school staff to others, might be misinterpreted: for example, over who was actually using the device at the time of the incident. These outcomes may have very serious and significant consequences for a child.
We are concerned that the multi-lingual aspects of all such monitoring systems (not specific to eSafe), and the nature of the topic areas of activity and language that the system has been programmed to deem significant, may build in cultural discrimination and bias that is not apparent to the staff in schools or the children it affects. Since we are not able to see the lists of content that trigger ‘actions’ or ‘flags’ from any of the suppliers, we cannot, however, make a judgement here, and it would be a good area for independent academic research.
The CEO believes we are wrong. He explained to us that monitoring for safeguarding risk is a specialist function requiring objective review, and that the interpretation of risk markers, many of which are not obvious or may be in a foreign language reflecting cultural nuances, requires specialist analysis. Schools and colleges possess neither the time nor the specialist resources to conduct such analysis. “We employ behaviour specialists from varied cultural backgrounds and with specific language skills to ensure that the markers we employ accurately identify the safeguarding risk our school and college customers need to be aware of. To suggest cultural discrimination or bias is employed by eSafe is wholly incorrect.”
The risks created by the practices that in reality lie behind some of these policies have, however, been ignored from a child rights perspective and gone without debate, while the blanket term of safeguarding is hard to be seen to address critically, as Sandra Leaton Gray and Andy Phippen set out in their book Invisibly Blighted (UCL IOE Press, 2017).
We would like all of the companies that operate in this field to have a duty of transparency: to publish error rates and how errors are defined; to publish, for example, the keywords that triggered action in a past year, and the full sources of input; and to offer a standard subject access report template that would enable a school to print, annually and on demand, the information created about a child from monitoring. Any shorter data retention periods would also need adequate audit functions to be able to meet such requests.
In our opinion there should also be a review of lawfulness across all of the sector suppliers, from the perspectives of human rights, privacy, and communications law.
The public should have a much better understanding of how all of this works. As a parent it shouldn’t be difficult to find this information, and children should be able to foresee the consequences of every one of their actions.
Founded by Sir John Rowling, PiXL (Partners in Excellence) emerged in 2005 from the school improvement programme, the London Challenge. When government funding for the initiative ceased, the 50 member schools at that time decided voluntarily to continue with its model of collaboration around leadership and shared resources. PiXL has since grown to become the largest network of schools in England and Wales.
PiXL is a not-for-profit partnership of over 1,700 secondary schools, 600 sixth forms, 500 primary schools, and 75 providers of alternative education. It operates PiXL Edge “a systematic framework to enable schools to develop and capture the student attitudes, attributes and skills which are so important for employment and life beyond school.”
PiXL processes pupil data at the request of Member Schools. The types of student personal data processed will be dependent on the selection of applications that member schools choose to implement in their settings. Usually it will include names, login information, school email addresses. For some applications, should the Member School wish, year, class, and attainment data are processed. In some limited circumstances (classroom management apps), special categories of data may be processed such as Special Educational Needs.
The website says that “being a member of the PiXL family includes access to a wide range of resources and experienced subject specialists to help support your school.” School membership fees range between £2,600 and £6,600, from primary through to secondary school with sixth form. PiXL Edge membership is a further one-off £500 fee plus an annual administration fee of £100+VAT.
It delivers “a responsive programme, which receives guidance and strategic direction, not only from educationalists at a national level, but also from industry”. In 2014 it launched its character development programme, The Edge, and in 2017 the Them and Us project, “driven by a deep moral purpose.”
It is based on four areas of work: the LORIC framework (Leadership, Organisation, Resilience, Initiative, Communication); Oracy; Wellbeing; and Careers, Apprenticeships and Higher Education. The programme is for students from ages 4 to 18, who progress through levels (Foundation, Contender, Champion, Apprentice, Graduate, Master).
PiXL notes 14 partners on its website, including ClassCharts, GCSE Pod and ALPS.
“A small element of PiXLs offer to its member schools is the provision of a number of tools to assist in student learning and providing teachers with visibility of the challenges and success for their students. The personal data for these tools is controlled by the schools and is not shared with other parties. Some anonymised aggregate data is used to help inform member schools of wider patterns and provide direction for areas of focus across member schools both in terms of PiXLs strategies and the content produced to assist its members in the education of students.”
At the same time as Google has become a verb in classrooms across the country, the company’s products have proliferated in UK schools. Low-cost laptops called Chromebooks have become commonplace in classrooms from age 2 and up, sometimes bought through parent-school lease partnerships or parent-purchase schemes, offering free classroom apps and a single administrator management console log-in, which can be operated remotely and round the clock.
G Suite for Education offers two categories of Google services: Core Services, provided under the school’s G Suite for Education agreement, and Additional Services (YouTube, for example). Additional Services were originally designed primarily for consumer users, but can be used with G Suite for Education accounts if allowed for educational purposes by a school’s domain administrator.
G Suite for Education administrators determine which Google services their users can access, but our discussions with IT administrators in schools show little understanding of if and when they are required to provide or obtain consent for the use of the Additional Services by their minor users, and no understanding that consent, which must by its nature be freely given, cannot apply to required services. We raised this lack of consent, and therefore of a lawful basis for Google’s data processing from UK schools, with the Public Policy and Government Relations Manager.
We were told, "On the issue of G Suite accounts, it is important to note that the decision to use these is one for schools, not for Google, and it is the schools who make the choice of how this should be used, including whether or not they activate Additional Services, and for which users they activate such Additional Services. We provide practical tools for schools to exercise these choices: schools can see the apps that are in use in their admin console, where they can also decide which Additional Services they want to enable, if any. Schools can also turn on or off Additional Services for only specific users."
The bundling of information for parents means that there is not the level of clarity there should be about these choices schools may make on children's behalf, or about Google's legal basis for processing children's or parents' data. It leaves open questions over whether there is sufficient legal basis at all for Google's data processing for its own purposes, which go beyond the narrow data processing purposes that schools require.
In our opinion, it comes down to whether the company can rely on being a data processor, doing only what schools want under the legal basis of public task, or whether Google makes decisions on the nature and method of the data processing, making it a data controller and requiring consent. If the latter, that is a basis that cannot be valid.
The better school policies we have seen might contain a permissions page in the child’s admissions booklet which lists the Core Google Apps a school uses. But there is no real option to refuse, or to choose which services a child will or will not use, and therefore consent cannot be considered freely given and is an invalid basis for core data processing in an educational setting.
Where parents and pupils are required to sign off acceptance of all online terms and conditions set out by a school, these tend to be bundled together rather than being product-specific: an Acceptable Use Policy, for example, may include a wide range of agreements in one, such as a social media policy agreeing not to bring the school into disrepute in public fora. We have not yet seen any school that provides parents with a copy of the privacy notice of every edTech company it uses, and the attitude of each company, including Google, would seem to lay the fault for that at schools’ feet. But how easy would it be for a school to give families a simple overview of how their data are used by Google?
There is no easy way for a child to understand, within the complexity of a Google policy, which data about them are collected, processed or retained, by whom, for how long and for what purposes, and policies can change frequently. The G Suite agreement for schools is long[339] and the G Suite for Education Privacy Notice[340] describes how Google collects and uses information with G Suite for Education accounts, but it can be updated as often as the company chooses.
"(b) To the Agreement. Google may change the terms of this Agreement from time to time and will post any such changes at https://workspace.google.com/terms/education_terms.html. These changes will only take effect at the beginning of Customer’s next Order Term, at which time Customer’s continued use of the Services will constitute its acceptance of the changes. This Section 1.5(b) (Modifications to the Agreement) does not apply to changes to URL Terms."
In the privacy notice Google suggests a process that is unsuitable for a child, because they need to read multiple policies: "We hope you will take the time to read this Notice, the Google Cloud Privacy Notice, and the Google Privacy Policy, which apply to Google Workspace for Education accounts." There is often a complex bundling together of information that applies differently in different countries.
During the 2020-21 COVID-19 lockdowns, schools were tracking student usage, and teachers may have made inferences about a child’s remote learning engagement. “Google Classroom automatically tracks interactions such as creating new documents, opening instruction documents, registering and asking questions. We can see from our data that X has been completing a lot of work, well above the average in their year group. It is not a perfect science but a very good indication,” according to one teacher.
We believe that a range of transparency metrics should be a requirement made of companies with access to the state education system. If the product is free, schools and parents should know what's in it for the company. Google staff have to date been unwilling, when we asked, to talk about the company's intentions for any future education business model, for example.
We have asked Google for comment, and how many UK teachers it has trained as Google ambassadors. We would welcome transparency over its UK business model and any future plans, but have not received any satisfactory answers.
In 2017 Natasha Singer wrote in the New York Times, How Google took over the Classroom[341]:
“In the space of just five years, Google has helped upend the sales methods companies use to place their products in classrooms. It has enlisted teachers and administrators to promote Google’s products to other schools. It has directly reached out to educators to test its products — effectively bypassing senior district officials. And it has outmaneuvered Apple and Microsoft with a powerful combination of low-cost laptops, called Chromebooks, and free classroom apps. Today, more than half the nation’s primary- and secondary-school students — more than 30 million children — use Google education apps like Gmail and Docs, the company said. And Chromebooks, Google-powered laptops that initially struggled to find a purpose, are now a powerhouse in America’s schools. Today they account for more than half the mobile devices shipped to schools.”
“Unlike Apple or Microsoft, which make money primarily by selling devices or software services, Google derives most of its revenue from online advertising — much of it targeted through sophisticated use of people’s data. Questions about how Google might use data gleaned from students’ online activities have dogged the company for years.”
While no ads are displayed to children using Core Services, and no Customer Data from Core Services are used for advertising purposes (in either Core or Additional Services), this does not mean that children are not exposed to advertising while logged into G Suite accounts. Additional Services may show ads. For G Suite for Education users in primary and secondary schools, Google does not use any user personal information (or any information associated with a Google Account) to target ads, but ads will still be shown, in YouTube for example. Children may still see ads; data about each child as an individual will not be used to tailor which ad they see, but targeting may use broad location data or the class of product.
In 2020 under COVID-19 the Welsh Government announced that they would require every child to use a Google account. ‘Previously, these services have only been made available where learners or their parents/carers had given consent, from Monday 23 March 2020, schools will no longer rely on consent.’ We received and shared parental complaints with the Welsh Education Authority and the Information Commissioner (the ICO).
At the time of writing in October 2020, the Norwegian Data Protection Authority had announced it is investigating whether it is legal to use Google in schools.
And the US state of New Mexico launched a lawsuit on February 20, 2020, which says that the use of Google Education and other Google products “comes at a very real cost that Google purposefully obscures.”
In March 2021, the Dutch cabinet contacted the Dutch Data Protection Authority (AP) and the European Commission because Google products used by schools do not appear to be safe enough. The privacy risks are too great, reported AG Connect, about a letter sent from the Minister of Education Ingrid van Engelshoven to the Lower House.
“For the education world, Google’s handling of metadata is the biggest problem,” Van Engelshoven writes. “Google’s position is that it sees itself as the sole controller for metadata. This means that it may determine the purpose for which it collects metadata and how this is done. Google has also included in the privacy agreements that it may unilaterally adjust the conditions surrounding metadata, without asking the user for permission.”
We hope that Supervisory Authorities will now address these questions.
ClassDojo is a communication platform and app for the classroom and home. Teachers, parents and children use it to share photos, videos, and messages through the school day. They use ClassDojo to “work together as a team, share in the classroom experience, and bring big ideas to life in their classrooms and homes”.
Teachers use ClassDojo to make a schedule of activities known to parents. It began as an app with its core aim to nudge children’s behaviour, but has moved more towards recording behaviour. Teachers can snap and send photos or videos to parents showing off a child’s latest work or participation in activities.
It is one of the most well known education technology startups with extensive reach to millions of teachers and families worldwide, and significant venture capital backing. Its investors include over 20 venture capital companies, including Imagine K12 itself (merged with Y Combinator, a leading Silicon Valley startup accelerator), General Catalyst Partners, GSV Capital and Learn Capital.[342]
The child, the parents, and the pupil's teachers or school leaders can see the child’s profile and Student Story. Children upload content (such as a photo, video, drawing, journal entry, or document) to the Service on Student Story through their student account. Either the parent provides parental consent directly to ClassDojo or the child’s teacher represents to ClassDojo that they have obtained any necessary parental consent. Parents are able to view their own child’s Student Story on their own account after the teacher has approved the pupil-submitted content. The company website states that "1 in 3 US kids aged 5-14 have learned about Growth Mindset and Empathy with ClassDojo."
ClassDojo is, perhaps surprisingly, highly divisive (Williamson, 2018b) and the expansion of ClassDojo has both enthused some teachers and appalled others.
On its business model, Liam Don, ClassDojo co-founder and chief technology officer, set out in a 2016 interview with Inc. how ClassDojo plans to make money now that its freeware is distributed to children through schools. The company says it has no plans to sell advertising. Instead, ClassDojo is looking at selling educational content. With access to so many teachers and students, the startup is leveraging its distribution capabilities to spread educational videos to an audience of teachers and students on a scale never seen before.
“It’s a huge distribution platform to reach parents,” Don said. “We want to, in the long term, enable parents to be consumers for their child’s education.”
It is a “freemium” model, in which users are given the basic tools to use the service.
There is also a separate paid offering that has no impact on the free, school-based experience, including the ability to stay connected with school; the school-based experience remains free for all teachers, children, families and schools in the world. For those willing to pay, more content can be added to enhance their experience. Materials may be marketed to parents via direct email from the company.
From a data perspective, children’s personal data are processed in the US. This is a raised risk factor for the protection of UK children’s personal data, and potentially for staff and financial data if the upgrades are bought. Given the invalidation of Privacy Shield, many businesses are revising how consent is collected under the GDPR. ClassDojo says that it is closely following the updates regarding Privacy Shield from both the EU Commission and the US Department of Commerce. In the meantime, ClassDojo says it continues to operate in full compliance with the GDPR. This includes drafting standard contractual clauses for its users, as well as for vetted third-party service providers that help ClassDojo provide the service. Article 49 of the GDPR provides various mechanisms for international transfers, including consent.
In addition to using its own online tracking technologies, ClassDojo works with over 30 third-party service providers[343], and does a very good job of listing them all on its website, including Google Fabric, Amazon Web Services, Datadog, SurveyMonkey, and Zendesk, to help analyse how the Service and Website are used, for functionality and serving content, for advertising, and "to improve ClassDojo to deliver the best possible experience for teachers, parents, and students. These third parties may use web measurement and customisation technologies (such as cookies) in conjunction with the provision of these services."
Three pieces of information are shared with the messaging service Twilio, for example: ClassDojo’s phone number, the destination phone number, and the text of the invitation (a generic message that cannot be customised). So the only personal information shared with Twilio when an SMS invitation is sent is the destination phone number.
If the teacher uses a different invitation mechanism (like email or paper), no phone numbers are received or shared at all.
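For illustration only, here is a minimal sketch of what such an SMS invitation might look like using Twilio’s Python helper library. This is not ClassDojo’s actual code; the credentials, sending number and message text are hypothetical placeholders. It shows how the destination phone number can be the only personal datum passed to the SMS provider.

    # Sketch only: hypothetical placeholders, not ClassDojo's real code.
    from twilio.rest import Client

    ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # hypothetical credentials
    AUTH_TOKEN = "auth-token-placeholder"
    PLATFORM_NUMBER = "+15550100000"  # the platform's own sending number

    # Generic invitation text; note it contains no pupil data and is not customisable.
    GENERIC_INVITE = ("You're invited to connect with your child's class. "
                      "Visit the sign-up link to get started.")

    def send_invitation_sms(parent_number: str) -> None:
        """Send the generic invite; the destination number is the only
        personal datum shared with the SMS provider."""
        client = Client(ACCOUNT_SID, AUTH_TOKEN)
        client.messages.create(to=parent_number,
                               from_=PLATFORM_NUMBER,
                               body=GENERIC_INVITE)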
ClassDojo’s data retention policy is to automatically delete a child’s profile of feedback points[344] after a year, and to delete accounts that are inactive for more than twelve months. However, some content within a pupil account is kept after deletion of the account, the company says, “for school legal compliance reasons.” The data retention policies of the supporting third-party service providers are all different, and may be somewhat opaque to the child and parent when a school signs them up to the ClassDojo platform, but it is the school’s responsibility to explain this.
The founder Sam Chaudhary and some of his team members generously gave me some of their time in 2018 to discuss the company's future plans.
Parental accounts are set up in one of three ways: the teacher uploads parents’ emails to ClassDojo, which generates an email to parents with a link containing a unique parent code; the teacher sends an SMS to parents’ phone numbers with the link; or the teacher sends the unique parent code home with the child on paper, and asks parents to enter their email and the code on the ClassDojo website to set up the parental account.
There is clearly no lawful basis for schools to require the use of this app by parents or children in UK schools. Schools cannot impose its use across a class or school, because it uses consent as its lawful basis for processing children’s special category personal data. It is unlikely this app could be used lawfully under performance of a task carried out in the public interest or in the exercise of official authority, partly because of all its third-party processing, and because that processing would not be found ‘necessary’ for school purposes.
Consent[345] is not an appropriate lawful basis for data processing if there is a clear imbalance of power between a public body and the individual; consent cannot be considered freely given if it is detrimental to decline, in particular where no suitable alternative process is on offer.
As ClassDojo founder Sam Chaudhary says, however, “parents aren’t forced to use ClassDojo, and neither are schools. This is doubly true because unlike top-down platforms schools purchase and then mandate staff and families must use (e.g. Capita SIMS), ClassDojo is entirely voluntary to use.
Where parents choose not to join ClassDojo, schools can simply connect with parents through other means, including email, SMS, a communication system they’ve purchased, postal mail, or even Twitter accounts and school websites. All of these mechanisms will have pros and cons; we just believe ClassDojo is a safer and more secure way for everyone in the school community to communicate with each other.”
There are many learning management systems competing for space in the education sector, including Moodle, Blackboard, Schoology and Haiku and for the purposes of this case study, Edmodo.
Bill Fitzgerald, a privacy researcher writing about education technology, broke the news in May 2017 about Edmodo’s[346] tracking systems using DoubleClick. Edmodo students and teachers were exposed to targeted ad tracking that clearly identified the user as a student.
Once he revealed it, the company removed the tracking that he observed and discussed in his post, and he wrote, “Their response was fast, and they deserve a lot of credit for making this decision, and implementing it quickly.”
Fitzgerald asked the kinds of questions that should be asked of many providers.
The full blog post is worth a read on his website, Funny Monkey.[347]
Later in 2017 Edmodo faced a security breach of personal data on 77,010,322 unique users, more than 2 million of them in the UK, across 550,000 schools worldwide.
NetDragon Websoft, a publicly-traded company based in Fuzhou, China, then bought Edmodo for a reported $137.5 million in 2018.
Edmodo created a new partnership with IBM Watson Education in 2018/19, and we wondered if and how UK families whose children use the app were informed of any of these events, the takeover and the partnership, and of any implications for their data processing.
In August 2020 defenddigitalme asked IBM’s Chief Privacy Office for an update on the Edmodo partnership and about the processing of children's data in its collaboration. A spokesperson replied that, “Nothing was released by IBM or Edmodo (it did not get past the prototype stage). Also, the project never loaded any student data.”
Edmodo may display ads today for its own products and services on its platform. Outside of this, its privacy policy says that it does not serve ads to users under 13 years old.[348] “Only in the instance where we have obtained prior, express, written parental consent will Edmodo serve third party ads to users under 13 years old. Edmodo may share the Personal Information of those 13 years or over with our advertising partners to promote Edmodo or other products. Users may opt out of the sharing of Personal Information with third parties.”
Parents who objected to a school’s introduction of ClassCharts in 2019 brought their concerns to us. They were upset because the school had not consulted on its introduction.
The data sharing agreement we have seen sets out a list of the child’s personal data transferred in the initial account set-up, but the parent was concerned not only about what was passed on, but also about the creation of behavioural data after the initial set-up: the Red Amber Green system, behavioural comments, and the use of Artificial Intelligence (AI).
In 2020 the company removed previous statements from its website[349] that its use of Artificial Intelligence was its unique selling point. Their website and marketing[350] had previously stated that ClassCharts used AI in various ways.
“Our behaviour management is driven by Artificial Intelligence. Everyone needs a unique selling point and ours is that we are the only people who can actually identify how pupils influence each other in the classroom. NQTs, supply & cover teachers will love how our AI engine automatically suggests seating plans optimised for learning & behaviour.”
“Our Artificial Intelligence Engine learns about how pupils influence each other in the classroom and seating plans can be optimised for behaviour at the click of a button.
The AI engine also suggests seating plans optimised for learning – dispersing or clustering pupils by ability to suit the teacher’s approach to differentiation.”
Profiling is integral to the product, to determine ‘normal behaviour for each pupil’, including the “artificial intelligence engine, which builds profiles of students, and highlights trends...”
We contacted the company to ask for clarification and they responded with this letter that we include in full.
Renaissance Learning is widely used in UK schools, through its leading product Accelerated Reader. “Founded over 30 years ago in the USA, Renaissance has become a leader in pre-K–12, serving 20 million students in 45,000 schools in more than 90 countries worldwide.”[351]
And it’s got a lot of data about school children.
“Throughout the past three decades, we’ve amassed more than 2.8 billion real-world data points that show us exactly how students build skills and achieve mastery in core subjects. Analysis of these data points—what we call learning analytics—helps us continually refine and improve our solutions.”[352]
After investment in 2014[353] from CapitalG (formerly Google Capital), the private equity arm of Alphabet Inc., Renaissance was bought by the Cayman Islands-registered private equity firm Francisco Partners.[354]
Accelerated Reader is widely used in England, but much of the evidence for the approach comes from the US. The Education Endowment Foundation (EEF) funded a pilot evaluation to see if it could have an impact in English secondary schools, designed to improve outcomes for struggling readers at the transition from primary to secondary school.
The study found that Year 7 pupils who were offered Accelerated Reader made 3 months’ additional progress in reading compared to other similar pupils. For pupils eligible for free school meals the figure was 5 months’ additional progress.
A larger trial is due to conclude soon.[355]
What does it do?
Accelerated Reader[356] is a computer software program that helps teachers and librarians manage and monitor children’s independent reading, and aims to foster the habit of independent reading among primary and early secondary pupils. A child picks a book at a determined appropriate readability level and reads it at their own pace. Each student is assigned a specific range of books on the program’s BookFinder list; books in that range will be challenging for the student but not too hard to read. This range is called a zone of proximal development (ZPD), determined by the company. When finished, the child takes a short online quiz; passing the quiz is used as an indication that the child understood what was read. Accelerated Reader gives children, teachers, and librarians feedback based on the quiz results, which the teacher then uses to help children set goals and direct ongoing reading practice. The company designs the system itself, and while parents can see the points assigned to each book, which build up over time into award levels of bronze, silver, gold and so on, there is no way to understand how these are calculated or the values behind them.
In 2019, a BBC article quoted the director of professional services at Renaissance UK saying "Mr Men books look really easy but they are at a really high difficulty level".[357]
The company also began a new activity in the UK in 2019, which it calls norming.
“Norming is a process of comparing individual or group scores/results with other students of a similar age or Year group across an area or country. We provide two sets of normed scores – Percentile Rank (PR) and Norm Referenced Standardised Score (NRSS). Teachers can see the results from both norming processes and how well their students or class compares with others across the country.
“Our new norming dimensions are calculated by NFER (the National Foundation for Educational Research) – the country’s leading provider of education research and insights. Their long history, vast experience and pioneering methods have established their reputation as an authoritative, trusted and respected voice in education.”
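NFER’s actual norming method and norm data are not public, so the following is a sketch only of the standard statistical idea behind a percentile rank: the share of a same-age norm group scoring at or below a pupil’s score. The norm scores here are invented for illustration.

    # Sketch only: the standard notion of a percentile rank; NFER's actual
    # method and norm data are not published.
    from bisect import bisect_right

    def percentile_rank(score: float, norm_scores: list) -> float:
        """Percentage of the norm group scoring at or below `score`."""
        ordered = sorted(norm_scores)
        return 100.0 * bisect_right(ordered, score) / len(ordered)

    norm_group = [82, 90, 95, 101, 104, 110, 118, 121, 130, 140]  # invented sample
    print(percentile_rank(110, norm_group))  # 60.0: at or above 60% of the sample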
One parent talked to us about the app, frustrated when their child had been told by the librarian she ‘had got worse at reading’ and had to go in for remedial reading at lunch time. The child was a high achiever, and the system had identified that she had been reading slightly ‘easier’ books than usual. But had the child’s reading ability really fallen behind? That is what the child understood from the intervention, and it knocked her self-esteem. And if you look outside the UK, you find similar stories.[358]
Susan Straight was critical in the New York Times of the “reading management” software system of Accelerated Reader over ten years ago, in 2009, noting that many classic novels that have helped readers fall in love with story, language and character are awarded very few points by Accelerated Reader.[359]
“I find the idea that we can apply a numerical formula to reading a bit insulting to literature. I’m not against all quantifying. But as Renaissance Learning itself emphasizes, Accelerated Reader’s formula cannot measure “literary merit for individual readers.”
In June 2020, here is what some of those scoring systems looked like. Malorie Blackman’s Noughts and Crosses, at 96,607 words, is level 4 and worth 14 points. R.J. Palacio’s Wonder is measured at level 4.8 and gets you 11 points. Harry Potter and the Prisoner of Azkaban is level 6 and 18 points. By comparison, Susan Cooper’s level 6 book The Dark is Rising only gets 13 points.
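For what it is worth, a points formula widely cited in Renaissance’s public support materials in the US, points = (10 + book level) x (words / 100,000), rounded, is consistent with the figures above. We note it here as an assumption rather than a confirmed company statement about the UK product.

    # Assumption: a widely cited Accelerated Reader formula, not confirmed by
    # the company for the UK product: points = (10 + book level) * words / 100,000.
    def ar_points(book_level: float, word_count: int) -> int:
        return round((10 + book_level) * word_count / 100_000)

    # Noughts and Crosses: level 4, 96,607 words -> 14 points, matching the figure above.
    print(ar_points(4.0, 96_607))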
“In many cases, a book’s interest level coordinates with its book level. Hank the Cowdog, for example, the content of which is suitable for fourth-graders, has a book level of 4.5. Many books, however, have a low book level but are appropriate for upper grades and vice versa. For example, Ernest Hemingway’s ‘The Sun Also Rises’ has a book level of 4.4 because its sentences are short and its vocabulary is simple.” (Renaissance Learning, What are Points?)[360]
In the words of its Managing Editor in 2016, "No formula could possibly identify all the variables involved in matching the right books with the right child."[361]
In the lockdown of 2020 Renaissance released pupil level data to UK researchers, including a “pupil ID reference that is used for internal Renaissance purposes only, a date of test, minutes spent per test, and the test scores. In all, around 160,000 data points were provided for the period September 2019 through mid June 2020.”
In our opinion such processing and distribution was not lawful. The schools did not ask Renaissance to process the children’s personal data in this way, to de-identify it before passing it on, and that makes Renaissance a data controller, not a processor. And the company failed to inform each family that their child’s data would be processed for these purposes, or to offer them any choice about it.
The privacy notice for Europe updated on August 11, 2020 confirms that the company may share Personal Data with the parent company, Renaissance Learning, Inc., with Governmental Departments, Ministries or Research Agencies (for example, in the United Kingdom, the Department for Education, Education Policy Institute, National Literacy Trust); or entities working under their authority to support alignment studies and educational research among others.
The questions we have asked ourselves as parents are these. What scope is there for this algorithm to create a misplaced perception of the ‘value’ of a book to a child? Is the app limiting what she will read out of curiosity, sticking to her assigned ‘level’ or age band? It influences her choices in ways that are not fully transparent. How does the ‘norming’ work, what influence does it have, and why? Is it biased? In what ways, and how are its unintended consequences mitigated? How do children of different abilities feel about the product over time? It would be interesting to see research trials address these questions: whether and how the product makes ‘better’ readers, and how attainment and progress compare with teacher encouragement to read with non-quantified goals, rather than turning children’s reading for pleasure into one of over ‘2.8 billion real-world data points’. Or is it the reading practice that matters, not the product?
Accelerated Reader costs £450 per year for 50 pupils, or £9 per pupil per year. This covers the annual licence needed for each participating pupil (the minimum subscription is for 50 pupils), one day of teacher training and year-long access to a free hotline telephone service.[362]
Renaissance Learning was awarded a DfE contract in September 2020 to Understand Progress During the 2020/21 Academic Year.[363]
A company spokesperson responded to our statement saying they welcomed the opportunity to address the points raised in the article.
“Point Value
Data Monetization
Data privacy informs all of the decisions we make at Renaissance which is why we recently launched the Privacy Hub[364] to ensure our privacy practices are transparent and accessible.”
Jen Persson includes this case study as a parent. My child used this product at school. In 2020 I made a Subject Access Request to find out what data they processed, asked to receive a copy of the data, and to receive a list of the third parties with whom data are shared.
Times Tables Rockstars, owned by Maths Circle Ltd, describes itself as a carefully sequenced programme of daily times tables practice. Maths Circle also owns NumBots.
“Each week concentrates on a different times table, with a recommended consolidation week for rehearsing the tables that have recently been practised every third week or so. This format has very successfully boosted times tables recall speed for hundreds of thousands of pupils over the last 8 years in over 14,000 schools - both primary and secondary - worldwide.”
In 2020 my research found that the company had shared data with the Centre for Mathematical Cognition at Loughborough University, which is carrying out studies into the learning of times tables. I was surprised, and asked both the company and the Centre to provide more information.
A Loughborough University spokesperson told me, “We are satisfied that Maths Circle put in place appropriate technical and organisational measures before sharing de-personalised data with the University as part of an ESRC funded study. The data shared with the University was limited to only that which was necessary for the purpose of the study and was rendered impossible for our researchers to identify any of the children, either directly or indirectly.”
Although I later read that the company website states that it may share data with “Government education departments, universities, bodies managing or representing schools, or schools themselves, to help enhance the learning from our Sites through the preparation of statistical reports and analysis based on aggregated or de-personalised data,” I had not known this prior to doing the research to make a Subject Access Request.
The company spokesperson said, “We use techniques to de-personalise any data we share so that the recipient (the Centre for Mathematical Cognition at Loughborough University in your example) is unable to directly or indirectly identify any children from the data, thus applying the appropriate protections of personal data as required by the data protection laws of the UK. It is lawful for us to do this because (a) we are not sharing any personal data, (b) it is lawful for us to convert personal data into non-personal data for this purpose, and (c) at the point the data does not directly or indirectly identify an individual child, it is no longer personal data and the data protection laws do not apply. These points have all been confirmed by an external independent data protection specialist.”
Leaving aside questions over the degree of depersonalisation versus anonymous data, we disagree.
By going beyond the instructions of schools and making the independent decision to process children’s personal data for the purposes of transfer to researchers, Maths Circle determined the 'purposes and means'[365] of the processing. The company is therefore a data controller because it is they, not the schools, that took the decision to do that. As such, they need a legal basis for the processing they carried out to de-personalise the data (which appears to be absent in this case because the schools have not given them that level of authority) and they would need to comply with the general principles of data protection law including transparency and fair processing, informing families.
A spokesperson for Maths Circle clarified an important question we had. Had the company received any money for the data? It had not. “We have not been paid by the CMC/ Loughborough University - this is a joint research collaboration to help children's learning.”
“It is our understanding that we can be both a data processor and also a data controller of the same personal data, when processing for different purposes. When de-personalising data, we acknowledge that we are acting as a data controller for that element of the processing and so must have a lawful basis for doing so. We are relying on legitimate interest as our lawful basis, having carried out the necessary legitimate interest assessment and having considered both the best interests of the child and the public benefit. Our schools are notified of this as it is set out in our Privacy Notice which schools contractually accept in our terms and conditions.”
"The data shared with the CMC at Loughborough University is limited to: a selection of multiplication and division questions given by a selection of pupils with the pupils' response to those questions and the time taken to answer the questions, along with an encrypted unique research identifier to allow the researcher to link up the different answers given by one user. For the avoidance of doubt, no pupil names, class names, school details, location, or any sort of digital address or other information about the pupil is shared."
“This de-personalised data does not directly or indirectly identify an individual child and so no harm can be caused to any child. If anything, the CMC research will ultimately help more children (not just TT Rock Stars users) to learn and understand the times tables. The intention of the research is to contribute to the academic body of knowledge around learning of basic number facts; as such any consolidated findings are intended for the public domain. We are passionate about improving mathematical outcomes for all children and have no commercial gain from this academic research.”
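As a sketch only, and under the assumption of a keyed pseudonym (Maths Circle has not published its actual technique), the ‘encrypted unique research identifier’ and restricted export described above might look something like this:

    # Sketch only: assumes an HMAC-based pseudonym; Maths Circle's actual
    # de-personalisation technique is not public.
    import hashlib
    import hmac

    SECRET_KEY = b"held-by-the-company-only"  # hypothetical key, never shared with researchers

    def research_id(internal_user_id: str) -> str:
        """Stable pseudonym letting a researcher link one user's answers
        without ever seeing the internal account ID."""
        return hmac.new(SECRET_KEY, internal_user_id.encode(), hashlib.sha256).hexdigest()

    def export_row(record: dict) -> dict:
        """Keep only the fields described above; drop name, school, location, etc."""
        return {
            "research_id": research_id(record["user_id"]),
            "question": record["question"],          # e.g. "7 x 8"
            "response": record["response"],
            "seconds_taken": record["seconds_taken"],
        }

Whether such a pseudonym renders the data truly anonymous, rather than merely de-personalised, is exactly the question left aside above.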
The core processing activity for which Maths Circle is a controller is the processing to de-identify data for the purpose of onward transfer. That needs a lawful basis, disclosure, and an option to opt out.
We warmly welcome Maths Circle’s collaborative approach on this, and the opportunity to discuss going forward how the Subject Access process may be improved for children and parents.
Read Theory is a U.S.-based app, owned by Pomador, a private equity firm based in Israel. It offers “Personalized reading comprehension exercises for K-12 and ESL students” and claims it will “Improve your students’ Reading Comprehension. It’s free.” It also claims to be adaptive, and appears to use some sort of machine learning: “Our solution automatically recognizes the student’s reading level and matches them with the appropriate text and questions.”
A parent contacted us, fed up with its constant advertising.
“My son aged 9, is using read theory, which has been recommended by the school, but every few minutes adverts for Disney keep popping up and he cannot stop himself from watching them. In fact he seems to spend more time watching the adverts than he does on the work.”
In its privacy policy,[366] Read Theory is clear that it uses child-specific advertising for under-13s.
“While our Website includes third party advertising, ReadTheory ensures that any such third party advertising only shows advertisements that are appropriate for children. Furthermore, targeted or interest-based advertising is displayed only for those users who have been expressly flagged as being over the age of 13.”
It may, as it claims in the privacy policy, not serve ‘targeted advertising’, in the sense that it does not use a particular child’s tracked behaviours to target specific ads tailored to them as an individual; but it certainly shows children plenty of adverts aimed at children, and its cookies policy is not consent-based.
“Cookies are also used to help ensure that the advertisements you see, if any, are relevant to you and your interests. You can find more information about cookies at www.allaboutcookies.org. You may remove our cookies by following the instructions of your device preferences; however, if you choose to disable cookies, some features of our Website and/or services we may offer may not operate properly and your online experience may be limited. From time to time, our Website may use third-party cookies served by our advertising partners. Such cookies may be used to serve you personalized advertisements, measure such advertisements’ effectiveness, or otherwise personalize the content provided to you through the Website.”
It seems (in line with many U.S.-based companies) not to realise that the collection of other personal data (it uses the term PII, personally identifiable information) from other places may not be lawful.
“We may combine your PII with publicly available information and information we receive from cross-reference with selected partners and other third parties. By combining this information, we are better able to communicate with you and to create a more customized user experience.”
In terms of disclosure, “We may transfer or disclose PII to our affiliated companies, subcontractors or such other trusted third party service providers or partners who may be located in different jurisdictions across the world.”
“We may disclose your Personal Information, or any information you submitted to us in connection with the Website and/or services we may offer, as follows:
to third parties in our business network in connection with providing services we may offer you (including advertising partners, business consultants, or professional advisors), all subject to confidentiality undertakings by such third parties in accordance with applicable laws;”
At the time of writing, we have requested from Read Theory a list of the third-party advertising companies whose ads have been displayed on Read Theory to UK children aged under 13 in 2020, and a list of all current and past recipients of children’s data from use of the product over the last two years, June 2018 to the present.
According to Satchel, their software is now used in over 1 in 3 secondary schools in the UK and in 23 countries across the world, making a difference to the lives of millions of students and parents.
Show My Homework’s (Team Satchel/Teachercentric) privacy policy, online in September 2020, suggested a misunderstanding of UK data protection law that others in the education sector commonly make: namely, that having a contract in place with a school documenting the terms of data processing amounts to having the legal basis of ‘contract’ to process personal data. That basis under law, as set out under GDPR Article 6(1)(b), has to be a contract with the data subject, i.e. the child for pupil data processing, and it cannot be made by proxy by the school. The Team Satchel Ltd[367] policy set out, for both Students and Teachers / Staff Members:
“Our legal basis for processing your data is Contract. We have this in place with your school. They may have a different legal basis for collecting your information originally which they will be able to provide.”
The Team Satchel Ltd policy also says they may disclose personal information to any partner of Teachercentric and/or a member of the group, which means subsidiaries, their ultimate holding company and its subsidiaries, and that they may disclose personal information to third parties.
Mother M, who contacted us about this app, was concerned that she had never been asked, or told, that her daughter would have accounts opened in her name using full names and photographs, or how her interactions on the site and between sites would be tracked when using social media log-ins.
After Team Satchel was contacted in September 2020 for comment, they provided us with a link to a revised privacy notice[368] and a lengthy comment, some of which we include.
Privacy Policy
The wording you refer to in the link provided was part of our old privacy policy which should not still be publicly viewable. When we last updated the policy we changed the name of the URL from Privacy Policy to the more appropriate Privacy Notice. Regrettably, we did not remove the link to the old privacy policy after the transition and therefore it may still show up in search engine results. We have now removed that link and can confirm our privacy notice is at the following:
https://www.teamsatchel.com/legals/privacy_notice.html. If you are aware of any links within our site pointing to the old policy please let us know and we will change those ASAP. Your readers may also be interested to see, after a review of our international transfers, we have amended the Privacy Notice to reflect the recent CJEU decision in “Schrems II”.
Lawful Basis for Processing
“Even though your article refers to old privacy notice content, we have reviewed our current wording and agree it could be worded better. We have now updated the privacy notice to better convey our relationship as a Data Processor working for Education Providers (the Controllers). It was not our intention to infer the lawful basis we rely upon is “performance of a contract” but that we have a contract in place with each of our Education Providers - as should be the case in any Processor-Controller relationship. We really appreciate you bringing this to our attention so we could make the wording clearer.
“Ultimately, we are a company that respects and upholds our users’ data protection rights and freedoms. As I am sure you can understand, data protection law is still evolving (such as the recent Privacy Shield ruling) and so where we don’t get it quite right, we will quickly work to fix the issue.”
It is not uncommon for apps to publicly post ‘winners’ tables of user performance.
In February 2018, a researcher in Australia found that children’s identifiable data were publicly available to download from the app Mathletics. The configuration of the leaderboard allowed pupil-level data to be extracted for 169,686 pupils in over 6,000 UK schools.[369]
The global network of 3P Learning websites and resources (including the websites and digital learning resources for the Mathletics, Spellodrome, Readiwriter, Reading Eggs, Mathseeds, WordFlyers, World Maths Day and World Education Games brands, together with challenge websites for national and regional events, and the 3P Learning Uservoice and feedback forums) is owned and operated by 3P Learning Ltd and/or its subsidiaries and/or licensors.
3P Learning – Mathletics was the BETT Show 2019 International Digital Education Resource winner.[370]
The company was contacted for comment and we are grateful for their review input.
“As you see from our privacy policy, we do take the protection of student and teacher data very seriously, and we continually strive to adapt to changing legislation and the needs of our customers. Our approach to student data is very simple: we collect the absolute minimum amount of data required for a teacher to identify one of their students in our product. This could be as little as the student's first and last initials, although typically students like to see at least their first name when they are using our product, or when they receive a certificate of achievement.
“In the past, our Hall of Fame feature has been in great demand, especially during events such as World Maths Day. When a student appeared in the Hall of Fame, they often wanted the whole world to know, so the Hall of Fame was published on our website. However, customer needs change, and several years ago we made the decision to only display the Hall of Fame to authenticated users of our product, and we have implemented privacy settings to allow teachers to control how and when student information is displayed.
“We also make a strict delineation between product information and marketing information, keeping the data in different systems, and capturing communications preferences so that only teachers who have provided their consent will receive marketing communications from us. We never use students' personally identifiable information for advertising or marketing, and in fact we only ever communicate with adults - teachers or school administrators.”
In 2019, parents contacted us concerned about a school asking them to use this app, and about the way the school implemented it.
John’s** parents were willing to share the details of his medical condition — asthma — with the school administrators, for in-school and school-trip safeguarding and wellbeing. They trust the school to give this information to those who need to know in school in case of necessary care. But they were not happy to use the app that the school asked parents to use to record his condition. The first they knew of the new process was an email home that contained a link for his parents to accept the account. John was 15. The school had created a parent and child record with the company already without their parental consent.
“The family declined our consent, and offered to continue with the school's existing paper-based system. This has not gone down well: we've been told there's no choice but to consent and sign up”.
Personal details such as an individual’s name, date of birth, nationality, family details and other information defined as “Personal Information” had been extracted from the School Information Management system and sent to the app company to set up an account for every child.
Operoo integrates as an extension to the school’s management information system, with both software applications working as an integrated solution to help schools manage processes in a paperless manner.
The company address is in Australia but its data servers and a third-party DPO are based in Ireland, inside the EU. It also has a number of partners[371] to provide integrated solutions. The system uses third-party products to send messages such as profile requests and eForms. To support the delivery of the Services, the company also engages third party sub-processors to provide certain infrastructure services. These currently consist of Amazon Web Services for hosting services, Twilio for SMS messaging, Google for language translation services, and Zoho for customer support live chat services.
The app may process information to create a record about the health, disability, health services, medical histories, prescriptions, allergies and other information about an individual as defined by the school as “health information”. Forms may process photographic copies of a child’s passport where requested for a school trip.
This app shares much in common with others of a similar nature. Children’s data are processed by adults, about them and on their behalf, with their consent assumed, creating records the children themselves never see.
Operoo says:
“as a Data Processor, it is neither our responsibility nor our prerogative to obtain the consent of the Data Subjects (whether parents or students). As a rule, schools have in place methods to communicate with parents regarding the use of the various software applications that they use to manage school operations. We coach our customers that it is best practice to provide information to parents about our system before implementation, and we make templates available to schools to facilitate that communication. In the vast majority of cases, schools inform parents of the school’s plan to use Operoo well before parents receive a link to activate their accounts. Parents can - and sometimes do - opt out of inputting their child’s information into the Operoo system.”
“Operoo maintains data processing agreements with each of its sub-processors that include legally valid data transfer mechanisms.”
“The school, as Data Controller, is obligated under the Data Protection Act 2018 and the GDPR to have communicated the legal basis under which it is processing the personal data of the data subjects to the parents and students via their privacy policy and other documentation which may be shared with parents.”
“More broadly, for John in the Case Study, the far greater risk is that important information will not be available when and where it is needed.”
*From Monday 10th August 2020, the CareMonkey product and company rebranded to Operoo.
**we changed the pupil’s name.
A single paragraph in 2016 statutory guidance created rapid growth in the use of computer activity monitoring services imposed by schools in — and outside — the classroom, without independent oversight or guidance on what was acceptable policy. Compare the reach of just one popular UK schools provider: in the November 2016 research report by Big Brother Watch,[372] its product was believed to be used by 403 schools; today it is used in over 1,400 UK schools.[373]
Another service is provided by Smoothwall and the CEO explained some of how their product works in a panel discussion[374] as part of the launch of this report.
Another service in this area, with different functionality again, is set out in our day-in-the-life case studies: a service that monitors not only online but also offline computer activity.
While web filtering software had been routine in schools for years, since the introduction of the revised guidance in 2015-16, the Revised Prevent Duty Guidance: for England and Wales,[375] there has been a marked increase in the use of individualised monitoring, which scans and may capture screen content, as compared with only filtering and blocking access to inappropriate websites.
Schools (and registered childcare providers) in England and Wales are required “to ensure children are safe from terrorist and extremist material when accessing the internet in school, including by establishing appropriate levels of filtering".
It expects that they “assess the risk of [their] children being drawn into terrorism, including support for extremist ideas that are part of terrorist ideology”. A number of self-review systems (e.g. www.360safe.org.uk[376]) will support a school in assessing its wider online safety policy and practice.
All school traffic may typically be forced through such services and run against mandatory lists of content considered blocked, illegal, or in some way harmful. School administrators, pupils and parents do not know which sites are ‘flagged’ and blocked. The content of the keyword library lists is not published either. School staff, and even the help centre staff for the software, may not know which words trigger a reaction. When we talked to one company in 2019, we were told that the watchword lists come encrypted from the IWF and are installed and ‘shipped out’ to schools without even the company knowing exactly what an update contains.
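As a sketch only, the basic mechanism of watchword flagging can be pictured like this. The real libraries are encrypted and unpublished, the watchwords below are placeholders, and vendors’ actual matching logic is certainly more sophisticated.

    # Sketch only: placeholder watchwords; real lists are encrypted, unpublished
    # and regularly updated, and real matching is more sophisticated.
    WATCHWORDS = {"example-term-1", "example-term-2"}

    def flag_activity(screen_text: str) -> list:
        """Return any watchwords found in captured screen text."""
        words = {w.strip(".,!?;:'\"").lower() for w in screen_text.split()}
        return sorted(words & WATCHWORDS)

    hits = flag_activity("text captured from a pupil's screen")
    if hits:
        # a real product might capture a screenshot and alert the safeguarding lead
        print("flagged:", hits)

The child typically has no visibility of the list, the match, or the alert, which is the transparency problem described throughout this section.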
The Department for Education’s statutory guidance ‘Keeping Children Safe in Education’[377] obliges schools and colleges in England to “ensure appropriate filters and appropriate monitoring systems are in place. Children should not be able to access harmful or inappropriate material from the school or college’s IT system”; however, schools will need to “be careful that ‘over blocking’ does not lead to unreasonable restrictions as to what children can be taught with regards to online teaching and safeguarding.” But what happens in practice does not seem to meet these broad aspirations.
The UK Safer Internet Centre says that, recognising no monitoring can guarantee to be 100% effective, schools should be satisfied that their monitoring strategy or system (including keywords, if using technical monitoring services) at least covers a defined set of content categories.
The UK Safer Internet Centre guidance states, “Monitoring systems require capable and competent staff to sufficiently manage them, together with the support and knowledge of the entire staff. Monitoring systems are there to safeguard children and the responsibility therefore lies with the school leadership/governors and Designated Safeguarding Lead.”
But the UK Safer Internet Centre guidance takes no position on monitoring beyond school hours and premises. No clear legal basis is given; there is no mention of privacy rights, data protection law, or communications law. Everyone knows it happens, but few want to talk about it.
84% of 1,004 parents we polled through Survation in 2018[378] agreed that children and guardians should be informed how this monitoring activity works. (We didn’t ask if it should happen at all or not.)
Children want to understand how their data is processed and to redress power imbalances, as outlined, for example, in The Internet on our own Terms: how children and young people deliberated about their digital rights (Jan 2017), research by Coleman, S., Pothong, K., Vallejos Perez, E. and Koene, A., supported by 5Rights, ESRC, Horizon, University of Leeds and University of Nottingham.
We believe some providers’ current practices breach the Data Protection Directive (Article 15), the Human Rights Act 1998, Articles 7 and 8 of the Charter of Fundamental Rights, and Article 16 of the UNCRC (children have a right to privacy, and the law should protect them from attack on their way of life, reputation, families and home). It shows contempt for the balance between children’s dignity and rights and any legitimate interest in processing children’s sensitive personal data. This is particularly applicable outside school hours and premises, and in lockdown.
Article 5 of Directive 2002/58 is devoted to the confidentiality of communications; Article 5(1) provides: ‘Member States shall ensure the confidentiality of communications and the related traffic data [...] shall prohibit listening, tapping, storage or other kinds of interception or surveillance of communications and the related traffic data by persons other than users, without the consent of the users concerned...’ and, that the ‘law’ must, in effect, be ‘adequately accessible and foreseeable, that is, formulated with sufficient precision to enable the individual – if need be with appropriate advice – to regulate his conduct’, to ‘foresee its consequences for him[379]’, ‘to foresee, to a degree that is reasonable in the circumstances, the consequences which a given action may entail’.[380]
Interception of communications transmitted via a private telecommunications system was found not in accordance with the law in Taylor-Sabori v the United Kingdom. In Rotaru v Romania the ECtHR found a violation of Article 8 of the ECHR due to the lack of limitation on the collection and archiving of surveillance information. The depth of privacy invasion caused by current policy is not proportionate to the aim pursued, nor necessary in a democratic society (Leander v Sweden, ECtHR, March 1987), nor proportionate to a school’s responsibilities, extending as it does into personal time outside school hours. Internet screen recording and disclosure to an unnamed list of third parties at provider companies, inside a Multi Academy Trust, or to outside authorities including the police, without transparency to the child, is disproportionate and without any limitation or safeguards; similar to Peck v the United Kingdom, in which video disclosures by the Council were not accompanied by sufficient safeguards and therefore constituted a disproportionate interference with rights under Article 8.
There is further interference with Article 11 of the Charter of Fundamental Rights on Freedom of Expression, “to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”
The Children’s Commissioner for England believes we are failing in our fundamental responsibility as adults to give children the tools to be agents of their own lives.[381] This processing is a gross example of that failure, despite packaging the surveillance of children’s activity as being in their best interest; not all of a child’s actions have consequences they can foresee, especially where misinterpretation occurs.
David Meyer, writing in Control Shift: How technology affects you and your rights (2017), quoted Ray Corrigan’s criticism of mass surveillance and false positives: “It is statistically impossible for total population surveillance to be an effective tool for catching terrorists. Even if your magic terrorist-catching machine has a false positive rate of 1 in 1,000 – and no security technology comes anywhere near this – every time you asked it for suspects in the UK it would flag 60,000 innocent people.”
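Corrigan’s arithmetic can be checked directly. The population and error-rate figures below are those in the quote; the number of true suspects is an invented assumption added only to show the base-rate problem.

```python
# Worked version of the false-positive arithmetic quoted above.
population = 60_000_000          # approximate UK population used in the quote
false_positive_rate = 1 / 1000   # generous 0.1% error rate, per the quote

innocent_flagged = population * false_positive_rate
print(f"{innocent_flagged:,.0f} innocent people flagged")    # 60,000

# Even on the hypothetical assumption of 3,000 real suspects, all of
# them correctly flagged, most flags would still point at innocent people:
true_suspects = 3_000                                        # invented for illustration
share_correct = true_suspects / (true_suspects + innocent_flagged)
print(f"{share_correct:.1%} of flags would be correct")      # ~4.8%
```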
The limitations of filtering technology are clearly set out by Professor Phippen in the 2017 book, Invisibly Blighted: The Digital Erosion of Childhood,[382] including the inability to prevent or identify embedded imagery (for example, of a social media page), peer-to-peer systems, personal networks, and simply those who are determined to work around them. But the lack of guidance and democratic discussion of monitoring children is even more concerning when it comes to interference with privacy, rights, and harm.
“The recent draft statutory guidance on safeguarding by the Department for Education (2015) defines an expectation that schools have monitoring in place and governing body is responsible for it to be ‘appropriate.’
[Schools] need to have appropriate filters and monitoring systems, so that no child can access harmful content via the school’s IT systems and concerns can be spotted quickly. [Department for Education, 2015]
However there seems to be no guidance on what appropriate means aside from further guidance to ensure ‘unreasonable restrictions’ are not placed on what can be taught.”
The UN Special Rapporteur’s 2014 report on children’s rights and freedom of expression stated: “The result of vague and broad definitions of harmful information, for example in determining how to set Internet filters, can prevent children from gaining access to information that can support them to make informed choices, including honest, objective and age-appropriate information about issues such as sex education and drug use. This may exacerbate rather than diminish children’s vulnerability to risk.”
We have taken a lot of time to research and understand the nuances of different company systems and policies, by talking to the providers, asking 400 schools for their policies, and collecting anecdotal evidence from parents and school staff who have contacted us or spoken to us at events with their concerns. We are grateful to the companies that took time to engage with us, as each functions differently.
Safeguarding purposes can be wrapped into one label, but are perceived by some parents and staff as a mixture of safeguarding in the best interests of the child and surveillance designed to be punitive, in an environment where children are treated as potentially suspicious at all times, and conflated with the Prevent programme.
In the case of Digital Rights Ireland (2014), the court ruling underlined the importance of respecting the essence of fundamental rights and freedoms in online activity. The ECJ deemed that the directive, which required Internet Service Providers (ISPs) to store telecommunications data in order to facilitate the prevention and prosecution of crime, was legitimate in its aims of fighting serious crime, but did not pass the proportionality test applied to evaluate the appropriateness of the measures undertaken to achieve the goal.
We believe the same necessity and proportionality tests are not met in this mass monitoring. There is no clear legal basis for the everyday level of intrusion into private and family life that is made possible for a school, for third party providers, and authorities through some of these services, and their use in practice with children and young people in particular outside school hours. The chilling effect[383] of constant surveillance is already documented.
Regulators need to ask how any AI that profiles children, or that is used to identify content for escalation, is accountable, and to whom. How do staff know whether it is inappropriately censoring content? Who is training the system, and whose ethics underpin it? Why are there no reports in the public domain about how, and what types of, content it automatically filters, or information available to children and parents? It also matters where companies process sensitive data abroad, such as in the U.S., especially since the Privacy Shield is now invalid.
National standards should document, at minimum, how a school monitoring strategy meets data protection principles as regards: data protection by design and default; data minimisation; accuracy; age appropriate design; BYOD (Bring Your Own Device); data retention; multi-language support; profiling; reporting; and transparency and fair processing to families, including all of the purposes, such as Prevent.
Responses from 1,004 parents surveyed by Survation online between February 17-20, 2018[384]
Parents were asked about getting consent for this process. They were not asked *if* they support the use of Internet Monitoring software. Parents were asked only whether, to the best of their knowledge, their school uses it; if so, whether parents should be asked for consent; and whether they believe they, and children, should be informed what the watchlists contain.
Given the low levels of trust indicated in the question about which third parties parents felt they would trust to use their child’s data appropriately, and the high percentages that want to be informed of the consequences of keywords being searched for, we think the numbers on consent probably include some who disagree it should happen at all; that is, some parents did not agree that consent should be required for Internet web monitoring because they do not agree it should happen at all, with or without consent. More research is needed. What is clear is that parents believe the keywords and consequences should be transparent and foreseeable. This amendment prejudices that, embedding current practice.
Would you say that the current amount of control you have over which apps and online services your child is signed up to by the school (your child’s digital footprint) is:

| Response | % |
|---|---|
| Sufficient | 50% |
| Insufficient | 28% |
| Don’t know | 22% |
Parents were asked, does your child’s school use Internet Monitoring and keylogging software (software that records children’s Internet use at school)? Many do not know.
| Response | % |
|---|---|
| The school uses this | 46% |
| The school does not use this | 17% |
| Don’t know | 37% |
To your knowledge, is the Internet Monitoring software used by the school able to do any of the following? Please select all that apply.

| Capability | % |
|---|---|
| Record children’s images through a webcam | 14% |
| Capture the screen as seen by the child | 25% |
| Log children’s Internet search terms and create flags based on keywords | 55% |
| None of the above | 5% |
| Don’t know | 28% |
Key logging software at school captures children’s Internet search terms. Some systems flag up to 20,000 different words and phrases. To what extent do you agree or disagree with the following statements? Today’s opaque practices are considered to be unacceptable.

| Statement | Agree | Neither agree nor disagree | Disagree | Don’t know |
|---|---|---|---|---|
| Parents should be informed of which keywords get flagged | 84% | 11% | 4% | 1% |
| Parents should be informed of what the consequences are if these keywords are searched for | 86% | 11% | 2% | 1% |
| Children should be informed of which keywords get flagged | 69% | 18% | 12% | 2% |
| Children should be informed of what the consequences are if these keywords are searched for | 86% | 11% | 2% | 1% |
On a scale from 0 to 10, to what extent do you agree or disagree that schools should get parental consent to be able to use each of the following? We believe the ‘disagree that consent should be asked for’ responses hide some who feel monitoring should not happen at all.

| Technology | 0-4 (disagree) | 5 | 6-10 (agree) |
|---|---|---|---|
| Internet Monitoring software that can be run remotely at home in the evenings or during school holidays | 32% | 15% | 53% |
| Internet Monitoring software that can record children’s Internet screen use in school at all times | 22% | 14% | 64% |
| Internet Monitoring software that can record children’s images through the webcam | 34% | 13% | 53% |
46% were not offered any choice at all before these technologies were imposed on their children. And while 54% said they were asked for consent, we know this is not a valid consent process, because there is no age-appropriate explanation of how it works, what data is collected, viewed, or sent to whom, and its consequences; and parents and children are often required to agree, or feel compelled to accept, because there is no choice.
Freedom-of-Information requests collected between December 2016 and April 2017 from 190 schools in England and 30 schools in Northern Ireland by Biometrics-in-Schools founder, Pippa King, show that 50% of the schools that replied enable web monitoring on BYOD (Bring Your Own [private mobile] Device).
Evidence from 4,507 of the 6,950 schools using SWGfL tools that carried out e-safety self-reviews with the 360 Degree Safe tool, in analysis by Andy Phippen of Plymouth University[385], shows that school staff are not equipped to deal with or challenge the outcomes of these technologies.
“However perhaps even more concerning is that the two weakest aspects are those upon which a school would be most reliant on understanding the nature of data protection and safeguarding within the school setting. If both staff and governor knowledge are poor (and in both cases averages are below ‘basic’ practice, indicating that a large number of establishments do not have either in place) there is little likelihood that the complex issues around data protection or safeguarding are well understood, and an effective challenge to senior management on these matters certainly cannot exist.”
For the preparation of this report we also asked IT staff in 35 schools about the use of these systems and the home-school agreements used. These school-pupil-parent agreements are ostensibly “consensual” contracts on how the child will use the Internet in, and often outside, school, with regard to acceptable behaviours. They often contain a single line which says, ‘I understand my online use will be monitored.’ There is rarely an explanation of what this means, how much personal data will be collected, or the surveillance that ensues. No explanation is given of the consequences of these policies in an age-appropriate way for a child. Any consent-based processing is therefore invalid.
Parents and children sign them as part of school admissions, and some who might prefer not to sign do so anyway, not wanting to be seen as “difficult” before their child has even begun attending the school.
The information available about the way children’s data is collected and shared under the Prevent programme is opaque. The lack of transparency undermines any attempt to justify the way the programme operates in line with human rights standards.
We made Freedom of Information requests to Safeguarding Boards at Local Authority level, and each refused to provide any statistics on the volume of data processed about children, citing security exemptions. We saw no bespoke privacy notice for Channel referrals; those that offered us a privacy notice pointed us to their generic Local Authority notice, which is not about Prevent. We asked to what extent data may be passed on to others, including NHS staff, social care staff, Directors of Children’s and Adults’ Services, schools, further education colleges and universities, Youth Offending Services, Chairs of Local Safeguarding Children Boards and Safeguarding Adults Boards, Local Authority Safeguarding Managers, and Home Office Immigration, Border Force, housing, prisons, or probation.
But we were told in each case, “the Council holds the information and believes that this information is exempt under section 24 (1) – on grounds of National Security. Section 24(1) of the Act states:
24(1) Information which does not fall within section 23(1) is exempt information if exemption from section 1(1) (b) is required for the purpose of safeguarding national security.”
Not one wanted to be transparently accountable for Prevent programme child referrals.
According to CRIN (Child Rights International Network), “Since 2015, schools, child-care services, local government, police and health and social care have had a legal obligation to “have due regard to the need to prevent people from being drawn into terrorism”. This obligation requires teachers and professionals working with children to “identify children at risk” and “intervene as appropriate”. This broad and ill-defined duty leaves open what might be “an appropriate intervention”, but one option open to schools is to refer children they are concerned about to the Channel programme, which screens individuals referred to it and, where it considers appropriate, intervenes to prevent the individual being drawn into terrorism.”
In 2015/16 according to the Prevent Programme statistical bulletin[386] a total of 7,631 individuals were subject to a referral due to concerns that they were vulnerable to being drawn into terrorism. The education sector made the most referrals (2,539) accounting for 33%, followed by the police (2,377) accounting for 31% of referrals.
In 2015/16, of the 7,631 individuals referred, the majority (4,274; 56%) were aged 20 years or under. Individuals referred in 2015/16 by the education sector had the youngest median age (14).
Of the 7,631 individuals referred in 2015/16: 2,766 (36%) left the process requiring no further action; 3,793 (50%) were signposted to alternative services; and 1,072 (14%) were deemed suitable, through preliminary assessment, to be discussed at a Channel panel.
Published information is relatively clear about which groups of children are most affected by Prevent, according to CRIN (Child Rights International Network):
“Between March 2014 and March 2016, 3,105 people under the age of 18 were referred to Channel across England and Wales - accounting for 48 percent of all referrals during the period. Among these children, certain minority religious and ethnic groups have been disproportionately targeted by these measures. Nearly 40 percent of the children referred to Channel were recorded as Muslim in the figures and more than a quarter were recorded as being ethnically Asian.”
Rights Watch (UK) and Liberty are concerned that, despite broad policy statements of compliance with data protection and privacy rights, the operation of the Prevent strategy and the Channel programme on the ground does not demonstrate due respect for personal information and privacy.
“From the case studies considered by RW(UK) in its 2016 report, ‘Preventing Education?’, it appears local authorities, schools, and police authorities may be operating some system of data collection and sharing which records a child’s interaction with the Prevent strategy or the Channel programme. This could include formal referrals, informal information and events such as a police visit to a child’s home. RW(UK) and Liberty have significant concerns about the rigour and compliance of such a system of data collection with both the specific requirements of current data protection laws and the Human Rights Act”.
Preventing terrorism is of vital importance to our society, but the Prevent strategy is not fit for purpose. It has led to discrimination and infringement of the human rights of children. Cautiousness amongst teaching staff and other professionals - who do not have experience of counter-terror policing - has led to over-referral in practice. This is exacerbated by concerns about failure to meet Ofsted’s standards.
Ofsted inspections are tasked with monitoring a school’s compliance with the Prevent Duty. Criticism of Prevent increased following the introduction of this top-down statutory duty, with former MI6 Chief, Baroness Manningham-Buller arguing “I cannot see how legislation can really govern hearts, minds and free speech”.
CRIN (Child Rights International Network) told us, “The Channel programme operates through panels, led by a representative of the police (the Channel Police Practitioner) and can include professionals from education, social work, immigration, housing and health services. Each panel is formed at the discretion of individual local authorities, and so their size and make up vary significantly across the country. Yet despite a certain degree of clarity about who might be involved in this process, there is a chronic lack of transparency about the way that information is collected and fed into this process, including within schools.”
CRIN has used freedom of information requests in an attempt to find out the ways that schools are using filtering and monitoring programs to detect signs of “radicalisation” in students. “CRIN submitted 61 requests to schools across a London Borough to ask what filtering and monitoring programs were installed on school ICT equipment for the purposes of detecting signs of “radicalisation”, information about how the software worked and how many students had been flagged up by the software. None of the schools provided detailed information and a common response was that their filtering software was operated by a public-private partnership that is not subject to the Freedom of Information Act.”
CRIN also told us that, “without a clear picture of what information schools, or private companies working on behalf of schools, are collecting and where it is held, it is impossible to assess the adequacy of mechanisms to protect the data of children. The outsourcing of services to the private sector has also extended the number of bodies holding children’s data, while limiting the potential for independent scrutiny.”
Rights Watch UK and Liberty say there is a lack of reliable information about the rules public authorities follow (if any) in collecting, collating, and sharing personal information relating to Prevent. The absence of a clear statutory basis in the law which created the Prevent duty means that public authorities may be relying on other powers to carry out data collection, collation, and sharing. But there is minimal publicly-available information about the scale and scope of data collection activities carried out by educational institutions and local authorities, and about how those activities are constrained.
The Government has not produced any evidence to support the ‘conveyor belt’ theory that the expression of extreme or radical views leads to violent criminality. This is a concern drawn out by Parliament’s Joint Committee on Human Rights, which concluded in its July 2016 report that: “it is by no means proven or agreed that religious conservatism, in itself, correlates with support for violent jihadism.”
The extension of the programme to cover “non-violent extremism” means that the strategy takes the government far away from responding to criminal behaviour. Sir David Omand, architect of the original Prevent strategy, argues that “[o]nce you get into being accused of policing different ways of living and ‘thought crime’ over controversial areas such as foreign policy you enter a difficult area.” Shutting down a vague notion of non-violent debate – particularly in schools – risks pushing contentious, difficult, or even merely awkward discussion underground, away from moderate voices or challenging questions.
Labelling certain cultural, social, religious or political subjects “extreme” can also create conflict and mistrust between schools and pupils. The Muslim Council of Britain, following extensive engagement with communities across the UK, reports: “a widespread concern that Muslims are singled out as potential extremists”, with particular concerns expressed about the requirement for Muslims to pass “subjective and discriminatory counter-extremism litmus tests, as a condition of engagement”.
In July 2020, work by Rights Watch UK exposed a secret data sharing agreement between further education colleges, the University of Salford in Greater Manchester, the Department for Education and Greater Manchester Police, to tip off universities when an incoming student had been referred to Channel, part of the Prevent programme. Although the university claimed the data would not be shared to inform admissions decisions, it had no plans to publish these agreements or to have independent oversight of the process agreed with the Department for Education.
Under the data sharing agreement[387] information would be sent from the further education college to the university with details of the Channel referral, ethnicity, religious belief, information concerning the individual’s health, wellbeing and mental health and any support provided by the college and other agencies including the police and social services.
Documents show that it was the colleges’ working group that pressed to be able to share such information between them, and that GMP had reservations about the legal basis. The ICO also raised concerns about its lawfulness where the necessity test was not met. The Data Protection Impact Assessment failed to include any assessment of the risks to the rights and freedoms of the individuals affected, focussing instead entirely on data security.
This data sharing arrangement appears to be unusual practice, and further research via FOI has not found similar arrangements elsewhere.
The university is also unusual in its Prevent duty guidance compared with other universities, in that it explicitly associates ‘the rule of law’ with an obligation to wear an ID badge. (Salford University, 2020)[388]
At the time of writing we await further information on the role of the Department for Education in this agreement.
“The world in thirty years is going to be unrecognizably datamined and it’s going to be really fun to watch. What does that mean for education?” asked then CEO of Knewton, Jose Ferreira at the White House US Datapalooza in 2012.
School staff are rarely qualified to understand the due diligence needed for emerging technologies, and fail to carry out even the most basic data protection impact assessments when using children’s sensitive data at scale with new technologies. Some schools have received very little training on even the basic standards of data protection and privacy law, and it is hard for them to understand in addition the further laws that apply to the processing of, for example, biometric data or communications data.
While there is often debate on the risks that privacy and ethics pose to AI product development in education, much more attention must be given to the risks to children: their classroom experience, interactions, learning pathways, development, and how staff perceive them as a result, are all shaped by the expansion of computer-driven decision making and big data analytics that turn human development, in all the complexity and uniqueness of individuals, into simplified dashboards for the comparison of norms and outliers.
“In educational systems that emphasize development and, for example, social competences, formative assessment might be higher on the list. As a result, there is a risk that AI might be used to scale up bad pedagogical practices.” (Tuomi, 2018)
“AI is also used to diagnose student attention, emotion, and conversation dynamics in computer supported learning environments, for example for course development and management, in an attempt to generate optimal groups for collaborative tasks, and to recognise patterns that predict student drop out. (Nkambou et al., 2018; Rose et al., 2018).” “It is often assumed that AI systems enable new levels of personalisation and diversity for information systems. Much of this personalisation however, results from fine-grained categorisation that puts users into predefined classes. Although these systems may be able to efficiently simulate personalisation, they do not necessarily support deeper levels of diversity. AI systems can be excellent predictive machines, but this strength may be an important weakness in domains where learning and development are important.” (Craglia. Ed. et al, 2018)
Where AI learning algorithms are based on historical data, they see the world as a repetition of the past. They do not see its biases and outdated cultural norms, nor the consequences of modelling the world as it was for the development of the people who will shape the world as they want it to be: a world that may be better freed from the discrimination, racism, ableism and sexism embodied in that past data, and in the design of decision making based on it.
It is ironic that those who see themselves as tech evangelists, and may be most vocal about bringing about a better world through tech solutions, are in fact supporting the protection of the status quo and its existing power structures, built on data not fit for purpose.
AI algorithms have limitations. In the summer of 2020 the platform Edgenuity responded to stories[389] that students in the US could game the automated scoring system and get top marks simply by including lots of keywords in nonsense-style written online answers. The company said that it “does not use algorithms to supplant teacher scoring, only to provide scoring guidance to teachers. Teachers have the ability to accept or overrule suggested scores. Tests are never scored by AI and a student’s course grade is never determined by AI.”
The Edgenuity website states that if activities include keywords that are used for determining a system-assigned score, the student will earn a 0% if none of the keywords are included in the response, and will earn 100% if at least one keyword is included in the response.
While it also says ‘Educators can always override system grades by manually assigning a grade to the activity’, that would seem rather to defeat the purpose of asking the machine to do it in the first place.
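The scoring rule as the website describes it is simple enough to sketch, and the sketch makes plain why it was gameable. The keyword set and answers below are invented for illustration; only the 0%-or-100% logic follows the stated rule.

```python
# Sketch of the keyword scoring rule described above: 0% if a response
# contains none of the rubric keywords, 100% if it contains at least one.
# Keywords and answers are hypothetical.

def system_assigned_score(response: str, keywords: set) -> int:
    return 100 if set(response.lower().split()) & keywords else 0

KEYWORDS = {"photosynthesis", "chlorophyll"}   # hypothetical rubric keywords

# A reasonable answer that misses the exact keywords scores zero...
print(system_assigned_score("Plants convert light into chemical energy", KEYWORDS))  # 0
# ...while keyword-stuffed nonsense scores full marks, which is how it was gamed.
print(system_assigned_score("chlorophyll photosynthesis banana banana", KEYWORDS))   # 100
```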
The counterpoint, however, should not be that each of us must accept constant surveillance in order to donate fresh or highly accurate data for the public good, or for the growth of new tech products built on data from real-time users.
If the product cannot offer safe, tested, pedagogically proven support without using children’s activity, their human labour, as part of its product development process, should we not question whether it is suitable for use in education at all?
Products that use artificial intelligence and require the continuous or ongoing processing of pupil data for the company’s own purposes, for example to continue to train machine learning algorithms,[390] cannot legitimately do so on the basis of a school’s public task. The school does not require that the company keep training its product or develop a new product; that is a decision for the company. That choice makes the company a data controller.
Where school children have no genuinely freely given choice whether to use a product in the classroom or for homework, in particular where schools make it compulsory, or simply due to the coercive nature of the power imbalance between the school as a public authority and a child, consent cannot be a freely given choice, so consent[391] will rarely be available as a valid lawful basis for such pupil data processing either.
Our research shows that schools are not qualified to understand the due diligence needed in some of these emerging technologies. For example, one academy trust using the product CENTURYTech replied in response to our research in 2019[392] that, “No personal (pupil or staff) data is used to train CENTURY’s artificially intelligent algorithms. All personal data is wholly and entirely owned by the data subject and CENTURY uses this data only to create secure logins for each user and to display students’ learning data to educators,” clearly contradicting how the product works.
The school had not carried out any data protection impact assessment before introducing the product, despite the company collecting sensitive and special category data such as pupil premium, ethnicity, English as an additional language and special educational needs[393], and despite making it compulsory for all pupils to use.[394]
Another academy trust that had similarly carried out no impact assessment[395] stated in response to a freedom of information request that, “CENTURY uses anonymised (and not personal) data generated through use of its platform to improve its recommendation algorithms which seek to improve students’ learning outcomes.”
When a child uses the platform the company collects personal data, including IP address, name, date of birth and gender, and special category data, together with behavioural data collected from interactions with the tool every two seconds. The separation and anonymisation process is itself processing of personal data. While the company sets out consent in its terms and conditions as its legal basis for processing, as we have found, and as found in recent regulatory enforcement[396] and court rulings in Sweden and France as regards schools and the legal basis of consent, relying on consent in a school for such routine tasks is invalid due to the nature of the relationship between a school and its pupils.
The question is whether a product that is still in development, or that uses children’s data for its ongoing or new product development, whether AI or any other development, can do so lawfully at all, other than for minor and non-substantive tasks or security enhancement; in particular where the school, not the individual, makes the decision to use the product.
During the 2020 COVID-19 pandemic, the CENTURY CEO said at the CogX event that she had “onboarded hundreds of thousands” more children, and explained how their data is used to train the AI. We have therefore asked the company to explain how its AI training, past and present, was and is done lawfully.
“You have to train an AI. And you use teachers and parents and schools to do that. It takes years to train an AI. That work’s been done. […] ongoing every second of every single day you’re taking teacher, parent, and student interactions, taking their feedback, you’re AB testing” “refining the AI”…. “the AI doesn’t learn about ethnicity by itself.”[397]
We asked Century for comment, and quote their reply in full.
To clarify, we don’t use any private data in the AI.
Best wishes,
CENTURY
The University Technical College in Leeds used a biometric entry system with a touch reader to identify each student pre-COVID. Students used their fingerprint to enter the building, pay for their lunches, and register themselves in classes. It was compulsory for all students. (See the Protection of Freedoms Act 2012 for why this was unlawful.)
During COVID-19 the college suspended the use of fingerprint readers, switching instead to a card system, but in addition it introduced facial detection combined with thermal cameras as part of its COVID-19 measures. If, at the building entrance, the camera detects a person and measures a temperature higher than a set threshold, the system takes a photograph and sends it to the headteacher to unambiguously identify them.
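As described to us, the entrance system follows a simple trigger pipeline, sketched below. The threshold value, function names and alert format are invented assumptions; the point is that a single over-threshold reading produces an identifying photograph of a child.

```python
# Schematic sketch of the entrance system as described: a camera detects
# a person, a thermal sensor reads their temperature, and above a set
# threshold a photograph is captured and sent to the headteacher.
# Threshold and names here are invented for illustration.

TEMPERATURE_THRESHOLD_C = 37.8   # hypothetical cut-off

def send_to_headteacher(photo: bytes, reading: float) -> None:
    print(f"Alert: temperature {reading:.1f}C, photo attached ({len(photo)} bytes)")

def on_person_detected(temperature_c: float, frame: bytes) -> None:
    if temperature_c > TEMPERATURE_THRESHOLD_C:
        # The photograph unambiguously identifies the child: this combines
        # identification with special category (health) data processing.
        send_to_headteacher(photo=frame, reading=temperature_c)

on_person_detected(38.2, b"\x89PNG...")   # simulated detection event
```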
On June 26, 2020 the French Supreme Court[398] ruled that thermal cameras processing body temperature in schools operated without a lawful basis under Article 9 of the GDPR. It ordered the removal of the cameras and the suspension of the processing of these sensitive data. We are aware that this ruling does not apply within the UK jurisdiction, but it demonstrates an assessment made on the basis of the GDPR, which does apply to the UK, and any UK assessment should find similarly.
As regards fingerprinting for registration and access, achieving the same purpose through alternative means demonstrates that the use of biometric data at UTC Leeds is overly invasive in practice, and that such processing does not meet the tests of necessity and proportionality.
In any case, consent is an unlawful basis for the processing of biometric data from children, and even from adult learners, at the point of collection, due to the power imbalance in a school or employment setting. The use of fingerprints here is also a clear breach of the Protection of Freedoms Act 2012, Chapter 2, which requires consent; and consent cannot be a tick-box exercise but must be a genuine, freely informed choice, with an alternative available that can be used without detriment.
In August 2019, the Swedish Data Protection Authority found that attendance verification is an obligation governed by public law about school operations (similarly to the UK), and that attendance reporting is of significant importance for a pupil. There is therefore a legal basis for administering pupils’ attendance at school, but no explicit law supported carrying out that task through the processing of sensitive personal data, or in any other way that violates privacy, such as by using new facial recognition technology.
For attendance verification, the pupil is in such a state of dependence upon the school that there is a significant imbalance of power. Any UK assessment should therefore find, as the Swedish DPA did, that consent cannot constitute a lawful basis for the processing of biometric data such as this for the purposes of attendance registration in UK schools.
In 2019 Southgate et al. set out a simple explanation of immersive virtual reality (IVR) in a report by the University of Newcastle commissioned by the Australian Department of Education, and noted that most literature on immersive virtual reality in schools is primarily descriptive.
“The technology using a head mounted display (HMD) has only been widely available since 2014. IVR replaces the world with an artificial or simulated reality. The head mounted display blocks out the world so that the user can be immersed in the artificial world. Different IVR technology create different levels of immersion and feelings of ‘being there’ in the artificial/simulated environment. Experiences range from looking around, to those where the user has limited navigation and interaction, to highly immersive environments where a user can freely manipulate, navigate, interact and create a customised experience.”
Southgate points out that, “early findings from the Australian VR School Study, which involved embedding highly IVR (networked Oculus Rifts) in STEM high school classrooms, highlighted a number of practical, ethical, safety and gender concerns. The research team, which included teachers, developed a health and safety screening protocol for parents/carers and students and produced resources to educate students on cybersickness.”
Dr. Zoetanya Sujon, Programme Director of the Communications and Media programme at LCC, University of the Arts London, presented some of the current state of the use of VR headsets in her research, explained in Playful dataveillance: Unpacking Google’s Edtech strategy through the Pioneer project.[399]
AS Tracking was launched in 2015 as a pioneering pastoral care tool.[400] AS Tracking claims to work by tracking the steering biases which are developing in the mind of a student.
“By tracking a student’s steering biases, we are able to identify when a student is developing a pattern of thinking which increases their mental health risks. Crucially, we can do this at a much earlier point in time than waiting for the mental health problem to develop and become visible. By identifying these risks earlier, we can then guide schools to act earlier and more precisely. By acting earlier and more precisely, schools can make smaller, targeted, responses, which cost less but which have a significant impact on reducing the risks for the student.”[401]
AS Tracking may be used to track whole school populations twice a year, from ages 8 to 18. AS Tracking costs a school with 1,200 pupils up to £25,500 a year. STEER has also launched AST Detect, for which a secondary school of any size can track all pupils for £5,600, and a primary school for £2,400.
According to STEER's own figures, the psychological biases it tests for are linked to risks of self-harm, bullying and not coping with pressure in 82% of cases. (Manthorpe, 2019)[402]
The company co-founder Dr Walker is a reviewer for UNESCO's forthcoming publication chapter, Policies for AI in Education. In addition to his extensive psychological training and qualifications, Dr Walker was ordained as an Anglican priest in 1997 and has also set out the implications of steering cognition for Christian spirituality.[403]
The company claims that “AS Tracking is able to identify pupils’ hidden risks without asking intrusive or suggestive questions. Often these risks may be undetected even by teachers' expert professional judgment, or by parent or pupil feedback.”[404]
Our concerns include that there is no transparent way for children, staff or parents to independently validate these company claims, and that it is excessive for a school to “curate a unique 10 year record of a child’s social-emotional development, monitoring their wellbeing through adolescence.” We feel there is a risk that a test twice a year is inadequate for supporting children’s welfare needs, while schools may be lulled into a sense of security that using such a tool, as a tangible and outwardly visible measure, is appropriate. Children’s needs may be over- or under-identified, and it would be hard to control for that.
The company disagrees, and responded:
It's not correct to infer that schools may overly rely on AS Tracking and therefore be lulled into a false sense of security and miss pupil concerns for three reasons.
- First, throughout our training and documentation to schools, we make absolutely explicit that our data provides one ADDITIONAL piece of their jigsaw of information about a pupil. It must not replace their wider data collection, teacher observations and professional judgement.
- Second, all teachers using AS Tracking will complete a 1 day training course to understand the data and how to use it appropriately alongside their professional judgement. Additionally, there are numerous tutorials to guide school staff to interpret the data and over 100 guidance resources in the platform library covering all aspects of AS Tracking.
- Third, the school is then supervised over the next one or two years by a trained STEER consultant to reach high standards of practice and ethical implementation, before we will accredit the school as a STEER AS Tracking accredited school. One standard is data privacy; a second is integration with their wider welfare systems.
We are, therefore, robustly committed to improving the quality of school pastoral care through a responsible, rigorous training and accreditation programme which enhances the school's overall evidence-based pastoral tracking allowing a school to demonstrate compliance with key inspection criteria. AS Tracking does not reduce pupil social emotional assessment to two assessments per year as you imply.
(We'd also note that schools are schools: pupils cannot be 'tested' on any measure, academic or emotional, too frequently as it would hinder the time they have to learn. Twice/yearly AS Tracking assessments is a balance. Steering biases have been shown to be relatively stable over the short/medium term, and do not need to be measured every month.)
Some of the data the software collects from the school information management system suggest increased mental health risk factors even without any further processing, and any school using such a system would need a good grasp of how the system interprets such data, above and beyond their own staff’s interpretation, to know how it works.
The Academies Enterprise Trust told us that their trial with 835 pupils was a pilot for potential roll-out to all 34,000 students and 4,000 staff[405] in 2019. As regards the data processed, the school provided the following information:
“The AS Tracking assessment will not run without the essential data fields. Our essential fields are as follows: date of birth, anon_name code, gender, school year, MIS_pupil_ID, passwords
Our optional additional data fields are as follows: NEW Pupil, boarding status, house name, form group, form tutor, campus, SEND_flag or status, AEN_flag or status, EAL, Pupil Premium, Looked after child, MidYIS_Band, CAT_Band, Bursary_Funded, passport, nationality, top academic or gifted, heavily committed, Safeguarding_Flag, current welfare plan, recently bereaved, anon_usernames. Additionally, a school may choose up to three custom data fields to look at any culturally relevant trends for the setting. The optional data fields provide vital data for the analysis of trends within specific cohorts from the assessment results.” [filters for the cohort trend analysis of the assessment results, per AS Steer]
While the company is keen to recruit children in schools, it might also be seen by some as inappropriate to seek to continue a private relationship with a child after they leave school, particularly with those it has identified as vulnerable.
The Operations Director at STEER told us,
“we curate students' tracking data over several years in order to pass it to them as an asset they own at the age of 16. Indeed, this is the very core of STEER's mission to empower young people. Psychological literature has shown that resilience in later life is strongly dependent on understanding your own personal psychological developmental journey.
We invest heavily in training them to understand it so they can benefit with increased self awareness entering adult life. In this regard, we are highly unusual amongst tech companies, who typically mine user data to sell it with zero transparency. In fact, STEER is fully transparent about the nature of its data, our data model, use and, ultimately, the data itself, at the age students themselves will be trained to understand it.
We are, as far as we know it, the only company in the UK attempting to curate and pass a personal developmental narrative to each young person, and we are proud to be attempting to support young people at a time of huge strain and fragility in our post-school mental health systems.”
At the time of writing (summer 2020) the privacy policy is not accessible from either the Google Play Store[406] or the Apple App Store, nor from the company website,[407] so we are unable to analyse it.
The STEER website information for parents states that, “Data is not presented in a format suitable for parents or pupils. Where parents or pupils request to see their data, it must be in the context of dialogue, and not released without context.”[408]
We were grateful to STEER for clarifying this statement.
“The only question is around the timing by which we give access to this data to students. All students can request their data via a SAR at any time, but widely accepted professional guidance would hold that psychological data should be shared with a subject only when they are mature enough to understand it. As an accredited member of the British Psychological Society, STEER's co-founder Dr Simon Walker, and Dr Jo Walker a specialist in adolescent self-regulation, have used informed judgement, in conversation with teachers, parents, and wider psychological professionals, to decide the appropriate age for this.”
We contacted the company with an invitation to its governance board[409] for comment and appreciate the time taken for their input. They also asked us to include this statement which we are happy to do:
Statement
“Data privacy and security are STEER’s absolute priority. STEER technologies seek to empower young people, teachers and parents by providing precise insights into pupils’ well being, which complement the existing pastoral tools schools already have at their disposal. STEER has significant concerns about the use of AI in profiling students and rejects its use in our live tools for that reason. Our researchers only used machine learning to analyse historic scientific research study data as a form of statistical analysis, which is an accepted technique used often by scientists to understand data patterns.”
Update: In February 2021, defenddigitalme was told by the Office of the Information Commissioner that (the ICO) had "made enquiries with STEER, and....found that it is likely that STEER and the schools using their services are in contravention of the UK General Data Protection Regulations (UK GDPR) or Data Protection Act 2018 (DPA18). I can confirm that this finding means that I partially uphold the concerns you raised around Article 5(1)(a), Article 5(1)(b), Article 9, and Article 35 of the UK GDPR."
"I would like to assure you that the decision not to pursue this to formal investigation has not been taken lightly, but we are confident that in this instance, the ICO are able to work with the organisation to achieve compliance voluntarily – I have asked to be provided with updated evidence and documentation in six months’ time, and our decision about whether compliance has been achieved, and therefore whether to launch a further investigation, will be reviewed at this point."
By far the most common failed presumption of edTech companies is the definition of anonymous data as information that is not linked to the name or identity of a user. When users agree to the Terms and Conditions, companies may claim that registrants ‘grant us the right to use this anonymous information for our own purposes, such as the preparation of statistical reports or to improve and change the content of our products.’[410] The companies fail to grasp that a lawful basis for the anonymisation itself must first exist, and that means between the data subjects (about whom the data are) and the company, not the school. For this reason such processing often appears to lack a lawful basis.
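Why this presumption fails can be shown in a few lines. All the records below are invented; the linkage on quasi-identifiers (date of birth, gender, postcode) is the standard re-identification pattern to which “name removed” data remains open.

```python
# Invented records illustrating why removing a name does not anonymise.
usage_record = {   # "anonymous" product data: name stripped out
    "dob": "2008-03-14", "gender": "F", "postcode": "LS2 9JT",
    "searches": ["exam stress", "self-harm support"],
}

school_roll = [    # identifying data the school (or anyone else) already holds
    {"name": "Pupil A", "dob": "2008-03-14", "gender": "F", "postcode": "LS2 9JT"},
    {"name": "Pupil B", "dob": "2007-11-02", "gender": "M", "postcode": "LS6 1AN"},
]

def reidentify(record: dict, roll: list):
    """Link on quasi-identifiers; a unique match makes the record personal data."""
    matches = [p for p in roll
               if (p["dob"], p["gender"], p["postcode"])
               == (record["dob"], record["gender"], record["postcode"])]
    return matches[0]["name"] if len(matches) == 1 else None

print(reidentify(usage_record, school_roll))   # "Pupil A": not anonymous after all
```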
The GDPR also sets out rules (at Article 6(4)) on the factors a controller must take into account to assess whether a new processing purpose is compatible with the purpose for which the data were initially collected. Where such processing is not based on consent, or on Union or Member State law relating to matters specified in Article 23 (the general article on restrictions relating to the protection of national security, criminal investigations, etc.), the following factors should be taken into account in order to determine compatibility:
- any link between the original and proposed new purposes;
- the context in which the data have been collected (in particular the relationship between data subjects and the controller);
- the nature of the data (particularly whether they are sensitive data or criminal offence data);
- the possible consequences of the proposed processing; and
- the existence of safeguards (including encryption or pseudonymisation).
Recital 50 indicates that further processing for archiving purposes in the public interest, for scientific and historical research purposes or for statistical purposes should be considered as compatible processing; and while research per se is not defined, it is clear what it is not: used for individual interventions.
This throws into question uses of national pupil data where the data are used for a direct and obvious intervention, such as Home Office use for immigration enforcement.
We suggest that the growing number of uses in which identifying data is used to profile sets of characteristics, and to predict with a view to intervention, again relying on flawed concepts of anonymity, is deeply concerning.
Under s66 of the Digital Economy Act it is a criminal offence to disclose any personal information from administrative data for research purposes. Such activity would already be an offence under s.55 of the Data Protection Act 1998 if undertaken without the data controller’s consent. (Mourby et al., 2018)[411]
Since 2013 the Visible Classroom project has used audio recordings of UK schools’ lessons, “As part of our [Nesta] technology in education programme trialing different types of digital technology in schools and exploring its potential for learning”.[412]
The pilot evaluation involved ten primary schools in London and the West Midlands. Participating schools received training and followed the approach over two terms in the school year 2013–14. Further funding was awarded to proceed with further projects in 2015. The children were from years 3 to 6 in primary schools[413] generally aged 7-11.
Recruitment began in Summer 2016 for the trial stage, and the intervention started in late 2017.
“After the study completes in August 2018, schools in the control group will be offered the choice of either the intervention (at no cost) or a £1,000 payment from the EEF.”
The intervention was delivered in collaboration by three partners. SSAT was responsible for recruiting schools, checking that schools were using the technology, and supporting them to do so. Ai-Media provided the technological platform for the captioning of lessons, supplied the verbatim transcripts of lessons recorded on a teacher’s phone, and populated the data dashboard with teaching analytics.
Staff at the University of Melbourne, Australia, delivered the training package to participating teachers and mentors, conducted the in-depth coding of lesson transcripts, and generated the tailored feedback reports. Assessment of the project was carried out by a further two partners, NatCen and, later, the Behavioural Insights Unit.
"The program was first piloted across mainstream schools in the United Kingdom in 10 schools covering 35 primary teachers, serving primarily disadvantaged students, and was funded by the Education Endowment Foundation. In addition to the live captioning and transcripts, a range of tools were developed to provide teachers with feedback about their practice and impact on students, including a dashboard showing, in real-time, the proportion of teacher talk to student talk, number and type of questions, and their talk speed. The pilot study concluded that the Visible Classroom approach is feasible and has the potential to make a significant impact on teacher practice and benefiting student learning. Following the successful UK trial, Visible Classroom was launched commercially worldwide”.
As Nesta described in 2015[414]:
“Based on this work with teachers and students, Ai-Media UK have been able to develop ‘The Visible Classroom’ further into a refined product for supporting teacher professional development. What was a new technology not tried in schools in this format before, has become a product that can be rolled out to schools.”
The project also extracted data from the Department for Education National Pupil Database in order to examine the impact of the programme on reading and mathematics attainment, measured using KS2 SAT performance in Reading and Maths. The initial intervention was designed as a two-armed randomised controlled trial, involving 140 primary schools:
“Though some individual pupils within those schools may have left the school, we will still be able to collect their data via the NPD.”[415]
Some pupils “were randomised into the trial prior to confirmation that opt-out forms had been distributed”, and where it was too late to offer families opt out, since, “the Year 6 cohort had by this point already moved on to secondary.”[416]
None of the significant bodies involved in the project appear to identify this in the report as problematic for the lawful basis for their own activities or for the protection of the rights of the child.
They do not appear to have published any ethical concerns about whether trialling “what was a new technology not tried in schools in this format before,” was something they should do at all.
However, at the time of writing we are still trying to obtain the ethics review from the University of Melbourne, which was refused in a Freedom of Information request made in the summer of 2019. One year later, the appeal is pending a response from the Office of the Victorian Information Commissioner (Australia).
The commercial product is now used by UK schools, and others around the world.
Linking together existing data creates opportunities to research areas of interest that may not otherwise be possible. Researchers say it allows more sophisticated research questions to be tackled, giving a greater depth of understanding. It also poses greater risks to privacy. The ADR addresses this by using a trusted third party centre to do the data linkage and then giving researchers access only to de-identified linked data, not the original datasets, and only in safe settings.
The ADR enables education data access to public interest researchers only, unlike the DfE that gives access to commercial businesses for private purposes, think tanks and others. And again, in contrast with DfE the ADR does not distribute data to the user location but requires that all research is undertaken in its safe setting facilities which is secure, data is deidentified after linkage and is checked before researchers can take information away from the safe setting following the 5-safes model.[417]
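As a minimal illustration of how the trusted third party model differs from direct distribution, the Python sketch below uses invented field names and example records (it is not ADR UK’s actual pipeline or schema): direct identifiers are used once for linkage and then replaced with a salted pseudonym, so researchers only ever see the de-identified linked rows.

    # Minimal sketch of trusted-third-party linkage (hypothetical field
    # names and records; not ADR UK's actual systems).
    import hashlib
    import secrets

    SALT = secrets.token_hex(16)  # known only to the trusted linkage centre

    def pseudonym(identifier: str) -> str:
        """Replace a direct identifier with a salted one-way pseudonym."""
        return hashlib.sha256((SALT + identifier).encode()).hexdigest()[:12]

    # Two source datasets, each still containing a direct identifier.
    education = [{"pupil_id": "UPN123", "ks2_reading": 104}]
    health = [{"pupil_id": "UPN123", "hospital_admissions": 2}]

    # The trusted centre links on the identifier, then drops it.
    linked = []
    for e in education:
        for h in health:
            if e["pupil_id"] == h["pupil_id"]:
                linked.append({
                    "pseudo_id": pseudonym(e["pupil_id"]),  # not reversible by researchers
                    "ks2_reading": e["ks2_reading"],
                    "hospital_admissions": h["hospital_admissions"],
                })

    # Researchers see only the de-identified linked rows, in a safe setting;
    # outputs are checked before anything leaves that setting.
    print(linked)

The point of the design is separation of roles: the party that could re-identify (the linkage centre holding the salt) never does the analysis, and the party doing the analysis never holds the identifiers.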
Access to de-identified administrative data is sought after by researchers, facilitating a more comprehensive picture of UK society than is possible without it. Unlike survey data, for example, which is limited to those who choose to take part, administrative datasets include information on everyone who comes into contact with government and public services.
The All Years Dataset: Linking children’s health and education data for England. This 18-month programme of work aims to create a de-identified, research-ready dataset that links longitudinal education records for all children in England with NHS hospitalisation records from 2001 onwards.
The Predicting school readiness using linked data project is funded by ADR Wales via its core grant from the Economic & Social Research Council (ESRC) as an ADR UK partner.[418]
Growing Up in England[419]: Linking Census and education data for children. Around two million records have been matched, producing a significant sample size for analysis. ONS has linked 2011 Census records to pupil attainment data from a bespoke extract of DfE data named the All Education Dataset for England (AEDE) to create a useful resource for research. Five years of attainment information for a cohort of children who were in Key Stage 4 and Key Stage 5 in the 2010/2011 academic year is included in the data, up to the year 2014/2015, from local-authority maintained schools in England.
No other data source, they say on their website, has the level of insight that the Growing Up in England (GUIE) dataset has on children who are vulnerable or of concern by virtue of their circumstances; for example, children caring for others, children with a disability or ill-health, and children from workless families.[420]
The Administrative Data Research UK network is a partnership run by the Economic and Social Research Council (ESRC), part of UK Research and Innovation. ADR UK is made up of three national partnerships – ADR Scotland, ADR Wales, and ADR Northern Ireland – coordinated by a UK-wide Strategic Hub, and the Office for National Statistics (ONS).
The one thing that the ADR and the Department for Education have in common while distributing access to pupil data is that neither has ensured that the people the data are about have been told.
A number of initiatives incentivise schools to provide their pupils’ data for research purposes. These may or may not include direct classroom interventions. In our research, both academics and government arm’s length bodies have refused to provide copies of data protection impact assessments, ethics approvals or balancing tests when legally required. There is a parental assumption that where children will be affected directly, parents should be asked for consent to participate; however, this is not always the case, and an opt-out may be offered instead. In some projects, although the child’s data may not then be processed, the child will not be removed from the intervention.
Programmes such as the Department for Education-led Maths Hubs, coordinated by the National Centre for Excellence in the Teaching of Mathematics (NCETM), enable match funding of up to £2,000 for schools in a Teacher Research Group linked to one of 35 Maths Hubs and maths mastery specialists. Power Maths from Pearson has been especially designed to fit its curriculum.
The NCETM is funded by the Department for Education (DfE) and managed by Tribal Education in partnership with Mathematics in Education and Industry (MEI). Tribal Group was awarded the contract initially in partnership with the University of Plymouth, and it is now managed by a consortium. It was initially asked to deliver the NCETM with Government funding of £15 million until 2009, extended to 2011, then to 2016, to 2019, and again to 2022.
The results of a new Institute for Effective Education study have just been published, involving a West Sussex school.[421] Approximately 90 Year 8 pupils participated in the study (in the intervention group there were 44 pupils and in the control group 46 pupils).
The report introduction says that, “it is important [to] remember that these are small-scale projects, often carried out in one school, so it is not possible to generalise their findings. In fact, the main benefit of the Innovation Evaluation projects may be in the process, rather than the findings.”
What oversight is there of trials that have no effect, or a negative one? When trials are carried out during regular lesson hours, how is the impact of these non-standard curriculum interventions measured, and how are the ethics of a non-equitable education justified to children and their families? As the research protocol says, ‘no curricular choice is innocent’. “In an ideal world all participating groups should read the same book at the same time to ensure that there is equity. This may, however, be cost prohibitive, since a single set of class texts is approximately £180-£200.”
The results of the study showed an effect size of -0.49, suggesting that the intervention had a medium negative impact overall. Contrary to expectations, the lowest attaining pupils (those who had a reading age lower than their actual age) actually made greater progress in the control group than in the intervention group, with ten pupils in the control group moving to their expected reading age while only one in the intervention group did. Two classes in the study did seem to make greater progress: one an intervention class and one a control class. Both were taught by more experienced reading teachers and, if measured in this way (against the less experienced reading teachers), had a positive effect size of +0.69.
There were a number of limitations to the study. Given the complexity of reading as a process, the report suggests that it is possible that there may be a delayed effect on the pupils who were ‘word rich’. A twenty-minute multiple choice test involving only single sentences is a blunt tool for the measurement of complex reading and comprehension skills and may provide at best only a snapshot. The cognitive benefits of extended narrative reading may not be so easily measured.
Costs per pupil were approximately £28 (additional books and testing).
Jen Persson, the author, includes this case study as a parent whose child was involved in this randomised controlled trial. After trying initially to resolve my concerns through the school and directly with the Behavioural Insights Team in 2019, I submitted a complaint to the ICO, of which we gave the BIT advance notice. I am grateful for their contributions to this study.
The spokesperson for BIT told me that following an assessment by the ICO, in which it reviewed various documents including the privacy notice and data protection impact assessment, the ICO determined that there had been no contravention of data protection law, and no further action was required.
So to be clear from the outset of this case study: the ICO did not carry out a formal investigation, but its assessment of the evidence provided was that there was no contravention of Data Protection law. One purpose of this case study is therefore to illustrate why Data Protection law alone can be inadequate to protect children’s rights and freedoms.
Can it be right, that an organisation, no matter how ethical or well-intentioned, can carry out a trial involving a child, without explicit parental consent? Research ethics alone would appear to prohibit any behavioural intervention study being carried out on children during regular lesson time without their or their parent’s informed consent, or at the very least, a choice to opt-out of the intervention —in addition to opt out of its data sharing. That organisations can also therefore extract children’s personal confidential data including religion from schools, at scale, without parental consent is of secondary concern.
The background
The Behavioural Insights Team (BIT), commonly known as “the Nudge Unit”, is jointly owned by the UK Cabinet Office, Nesta, and its own employees.
It has been carrying out an interventional ‘Social Cohesion’ study involving some sixty UK schools since 2018, using school children’s personal data— including special category data —directly involving the children themselves. This is just one such project being carried out by BIT in normal school lesson time, as part of Personal, Social, Health and Economic (PSHE) education.
“The Social Cohesion Project aims to increase social cohesion and boost integration through a reduction in prejudice and increased openness in schools. BIT designed an evidence-based curriculum that is delivered through PSHE classes in secondary schools to students aged 11-16 which covers topics such as empathy and constructive disagreement. BIT is conducting an evaluation of the impact of the curriculum and personal data is required to be able to do this (e.g. by way of the collection of survey data and data held by school information management systems). A rigorous evaluation is a crucial aspect of BIT’s work in order to be able to determine what works. The project was commissioned by the Ministry of Housing, Communities, and Local Government (MHCLG). Schools were offered up to £1,000 for participation to cover administrative costs which most schools taking part did not take up.”
The personal data to be collected included, in addition to the special category data of religion and ethnicity, other data that are not special category classifications in data protection terms but may be considered sensitive, such as information about Free School Meals and behavioural data on exclusions.
Opt out of ‘data sharing’, but not from the study itself
A spokesperson for BIT explained to me, “Pupil names and contact information were not collected as a deliberate data minimisation control and in accordance with good research ethics. Parents and students were permitted to opt-out of all [data] processing in respect of which BIT was the controller at any time during the Study. Any other processing was undertaken by the school as an independent controller. In relation to the intervention itself, this was an evidence-based curriculum delivered as part of participating schools’ PSHE lessons. It is not generally possible for students to opt out of school lessons as far as we understand. However, they could at all times opt out of data processing in connection with the evaluation of the curriculum.”
The project involved transfers of three different types of personal data about children from schools to BIT.
Behavioural Insights Ltd (the legal name of BIT) became the data controller of this data after it was collected from schools. The Behavioural Insights Team is an organisation that was set up to apply ‘nudge theory’ (i.e. behavioural economics and psychology) in public policy.
In the case of my child, who was in a class in one of the 60 schools involved in the Social Cohesion study – and despite my objection – we were not permitted to opt out of the intervention. We were offered only an opt-out from the data sharing aspects of the trial, in line with BIT’s instructions to schools not to collect consent.
In practice, what this means is that if the family of any child did not get the notice – or if they did not respond with an objection – their child’s personal data was extracted by default.
Lack of transparency
The project was “reviewed by the Ethics Committee of the University of Lisbon's School of Economics and Management, which is our research partner in this project”. Since it was not stated in the privacy notice, I wanted to know whether data was being processed by the research partner, or by the U.S. research group OpenMind, on which the study’s classroom materials appeared to be based.
I therefore asked the school for a copy of the study’s Data Protection Impact Assessment, Legitimate Interests Balancing Test, and Research Ethics paperwork. The school did not have copies of these itself and couldn't give me a copy, even though they had agreed to the trial.
I next asked the BIT for copies, but they refused to provide me with them.
The BIT’s response at this time in 2019 was less cooperative than I had hoped for or would have expected: “We understand that you still have concerns about the study. However, BIT's position has been explained in our previous emails with you and we don’t have anything to add over and above what we have already explained. Therefore, we do not think that a meeting or call would be productive.”
Given that it is in part owned by the Cabinet Office, I again asked for the documents via FOI, but the BIT again declined saying,
“Behavioural Insights Ltd is a private company and therefore the FOIA 2000 does not apply to us. We are not a “public authority” as defined in section 3 of the FOIA 2000.”
I asked the U.S. based OpenMind project more about what the intervention would entail, and they responded, “BIT has adapted OpenMind's materials for use in its social cohesion project. BIT's materials are based on OpenMind's ideas, but they are original and distinct works. Please direct further inquiries to BIT regarding this project.”
So I was back to square one. While there is no explicit requirement in law to publish these specific documents, there are broad Data Protection obligations of fairness and transparency – and one might have expected that, faced with an explicit request for more information, both research and psychological ethical standards would have suggested more openness to promote parental and public trust.
Why should any legal guardian or competent child be refused all of the facts? That any such project could involve thousands of children, be designed to attempt to modify their behaviour or beliefs and parents have no choice over their children’s participation, still astounds me.
The legal basis for data processing
The ICO did not carry out an investigation but their assessment found, “that there had been no contravention of data protection law, and no further action was required.”
Processing of special categories of personal data requires a lawful basis under Article 6 of the GDPR and a condition for processing under Article 9. The BIT says they are using 6(f), “legitimate interests”:
Article 6(f) processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.
and relying on Article 9(j).
(j) processing is necessary for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) based on Union or Member State law which shall be proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject.
In my opinion this condition is not met, because this is not a remote research project but a live intervention, over weeks of activity and surveys, “in particular where the data subject is a child”. Nor, in my opinion, does this data processing meet the necessity test for processing special category data, which is a high bar.
It appears to confuse processing that is necessary for their stated purpose with processing which is only necessary because of their chosen method of pursuing that purpose.
In my opinion it does not “respect the essence of the right to data protection”, because that essence is that special category data should generally be processed with consent, except for reasons that make consent impossible (like the life-and-death situations of ‘vital interests’) or for statistical research where there is no direct contact with the data subjects. In these circumstances it would have been perfectly possible to ask parents for consent.
The BIT’s response to this was, "It may have been possible, but not desirable. We did not consider consent to be an appropriate lawful basis to rely on for this project as relying on consent (which has a standard of affirmative action i.e. opt-in) would be very likely to threaten BIT’s ability to carry out a proper evaluation of the PSHE curriculum as sample sizes would be too low and may be unrepresentative, meaning that we would be unable to provide robust evidence to MHCLG about what works. This is standard research practice.”
If this is standard research practice, I suggest the standards need to be raised. I also wondered if they could have done the research in a less invasive way, and not at individual level.
BIT’s response was, “the robust evaluation of this type of project requires collection of individual level data in order to compare the impact on individuals who received the new curriculum with individuals who didn’t receive it. This was done in a way to minimise the privacy impact on children (for example, names and contact details were not collected and the completion of surveys in PSHE lessons was not mandatory for any child).”
The BIT has no relationship with families at all. Most families are unlikely to know what the BIT is as an organisation, or to have heard of the University of Lisbon's School of Economics and Management, “our research partner in this project”. (There is no information in the public domain about what this partnership entails, but this month the BIT spokesperson confirmed that, as the research ethics partner only, it was not involved in data sharing.)
It was not clear at the time, because the Privacy Notice says, “BIT may disclose your information to third parties in connection with the purposes of processing your personal data to deliver a randomised controlled trial (RCT) to test evidence-based interventions relating to social cohesion and integration; and to evaluate the impact and effectiveness of the interventions. These third parties may include suppliers, research assistants and subcontractors helping us to administer the study.”
In my opinion such processing by any of these outside third-parties is not “reasonably expected” by parents when they send their child to school.
And children with uninterested parents will have had their rights to privacy and family life less well protected than they might have expected, and less well protected than my own children’s were.
[Fig. 1: What kind of information do we collect?]
In summary
The spokesperson suggested to me that it was, “not appropriate for you to make findings in your summary as to whether BIT's procedures and documentation (e.g. BIT's privacy notice and how this was communicated with parents and children) were "adequate" or not.”
In my experience as a parent, the entire process has been inadequate —not least because my child and classmates were subjected to a psychological study designed to influence behaviour, that was carried out during their regular lesson time —and I and my fellow parents were not asked for our explicit and freely given consent, nor were we permitted to withdraw them from the study.
The complaint procedure was also inadequate and provided no remedy from either an ethical or data protection perspective. Other parents in other schools may not have had the same experience, and be of a different opinion.
The ICO’s initial assessment determined that there had been no contravention of data protection law and that no further action was required, without taking other relevant factors into account. I believe it is wholly unsatisfactory that third party organisations can carry out trials in schools from which we cannot opt out, and that we are not able to fully understand what the expected “impact on individuals who received the new curriculum” would be.
I would welcome further discussion with established research bodies on this point. The Behavioural Insights Team spokesperson expressed an “objection to [you] including it in your report in its current form, or at all, and it should be removed from the report before publication.” However, I believe it is in the public interest to raise questions of research ethics that impose interventions on children in state education without parental consent.
And I believe it is equally in the public interest to raise questions of whether all children’s rights are adequately protected in state education subject to trials, through current data protection enforcement practice.
I am grateful to the BIT spokesperson who reviewed this case study several times and based on whose input I made significant changes to include their contributions. I would still like to see a copy of the research ethics approval.
Poor practice can be systemic and supported by poor technology design or legacy fields. It can also be unique to a school’s own policy and practice.
For every child, however, anything which is attached to their record “throughout their school career” and sent to national pupil databases now means being labelled for life. Children can be labelled by data ascribed by an administrator that reflects a parent’s choice, not the child’s own.
We believe these decisions are not given the due weight they require and could have severe unintended consequences, where data is now used in predictive machine modelling.
In 2016, ethnicity was found to have been ascribed by school staff in some counties, even to the point of overriding parental and pupil choices to refuse to provide this optional information, a practice recommended to schools by Brighton and Hove City Council ahead of the autumn school census. As a result, system providers changed their design to remove the ‘ascribed by’ fields.
In autumn 2016, during the debacle over the collection of country of birth and nationality, school staff raised concerns[422] that Heads in the Brighton and Hove area had been told by the council to ascribe ethnicity where it was not provided by parents.
A school had sent out a message to parents saying, “Where an ethnicity has not been stated or refused, the Head teacher has the right to ‘ascribe' an ethnicity to your child.”
In reply to an FOI sent to the council, we were told,[423] “the local authority did include a model letter as part of recent guidance sent to schools about ethnicity data collection. The letter did explain that the Head teacher could ascribe ethnicity under certain circumstances. The guidance we gave to schools about ascription was based on that provided by Kent County Council. There is a category in SIMS where schools can identify if ethnicity is ascribed and so this guidance supports schools to do this appropriately.”
This Kent guidance, in turn, referenced 15-year-old Department for Education recommendations as its source.
“4) Ascription of an ethnic background by the school. In some cases, a school may anticipate that the response from a parent will not be forthcoming. The school may consider whether it would be appropriate to undertake ascription of an ethnic background. Ascription by the school should only be undertaken as a last resort.”
A number of safeguards were suggested, such as sending a letter asking for the data and that the letter must tell parents that they have the right to refuse to provide this information but that they must formally notify the school of this refusal.
Schools should “record the appropriate value of the “source code” in addition to the new ethnic category. The “source code” can take the following values:
(Guidance for Local Authorities on schools’ collection and recording of data on pupils’ ethnic background, in compliance with the Data Protection Act and the 2001 national population Census, January 2003)
Despite its acknowledgement that this should be “a last resort”, it is unclear why or for what purpose such practice would be necessary at all. Ethnicity should always be optional in the school census.
The Kent guidance acknowledged, and was at pains to emphasise, that “Ethnic background data is regarded as sensitive personal data and any parent or pupil has the right to refuse to provide this information. If a parent or pupil has refused to provide this information schools must not record an ethnic background for that pupil.” But rather than treating no response as a refusal, it was deemed acceptable for the school to ascribe an ethnicity anyway.
Both the Department for Education and Capita SIMS, the service provider in question, gave defenddigitalme the assurance that this drop-down field choice had been removed from the system, and that it is no longer possible for “ascribed by” school options to be indicated in the information management system.
How many may have been ascribed incorrectly across England since 2000 is unknown.
Pupils who are unaware that a national pupil record exists about them are unable to verify the accuracy of their own data or to get corrections made.
As Ben Williamson wrote in 2018, data about a child can have significant effects on their life. Schools are responsible for ensuring the use of edTech today does not negatively impact their future. To rush tech solutions is a reaction, not a response.
“Of course, the ethical issues of datafication of education are considerable and fairly well rehearsed. An interesting one is the ethics of data quality–a topic discussed by Neil Selwyn at the recent Learning Analytics and Knowledge (LAK) conference. There are significant potential consequences of poor data in learning analytics platforms. In other spaces, such as healthcare and military drones, the consequences of poor data quality can lead to disastrous, even fatal, effects. Poor quality datafication of education may not be quite so drastic, but it has the potential to significantly disrupt students’ education by leading to mismeasurement of their progress, misdiagnosis of their problems, or by diverting them on to the ‘wrong’ personalized pathways.“
There has never been any data audit of England’s National Pupil Data released in the public domain. In 2015 the author made a Subject Access Request to The Department for Education to see a copy of her own children’s national pupil record. It was declined.[424] A summary SAR was met in 2018 but the Department failed to provide all information held and could not confirm the third parties to whom personal data had been distributed since 2012.
How can errors be identified or corrected?
By comparison in Wales, the government permits and fulfils subject access requests.
The 2017-18 Welsh Government guidance on the school census in Wales pointed out that the default setting for school census data would indicate that a child had in the past been a Looked After Child. The error had been identified because a parent made a Subject Access Request about their child, which the Welsh Government fulfilled, revealing that every child in 2010 had been wrongly recorded as having been in care at some point in the past.
In February 2018, Infosecurity Magazine[425] reported that the ICO had launched an investigation after it emerged that CCTV camera feeds from several Blackpool schools ended up on a US website. The live streams from Highfield Leadership Academy, St Mary’s Catholic Academy and Christ the King Catholic Academy were broadcast publicly after an apparent security blunder by Eric Wright Facilities Management, which serves all three schools. They were reportedly taken offline within an hour, but question marks remain as to how the video images ended up being broadcast in the first place.
Capita, the major supplier of MIS systems to 21,000+ schools, had a repeated problem in 2017 and 2018 but seemed to carry on business as usual. Its after-effects were a significant imposition on schools’ IT staff time and capacity, without any redress.[426] The error had apparently first occurred in late 2017 and was still not fixed in July 2018.
The problem affected the common transfer files (CTF), which are used to transfer children’s information between primary and secondary schools for moving pupils and for other ad-hoc transfers. Where schools had imported a CTF for pupils joining the school that included a parent or other contact whose name matched exactly a contact record already in the database, the applicant may have been linked incorrectly to this person and some data may have been incorrectly changed.
One school had identified 100 records that had been affected and, although Capita released patches for schools to install to stop the problem happening again and to identify which files had been corrupted, it appeared that all corrupted records had to be fixed manually.
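To illustrate why an exact match on a contact’s name alone is such a fragile merge key, here is a small hypothetical sketch in Python (invented records and field names; not Capita’s actual CTF import code): two unrelated contacts who share a name are silently merged, while a composite key keeps them apart.

    # Hypothetical sketch of the false-merge risk in name-only matching
    # (invented records and fields; not Capita's actual code).
    import copy

    existing = [{"name": "John Smith", "address": "1 Park Road", "phone": "01273 000001"}]
    incoming = {"name": "John Smith", "address": "9 Hill Street", "phone": "01903 999999"}

    # Buggy merge: an exact name match treats two different people as one,
    # silently overwriting the wrong family's contact record.
    db_buggy = copy.deepcopy(existing)
    for contact in db_buggy:
        if contact["name"] == incoming["name"]:
            contact.update(incoming)

    # Safer merge: require a composite key (name AND address here; a truly
    # unique identifier would be better still) before linking records.
    db_safe = copy.deepcopy(existing)
    if not any((c["name"], c["address"]) == (incoming["name"], incoming["address"])
               for c in db_safe):
        db_safe.append(dict(incoming))

    print(db_buggy)  # one record: two families' details mixed together
    print(db_safe)   # two records: the same-name contacts kept distinct

Even a composite key is only a mitigation; a genuinely unique identifier per contact is the robust fix.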
The data protection issues engaged included data (in)accuracy (linking the wrong child/family to the wrong pre-existing child/family records), data retention, and any potential breach from sharing the wrong record with the wrong individual, due to the mix up of different families’ records.
In the words of one school IT manager,[427]"Just got this notification. Are they kidding? From the new "Customer Success Team"(!) The same bug that they introduced over a year ago is still being shipped? Or reheated in some way? Is this some sort of celebration of the introduction of GDPR by "spaffing all their customers up the wall"? We hold off every release till the last possible moment, but now they've got us. And this at the time of year when we are importing hundreds of new pupils. Absolutely incredible. They should be facing a class action now. And their decrepit cash cow "product" should be booted out of every school. Unbelievable.”
Capita essentially got to shrug its shoulders and say, oops, we did it again, with impunity, and then landed another major government contract. Local government reporting of contract spending is not granular enough for us to have researched how much academy schools, maintained schools and other educational settings spend from the public purse. But schools are powerless to refuse to pay for costs incurred even by Capita’s mistakes.
There was speculation in the sector that the problem and fix had been withheld until after Capita was awarded a further Department for Education contract, announced on the same day as the problem in July 2018[428].
Capita was selected by the Department for Education’s Standards and Testing Agency (STA) to manage the administration, processing and support for all primary school national curriculum assessment (NCA) tests in England. The new six-year contract, covering the 2020 to 2024 test cycles, was worth approximately £109 million.
A former local authority education worker who illegally shared personal information about schoolchildren and their parents was prosecuted by the Information Commissioner’s Office in February 2018.[429] The staff member was employed at the time as an apprentice in the schools admissions department of Southwark Council and prosecuted for taking a screenshot of a council spreadsheet concerning children and their eligibility for free school meals before sending it to the estranged parent of one of the pupils via Snapchat. The image included the names, addresses, dates of birth and National Insurance numbers of 37 pupils and their parents. She also sent a copy of a school admission record relating to another child.
Personal data created and collected for the purposes of a child’s education should not be used for non-educational and punitive purposes. The most egregious example of this that the Schools Minister and successive Secretaries of State have found acceptable since 2015 is the abuse of public and professional trust of school records for Home Office immigration enforcement and its strategic aim to ‘create a hostile environment’ for migrant children and their families in schools. We await clarification from the Home Office of the outcomes from data processed for these purposes.[430]
The data don’t tell you what you think they tell you. Linkage of data is deeply problematic in ways that most policymakers fail to see or seem to have accepted. The harm this can cause should be apparent from the exams 2020 fiasco. If you don’t understand the historic input data, and the decision choices made using that data, you are highly likely to be creating a ‘mutant’. The data that are being put together about children from their education records simply do not carry the accuracy or significance that they are attributed, when linked to other data or viewed out of context whether by other government departments or by local government.
The linkage of Key Stage scores in Kent, with data broker data is an example of data that do not fit together and yet are being used to create even larger local datasets for decision making.[431]
We are not only at risk of statistically overfitting a model, making it appear to fit more of the data than it should; we are also fitting our expectations of what the outcomes should be to data that policy makers have made up and taken out of context.
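For readers unfamiliar with the statistical term, a toy sketch of overfitting (a generic numpy example, not any department’s actual model): a model flexible enough to pass through every noisy data point it has seen will often produce nonsense for the first case it has not.

    # Toy illustration of overfitting (generic sketch; not any real
    # government model): a flexible model learns the noise in the data
    # it has seen, then fails on data it has not.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0.0, 1.0, 8)
    y_train = 2 * x_train + rng.normal(0, 0.2, size=8)  # true trend y = 2x, plus noise

    simple = np.polyfit(x_train, y_train, deg=1)    # captures the real trend
    flexible = np.polyfit(x_train, y_train, deg=7)  # passes through every noisy point

    x_new = 1.1  # a case just outside the data the models were fitted on
    print(np.polyval(simple, x_new))    # close to 2.2: sensible
    print(np.polyval(flexible, x_new))  # typically far off: the model learned the noise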
We mention Higher Education only insofar as applicants are often still children aged under 18, or on its cusp, when much of their personal data is collected by institutions as part of the applications process, and is then combined with data collected and created during their time in Higher Education.
We have mapped only some of the key activities that require data processing from applicants and students in the transition year between compulsory education at age 18 and Higher Education.
The data industry that now operates in the Higher Education sector is vast and includes 24/7 student surveillance, across campus and across the whole student experience: from academic attainment and predictive analytics assessing ‘risk factors’ for drop-out, to a lifetime of alumni fundraiser calls. Everyday data collections sit behind the infrastructure for building access, attendance registration and administration, using fingerprint readers as the interface with printer and cashless payment systems. Universities surveil social media for reputational risk and online activity for the Prevent programme, and the Student Loans Company scans social media for signs of fraud. Learning and attendance data are repurposed in predictive analytics, and even as mental health risk indicators, tasks the data were never designed for. Voice-activated support services[432] may take an inadequate approach to biometric data.
We did not research the widespread use of plagiarism software in universities, its data processing or its effects. However, this area of work needs to be addressed from various rights angles, not least to respect cultural norms. A diverse student population will inevitably reflect different approaches to research and to referencing others’ work, and written work submitted without appropriate credit may not be intended to mislead, or to signal guilt or a lack of integrity. What automated systems pick up as plagiarism may also reflect poor referencing, diverse learning approaches, respect for the authority of others’ work, unfamiliarity with the boundaries of collaboration versus intentional ‘cheating’ by sharing work, and limited writing experience or experience of English as an additional language.
Students are treated by default as potential essay cheats by plagiarism detection algorithms. The psychological and stress implications of being under constant surveillance deserve more attention. Contract cheating services and getting others to write your work in ‘essay mills’ are not new, but the speed and scale of such services, thanks to online access, have changed universities’ approach to dealing with them, investing in services to combat rather than solve the issues.
“Universities are investing in Turnitin to automate plagiarism and ghostwriting detection as a way of combating academic fraud. The problem of essay mills that politicians are now fixated upon is the ideal market opportunity for Turnitin to grow its business and its authority over student writing even further. In so doing, it also risks standardizing students’ writing practices to conform to the rules of the algorithm–ultimately contributing to the algorithmic governance, and even ‘robotization’, of academic writing.
“The real problem is that universities are being motivated to invest in these robotized, data-crunching edtech products for multiple complex reasons. As universities have to seek larger student enrolments for their financial security, algorithmic services become efficient ways of handling huge numbers of student assignments. They satisfy government demands for action to be taken to raise standards, boost student performance, and preserve academic integrity. But automated software is a weak, robotic, and error-prone substitute for the long-term development of trusting pedagogic relationships between teachers and students.” (Williamson 2019)
Here we consider only five separate areas of data collection and processing within Higher Education.
Sarah Chesney, Director of Flourish Learning Ltd, spoke to us in 2019 about the challenges of digital coercion and the implications of the datafication of young people, often without consent, in Higher Education. In her view, there are two areas within higher education that need scrutiny first.
Activities in both these areas offer potential benefits to students and their institutions and have been discussed within the sector. For students and for financially challenged institutions, it is anticipated that learning analytics will be of value.
Within the sector, data-gathering has become a default, habitual process. There is, however, an urgent need to examine the cost of those benefits, not just financially, but also in terms of individual privacy and data security. This is important because, at the time of publication, there is a ‘dearth’ of evidence ‘in the literature and rigorous, empirical, replicable studies’ making ‘it difficult yet to justify any grand claims for the impact of learning analytics’ (Jisc, 2016). This is concerning given talk of utilising the ‘fine grained details of the students’ clickstream’ (Jisc, 2016) and the quantity and granularity of student data currently being collected and stored.
This collection and storage is happening through:
“There is enormous potential for accident and abuse in the creation of a permanent, central data-minable repository of everything that every student ever writes. Moreover, this database is completely outside the ownership of [the university] and that magnifies the problem.” (Cochrane, 2006 in Blackburn, 2017)
These are all areas which need robust regulation and oversight, something which is currently lacking. A small number of universities are piloting information and consent procedures to comply with the impending EU GDPR, trying to be more transparent about what happens to student data. However, these don’t address the challenge of ‘digital coercion’ at the point of enrolment at university.
We need to consider: ‘Should students have the ability to opt-out of having their data collected?’ (Stiles, 2012, p33).
The student population must start to challenge whether the smart campus serves them well or serves them up as data-mining repositories, and ask who this benefits most.
At national level, the launch of the Office for Students has been followed by new legislation and by new data management and distribution systems used in the collection and analysis of applicant and student data, from administration by HESA to the introduction of analytics.
The Higher Education Statistics Agency Limited (HESA) is the body responsible for collecting and disseminating information about higher education in the UK and the Designated Data Body for England (see www.hesa.ac.uk/about for more information). HESA is a Controller of your information.[433] HESA also acts as an information hub to provide information on higher education.
It has a vast set of purposes for re-use which go far beyond what students would expect. Your HESA information may be used, for example, to audit claims to public funding and student finance, and to detect and prevent fraud. This may include sharing your information with other Controllers (for example the Student Loans Company, and Pearson Education).
Where HESA and other organisations use HESA information, this may include linking named or pseudonymised HESA information to other information for research purposes.
Where HESA provides your HESA information to third parties, the permitted uses of the information by a third party may include linking HESA information to other information held by the third party. Permission for such use is considered on a case-by-case basis.
If you attended a school or college in England or Wales linked data may be disclosed to the last school or college you attended (or its successor body) and to Ofsted or Estyn in the exercise of their official authority to enable them to assess the outcomes of secondary education.
The Graduate Outcomes survey is a national survey capturing information about the activities and perspectives of graduates 15 months after they finish their studies. All eligible graduates who complete a higher education course are asked to take part in the survey, and your HESA information may be transferred to countries outside the European Union.
UCAS’s survey of its 2015 UK applicants, with 37,000 responses, showed that the majority of respondents were happy for their data to be shared for research purposes where there is a clear public benefit. However, the majority of applicants (90 per cent) agreed with the statement that they should be asked before their personal data was provided, over twenty times more than disagreed with that statement (4 per cent). Interestingly, a further 8 per cent suggested that they would rather share no data at all with UCAS, and not apply, than have it shared. In our own discussions with under-35s on the use of their data, it is often those who already feel most marginalised, BAME and stigmatised people, who are most likely to want to maintain control over their data, and they may be the very minority that ‘inclusion data’ seeks to understand but will exclude as a side effect of the way it is collected.
The Higher Education and Research Act created a legislative gateway for secure access to students’ data for public interest research. But it also overrides the consent process, and the autonomy applicants previously had over their own privacy, to choose who else can access it.
Esteemed academic researchers seem oblivious to, or willing to ignore, the fact that in doing so they override young people’s dignity and self-determination, with limited safeguards.
Trust in the use of their confidential health data was affected by: understanding of data security and anonymisation; having autonomy and control; knowing who will have access; maintaining records’ accuracy; how people will be kept informed of changes; who will regulate the database; and how people will be protected from prejudice and discrimination.
The report concluded: “These questions and concerns must be addressed by policy makers, regulators, developers and engineers before progressing with the design, development and implementation […] and linking of any databases.”
A risk of the metrics-based data focus is the loss of contextual nuance that experienced researchers identify and communicate, but which will be lost in practical transfer to prospective students.
When BEIS talks about destinations data, it is rare to hear any thought given to the fact that the data show a range of jobs that students take straight from university, for unknown reasons. For many, the desire to start paying back student debt means jobs that have little to do with their course skills or future earning potential. First jobs can be a reflection of the cost of student debt rather than a measurement of ‘course’ value. They can bring huge satisfaction, but they are short term. For how long into the future may HMRC data be fed back to their universities? At what point is post-graduate employment considered to be caused by, or correlated with, their course, and worth the cost of collection?
Under these arrangements there seems to be no time limit on transfers. So whether you start work at Centrica or the Central Escorts Service, it seems more than likely HMRC will know, even if you opted to leave it off the ‘name of employer’ question in the Destination of Leavers from Higher Education (DLHE) survey. A further survey, the Longitudinal DLHE, follows up a sample of respondents three and a half years later. The response rate to this is about 40%. There is a massive market of outsourced insight companies supporting its collection. It appears to offer zero benefit for the post-graduates themselves.
The risk of this ever-increasing focus on destinations data is that it drives the concept not only of which students do which courses, but of which students “should” do which courses. A focus on the marketisation of education treats people only as a human resource for the economic productivity of the State, discouraging the curious but broke thinkers and artists who will challenge it, including the creatives, the humanities students, and the poets. A lifetime of educational achievement is now measured through the lens of the Treasury. What kind of world will they grow up in, if all of education and aspiration is only given value measured by what LEO says?
Local student information analytics at Higher Education institutions may use student characteristics and behaviours in dashboards, business intelligence and key performance indicators. The types of data this analytics software may include range from[434] applicants’ and students’ personal data from general registration, Virtual Learning Environment interactions and financial data, to alumni data, course data and interactions with facilities such as libraries and e-books. It may also be used for analysis of workforce data held by the university.
“The rise of education data science as an infrastructure for big data analytics in education”, wrote Ben Williamson in Big Data in Education (2017)[435], “raises significant issues about data privacy, protection and the responsible use of student data. In a recent news article it was reported that the company Blackboard had conducted a massive data mining exercise on student data from 70,000 courses hosted on its learning management platform.”
Jisc are seeking[436] to ‘make the market’ for UK learning analytics, working with Civitas Learning.
Jisc[437] considers this an example where solely automated profiling [of young adults and other students] is positive, saying it would be “a breach of privacy if all such notifications were reported to tutors for review; students have expressed a fear that such reports might influence their marks”, and assumes that rare human intervention is therefore a good thing.
In 2017 Jisc reported that it was, “currently working with 50 universities in the UK to set up a national learning analytics service for higher and further education. This is the first time learning analytics has been deployed at a national level anywhere in the world, creating a unique opportunity for the UK to lead the world in the development of learning analytics.”[438]
The learning analytics solution was being developed, “in collaboration with commercial suppliers and the sector.”
There is little information on university websites about who these commercial suppliers are. The Jisc R&D project,[439] run over four years and one month and due to end in August 2018, has had a budget of over £1 million.
The most commonly used commercial products in UK universities from our research, include Blackboard, Civitas, and JISC, or Qlic, stREAM, Starfish, and Xray.
Statistics and statements that are given as measures of success would seem self-evident in some analytics systems, and it is hard to assess the added value independently. “Strong link with retention: less than a quarter of students with a low average engagement progressed to the second year, whereas over 90% of students with good or high average engagement did so.”[440]
Academics at three universities approached us at defenddigitalme in 2019, independently, concerned about data accuracy, bias, and security, because hundreds of academic staff can have access to students’ personal data. There is concern over the risk of discrimination through the racial profiling of the entire student population, with a weak ‘consent’ model that students feel pressured to accept, little understand, or simply see as the norm and do not question.
The analytics system may profile university students using the previous five years of attainment data to ‘predict’ vulnerability to future ‘low achievement’, using factors that range from failing a module to country of birth, gender, age, ethnicity, and many other factors recorded and available for all academic staff to see. Staff can use filters to pull up ‘at risk’ students by ethnicity or country of birth, for example, see student names, personal email addresses and student IDs, and have the ability to edit. Staff are told to tailor their approach to students accordingly.
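To make concrete what this kind of profiling can look like under the hood, here is a deliberately simplified, hypothetical sketch (invented weights, fields and records; not any vendor’s actual model): once demographic attributes carry weight in a risk score, filtering an ‘at risk’ list by a protected characteristic is a one-line query.

    # Deliberately simplified, hypothetical sketch of attribute-based
    # risk scoring (invented weights and records; not a real product).
    students = [
        {"id": "s001", "prior_attainment": 48, "age": 19, "country_of_birth": "UK"},
        {"id": "s002", "prior_attainment": 71, "age": 24, "country_of_birth": "elsewhere"},
    ]

    # Invented weights: demographic factors, not just attainment, move
    # the score, which is exactly what raises the discrimination concern.
    def risk_score(s: dict) -> float:
        score = (60 - s["prior_attainment"]) * 0.02  # lower attainment, higher risk
        score += 0.30 if s["country_of_birth"] != "UK" else 0.0
        score += 0.15 if s["age"] > 21 else 0.0
        return round(score, 2)

    # A staff-facing filter: pulling up 'at risk' students by a protected
    # characteristic takes one line once the data sits in the dashboard.
    at_risk_non_uk = [s["id"] for s in students
                      if risk_score(s) > 0.2 and s["country_of_birth"] != "UK"]
    print(at_risk_non_uk)  # ['s002']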
“In one of the most prominent learning analytics initiatives in the UK, Nottingham Trent University (NTU) has implemented an institution-wide dashboard to enhance the academic experience of its 28,000 students by facilitating dialogue between students and staff. The specific goals were: to enhance retention, to increase a sense of belonging to a community, and to improve attainment. In a survey of first year students, 27% said that they had changed their behaviour in response to data on the dashboard. Some students did more academic activities, e.g. independent learning, although this is not measured in the dashboard. Others competed to have the highest engagement score. There is a possibility of bias however here, because the dashboard may help those students who are already articulate and engaged”.[441]
The proposed new TEF was accompanied by an Open Data competition[442] for tech companies and coders to create apps to help prospective students decide where to apply. The practical reality, however, appears to be that staff are now given access to far more personal and in-depth automated profiles than would previously have been available.
The JISC response to the Article 29 Working Party guidance on profiling, called for softer restrictions on automated profiling than the GDPR and guidance suggest, saying, “the technology also enables a wide range of lesser interventions (personalised reading lists, reminders when a student is falling behind their peers, etc.). We would consider it a breach of privacy if all such notifications were reported to tutors for review; students have expressed a fear that such reports might influence their marks.
“Nonetheless, by failing to explain which automated decisions have "significant effect", the draft guidelines are likely to result in universities, concerned about data protection compliance, inserting human inspection into even these low-risk processes and decisions.
Jisc has recommended that learning analytics be carried out under the Article 6(1)(f) legitimate interests regime, ensuring that such activities always protect students’ individual rights and freedoms.”
The question then becomes how do institutions protect students’ individual rights and freedoms?
While these analytics systems may function within the law, they certainly overstep privacy boundaries that were once clearer between student and staff. Where once a member of staff would have known that student A needed reasonable adjustments of extra time for assignments or exams, now the staff member can know exactly why, for what condition or circumstances, and there is no explicit consent process to share details with an indefinite number of staff.
Processes could and should include such explicit consent during the admissions process. But institutions should first review whether the depth of detail held today is necessary and proportionate, or whether a return to more data minimisation, with higher-level information, would meet the student’s needs. More detail can always be entrusted on request or in need-to-know circumstances. Policies and practices need to be adjusted to meet data privacy and data protection by design and by default.
We have not tried to do UK-wide analysis, but can offer case studies of some universities that replied to Freedom of Information requests.
There is no published transparency about how such software works, what balancing test is done between the legitimate interests of a university and individual rights, or how the university assesses whether or not the impact of the automated decisions has a “significant effect”.
The methods behind this data mining and the purposes to which it is put need further scrutiny, and a judgement made whether or not the analytics methods behind Knewton – including its “sophisticated algorithms to recommend the perfect activity for each student constantly”, combined with psychometric information and social media traces to produce a cloud of data on each individual, used as a quantitative record for data mining and predictive analytics – constitute profiling and automated decision making, which should not routinely concern a child. As a parent with some understanding of data mining and predictive analytics, I believe it is unlikely that their complexity will be transparent to teachers and truly open to human review for error and implications. On this basis, such software is not suitable for use with children in schools, yet that is the direction of travel.
“Though much of its activity has previously been confined to Higher education, by partnering with Pearson, the two companies aspire to take learning analytics and adaptive learning systems into the schools market.” [Williamson]
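For readers who want to see what an adaptive “recommend the perfect activity” engine amounts to in its simplest form, the sketch below is a hypothetical simplification (not Knewton’s or Pearson’s proprietary algorithms): keep a running ability estimate for each pupil and always serve the activity closest to it, updating the estimate after every answer. Every interaction feeds the profile, which is why such systems are profiling and automated decision making by construction.

    # Bare-bones sketch of an adaptive activity recommender (hypothetical
    # simplification; not any vendor's proprietary algorithm).
    activities = {"a1": 0.2, "a2": 0.5, "a3": 0.8}  # activity -> difficulty

    def recommend(ability: float) -> str:
        """Serve the activity whose difficulty is closest to the estimate."""
        return min(activities, key=lambda a: abs(activities[a] - ability))

    def update(ability: float, correct: bool, step: float = 0.1) -> float:
        """Nudge the running ability estimate after each answer."""
        return min(1.0, ability + step) if correct else max(0.0, ability - step)

    ability = 0.5  # running estimate, rebuilt continuously from responses
    for answer in [True, True, False]:  # one pupil's simulated answers
        activity = recommend(ability)
        ability = update(ability, answer)
        print(activity, round(ability, 2))  # a2 0.6, a2 0.7, a3 0.6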
Young adults must have a right to obscurity, and not feel obliged to be surveilled simply by participating in education or find that their personal data are repurposed for other uses by default, such as general monitoring to infer mental health[443].
Many people voiced concerns on Twitter, when the 2019 Office for Students projects were announced, that the plans were invasive; in particular, people were critical that Jisc’s social media post on the plan talked of ‘data harvesting’.[444] In June 2019, Paul Feldman, the CEO of Jisc, said during a talk on Higher Education data analytics at the University of Buckingham that the extent of the Jisc data surveillance, tracking and profiling of students meant that,
“we [JISC] can track them around campus, though that gets quite freaky and they object to some of that.”
Our Freedom of Information requests asked universities about:
a) the data items feeding analytics, any profiling, and/or used in prediction
b) what the prediction purposes / aims are
c) training the software (such as AI and machine learning) by the provider using the data provided via your organisation.
The University of Bath responded “Only within the Technology Enhanced Learning (TEL) this year, for University staff, there will be a link for basic TEL help desk information from Computer Services into Powerbi to determine faqs and trends in support needs. This information is for University use only and not provided to third parties.”
Derby University is using student data to investigate whether machine learning can assist in learning analytics; individual student data is anonymised. At the time of its response, data from 30-40k students over the last three years were used in the project, though not at an individual level.
The University of Gloucestershire had done some exploratory work as part of Jisc’s Labs but using anonymised data and without leading to any operational system. The datasets held in the Jisc Data Learning hub go from 2015 until the present.
And at Winchester they were trialling the use of machine learning for photo upload approval in 2019. “Currently we are training the system to learn from previous rejected photos and then we will move onto testing the handling of photo upload approvals within the next 3 months.”
In 2019, while our research was not exhaustive, as some universities did not reply or failed to provide a yes or no, Brunel, Chichester, Durham, the Institute of Education University of London, Leicester, Leeds Trinity University, Oxford Brookes University, Portsmouth, Roehampton University, the University of Winchester and the University of Wales Trinity St David looked like the places to go if you valued privacy. They all said they were not using any data analytics software.
Of sixty universities, thirty-two said they were using data analytics software and fourteen were not, while a further fourteen are unknown because they did not respond, rejected the request, or gave an unclear reply.
Northumbria University[445] provided 97 pages about their pilot use of learning analytics software from Civitas Learning International in 2017. That material did not include information about the use of algorithms because, “Northumbria University does not hold a copy of any algorithms being used in this pilot.” They also withheld some information, applying an exemption because the University considers that the release of such information at this stage of the pilot could prejudice its own commercial interests.
Phase one of the pilot in summer 2017 involved ‘ingesting sources into Civitas system’ (SITS and Blackboard) to train the models, and then training on the predictive variables.
From the start of Phase 2, November 2017, live data were ‘to be ingested nightly’.[446]
A privacy impact assessment was not carried out.
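To make concrete what 'ingesting sources' and 'training on the predictive variables' typically mean, here is a minimal, hypothetical sketch of the general shape of such a pipeline. The field names, outcome label, and model choice are all our own assumptions for illustration; as the FOI response makes clear, Northumbria did not even hold a copy of the algorithms used, and nothing below is Civitas Learning's actual method.

```python
# Hypothetical sketch only: the general shape of a learning analytics
# pipeline ("ingest student records, train on predictive variables").
# All field names and the outcome label are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Records "ingested" from a student records system and a VLE,
# flattened into one table per student.
students = pd.DataFrame({
    "vle_logins_per_week": [12, 2, 7, 0, 15, 4],
    "attendance_pct":      [95, 60, 80, 40, 98, 55],
    "assignments_late":    [0, 3, 1, 5, 0, 2],
    "withdrew":            [0, 1, 0, 1, 0, 1],  # outcome the model learns
})

X = students.drop(columns="withdrew")
y = students["withdrew"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y)

# "Training on the predictive variables": fit a model to past outcomes.
model = LogisticRegression().fit(X_train, y_train)

# Each student then receives a nightly "risk score" that may drive
# interventions, yet is hard for staff or students to inspect or contest.
print(model.predict_proba(X_test)[:, 1])
```

Even in this toy form, the learned weights are not self-explanatory to non-specialists; at commercial scale and complexity, the concern that such systems are not truly open to human review only sharpens.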
Dr Ben Williamson, a Lecturer in Education in the Faculty of Social Sciences at Stirling University, contributed his views about data standards in Higher Education, and why there needs to be more transparency about their purpose.
“Although it is not yet clear how ISO 21001 will be enacted, the fact that 44 member countries are participating in development of the standard suggests it is likely to have wide international impact in years to come. It could become a ‘recipe for reality’ in education that will shape organizations and practices to conform to its rules and prescriptions. It could have the effect of further standardizing schools, universities and even education ministries. It could configure people into standard form, with practices and behaviours shaped as much as possible to ‘deliver’ standardized services and standardized learning outcomes. As such, it is an important example of why invisible standards need to be brought into public visibility.
“As an adaptation of ISO 9000 standards, then, ISO 21001 makes data gathering on customer satisfaction into an essential requirement of educational organizations. Of course, there is nothing especially unique about this. In the UK, a new Higher Education infrastructure for student data collection[447] is already being built that requires universities to comply with ISO 9001 quality management standards. (Williamson, 2018c) As with ISO 21001 prioritizing students’ requirements and satisfaction, the UK data infrastructure starts from the principle of putting ‘students at the centre of the system.’ It will enable data to flow from HE institutions for centralized analysis to produce competitor rankings on many metrics, and is supposed to make Higher Education data more usable for customers (students) as well as for policymakers and institutions themselves.”
“However, the dominance within the learning analytics market by corporations and companies like Pearson and Knewton raise significant issues about data ownership and control in education. This further raises the question, who owns big data?”
The concept of ownership of personal data can be particularly unhelpful where data is related to or contributed by more than one person.
Where the GDPR is very explicit, however, is on the existing principles of purpose limitation, compatibility, and reasonable expectations. Many reuses of student data go well beyond these.
Baroness Wolf summed up the Higher Education Act 2017 by saying: “The Bill went through the Commons almost unnoticed, in my view, but the turnout today shows that many Lords here feel that it is a very important Bill. A free country needs its higher education institutions to be free, too. Amendments to the Bill are vital and well worth fighting for.”
In summary, the Higher Education Act was seen as a power grab, not only at the institutional level, as discussed by policy makers, but at the individual level. It enabled the compromise of applicants' privacy without permission, and it did so in two ways, for two sets of purposes.
The first was intended for public benefit: to enable access by accredited researchers, in safe settings, to de-identified data for public interest research.
Until 2017, UCAS applicants had a layered consent model: they could agree who could use their data and for what purposes, and could opt out of some purposes. That went.
The Secretary of State is now in charge of the definition of ‘approved’ users and ‘research’ which is open to change at will over time without safeguards of transparency or oversight.
Under the umbrella of de-identified data research uses, the aim of using these data is to enable an understanding of the provision and deficit of skills in graduates (and in apprentices, under the Technical and Further Education Act, which replicates all these weaknesses tenfold, since there is even less wording around its use), and to be able to join school records, through further or higher education, to what individuals earn (and bring in to the Treasury through tax) and what they cost (through what goes out from the Treasury via DWP).
As regards data use, it was the Small Business, Enterprise and Employment Act 2015 that included powers to pass HMRC data back to educational establishments about their former students, and that led to the creation of the LEO dataset. The risks we see include that, by telling alumni offices exactly what people are earning, this process could expose alumni to targeting by development offices without any consent or transparency. And it will potentially expose every future generation to whatever use today's, or any future, Secretary of State defines as 'qualifying research'.
The Higher Education and Research Act, and part 3 of the Technical and Further Education Act, removed protective barriers between the different data silos within the sector. The Digital Economy Act removed the vertical protective barrier. Together these paved the way for the permissive pathway of the Digital Economy Act to sweep up all applicants' and students' data for all uses, across all departments and beyond.
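As a rough sketch of what removing those barriers permits in practice, the hypothetical fragment below joins an education record to earnings and benefits records on a shared identifier. Every table and column name here is invented for illustration; the real LEO pipeline, its identifiers and its safeguards are more involved than this.

```python
# Hypothetical sketch of LEO-style linkage: education records joined to
# HMRC earnings and DWP benefits data on a shared person identifier.
# All table and column names are invented for illustration only.
import pandas as pd

education = pd.DataFrame({
    "person_id":      [1, 2, 3],
    "degree_subject": ["History", "Nursing", "Physics"],
    "provider":       ["Uni A", "Uni B", "Uni A"],
})
hmrc_earnings = pd.DataFrame({
    "person_id":       [1, 2, 3],
    "annual_earnings": [24000, 31000, 42000],
})
dwp_benefits = pd.DataFrame({
    "person_id":        [1],
    "benefits_claimed": [1200],
})

# One join per silo is all it takes: an education-to-earnings record
# assembled without the individual's involvement or consent.
linked = (education
          .merge(hmrc_earnings, on="person_id", how="left")
          .merge(dwp_benefits, on="person_id", how="left"))
print(linked)
```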
According to Jisc, a recent survey claimed that 78% of students would be happy to have learning data collected if it improved their grades. However, there needs to be a frank discussion about why 22% did not want their data collected, and about the nuance of those opinions. The full details of the survey have not been shared, so it is difficult to conclude whether those 'happy' to have their data collected were made fully aware of the potential consequences.
Currently there seems to be no meaningful alternative to the contract offered by universities to students. It is a stark choice: accept the university's full data processing terms and conditions, or don't enrol. That's not a consent process.
Since 2013, equality monitoring data from students in Higher Education have been passed on from UCAS and universities to HESA, the Higher Education Statistics Agency Ltd. HESA passes this sensitive, personal and confidential information on to other funding bodies. The data now sit on a named basis in multiple national databases, including the National Pupil Database at the Department for Education, where they may be linked with hundreds of other pieces of personal confidential information about each individual, in their lifetime record.
For many worthy reasons, many organisations advocate for the collection of religion or sexual orientation among equality monitoring data. But any risk of loss, theft, leak or misuse could be devastating to individuals and communities. Risk of damage or distress is high.
There can be no justifiable necessity for distribution of named data in this way that is proportionate to the risks to fundamental rights and freedoms.
Students up and down the country have been misled by their universities for nearly a decade through poor privacy notices, and those we interviewed on campus in Brighton in 2018 did not know these databases exist. Organisations make no real effort to tell applicants how their personal data are distributed, and abuse a tick-box consent process to fudge their legal responsibilities and accountability.
There must be robust action taken to make this data, and every use of students' personal data on and off campus, safe, lawful, trusted, and transparent. Collection and distribution like this should be unthinkable. We want to see ten changes in this data management process.
Organisations monitoring students' social media are on the rise, but it still appears to be the exception for universities to monitor individual students for interventions. In 2017, the University of Buckingham began keeping tabs on students' social media posts to check whether they were showing signs of mental health problems, such as anxiety or depression.[448]
Even if aged eighteen and adult, students are in the position of wanting to fit in and meet an institution’s expectations. Can consent truly be said to be freely given in these circumstances?
Content on a student's timeline is collaborative, and not only about the student themselves. It is potentially discriminatory too, by dint of inclusion or, inversely, by choosing not to participate. Would a university be able to deny a duty of care by saying a student chose not to participate? While legitimate interests include safeguarding, this processing needs to be balanced with necessary and proportionate use of personal data, not only the students' but that of their friends and contacts, which is not provided to a university for this purpose.
Monitoring for the purposes of reputational risk is much more widespread: every university that responded routinely uses commercial social media monitoring software to look for stories and mentions of the institution. Even where universities say they do not routinely monitor student social media, this is still surveillance, and may result in censorship where a student or staff member is found to be critical, even if an institution wants to claim "there are no exceptions to this policy."[449]
At Aberdeen University, "over the past few years monitoring has occurred for mentions of the University on a daily basis. No ongoing data is collated; however, in terms of targeted monitoring of our students the figure would be zero. No individual data has been retained from our use of the Microsoft Social Engagement Tool. During its operation, returned results from keyword searches (e.g. "University of Aberdeen") were capped at 10,000 per month by licence." Between 2017 and 2019 the University made use of the Microsoft Social Engagement Tool service to identify public mentions of the University, and any specific tagging of the University, in public social posts. This was carried out to identify any potential reputational risks and to engage directly with those engaging publicly with the University through mentions rather than direct tagging or messaging. It was conducted on a keyword search (e.g. "University of Aberdeen") with occasional geographical filtering to support marketing of location-specific events. The University added: "Our monitoring included rudimentary sentiment analysis across all our social media posts as well as supporting student recruitment through basic top-level reporting. In addition, public discussion around University events was monitored for improvement purposes. This work is now done manually. No reporting to third parties has taken place. The University does not specifically target student accounts when conducting monitoring exercises. As an institution we conduct very basic monitoring of public brand mentions across the primary social media platforms. In addition, we respond to comments posted in our feed in response to posts we publish. There is no specific targeted monitoring of any user or user group. Platforms monitored are Facebook, Twitter, Instagram, LinkedIn."
Anglia Ruskin use Pulsar “for social media monitoring of ARU content and public competitor content and sentiment towards ARU. Social media monitoring is also used by the Risk & Compliance Team and Corporate Marketing. In Reactive Fraud investigations, the Risk and Compliance Team may use any social media which are publicly accessible.”[450]
Bath University “does not carry out specific monitoring of students’ social media and does not use social media monitoring for investigations or for the purpose of Prevent.” However, “The University of Bath’s Communications department monitors Twitter, Facebook and Instagram for mention of the University and uses a list of predetermined keywords to monitor any reference to the University made by users of Twitter and Instagram. This is carried out by the University’s Social Media Manager [and using Sprout Social] on an ad hoc basis during the working day.” “The Department of Communications has not passed social media information to a third-party.”
Birmingham withheld its answer, citing commercial sensitivity.
Brunel University London “do not monitor students’ social media.” They have a comprehensive policy for staff. “The policy is to outline the acceptable use of Social Media within Brunel University London. It should be clear that policy is not immutable: in particular, in a field such as this, where emerging technology is interwoven with emerging law, we must be able to react to changes. In the formulation and continuous reformulation of policy, we must be guided by advice from within Brunel University London and beyond, taking due consideration of legal precedent, and having due regard to the practices and experiences of our colleagues in other institutions. Social networking in both a business and personal environment can have detrimental effects if not used correctly. This document outlines how the use of social networking should be carried out to safeguard the University.”[451]
Cranfield University does general brand tracking through a social media listening platform (Sprout Social) and monitoring of closed student Facebook groups created by the University for each student cohort. “General brand tracking means monitoring public social media of mentions of the University and related themes (such as digital aviation).” The social media that may be in scope include any public ones; “Facebook, Instagram, Twitter, Youtube accounts or channels; public blog sites and forums.” “The University does not routinely retain data from social media monitoring. Any data, such as cookies, will be deleted in line with Cranfield University retention schedule, which for the most part, means data will be deleted after seven years in line with the statute of limitations. Cranfield University does not have a specific policy on the deletion of data obtained from social networking sites. If a request is received from an individual for the deletion of their data, then it will be considered as a potential right to be forgotten request and the content reviewed and removed as appropriate.” There is no data protection impact assessment; or privacy impact assessment relating to intelligence gathered from social media platforms.
The University of East Anglia similarly monitors Twitter, Facebook, Instagram, LinkedIn, YouTube and Reddit, in Admissions, Communications and Marketing departments for reputational risk. There are no policies on data protection, and data is retained indefinitely.
Glasgow University also uses TalkWalker. “Social listening is used in the Communications and Public Affairs Office to monitor the performance of the University's social media channels, to help manage 'crisis' situations and to identify stories from a variety of media or social media outlets that we might then share on our channels.”[452]
Greenwich,[453] like others, "uses a media monitoring software to monitor mentions of the University of Greenwich in the media", and says "this is not intelligence gathering about individuals."[454] "We would however investigate complaints which are made to us in relation to social media accounts, either our own or other people's accounts." "As part of the University's broad monitoring of media mentions of the University of Greenwich we use a software tool – Cision." "The Communications & Recruitment Directorate undertakes media monitoring of mentions of the University of Greenwich." But we note that, unlike others, the monitoring appears to go further: "We have firewall software which conducts web filtering and monitoring against a range of concerns, not exclusively Prevent."
Heriot Watt in Scotland complies with the Good practice guide for Scottish Higher Education Institutions which was prepared by the Higher Education Prevent Working Group in June 2015.
The University of Hertfordshire does not specifically monitor students’ social media output. But does use “Orlo, a social management platform, which monitors and reports on inbox sentiment (messages and posts that we have been tagged in)” and “Pulsar for general online engagement with our brand.”
Huddersfield also uses the social media listening tool Pulsar to look for stories and mentions of the institution on social media, and says it does not routinely monitor student social media.
Leeds Trinity University monitors its own social media accounts in order to check for general mentions/tags of the University and answer questions from applicants, parents, students etc in relation to University services.
Liverpool John Moores took a 'neither confirm nor deny' approach to the question, does your University conduct social media monitoring exclusively for the purposes of the Prevent programme? "LJMU can neither confirm nor deny the operational scope of tasks carried out in compliance with the Prevent duty. No inference can be taken from this refusal that the information you have requested does or does not exist. Section 24 provides refusal of information in relation to the safeguarding of national security and allows public authorities exemption from the requirements of Section 1(1)(a) if confirming or denying would have an undesirable effect on national security. Section 24 is a qualified exemption and this requires the University to apply the Public Interest Test. The University acknowledges that the public has a legitimate interest in being aware of and understanding any possible terrorist activities within Higher Education Institutions, and what actions are being taken to combat such activity. It also recognises that disclosure of the information could provide the general public with reassurances that the monitoring of criminals is conducted appropriately. It has, however, taken the decision to provide a neither confirm nor deny response in relation to your request, as to provide a response could indicate the effectiveness of the University's monitoring activities and prejudice any possible investigation. If a terrorist group were aware of the effectiveness of LJMU's strategy under the Prevent Duty, or that an investigation was under way, it could lead them to alter their activity and, as a result, impede any monitoring or investigation."
Oxford Brookes University policy states that, "although the University respects and appreciates the value of personal privacy, its IT systems are provided for academic and business purposes and users should have no expectation of privacy when using the University's IT facilities." "The University has the right to monitor all usage of the IT, communications and computer systems at any time and without notice. The University may inspect, lock, block, scan, clone or remove any computer or drive or information at any time at its sole discretion." The policy goes on to list examples of specific circumstances where the University may choose to monitor.
Manchester Metropolitan University uses Hootsuite specifically for social media. Their media monitoring system, supplied by Meltwater, also monitors public social media channels (including blogs and forums) for mention of the University. Social media monitoring is used by the Social Media Team, via Hootsuite, to keep track of and respond to direct queries, and when the university's social media accounts are tagged in posts. The Meltwater media monitoring system is used to supplement Hootsuite as it also scans public blogs and forums. Social media monitoring is also used for community management purposes, to respond and engage with key stakeholders.
Manchester University has also, "since June 2018, used Hootsuite Enterprise to conduct social media monitoring, publishing and reporting tasks. Monitoring activities take the form of analysing the performance of and audience engagement attached to University content, including organic activity and paid advertising. Also, since 2018, the University has used software called Brandwatch to monitor social media sentiment attached to the University brand and online content. The contract for the software expires in June 2020. There is no specific policy re. social media in the context of Prevent. There is reference in the IT Acceptable Use Policy to the Counter-Terrorism & Security Act (2015), which enables us to investigate where there is evidence of misuse, but we do not monitor per se. No member of staff is permitted as a matter of routine to monitor or investigate an individual's use of social media. However, where there are legitimate grounds to suspect unacceptable use, the University may monitor or investigate the use of social media where that information is in the public domain. Where excessive or inappropriate use of social media is suspected, the University may take further action, such as commence a formal investigation in accordance with its disciplinary policies and procedures. When concerns are raised in any context, we may check social media (e.g. welfare concerns), as we might with any safeguarding issue, most commonly Facebook and Twitter."
Nottingham University does not have a specific policy on the monitoring or use of social media in investigations for the purposes of the Prevent programme. Instead, the University requires students to adhere to the Social Media Policy for Students (https://www.nottingham.ac.uk/governance/...), educates students on responsible use of social media, and expects them to be personally responsible for what they communicate. Where complaints of alleged misuse of social media are received, the University will view material as part of a disciplinary investigation, where this is publicly available and the security settings of the individual permit the viewing of the material.
Queen Margaret University in Edinburgh “subscribes to a media monitoring service provided by Kantar Media. This service provides a summary of instances that refer to the University across print, broadcast and online news sources. This service also provides a summary of comments made via Twitter that include the University’s Twitter handle, and as such only returns comments where the person tweeting has intended for their comment to be visible to the University. Members of staff may investigate comments made on social media where such comments have been specifically drawn to the attention of the University as part of the complaints procedure or relevant disciplinary procedure.” There are no available policies.
At the Royal Agricultural University, there is no specific guidance on social media monitoring as part of the RAU's Prevent policy. The University will investigate as appropriate where a complaint is raised, particularly if the complaint is around online behaviours. The Royal Agricultural University does not conduct social media intelligence gathering, or use software and/or hardware to conduct social network/social media monitoring or sentiment analysis on social media.
At St Andrews,[456] in limited circumstances, University regulation (the ICT Regulations) provides that if a student is suspected of making inappropriate use of University ICT facilities, an investigation to establish whether University facilities were or were not misused could take place. Inappropriate use includes any activity that would breach legislation and/or activities connected with bringing harm towards others, such as bullying and harassment. In such instances, social media accounts would not be monitored; any investigation would look to verify the claims made. In the event that the University was notified of a complaint or concerns under the Prevent programme, where social media may be involved, this would be dealt with on a case by case basis. No records are held of such an activity being carried out.
At the University of Stirling, where 20% of 14,000+ students are from overseas[457] and 120+ nationalities are represented on campus, social media monitoring is carried out daily using Hootsuite, covering Facebook, Twitter, LinkedIn, Instagram, and Weibo. "Social media monitoring is carried out by the central social media team on a daily basis around general keywords related to the university brand and for reputational/PR/brand management purposes. We do not monitor individuals so there is no need to have this activity authorised. If any content is gathered which includes social media account names, then this is deleted as soon as the data is no longer needed. There is no process for deletion, as we do not monitor individuals' personal data and information."
Sussex University[458] “does not undertake routine, pro-active monitoring of students’ use of social media. However, the University has Regulations governing matters such as acceptable use of the University’s IT systems and conduct which may be considered as major misconduct such as offensive, harassing and abusive behaviour. As such, a student’s use of social media may form part of student discipline cases if inappropriate use is brought to the attention of the Student Discipline Officer. In addition, the University has recently approved a Content Filtering Policy which is due for publication shortly. This policy sets out the principles for support of teaching, research and business activities of University of Sussex whilst protecting users, networks and computers from hostile or unwanted network traffic and illegal or potentially harmful content.”
Swansea
At Warwick, no: the University's Communications team does not gather intelligence on the University's students over social media, either overtly or covertly. The Communications team monitor the University's social media channels (Twitter, Facebook, Instagram, LinkedIn and YouTube), reviewing the comments and threads that follow the University's posts, and conversations that the University is directly tagged into. The key words and phrases used change frequently and are varied, but some examples are COVID-19, Brexit, and the British Science Fair on campus. The team also monitor results that include the University's name, for example 'Warwick Uni' and derivatives of that phrase. This is in line with how most organisations with a social media presence operate.
University of Worcester "do not monitor social media, we only view it in the context of specific incidents. The University does not undertake either overt or covert gathering of social media. The University only views social media in relation to an incident. It is not ongoing monitoring and we do not store the data. Sometimes we receive a complaint about a student's use of social media, often with a link to a post; this is then discussed with the student and, if necessary, the Student Disciplinary Code followed." The policy provided had been overdue for review since 2015.
York University “monitors direct and indirect mentions on social media (e.g. Twitter, Facebook, Instagram and LinkedIn) for media relations purposes. In some cases the University may hold social media data where it has been submitted by a reporting party in relation to a misconduct investigation. The University may seek in these cases to verify the accuracy of submitted information. The University uses Hootsuite to monitor, respond to and schedule social media content.” There are no available policies.
While the vast majority of universities would only monitor a student's social media after a triggering event, the existing breadth of general monitoring shows how routine social media surveillance has become. It takes little imagination to see scope creep rapidly transform reactive reputational management into active monitoring of individuals whose actions online may bring the university into disrepute. We have already heard of individual cases where academics have been challenged by university governing bodies for private comments on political matters.
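It is worth noting how small the technical distance is between the brand monitoring universities describe and the monitoring of individuals. In the hypothetical sketch below, fetch_public_posts() stands in for any platform search API or commercial listening tool (it is not a real library call), and the move from reputation management to watching a named person is a one-line change to the keyword list.

```python
# Hypothetical sketch of keyword-based social media "listening".
# fetch_public_posts() is a stand-in for a platform search API or a
# commercial listening tool; canned data keeps the sketch runnable.
def fetch_public_posts(query):
    sample = [
        {"author": "@proud_alum",
         "text": "Lovely open day at the University of Example"},
        {"author": "@critical_student",
         "text": "University of Example rent strike over unsafe halls"},
    ]
    q = query.lstrip("@").lower()
    return [p for p in sample
            if q in p["text"].lower() or q in p["author"].lower()]

NEGATIVE = {"strike", "unsafe", "scandal", "complaint"}

def monitor(keywords):
    """Collect matching public posts, with a naive sentiment flag."""
    hits = []
    for kw in keywords:
        for post in fetch_public_posts(kw):
            tone = ("negative" if NEGATIVE & set(post["text"].lower().split())
                    else "other")
            hits.append({"keyword": kw, **post, "sentiment": tone})
    return hits

# Brand monitoring, as the universities above describe it ...
print(monitor(["University of Example"]))
# ... becomes monitoring of a named individual with one more search term:
print(monitor(["University of Example", "@critical_student"]))
```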
The role of the Student Loans Company (SLC) is to administer student finance on behalf of the UK Government and the Devolved Administrations. The SLC is issued with an Annual Performance and Resource Agreement for the year setting out priorities, objectives and performance measures for the company. One of the key objectives each year is to minimise, detect and deter fraud.
Social media surveillance across all platforms has been normalised to seek clues that might, rightly or wrongly, indicate that students are lying when they tell universities and the SLC that they are estranged from their family and are without financial support.
Counter Fraud Teams have access to a number of social media sites including Facebook, Twitter, LinkedIn, Instagram and MySpace. In January 2019,[459] however, there was no definitive list available from the SLC of which platforms may be monitored, even though such sites are used as and when necessary as part of an investigation.
When we enquired, the SLC's Privacy Notice was still being updated to include specific reference to the potential processing of publicly available 'Open Source information', as the then CEO called it, for fraud prevention and detection purposes.
But more shocking is that the SLC Counter Fraud Service also had no policy covering the processing of social media data of applicants and students, despite doing so routinely, and despite having been heavily criticised five months earlier[460] by the Education Select Committee, which accused it of 'KGB tactics' in assessing eligibility,[461] when the affected populations under scrutiny may be vulnerable and in need of support.
The charity Stand Alone[462] commonly works with LGBT+ students who have been rejected by family, abuse survivors, students who have been rejected by new step-parents after re-marriage or those who have different morals, values and beliefs to their immigrant parents. All have no entitlement to corporate parenting of any kind from a local authority or other agency. They may have been missed by the care system, or the care system didn’t have a remit to intervene. (Bland, 2020)
Without policy, guidance, transparency obligations or oversight perhaps it is little wonder that onlookers in autumn 2018 saw an institution gone rogue, reckless with the implications of its actions on young people’s lives.
Social media surveillance fundamentally rides roughshod over data protection law; it ignores reasonable expectations and purpose limitation, ethics and human rights, and in this case lacked any sense of proportion or cost/benefit given the tiny numbers involved. The SLC attitude was rightly condemned by senior MPs as "sinister, KGB knock-on-the-door" tactics. The SLC denied funding to students and caused harm while investigations lasted months, over amounts as small as a £70 one-off Christmas cash gift from an estranged parent.
Using students' and applicants' personal data from social media is not lawful for just any purpose, despite the claims of Christian Brodie, the SLC chairman, that it regarded Facebook accounts as a public source of information and that "they must expect that will be looked at." It is also likely that the SLC not only "looked at" those profiles, but collected data from them and made inferences from them, creating new data for which it became the data controller.
Against this secretive backdrop the Student Loans Company was given more legislative data powers in 2018.
The negative Statutory Instrument, the Higher Education and Research Act 2017 (Cooperation and Information Sharing) Regulations 2018 (SI 607/2018),[463] was rushed in over three weeks in May. It lists the Student Loans Company as a relevant person (body), among the dozen listed, for the purposes of section 63 of the Higher Education and Research Act 2017.
The Explanatory Memorandum to the Regulations stated that consultation took place widely within DfE and with HEFCE during Autumn 2017, to ascertain the bodies that HEFCE had historically shared information with, and any new types of information sharing which would be needed by the OfS in future.
"The purpose of the information sharing, and whether it would be primarily for OfS functions or the functions of the other body, was then determined through a further round of consultations within both the Department for Education and the bodies themselves between January and March 2018."
We therefore asked for the minutes from those meetings, and have since obtained some redacted minutes and notes[464] from the meetings in the lead-up to the formulation of the Regulations. The amount of redaction around HMRC and its data sharing purposes should alone be cause for many more questions to be asked. What is also revealing is that the discussion notes clarify "whether it would be primarily for OfS functions or the functions of the other body."
The "KGB tactics" exposed in the Student Loans Company's surveillance model demand an urgent review of data sharing and enforcement policy across government for the purposes of fraud detection, and far greater attention than has been paid to date when awarding such bodies additional powers.
Since the Chair of Ofqual suggested to the Education Select Committee that future exam mechanisms in response to COVID-19 might include "using online tests" if the exams cannot be held, it is timely to address proctoring.
A number of media reports and social media posts reveal how invasive their use can be, including requirements for young people to show their lap/groin area to camera during remote monitoring, and students reporting that talking to themselves during the exam can result in a no-grade fail.
We have not researched this area in the English state education system, so we are not aware of systems in use routinely, although we may just be unaware of practice rather than it not happening.
As part of the background research for this report, we did however meet with staff from Proctorio in 2018 and were then quite excited by their product and claims that they were all about preserving privacy while protecting academic integrity.
“Validating student identities, securing exam content, and preventing academic dishonesty are necessary in order to increase the prestige and value of degrees and certificates acquired online. The challenge for institutions and third-party exam monitors is how to protect student privacy and provide data security while protecting academic integrity.”
But we have been greatly disappointed since, by their adversarial response to criticism and to students who did not want to be subject to their surveillance product in practice.
In July 2020, Mike Olsen, the CEO of the US-based Proctorio, apologised for publicly posting a student's chat logs during an argument on the website Reddit; he has since deleted the posts, saying that he and Proctorio "take privacy very seriously."
An ANU student and leader of the group No Proctorio at ANU was reported in Guardian Australia as saying that Olsen's posting of support logs was "really, freakishly disrespectful".[465] She said it was hypocritical of Proctorio to claim it valued students' privacy when the CEO of the company was posting student chat logs online. "I think it is very concerning and definitely a privacy issue," she said. "It undermines their claims."
There are competing methods of remote proctoring or invigilation that offer contrasting examples of student privacy standards and data security safeguards, and here is what Mike told us in 2018.
“One is online exam monitoring, when a student is watched during a live remote session and personal information is gathered to confirm the identity of the student prior to the exam, credit card information is gathered for payment, and the student is both observed and recorded for the duration of the exam.
“Another method of remote proctoring leverages artificial intelligence and facial detection technology to monitor specific behaviors during the exam session and provide a suspicion rating and exam recording at the conclusion of the exam.
“Live proctoring offers several challenges to student privacy and data security that can be overcome by an automated software solution. As currently practiced, live proctoring relies entirely on personally identifiable information for both identifying the student and proctoring. The data used by live proctoring services is accessible by employees of the third-party vendor. These uncredentialed employees also have access to corresponding financial data and exam recordings. The institutions that rely on third-party proctoring services do not have exclusive control over student data and individual students may be forced to agree to have their data used for commercial purposes when consenting to exam monitoring.
“Automated proctoring using artificial intelligence to monitor student activity does not rely on human proctors to either validate identity or determine academic integrity. Consequently, personally identifiable information is not shared with the third-party monitoring service and remains under the exclusive and proper control of the testing institution. Data security can similarly be assured through zero-knowledge encryption that not only secures data, but also, renders it inaccessible to employees of the third-party proctoring solution. The actual review of exam footage is completed by authorized individuals within the institution such as a teaching assistant or instructor, similar to an in-person testing situation. Institutions already possess personally identifiable information and no further sharing of sensitive information is necessary. Students are validated within the existing institutional learning management system (LMS) using single sign-on technology. Finally, students need not be billed directly for each use of automated proctoring. Institutions that pay for the services and either bill students directly or indirectly through course fees or tuition further remove any personally identifiable information from being shared with third-party vendors.”
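The 'suspicion rating' model described above is, at root, automated flagging of monitored behaviour against rules. As a generic, hypothetical illustration only, and emphatically not Proctorio's or any vendor's actual method, the sketch below uses OpenCV's stock face detector to flag sampled webcam frames in which not exactly one face is visible; the sampling rate and the rule itself are our invented assumptions. Rules this crude are precisely what fail to account for ordinary human differences: a candidate who glances down, sits in poor light, or shares a room accumulates 'events' through no fault of their own.

```python
# Generic, hypothetical sketch of rule-based automated invigilation;
# NOT any vendor's actual method. The rule and thresholds are invented.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def suspicion_events(video_path, sample_every=30):
    """Flag sampled frames where not exactly one face is detected."""
    events = []
    cap = cv2.VideoCapture(video_path)
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(
                gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) != 1:  # crude rule: no face, or an "extra" face
                events.append((frame_no, len(faces)))
        frame_no += 1
    cap.release()
    return events
```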
Dutch students accused universities of violating their privacy in Spring 2020 by using the monitoring software to prevent cheating in exams being conducted online because of the coronavirus.[466]
The Central Student Council of the UvA lost a legal challenge arguing that UvA students should have the right to refuse to use Proctorio during online exams and that Proctorio infringed their personal privacy. "The conclusion is that the data processing by the UvA is based on Article 6(1)(e) of the GDPR, and that the processing meets the due care requirements set by the GDPR. The UvA's conduct is therefore not unlawful." (Section 4.17 of Summary Proceedings, Case #: C/13/684665/KG ZA 20-481 of the Court of Amsterdam)
The case rested on the lawfulness of data processing rather than on a human rights basis. It is one thing to operate on a lawful basis, and another on a consensual one. Surely students who do not want to use such software, because of how it makes them feel under exam conditions, should be given an alternative, so that their performance is not impaired by the institution's tool of choice?
Separately, in the summer of 2020, aspiring UK barristers claimed they were forced to urinate in bottles and buckets in front of their laptops while taking their bar exams online under remote proctoring.[467]
“Bar student Tian Juin See said he had to urinate into a bottle while maintaining eye contact with his laptop screen so he would not be accused of cheating.” (BBC, 2020)
This lack of human dignity afforded to individuals when treated as suspicious-by-default is shocking.
Another example of automated data processing with significant harmful effects in exam-related processing was widely reported in 2016: the government wrongly deported more than 7,000 foreign students after falsely accusing them of cheating in English language tests.
“The foreign students were targeted by the Home Office after an investigation by the BBC’s Panorama in 2014 exposed systematic cheating at some colleges where candidates sat the Test of English for International Communication (TOEIC).
"The test is one of several that overseas students can sit to prove their English language proficiency, a visa requirement. After the Panorama broadcast, the government asked the US-based company which runs the test to analyse sound files to investigate whether students had been enlisting proxies to sit the tests for them." (The Independent)[468]
The Financial Times reported a 2014 accusation against an individual, Rahman, that “rested on a single audio clip that Educational Testing Service, the New Jersey-based company that set the test, turned up in a trawl of its records. ETS said the clip showed that someone other than Rahman took the spoken element of the exam, which also includes reading, writing and listening sections.
“The Home Office forced Anglia Ruskin University, where he had been studying, to throw him off his course. The accusation has cost him a total of £25,000, he says, in wasted tuition fees and in legal fees to fight the Home Office’s efforts to send him home in disgrace. “At the time I had the accusation, I was 24,” Rahman says. “Now I’m 29. That’s the prime time of someone’s life to navigate where you go. At that time, I’ve been struggling to cross my name out of this allegation. At the time we’re supposed to make our life, we’re struggling with this.””[469]
Automated surveillance and automated decisions without adequate protections for human rights, safeguards from harm or that offer accessible and meaningful routes of redress can have lifelong consequences for young people. Whether lessons have yet been learned from the TOEIC experience by the UK government appears doubtful in the aftermath of the exams 2020 fiasco.
Thousands of students were affected by the TOEIC policy failures and still suffer the everyday consequences.
The comparison below covers the four nations' national pupil data collections, based on responses and sources current as at: England, June 2016; Northern Ireland, October 2016; Scotland, March 2017; Wales, March 2017.

Volume of data
England: 21,230,000.[470] Northern Ireland: 327,122. Scotland: 1,265,501. Wales: 1,034,907.

Year begun
England: 1998. Northern Ireland: 2006. Scotland: 2007. Wales: 2003.

Pupil level
Yes, in all four nations.

Named
England: yes. Northern Ireland: yes. Scotland: no. Wales: yes.

Sensitive data
England: yes, incl. indicators for children of parents in services, ethnicity, nationality, and looked after children; not religion. Northern Ireland: yes, incl. religion, ethnicity, and indicators for children of parents in services. Scotland: yes, incl. ethnicity, language, nationality, in care, and indicators for children of parents in services. Wales: yes, incl. ethnicity, language, nationality, in care, and indicators for children of parents in services.

Subject Access permitted to your own record
England: no. Northern Ireland: TBC. Scotland: TBC. Wales: yes.

Third party access restricted to those with accredited training, or access only with oversight in a safe setting
England: no; "Applicants must complete license agreement, terms and conditions." Northern Ireland: TBC. Scotland: no. Wales: no; "Applicants must complete a Data Access Agreement (DAA) and a Security Aspects Letter (SAL)."

Distributed to third parties
England: yes. Northern Ireland: yes. Scotland: yes; in most cases, researchers make use of the eDRIS facilities, which have been security accredited by the Scottish Government (trusted third party model). Wales: yes.

Identifying data distributed to commercial third parties for re-use
England: yes, including to journalists, data analytics companies, charities, and think tanks, for turning into print and online school comparison tools. Northern Ireland: no. Scotland: no. Wales: yes, but limited to FFT for analytics and resale to schools and local authorities; see 'Third party distribution' below.

Small number suppression
England: "There is no suppression applied to data extracts from the NPD before release. Instead, Requesters are required to sign up to strict terms & conditions covering the confidentiality and handling of data, security arrangements, retention and use of the data. These include that no individual will be identified in published data. The Daily Telegraph requested pupil-level data and so suppression was not applicable." [source] Northern Ireland: typically the Department suppresses information of fewer than five pupils for any data deemed potentially identifiable; figures greater than five can also be "counter-suppressed" so that a suppressed figure cannot be worked out by subtracting it from a total. Scotland: "Our general policy is that numbers under five are suppressed however other numbers may also be suppressed where it is felt necessary to do so to reduce risk of identification of individuals. Where applicants require access to disclosive data for research, different processes apply which restrict the use and sharing of the data." Wales: individual level data is fully anonymised in most cases; disclosure control is applied to all aggregate data, most commonly by not publishing any actual counts less than 5, or any counts that can be derived to be less than 5 from other data in the dataset to be released.

Audit of recipients
England: no. Northern Ireland: no. Scotland: "We have not conducted any audits. We work with partners to encourage the use of recognised safe havens, and this is reflected in our application processes." Wales: no.

Typical release length
England: between 6 months and 3 years depending on the aims of the project; often extended. Northern Ireland: not known. Scotland: not known. Wales: in most cases, when one calendar year has passed.

Indefinite retention
Yes, in all four nations.

Data collection sources
England: the School Census is taken every third Thursday in each term (three times a year) and added to around 20 other data sources to create the National Pupil Database. Northern Ireland: information is collected by the Department on the Friday of the first full week in October every year. Scotland: School/Pupil Census from 2007 onwards, internally known as "SCH"; Attendance, Absence and Exclusions from 2006 onwards, internally known as "AAE"; Achievement of Curriculum for Excellence from 2016 onwards, internally known as "TJ" (Teacher Judgements). Wales: School/Pupil Census from 2007.

Public transparency of third party data distribution in any online register that children and parents can view
England: yes, since 2013; includes publication of the volume of police and Home Office use for immigration enforcement, since December 2017. Northern Ireland: no. Scotland: no. Wales: no.

Third party distribution
England: https://www.gov.uk/government/publications/dfe-external-data-shares. Northern Ireland: unknown; "Analytical Services Unit within the Department of Education (NI) does not keep a record of every single request for data, although it is known that no data has been supplied to the Home Office or Police in this time period." [source: FOI WDTK] Scotland: http://defenddigitalme.com/wp-content/uploads/2018/03/Scotland_datasharing.pdf. Wales: http://defenddigitalme.com/wp-content/uploads/2018/03/Wales_Datasharing.pdf.

General school census information
England: http://defenddigitalme.com/wp-content/uploads/2018/03/Schools-Census-information.pdf. Scotland: http://defenddigitalme.com/wp-content/uploads/2018/03/Scotland_pupil_data.pdf.

FOI references and other sources of fact are cited inline above.
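As the Northern Ireland and Wales answers describe, small number suppression is an algorithmic rule, and its weakness to subtraction from published totals is exactly why 'counter-suppression' exists. A minimal sketch of both steps, using the threshold of five given above (the function name and row layout are ours, for illustration):

```python
# Minimal sketch of small number suppression as described in the table:
# primary suppression of counts under 5, plus secondary ("counter-")
# suppression so a hidden cell cannot be recovered from the row total.
THRESHOLD = 5

def suppress_row(counts):
    cells = [str(c) if c >= THRESHOLD else "*" for c in counts]
    suppressed = [i for i, c in enumerate(cells) if c == "*"]
    # If exactly one cell is hidden, a published row total reveals it by
    # subtraction, so also hide the smallest remaining cell.
    if len(suppressed) == 1:
        _, j = min((c, i) for i, c in enumerate(counts)
                   if i not in suppressed)
        cells[j] = "*"
    return cells

# A row of pupil counts by category, with a published total of 112:
print(suppress_row([3, 97, 12]))  # ['*', '97', '*']; 3 is no longer derivable
```

Without the second step, publishing the row total alongside a single suppressed cell would reveal the hidden count exactly.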