
AI and Society Forum event summary: Algorithmic oppression in our education system.

On October 31st we had the privilege of contributing to a session at the AI and Society Forum, an event convened by Careful Trouble.

We gave a whistlestop introductory tour of some of the top-level issues in education and algorithmic decision-making, drawing on examples of technology used in classrooms and universities and beyond, as the boundaries of education have encroached increasingly into private and family life with the adoption of technology by educational settings.

We shared slides as prompts, which you can download below, along with a handout of references we made or wanted to make in the talk, and the infographic we referred to. The infographic shows a mock-up of a typical day in the life of an 11-year-old at a state secondary school in England today and the data flowing into, across and out of state education systems, including the invisible thousands of other parties and third parties involved in the lifetime of a datafied child beyond this, after the data leaves the settings.

Tracey Gyateng presented some of the initial findings from the work of Data, Tech and Black Communities to explain edTech and empower local communities, showing how people can respond to unwanted effects, risks and harms from the use of these everyday and emerging technologies.

At the Bletchley Park AI Summit we have not seen any mention of the law, or of legal expertise, and we feel this is an important gap that needs to be filled in the public interest. We therefore invited Will Perry of Monckton Chambers to join us, contributing a very brief independent summary of some of the ways in which people can use the law without lawyers in the everyday exercise of their data rights under UK Data Protection law and the GDPR, before the kinds of work that lead to collective or civil society action that we at Defend Digital Me, for example, might engage with individuals or communities on, supporting them in collective complaints or judicial review-type action.

We tried, “unconference-style”, to shape the session around discussion, bringing in Q&A on different parts of the talking points. It is an area that needs far greater democratic debate and involvement of affected communities: pupils, students, parents and families, teachers, and institutions across the sector.

The tagline ‘refuse, retract, resist’ draws on the successful Against Borders for Children campaign, which between 2016 and 2018 boycotted the new collection of country-of-birth and nationality data in the school census: data intended to be handed over to the Home Office for its immigration-related purposes, not in support of the aims of education, and collected in a way that did not respect children’s and families’ data rights. The State is not always aligned with children’s best interests. But changing that together is possible.

We are immensely grateful to the Careful Trouble team who gave us the space, and appreciate that this meant others who may have wanted to present could not. We hope that in future there will be more opportunity at national and government-led events for the people most affected by technologies to be the ones invited to speak about them, and that those of us talking about the systemic issues and policy will have less to solve. Today, that still seems like a far-off challenge. But it is one that, in education, society must not give up on. Corny as it sounds, our future depends on it.

Reflections post-event

Artificial Intelligence, in its multitude of technologies, may be better defined by its aims and outcomes than by any umbrella term at all. Too often, thinking on AI and education is about some imagined future, with low awareness of how and where AI is already embedded in common practice. AI ‘safety’ may be the theme at Bletchley this week, but what that means for children is contested. AI is already in many so-called “safety tech” products profiling children as ‘extremist’ or ‘terrorist’; in apps and platforms using highly sensitive data to suggest predictions and interventions in mental health or in child protection; and in apps that gather huge amounts of teacher inputs and, by “ranking and spanking” behaviour, suggest changes in teacher responses. That is before we get to what one might think of as real teaching and learning platforms, with excessive data gathering every two seconds from every mouse click from every user, used as a data resource to “train” the AI without pupils’ knowledge. Many UK schools have adopted AI without even knowing it, in biometric tools soon to be categorised as “high-risk” by the EU.

Who lies behind the production and adoption of these tools, and why? The political, social and economic aspects of who gets a say in influencing, and interfering with, millions of children’s lives every day are often hidden behind the assumption that everyone sees the purpose or value of education in the same way. That is not the case. And the algorithmic underpinnings of individualisation and quantification of education, away from collective and collaborative learning and towards control, embedding ‘norms’ and identifying learners who are ‘outliers’, have distinct authoritarian historical leanings. This is the growing direction of travel as tech moves towards more use of bodily data and interference with future behaviours, in interventions based on data-based “predictions”. We are in effect engaging millions of children in product trials without consent. Is this interference by ‘others’ outside the educational setting what families expect when they send a child to school?

The misuse of power in education, the abuse of learners’ human rights, the lack of democratic involvement, agency and autonomy, and the limited respect for legal obligations in risk-averse settings worried about reputational risk, are all sped up at scale through algorithmic decisions. This is only made worse by the power imbalance that edTech companies and their CEOs can impose on schools through closed systems in proprietary products and commercial contracts, and that school and national authorities can impose on learners and families. It is companies that are steering what values are embedded in education through edTech and its execution, often ignoring questions of the law on data protection, procurement, and public equality duties, introduced by staff in educational settings who have little training or expertise to tell marketing hype from bona fide tools, and who must rely on terms and conditions that can be changed at will.

In the UK we seem to have enabled this and lost sight of the universal aims of education built on rights-respecting foundations. This is something worth working to restore, ensuring education systems shape the future of society for the common good, and for economic visions in the public interest, above prioritising private gain for the few.

Article 26(2) Universal Declaration of Human Rights (1948)

“Education shall be directed to the full development of the human personality and to the strengthening of respect for human rights and fundamental freedoms. It shall promote understanding, tolerance and friendship among all nations, racial or religious groups, and shall further the activities of the United Nations for the maintenance of peace.”

The messaging around the challenges in AI, what safety and ethics mean, and how we achieve them, always seems to ask the same question: “Given a set of possible directions for technology deployment, how might we aggregate, understand, and incorporate the conflicting values of overlapping groups of people?” But it never seems to mention that the world did this already, in agreeing universal human rights in 1948. Why some countries refuse to follow international law and standards is the real question, and one that any new voluntary agreements won’t solve. In an increasingly automated world, we don’t need to frame this as a need for AI safety so much as a question of what safety for humans means, and for which humans.

And what are the costs that the global AI debate is failing to address today? Higher Education students in different countries around the world have shown, when it comes to exam proctoring, that these include anxiety, being treated with suspicion by default, and gross intrusions into private and family life. There are hugely negative effects on trust, as well as questions about academic experience, the integrity of qualifications, IP, and the delivery of teaching content, which we addressed in an event at Parliament in July 2023. Procurement systems are global, international, national and local processes. Local decisions in the UK create demand and incur whole-system costs: to the climate, and in the raw-mineral, environmental and community costs borne by children, including through child labour, in the majority countries of the world, in order to create the tech and tools that support the learning of the few in the “global North”. Do we continue or change this status quo, and can we justify its costs to today’s generation? All this is before even considering the costs to the UK State sector of teachers’ time and the free labour paid by staff and pupils as they use systems to produce and clean the data, the resource that in turn is the raw material for a few product owners to mine and turn back into new products.

The debate around education is often about products, not people, but it also omits that education is deeply political in nature, and inextricably bound to the broader issues of state sovereignty and democracy, disadvantage and marginalisation. Opportunities and benefits do exist, but they are wildly inequitably distributed.

We rarely see independent evidence of efficacy, outcomes or pedagogical approaches in AI products, or of how to ensure equitable access. Rarer still is UK debate on whether AI, in some of its present forms and applications, should be used in education at all. Is it right to build the future on systems trained to identify patterns in historical data, with its well-recognised discrimination and stereotypes that this generation challenges as outdated? We are profiling today’s children, often using data from the past, yet those systems are used to make ‘predictions’ that affect different children and their as yet unknown futures.

Interventions today are often intended to control or “nudge” decision-making to change children’s futures, so questions of data and control in this context are not about an abstract interference with a right, but about invisible interference in a life. Who gets to decide what that should look like? Who gets to nudge children’s actions or thoughts, influencing behaviour and mental health in hidden ways? How could and should technology in education support the aims of education in Article 29 of the UNCRC, towards the “development of the child’s personality, talents and mental and physical abilities to their fullest potential”, and support human flourishing for everyone? In an increasingly automated world, what would a human rights-respecting environment in education look like, and how do we achieve it?

To lean on the words of the Careful Trouble event theme, “Let’s make AI work for 8 billion people, not 8 billionaires”: there are over 8 million children in the UK state education system today, plus students in Higher Education. They each need AI to work for, and not against, them, in and beyond educational settings.

Algorithmic oppression is very real in our education system. Change is not only possible, but should be a political imperative.

 

Jen Persson


Reference materials

Handout: AI & Society Forum 31.10.2023, DTBC DDM handout, available to download here [.pdf] 722kB.

Slides: AI & Society Forum 31.10.2023 Algorithmic oppression in our education system Refuse. Retract. Resist. available to download here [.pdf] 2MB.

Some of the questions and topics we aimed to address included:

Empowering affected people
Understanding and defining: what is EdTech? What is AI in education?
Where is AI commonly used in education in England today?
What are its purposes? Costs? Who does the tool serve? Whose interests are prioritised?

Rights and responsibilities under the law.
Who has what rights in the system? Who has what responsibilities?
What does privacy mean in the context of an educational setting and its power imbalance? Thinking more about “arbitrary interference” in lives than about “data” privacy.
Why and where data protection law can help uphold data rights, even though it may not always uphold privacy.
Why is consent problematic in an education context and what are the alternatives?
What are the aims of education and how do we uphold them?

Identifying problems and searching for solutions:
What are the risks and harms at individual and community, national and international levels?
Using a single-sector case study: where harms have been identified in a narrow example of exam proctoring, what has been done in affected communities in different countries, and what has been effective in resistance and in bringing about better solutions? Which tools, including existing law, can we use to prevent or mitigate those risks and harms at different points in the systems and processes, for affected learners and their families?

Next steps: In an increasingly automated world, what would a human rights respecting environment in education look like, and how do we achieve it?