Panel I.05 — Navigating Techno-Futures in Education: Artificial Intelligence and/for Social Justice

Convenors Leonardo Piromalli (University of Rome “Sapienza”, Italy); Danilo Taglietti (University of Naples “Federico II”, Italy)

Keywords Artificial Intelligence, digitalization, Socio-Technical Dispositif, EdTech

 

We live in adventurous times. The reality we experience is being reshaped, perhaps even before and beyond any kind of government policy. Artificial Intelligence (AI) has entered our daily practices only very recently: ChatGPT, Google Bard, Midjourney, and DALL-E have staged new ways of doing almost everything, including education. While research on AI has a longer history, its sudden social adoption has paved the way for a general consensus about the disruption it represents. From AI-Era Governance (Dunleavy & Margetts, 2023) to AI-animated robots, by way of AI-assisted learning systems, we have witnessed a flourishing of miraculous predictions about the technological solutions that AI will bring to every troublesome issue of contemporaneity. An ongoing “presentification” of perpetually looming techno-futures is thus produced, raising significant questions for the social sciences (Mackenzie, 2015).

Technically speaking, what has brought AI to the forefront are Large Language Models: machine learning algorithms that enable a computer to observe data, build a model, and use it both as a hypothesis about the world and as problem-solving software (Russell & Norvig, 2021). Widening our gaze, we define AI as “any computational system which can sense its environment, think, learn and react in response (and cope with surprises) to such data-sensing [including] both robots and purely digital systems that employ [machine] learning methods” (Elliot, 2019, p. 3). We therefore consider AI a socio-technical dispositif (Deleuze, 1991): an assemblage of disparate elements that makes things seeable, words sayable, and subjects’ conduct (mostly) governable.

For example, AI-processed learning analytics make it possible to predict outcomes, identify “at-risk” students and schools, and ultimately fabricate adaptive learning management systems that “personalise” learning around individual needs (Williamson et al., 2021). This could improve the quality of education in overcrowded classrooms and reduce disparities in academic outcomes. On the other hand, some scholars point to the emerging risk that teachers and students become part of a new “cybertariat” (Burrell & Fourcade, 2023), unwittingly or unwillingly contributing data to AI companies.

All that glitters is not gold. Generally speaking, the debate over AI in education remains open. Concerns have been raised about the powerful techno-economic machinery at play in its production. While EdTech companies enable faster progress in developing new technological solutions, critical scholars underline the effects of a reshaped policy network that encompasses, and blurs the borders among, entrepreneurs, policy-makers, and philanthropies (Ball et al., 2017). Similarly, the epistemic dimension of AI deployment raises discussions about the possibility of perpetuating forms of “extractive violence” and re-producing colonial relationships (McQuillan, 2023). Finally, machine learning-based educational assessment is seen as capable of deeply affecting educational subjectivities, either freeing teachers and students from routine tasks or limiting their autonomy and the development of their critical thinking (Zeide, 2020).

“Staying with [this] trouble” (Haraway, 2016), we invite theoretical, empirical, and reflexive contributions from scholars across all interested disciplines, as well as from practitioners and experts, keen to explore the forms of entanglement between education and AI and to problematise their multiple relations with social justice.

 

