by Harshana Rambukwella
(formerly Open University of Sri Lanka)
The ‘digital divide’ and AIs
The term ‘digital divide’ emerged in the 1990s in the US to describe regional and class-based inequities in access to information and communication technology (ICT) resources and later came to signify such inequities on a global scale. These inequities range from access to devices, access to the internet and the speed and quality of that access, to the uneven spread of what is called ‘digital literacy’ – or the ability to use ICTs. During the COVID-19 pandemic, when schools and universities were physically closed, we witnessed the stark realities of this divide within Sri Lanka and how it amplified pre-existing inequities in the country’s education system. Students from disadvantaged social backgrounds – rural students, students from poor urban backgrounds, students in plantation communities – struggled with the shift to online learning. Policymakers and politicians attempted to put a positive spin on this tragic reality. However, with students climbing trees and scrambling up hills in search of better mobile internet reception, and teachers confounded and confused by the technology, the painful inequities of our education system were evident. The Sri Lankan experience ran counter to the hype about how technology can ‘disrupt’ conventional educational structures and potentially ‘democratize’ pedagogical practice. Technology’s ability to ‘disrupt’ or level the educational playing field remains heavily overdetermined by social, class and regional inequalities, and claims about pedagogical innovation through technology therefore need to be treated with caution.
In this piece I want to reflect on the implications of Generative Artificial Intelligence, sometimes referred to as chatbots, for higher education in Sri Lanka. Given the multidimensional crisis confronting higher education in the country – dwindling state resources for state universities, an inability to attract and retain qualified staff and dysfunctional governance, to name just a few – something like AI has received relatively little attention. However, AI is fast becoming a reality in education systems across the world, and Sri Lanka is no exception. One of the ironies of AI’s rapid infiltration of education systems is that it can seemingly offer a progressive ‘disruption’ of those systems but, in reality, have the opposite effect. The free access offered by AI tools (at least in their current iterations), the broadening of internet access and their ease of use – requiring little or no specialized IT knowledge – seemingly offer ways of bridging the ‘digital divide’ I referred to above. But, as I attempt to illustrate below, AI might further amplify and exacerbate pre-existing inequities in education and very likely create new challenges that further marginalize already disadvantaged learners.
AIs and equity in education
One area in which AI infiltration can have multiple equity-related implications in the Sri Lankan higher education context is the use of English as a medium of instruction. Many private higher education institutions in the country already use English as the medium of instruction across all disciplines because of the social capital associated with the language and the flawed and reductive correlation drawn between English and ‘graduate employability’. Normalizing this flawed discourse further, state universities are now also attempting to institute English-medium instruction – even in social sciences and humanities disciplines, which, unlike the sciences, have traditionally used local languages. The standard of English among faculty and students in all disciplines, but more so in the humanities and social sciences, is grossly inadequate for such a shift in the medium of instruction and will therefore inevitably result in what can only be described as a crisis. Within such a scenario one can see the immediate attraction of Generative AI, because it offers both faculty and students an easily accessible and easy-to-use tool that can generate curriculum content and a wide range of student assignments. However, the use of AI also requires a command of English. The quality of the AI output is heavily reliant on the quality of the prompt provided, and therefore faculty and students who are more proficient in English will be at an advantage. They will also be at an advantage in reading and vetting the content produced by the AI, whereas someone with low proficiency in English will be forced to use that content as is.
In addition to this explicit link between English, AI and equity, there are a number of other issues that can have significant implications for higher education – if AI adoption becomes a widespread practice. These AIs are built on what are called Large Language Models, or LLMs: statistical models trained on massive collections of text (what linguists would call corpora), whose patterns the AIs draw upon to produce content. It might be helpful here to illustrate how AIs produce seemingly ‘new’ or ‘original’ content. When an AI is given a word, a series of words or a set of sentences as a prompt, it draws upon the patterns learned from this vast body of text to calculate the probability of which words or sentences are likely to follow the words and sentences the user has provided.
This probability, in turn, is based on the content of the text the AI was trained on. When this process is scaled up millions or billions of times, the content the AI produces looks ‘new’ even though it is, in fact, mimicking patterns it has identified in its training data. What this mimicry means, though, is that any biases, prejudices or omissions present in that data are also reproduced and possibly amplified. Given that a vast amount of the content on the internet (the main source of training data for AIs) is in English and is ideologically Anglo-, US- or Euro-centric, the AI output will reproduce these biases. Since multilingual content on the internet – particularly in globally marginal languages like Sinhala or Tamil (though Tamil has a much larger presence) – is extremely limited, speakers of such languages are particularly disadvantaged. This also means that AI can potentially skew academic production – both what faculty provide as curriculum and what students produce as assignments – in ways that silence or erase local specificities.
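To make this mechanism concrete, the short Python sketch below is a deliberately toy illustration: it counts which words follow which in a tiny, invented corpus and turns those counts into next-word probabilities. Real LLMs use neural networks trained on vastly larger corpora, so this is an analogy rather than a description of how any actual chatbot is implemented, and the corpus, function name and printed output are purely hypothetical.

    # Toy next-word predictor: counts word pairs in a tiny, invented corpus
    # and converts the counts into probabilities for the following word.
    from collections import Counter, defaultdict

    corpus = (
        "higher education needs reform . "
        "higher education needs funding . "
        "higher education faces a crisis ."
    ).split()

    # Record how often each word follows each preceding word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word_probabilities(prev):
        """Return each candidate next word with its estimated probability."""
        counts = following[prev]
        total = sum(counts.values())
        return {word: count / total for word, count in counts.items()} if total else {}

    print(next_word_probabilities("needs"))
    # prints {'reform': 0.5, 'funding': 0.5}

Even in this miniature form, the point argued above is visible: the model can only recombine what its ‘training’ text already contains, so whatever biases, gaps or omissions are in that text are carried straight into its output.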
One can argue that this is already the case, because even in the social sciences and the humanities knowledge production remains heavily Anglo-Euro-centric, and academics and institutions in the global south are forced to rely on such ‘first world’ material. However, what most faculty do is adapt such material to local conditions and realities. For instance, when a Sri Lankan sociologist draws on the work of Pierre Bourdieu, there is an almost inevitable compulsion to ‘read’ Bourdieu in a way that makes sense for the Sri Lankan context – or at least one hopes that is the case! However, if such content is generated by AI, the very experiential world it draws on – not just the theory – will have little or nothing to do with the Sri Lankan context. One might argue that this is an extreme example, but I contend it can be a very real scenario in the neo-liberal higher education environment that is increasingly becoming the norm in the country. In this scenario the teacher and student are in a transactional relationship in which the teacher is a ‘service provider’ and the student a ‘customer’, and both are concerned with the ‘end product’ – a qualification – rather than the process (educating a mind). AI-generated content in this context – content that has the trappings of ‘proper’ academic output, such as formatting, the right generic sentences and a structure that resembles an academic paper, but little original or context-specific material – will be tremendously attractive to both teacher and student. One can also imagine scenarios where students write a prompt in Sinhala or Tamil, translate it into English (using online translation tools), feed it into an AI and then retranslate the AI output back into the local language. Leaving aside the ethical dilemma of whose product this is, one can imagine how derivative such an output might be. Nor do I believe this is a far-fetched situation: I know anecdotally that such things are already happening in Sri Lankan universities.
AI, the market and knowledge inequities
By way of conclusion, I want to reflect on how AI is deeply implicated in global capital, the commodification of education and the reproduction of inequities in global knowledge production. While AI has appeared in the guise of a disruptive technology, it is no accident that some of the world’s most powerful tech companies are funding and driving AI innovation. The potential of AI to replace a wide range of white-collar jobs – even in the creative industries, where automation was previously unimaginable – is now becoming clear. In education, as I have already mentioned, AI can easily accelerate and intensify the transactional nature of commodified education models – where students essentially ‘buy’ a qualification. Given that Sri Lanka is also mulling measures like student loans as the state increasingly divests itself of education, AI provides a complementary technological transformation of the higher education space. The heavy emphasis on digitalization in the controversial National Education Policy Framework (NEPF) can also be seen as part of this larger trend. AI will also most likely (though this is not a given) integrate Sri Lanka even more tightly into an uneven and inequitable knowledge economy, in which, in a dark irony, much of the intellectual labour of building and ‘training’ these AIs will come from places like Sri Lanka, only for that employment to be lost as the AIs reach a level of ‘maturity’ and the ability to self-perpetuate. In addition to being coded, the AIs also need to be ‘trained’. A recent insidious example of this was how OpenAI, the company behind ChatGPT, used cheap Kenyan IT labour (paid less than US$2 per hour) to label data considered ‘toxic’ (racist, hateful, SGBV content, etc.) so that ChatGPT could be trained to recognize such content. Reportedly, this experience left the Kenyan IT workers severely traumatized – a situation where workers in the global south now have to contend with cleaning the first world’s digital garbage in addition to historically being the recipients of its toxic industrial waste products.
AI is being ‘sold’ to higher education today as a tool that is not a replacement for human labour but something that can enhance human interaction – for instance, by freeing faculty from routine tasks such as grading, developing rubrics and even developing curriculum content, so that they have time for a deeper and richer engagement with students. But the industrial scale at which these technologies are infiltrating higher education suggests an altogether different reality. Particularly in education markets like Sri Lanka, with weak regulation and oversight – where, for instance, highly dubious online instruction-based courses have proliferated post-pandemic – the attraction of substituting human labour with AI is obvious to profit-driven education investors. At another, more fundamental, level, AI is creating a situation where a kind of highly skewed ‘global subconscious’ – based on the flawed data that shapes these LLMs – might begin reproducing itself at scale if education institutions in the global south come to rely more and more on these technologies. For Sri Lankan higher education this might seem a distant prospect. But if the IMF-crafted austerity programme that is aggressively reshaping Sri Lankan society at the moment has taught us anything, it should be the power and reach of global capital – potentially accelerated and amplified by the industrial scale of AI.