Reading with Machines:
An AI reading workshop
“ChatGPT is less a magical wish-granting machine than an interpretive sparring partner”
Ian Bogost, ‘Generative Art Is Stupid: And that’s how it should be’
In April 2024, I was invited to lead a workshop with students on MA Graphic Communication Design at Central Saint Martins. The workshop emerged from a recognition that digital technologies and AI are reshaping how our students engage with academic texts. AI tools are continually promoted as more ‘efficient’ ways to read, write and communicate. Yet, while our students are clearly users of AI tools, they can be uncritical consumers of their outputs.
As a tutor, I frequently find myself reading texts that, while grammatically correct, are only performatively critical. While students claim to be ‘delving’ into ‘multifaceted’ topics and ‘underscoring’ the critical need for X, Y, Z (all words that ChatGPT uses excessively), their writing suggests that they are reading in ways that favour skimming over deep engagement.
Through a series of mapping exercises, we explored how understanding the way that machines read can transform our own reading practices. Students learned not just about AI, but how to use AI tools as partners in reading and interpreting texts.
Understanding Through Diagramming
We began with a fundamental question: how do we read? Students created visual diagrams of their reading processes, focussing on reading for the course. The diagrams revealed striking variation in process, along with layers of complexity that the students hadn't previously recognised. They demonstrated that there is no ‘one way’ to read.
From Process to Network
The introduction of concepts from Bruno Latour's actor-network theory shifted students' perspective again. They worked back over their diagrams, visualising the ‘actors’ in their reading ‘system.’ Their processes transformed from sequences of activities into complex networks of interactions between actors that included tutors, data centres, translation apps, AI tools, screenshots, pets, beverages, corporations, furniture, the weather and the climate.
‘Tools’ as Actors
A significant moment in the workshop came when students turned their actor-network maps back on the tools themselves. Asking ‘Who owns this tool? What was it trained on? Whose labour made it possible?’ reframed AI not as a neutral analytical instrument but as an actor with its own conditions of production. It was a welcome space to discuss LLMs: models built from enormous quantities of text, much of it taken without explicit consent, by a private corporation with specific commercial interests. For a few students, this seemed to shift something. Understanding that an AI's interpretation of a text is not objective analysis but a product of particular decisions, data and power relations seemed to change how they held the outputs of these ‘tools.’
Machines as Analytical Partners
A key revelation came when we compared human and machine reading processes. Understanding the differences and parallels helped students consider the potential pitfalls in using AI to replace their analytical work. It also highlighted the ways that AI tools might complement it. Mirroring the way machines process information, students designed prompts to break down texts into different forms of analysable pattern.
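To make this concrete, here is a minimal sketch of what 'reading like a machine' can mean in practice: treating a text as data and extracting simple, analysable patterns from it, such as word frequencies and same-sentence co-occurrences (a crude 'network of actors' drawn from the text itself). This is an illustration only, not the workshop's actual prompts or method; the sample text, stopword list and helper names are invented for the example.

```python
# Illustration only: treating a text as data, in the spirit of "machine
# reading". The sample text, stopword list and function names are
# invented for this sketch, not taken from the workshop.
import re
from collections import Counter
from itertools import combinations

STOPWORDS = frozenset({"the", "a", "an", "of", "and", "to", "is"})

def word_frequencies(text):
    """Lowercase, strip punctuation, and count content words."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

def cooccurrence_edges(text):
    """Count pairs of content words appearing in the same sentence —
    a crude network of relationships extracted from the text."""
    edges = Counter()
    for sentence in re.split(r"[.!?]+", text):
        words = sorted({w for w in re.findall(r"[a-z']+", sentence.lower())
                        if w not in STOPWORDS})
        edges.update(combinations(words, 2))
    return edges

sample = ("The reader annotates the text. The machine tokenises the text. "
          "Reader and machine interpret the text together.")
print(word_frequencies(sample).most_common(3))
print(cooccurrence_edges(sample).most_common(2))
```

Even a toy analysis like this surfaces patterns (which terms dominate, which terms travel together) that a skim-read can miss — which is roughly the kind of pattern-finding the students asked AI tools to do at much greater scale.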
Mapping texts
These ideas proved particularly powerful as students applied them to analyse a text. They spent an hour visualising From Writing to Prompting: AI as Zeitgeist-Machine by Boris Groys as a network of actors and relationships at different scales. Their diagrams were guided by collaboration with ChatGPT, using their own questions and carefully designed prompts.
Show and tell
A closing discussion of the diagrams revealed as many different interpretations of the brief and text as there were students. A small selection is featured below. The diagrams further emphasised that there is no one way to read or interpret a text. The different prompting strategies used by the students allowed us to ‘see’ the text from multiple analytical perspectives.
So what?
While there were many insights during the course of the workshop, there were four key ones for me:
AI as an Analytical Tool: Students found that AI tools often identified patterns and relationships within texts that they had missed in their own reading process, inspiring them to explore new ways of analysing the content.
Text as Data: Treating texts as data sources — as AI would — suspended students' anxieties about 'getting' the text. This gave them permission to explore it, and they did, from many different angles.
Theory-grounded prompting: Prompting strategies were grounded in theoretical ideas, creating both a framework for exploration and a way to 'sense-check' the AI's interpretation. This was a new approach to prompting for many students.
From Process to Method: Students moved from blindly using AI tools to adopting aspects of machine reading in their own practice. Crucially, this included questioning the tools themselves: understanding AI as an actor shaped by interests and conditions of production shifted how students held its outputs. They developed new analytical strategies that combined machine logic with their own curiosity, interpretation and critical judgement, with really interesting results.
Now what?
Rather than using AI to replace their own reading and analysis, students explored ways to enhance it, offering them new strategies for their reading practices. As students better understood how machines broke down and processed text, they were able to design prompts and use AI tools to more effectively explore their own questions.
What the workshop revealed for me is that our students don't need us to teach them how to use AI tools — they're often more adept at this than we are. What they do need are ways of critically examining the role these tools play in their learning. As an educator, my role is increasingly about helping students contextualise and critically examine their existing practices rather than introducing new ones. Academic reading shouldn't be about choosing between human and machine approaches, but rather understanding how to read with machines in ways that enhance rather than replace human comprehension.
Through diagramming and actor-network mapping, we began to make some of these hybrid practices of reading and meaning-making visible and a little more open to interrogation. Learning to read with machines becomes not just a practical skill but a type of literacy that can support deeper engagement and active experimentation with theory and ideas.
What this workshop didn't do…
A three-hour workshop can open doors but it can't walk very far through them. So, I'd like to reflect on where this one stopped.
Working primarily with ChatGPT meant that the workshop took place largely inside the ecosystem it was examining. Students were encouraged to read with the machine more critically, but we didn't spend much time reading the machine itself as a cultural and political object — asking not just how it reads, but what worldview is embedded in the way it reads, and whose interests that serves. These questions were touched on in earlier discussion but became much harder to hold onto later in the workshop, when we were simultaneously relying on the tool to help us think.
Understanding how AI shapes our reading practices is necessary but not sufficient. Ideally, students would also have space to ask: given all of this, what do I want to do? How they might use these tools differently, or redesign their practices around them, was addressed, but in a pragmatic rather than critical sense. There was also no real attention to how students might resist. That requires more time, and probably a different structure — one that returns to the questions raised here after students have had time to sit with them.
There's also a broader structural question that a single workshop can't answer. A workshop like this one can surface questions, but if students don't encounter those questions again — in the curriculum, in crits, in feedback — they tend to recede. Critical engagement with AI isn't a skill to be acquired once; it's a disposition that needs to be developed over time, across a curriculum, in relation to students' evolving practices.
What this workshop points toward, then, is less a better workshop and more a different kind of curricular thinking — one where questions about AI, reading, knowledge and power aren't siloed into a literacy session but woven into the fabric of how a course is taught. That's a much larger project, and one I'm increasingly convinced matters…
Reading with Machines was a three-hour workshop developed for students on MA Graphic Communication Design at Central Saint Martins. It was delivered four times over two days, on 25th and 26th April 2024.