
QLD Branch News: August 2025

By Bruce Addison posted 05-08-2025 11:06


Knowing and knowledge in the age of AI: Some counter-cultural thinking

I used to float, now I just fall down
I used to know but I’m not sure now
What was I made for
What was I made for?

Billie Eilish (2023), What Was I Made For?


Recently, educators globally have been reflecting on artificial intelligence (AI) and its significance for our young people. Realistically, the thinking rests along a continuum ranging from a sense of emancipatory possibility through to an existential threat. Who knows where it will all land? There are so many claims and counterclaims, positive predictions, exciting possibilities, soothsaying and catastrophising that it is often difficult to see a through line. For this reason, I was reticent to comment on the spectre of AI. It does, however, have significant leadership implications for schools, and it cannot be ignored.

I am a teacher of the humanities and approach the technology through this prism. Others, of course, look at it through different lenses that provide different answers and different questions. No matter the lens or lenses used, AI has generated, and will continue to generate, significant disequilibrium; no one really knows where the new equilibrium will rest. My feeling is that at some stage it will result in significant disruption, both predictable and unpredictable. Eilish’s question, ‘What was I made for?’, is a lament that can certainly be translated to our era of encroaching AI. I will return to Eilish’s question later with a positive response!

The market is working as markets work. As we all know, markets often fail spectacularly, with many unintended consequences. Supernormal profitability is what often occurs between market penetration and market failure — the allure can be all too enticing. The big transnational corporates have all foisted their version of AI on us through their various products and platforms without giving us the opportunity to opt in or out. It just arrived. Who knows what the outcomes will be? Interestingly, The Economist recently published an article titled Why is AI so slow to spread? Economics can explain. The premise of the article centred, in part, on the role human actors play in slowing technological uptake, especially when threatened. This is significant, given the spectre of job losses across industries and professions so often ascribed to the full-blown adoption of AI. Unlike older generations, our young people generally do not have the experience or wisdom to be naturally cautious when using new technology. The availability of the technology, and the temptation to incorporate it into their lives, create many challenges as well as many exciting opportunities. It will be interesting to see how the pace of uptake compares with the predictions emanating from the tech industry.

As educators, faced with this reality and the spectre of significant disequilibrium, we must ask ourselves: should we be reactive or slow looking? Decision-making in the context of AI must be both. There must be an element of fast, reactive decision-making, given the onset and proliferation of the technology. But if we miss or reject the richness of slow looking and considered decision-making, we will be doing ourselves and our students a disservice. Holding one’s nerve is just as important as the fast, reactive decision, given the pace of the change. Teaching the ethical and academic integrity issues associated with the use of AI takes time, and it cannot be rushed. The mantra should be ethical and academic integrity considerations first, and embedded and considered usage second. A potential first-mover advantage might be enticing but could also be a fizzer.

Also of great importance are the absolutes associated with child protection and cyber security. Quick or knee-jerk responses could have terrible consequences if a piece of software were not examined through the appropriate child protection and cyber integrity lenses. There is a role for government intervention in the market to safeguard our children. Sadly, there is always a lag, especially when markets have behaved predictably in the name of market penetration and profit maximisation. The behaviour of the tech monoliths operating across national boundaries remains questionable, and legislative responses across those boundaries will be sketchy and just-in-time at best. If great care is not taken, the consequences could be catastrophic. It is important to remember the adage that all that glitters is not gold.

When faced with technology that seemingly automates tasks — tasks that once might have been regarded as ‘higher order’ thinking skills — it is important to reflect carefully on how such ‘artificial intelligence’ is embedded in our classroom practice. Practice must pivot, and perhaps pivot dramatically, if this technology is to be harnessed in the interests of our students and for the betterment of society, if not civilisation.

It is a truism that the automation of skills once considered the realm of the deeply human pushes us to think differently about some familiar ideas. AI can now produce very well-constructed responses, especially when generated through highly targeted prompt engineering. Synthesis and evaluation were once considered the output of an acute and well-educated mind; these so-called higher-order skills can now be performed by AI. Some older taxonomies placed knowledge and understanding as baseline, lower-order skills. Certainly, the history of Queensland’s syllabus development is a testament to this. To my mind, AI is pushing the importance of knowledge and understanding back onto centre stage, though not everyone agrees. Knowing stuff, and understanding it deeply, thoroughly and thoughtfully, is what just might save us from foreseen and unforeseen chaos. We must treasure our ability to question and give our young people the toolkits to ask informed questions carefully and creatively. It is what we choose to ‘know’ that is the issue. Well-honed knowledge, the ability to apply it, and the ability to ask well-honed questions based on it will be essential skillsets for people to flourish in the age of AI. Neuroscientists have told us for a long time that remembering stuff is important for our cognitive health. This message has not changed. Skill-infused visible thinking, based on hard-core knowledge and understanding, must be our gift to our students.

When focussing on all things educationally nourishing, my safe space is Parker Palmer’s (1998) brilliant classic The Courage to Teach: Exploring the Inner Landscape of a Teacher’s Life. It is an ‘oldie but a goldie’. In 2025, Palmer’s timely observations still provide both balm and a profoundly relevant thought compass. He notes:

Knowing is how we make community with the unavailable other, with the realities that would elude us without the connective tissue of knowledge. Knowing is a human way to seek relationship and, in the process, to have encounters and exchanges that will inevitably alter us. At its deepest reaches, knowing is always communal (p. 55).

Knowing and knowledge are communal binding agents. They strengthen, bond and unify. They are essential components of our common humanity. The opposite is also true: misinformation absorbed as ‘knowledge’ can cause much damage, particularly when it is founded on nonsense and designed to manipulate. Discernment is the key, and discernment relies, in large part, on the thoughtful reaffirmation of knowledge and the skills associated with its skilful deconstruction. As outsourcing much of our synthesis and evaluation to AI through skilful prompt engineering, whether knowingly or carelessly, becomes more of our lived reality, great care must be taken to ensure that we know enough to recognise when things are plainly false, marginal or just wrong. Less-than-judicious outsourcing could lead to great harm. As Hannah Arendt (1973) recognised:

Before mass leaders seize the power to fit reality to their lies, their propaganda is marked by its extreme contempt for facts as such, for in their opinion fact depends entirely on the power of a man [person] who can fabricate it (p. 50).

Arendt was looking back into the not-too-distant past when she wrote this. Through her careful deconstruction of human behaviour, she was also throwing a helpful floodlight onto what would then have been viewed as ‘Neverland’.

Recently, D. Graham Burnett published a very timely and reassuring article in The New Yorker’s Weekend Essay (April 26, 2025) entitled Will The Humanities Survive Artificial Intelligence? It is worth quoting a section in full:

When discussing AI, one student noted: “I guess I just felt more and more hopeless. I can’t figure out what I am supposed to do with my life if these things can do anything I can do faster and with way more detail and knowledge.” He said he felt crushed.

Some heads nodded. But not all. Julia, a senior in the history department, jumped in. “Yeah, I know what you mean,” she began. “I had the same reaction – at first. But I kept thinking about what we read on Kant’s idea of the sublime, how it comes in two parts: first, you’re dwarfed by something vast and incomprehensible, and then you realize your mind can grasp that vastness. That your consciousness, your inner life, is finite – and that makes you greater than what overwhelms you.”

She paused. “The A.I. is huge. A tsunami. But it’s not me. It can’t touch my me-ness. It doesn’t know what it is to be human, to be me.”

The room fell quiet. Her point hung in the air.

And it hangs still, for me. Because this is the right answer. This is the astonishing dialectical power of the moment.

We have, in a real sense, reached a kind of “singularity” — but not the long-anticipated awakening of machine consciousness. Rather, what we’re entering is a new consciousness of ourselves. This is the pivot where we turn from anxiety and despair to an exhilarating sense of promise. These systems have the power to return us to ourselves in new ways.

Do they herald the end of “the humanities”? In one sense, absolutely. My colleagues fret about our inability to detect (reliably) whether a student has really written a paper. But flip around this faculty-lounge catastrophe and it’s something of a gift.

You can no longer make students do the reading and the writing. So what’s left? Only this: give them work they want to do. And help them want to do it. What, again, is education? The non-coercive rearranging of desire.

Within five years, it will make little sense for scholars of history to keep producing monographs in the traditional mold — nobody will read them, and systems such as these will be able to generate them endlessly at the push of a button.


I agree wholeheartedly with Julia. Our me-ness is almost as unique as our DNA. To return to the question Eilish posed at the beginning of this short piece: ‘What was I made for?’ — perhaps the answer is our unique me-ness. Our me-ness is just so important. Civilisation as we know it relies on how we coalesce peacefully together in a way that encompasses and respects our collective ‘me-nesses’. It springs from our nurturing, our experiences, our education and our willingness to engage or disengage. As educators, we have a significant role to play. The coming disequilibrium will give us scope for courageous, counter-cultural thought. The assessment (artefacts of learning) space is a good place to start. AI and its output must form part of our approaches to contemporary assessment practice. There will be a period of significant disequilibrium in this space too, whilst the whir of competing forces creates more static than we are used to. The reality is that the space where ‘teachers and students meet’, in the words of the Dutch educator Max van Manen (1991), will become much more complex. We, as educators, must appreciate this and be prepared to jettison, reconceptualise, think counter-culturally and take calculated risks in order to stay relevant and to serve our students for their futures — futures that are both exciting and extremely complex.

Let’s end with some verse to bookend the Billie Eilish lyrics quoted at the start of this piece:


I still float — because I choose to rise.
I still know — even when knowing shifts.
I was made to imagine more than machines can reach.


References

Arendt, H. (1973). The Origins of Totalitarianism. Harcourt Brace Jovanovich.

Burnett, D. G. (2025). Will The Humanities Survive Artificial Intelligence? The New Yorker, April 26.

Palmer, P. (1998). The Courage to Teach: Exploring the Inner Landscape of a Teacher’s Life. John Wiley & Sons.

The Economist (2025). Why is AI so slow to spread? Economics can explain. July 17. https://www.economist.com/finance-and-economics/2025/07/17/why-is-ai-so-slow-to-spread-economics-can-explain



Bruce Addison
ACEL QLD Branch President
