04/22/2026 | News release | Archived content
Commentary: The agency many of us fear losing in the future is often already traded incrementally in the systems we accept today.
We recently wrapped up teaching a new course on responsible AI leadership, governance, and the ethics of algorithmic decision-making.
We expected debates about distant futures, speculating on the risks and threats we might one day face. What surprised us was how quickly the conversation about the future turned out to describe the present.
By the end of the course, our students weren't asking what AI might do to New Zealand in 20 years. They were debating the ethics and risks of what it is already doing: incrementally, often without our awareness of the risks, and without our consent.
In our final session, we used a future-facing narrative we created for our research. Set in 2040, the story's protagonist, Emma, is a young public servant who starts her day by stepping into an AI-steered virtual holo-deck to check in on the world.
Some students found the scene disturbing: "The early morning routine from home is scary. We should prevent this from happening."
We asked the class: "Isn't this already happening?"
Many of us start our day by reaching for a phone. Is a check-in via a virtual holo-deck any worse than doomscrolling through algorithmically curated content on a small screen? We know its impact on mental health. Yet we let it happen today while fearing an imagined future.
In an assignment, our students applied AI ethics and governance frameworks to current New Zealand case studies from health, education, retail, and the creative sector. What they found suggests the agency many of us fear losing in the future is often already traded incrementally in the systems we accept today.
AI in the shadows
We often talk about AI as a distant overlord. Our students analysed it as a quiet, bureaucratic assistant, one that doesn't announce itself. Consider the AI scribe currently being piloted in New Zealand emergency departments, which promises to liberate doctors from paperwork and documentation and free them up to spend more time in face-to-face care with patients.
But one student, using ISO/IEC 42001, the AI management system standard that addresses the trade-off between convenience and data privacy and accountability, as an analytical lens, highlighted the ethical tension beneath the convenience. We surrender sensitive health information in exchange for workflow efficiency, while questions about governance, sovereignty, and the secondary use of data remain unresolved.
Another student examined the New Zealand Qualifications Authority's consideration of remote invigilation for NCEA exams. The student's argument was that in trying to improve access, we may normalise software surveillance inside private homes. While eye-tracking and behavioural monitoring technologies can help protect exam integrity, they can also shift the social baseline towards constant digital observation.
The erosion of creative agency
Several students explored current debates in arts and media, such as the New Zealand Book Awards disqualifying entries with AI-generated covers.
One student, writing about the music industry, described a "dual alignment failure". We have tools that are technically brilliant at generating music, trained on the work of artists who were never asked for (or gave) consent. When a person studies 100 songs, they develop a voice; when a model is trained on millions of voices, it can produce outputs that compete with the very people whose work shaped it.
If cultural production becomes detached from attribution, consent, and fair value-sharing, we risk weakening the ecosystems that make creative work possible.
Safety at the expense of justice
The most realistic version of the scary future, students argued, may be the one justified in the name of public safety.
A critical review of the Foodstuffs North Island facial recognition trial revealed the sharp edge of the wedge: the technology scans every shopper to catch a few bad actors. Although the Privacy Commissioner found the trial complied with the Privacy Act, one student pointed out that compliance is not the same as legitimacy.
Another student argued that New Zealand's voluntary and light-touch government AI stance suggests we are prioritising flexibility over accountability. While the EU rolls out the AI Act with statutory teeth, New Zealand relies on high-trust models, but voluntary principles without enforcement are good intentions waiting to be exploited.
Reclaiming the present
The reaction to Emma's 2040 morning routine described above may be a fear of submission, of a human life directed by a machine.
But that submission is not a future event. It is found in the artist who cannot compete with an AI-based music generator; the patient who is recorded by an invisible scribe; the shopper whose face is mapped against a watchlist; and the teenager who starts each day inside an algorithmically ordered feed.
If we are uncomfortable with the thought of AI gaining agency in 2040, we need to spend more time on our buying choices and decisions, how consent is requested and respected, and who is accountable and for what.
Ethical AI leadership means interrogating the system. Whose values are embedded here? Who benefits, and who is harmed? And, most importantly, are we choosing this future, or are we just clicking accept?
The holo-deck is already here. The question is, can we still find the 'esc' key?
Shahper Richter is a senior lecturer in marketing at the University of Auckland Business School.
Alexander Richter is Professor of Information Systems in the Wellington School of Business and Government at Te Herenga Waka, Victoria University of Wellington.
Ishara Sudeeptha is a work integrated learning coordinator in the Wellington School of Business and Government at Te Herenga Waka, Victoria University of Wellington.
This article reflects the opinion of the author and not necessarily the views of Waipapa Taumata Rau University of Auckland.
This article was first published on Newsroom, 22 April 2026.