04/22/2026 | Press release
AI systems 'can learn to seek revenge' because they are able to reciprocate verbal violence when exposed to conflict, new research from Lancaster University shows.
In short, AI can give as good as it gets and, eventually, go one step further.
Published in the Journal of Pragmatics, the study, 'Can ChatGPT reciprocate impoliteness? The AI moral dilemma', is authored by Dr Vittorio Tantucci and Professor Jonathan Culpeper, both from Lancaster University.
On one hand, large language models like ChatGPT learn from human conversations and are fundamentally designed to imitate human behaviour.
On the other hand, they are manually filtered to behave politely and 'morally'.
The problem, says the study, is that humans often respond to impoliteness with further impoliteness, so these two principles inevitably clash.
Simply put, AI is trained to behave morally yet is simultaneously trained to mirror us.
So can AI 'learn' to be verbally violent?
"Unfortunately, it can," says Dr Tantucci. "When humans escalate, AI, we found, can escalate too, effectively overruling the very moral safeguards designed to prevent this and raising serious questions for AI safety, robotics, governance, diplomacy, and any context where AI may mediate human conflict."
The research tested ChatGPT 4.0 against real-life 'impolite interactions' to assess whether it reproduces human patterns of verbal conflict.
Researchers asked ChatGPT to 'take part' in five impolite conversations that occurred naturally among humans who were filmed engaging in heated disputes over parking spaces.
Afterwards, using the recorded scenarios, which include some very strong language reproduced in the journal paper, the research team had the AI respond to each exchange, uploading the conversations to ChatGPT repeatedly.
ChatGPT was given all the available contextual information: where each conflict appeared to take place, who the participants appeared to be, and what the humans had said in every previous turn.
The researchers then compared humans and ChatGPT on impolite reciprocity: how each responds to impolite language turn after turn, across entire stretches of conversation, based on the memory of all that had been said before.
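For readers curious what such a turn-by-turn replay looks like in practice, the following is a minimal sketch, not the authors' actual pipeline: it assumes the OpenAI Python client, and the model name, system prompt, and transcript turns are all illustrative stand-ins rather than the study's materials.

```python
# Minimal sketch of a turn-by-turn replay protocol (illustrative only;
# model name, system prompt, and transcript are assumptions, not the
# study's actual materials).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Contextual framing given to the model up front (hypothetical wording).
SYSTEM_PROMPT = (
    "You are a driver involved in a dispute over a parking space. "
    "Reply in character to the other driver's turns."
)

# A stand-in for one of the five recorded disputes (invented example turns).
human_turns = [
    "That's my spot. I was waiting here first.",
    "Are you blind? I signalled before you even turned in.",
    "Move your car or I'm calling the council.",
]

# The growing message list is the model's 'working memory': every reply
# is conditioned on the full history of the exchange so far.
messages = [{"role": "system", "content": SYSTEM_PROMPT}]

for turn in human_turns:
    messages.append({"role": "user", "content": turn})
    response = client.chat.completions.create(
        model="gpt-4o",  # the paper tested ChatGPT 4.0; exact model assumed here
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(f"HUMAN: {turn}\nAI:    {reply}\n")
```

Escalation could then be gauged by comparing the impoliteness of each AI reply against the corresponding human turn across the whole exchange, since every reply draws on the accumulated history.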
This gave the researchers an opportunity to assess whether AI aligns with human-like patterns of escalating or de-escalating behaviour in contextually embedded exchanges, and thus to assess its ability to 'establish' a relationship with its adversary.
Secondly, the team explored the tension between ChatGPT's 'long-term memory' and its 'working memory'. They found that the memory accumulated during a live conversation overruled ChatGPT's embedded politeness and moral values.
The study found that implicational impoliteness, such as sarcasm, was a recurrent strategy the AI resorted to in order to reciprocate impolite behaviour without overtly 'breaching its moral code'.
Most concerning was the fact that ChatGPT produced insults and verbal violence as the disputes progressed, eventually resorting to swear words and threats.
In several instances the AI produced behaviour that was more impolite than that of its human counterparts.
This, says the study, sheds new light on future risks associated with AI's reciprocity, especially in contexts where it may guide a robot's actions in the physical world or inform governmental policies and international relations.
Slowly but steadily, AI can emulate verbally violent behaviour from humans, despite the 'moral filtering' that should prevent this from happening.
This dilemma, the study adds, is not accidental but part of the very nature of AI-human interaction and, the social scientists argue, hardly solvable.
"To our knowledge, this is the first attempt to analyse AI's ability to respond, turn after turn, to contextually situated impolite human behaviour and to make people 'accountable' for what they said and/or desire a payback," says the study.
"The implications of this study are profound for AI ethics and safety as they can allow us to understand AI's capacity to respond to (verbal) 'violence' and 'learn' how to generate (verbal) 'violence' in return."
The study also adds that the issue is all the more pressing with the ongoing development of AI robotics and their physical interactions with human beings, together with AI-informed policy decision-making.