10/02/2025 | News release | Distributed by Public on 10/02/2025 15:09
Assistant Professor of Digital and Computational Studies Nascimento studies the intersection of digital technologies and ethics, with a current focus on how artificial intelligence (AI) systems might be deployed for the common good.
Among the issues raised was the use of robots, or more accurately AI large language models, to simulate human empathy by acting as online companions that, among other things, help people with their mental health.
"Kids are already interacting with AI companions," noted Nascimento, "and one thing that some studies observed is that kids change their behavior because they learn to interact with those machines." This, he said, leads to a "very interesting question" about the impact such technologies can have on human socialization and how we interact with each other.
As people increasingly turn to AI models for help in areas where they would traditionally have spoken to another human being, Nascimento explained, it is important to consider wider questions about the concept of empathy and what it means when machines learn to display it. "Every time we outsource the … human relationship we are also outsourcing part of our humanity, and those are very intrinsic questions that our society will have to decide on."
Another consideration Nascimento discussed was the potential threat to privacy posed by AI models as they enter into dialogue with a person, whether to offer medical advice, financial help, or any other kind of counseling. Robots, he stressed, are collecting data all the time; but who owns that data? "We really want to think very carefully as a society. What are the privacy guardrails that we want to impose when these systems are deployed?"