Queen's University

Research team unveils advancement in neural networks

December 22, 2025

Neural networks - powerful AI models inspired by the human brain - are a fundamental element of digital sovereignty and are reshaping our economy. Their power lies in their capacity to represent complex decision-making processes, images, texts, and more. However, current approaches require neural networks to be optimized for a specific data set in a way that limits their general applicability.

This is called overfitting, and in the real world it is the source of a host of problems that stem from reduced generalizability, including unreliable algorithms for detecting threats and diagnosing diseases.
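A minimal sketch of the phenomenon, in Python (illustrative only, not from the paper): fitting polynomials to noisy samples of a simple signal shows how chasing zero training error reproduces the measurement noise rather than the underlying signal.

```python
# Illustrative overfitting demo (not from the paper): a high-degree
# polynomial "memorizes" noisy training points, while a low-degree fit
# generalizes better to held-out data.
import numpy as np

rng = np.random.default_rng(0)

# True underlying signal, observed with measurement noise
x_train = np.linspace(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.shape)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)  # noise-free targets for evaluation

for degree in (3, 14):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

# Typical outcome: the degree-14 fit drives the train MSE toward zero by
# reproducing the measurement errors, but its test MSE is far worse than
# the degree-3 fit's.
```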

Now, researchers at Queen's University have introduced a new way to leverage data to train neural networks without reducing generalizability. The team's alternative approach, termed "sufficient training," was recently published in Nature Communications.

"We realized that the problem was optimization," says co-lead author Irina Babayan, a masters student in physics. "The leading approaches all start with some version of optimization but then use one trick or another to disrupt the optimizer. We thought: Why not start from something else?"

"It sounds like a paradox," adds co-lead author Hazhir Aliahmadi, a recent physics PhD recipient. "But sufficiently trained networks outperform optimally trained ones."

The key insight behind the team's approach was to distinguish between learning and memorization.

"When you're learning something new, you might start by memorizing facts. But the goal of learning isn't to memorize isolated facts. The goal is to get a broader understanding," says Dr. Aliahmadi. "We wanted to develop a training approach that moves toward neural network representations of 'learning' at this higher level."

The team's results show that their neural networks are better trained to encapsulate the 'gist' of an underlying dataset, rather than reproducing spurious measurement errors. The team anticipates this work will be valuable in many settings that rely on making decisions or predictions, such as health care, engineering, and finance.

"There's a lot of work in the social sciences showing that diverse teams reach better outcomes," says senior author Greg van Anders, an associate professor in the Department of Physics, Engineering Physics, and Astronomy. "This work shows that this intuition about people also holds for neural networks. We find that a diverse collection of neural networks substantially outperforms an individual network, or a non-diverse collection of networks."

This method takes advantage of the properties of 'emergence', where a collective or whole is more than the sum of its parts. This strength from diversity will fuel future applications of the team's work.
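As a rough illustration of this ensemble intuition (a generic ensembling sketch, not the team's "sufficient training" method; all names and parameters here are arbitrary choices for illustration), the Python snippet below compares a single network, a "non-diverse" ensemble of identical copies, and a diverse ensemble trained from different random seeds on different data subsets:

```python
# Generic ensemble-diversity sketch (not the team's method): averaging the
# predictions of networks trained with different seeds and data subsets
# typically beats a single network or an ensemble of identical copies.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def ensemble_accuracy(seeds, subsample=1.0):
    """Average the members' predicted probabilities, then threshold at 0.5."""
    rng = np.random.default_rng(0)
    probs = np.zeros(len(X_te))
    for seed in seeds:
        n = int(subsample * len(X_tr))
        idx = (rng.choice(len(X_tr), n, replace=False)
               if subsample < 1.0 else np.arange(len(X_tr)))
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                            random_state=seed).fit(X_tr[idx], y_tr[idx])
        probs += clf.predict_proba(X_te)[:, 1]
    return np.mean((probs / len(seeds) > 0.5) == y_te)

print("single network:      ", ensemble_accuracy([0]))
print("non-diverse ensemble:", ensemble_accuracy([0] * 5))        # identical copies
print("diverse ensemble:    ", ensemble_accuracy(range(5), 0.8))  # varied seeds/data
```

In this sketch the non-diverse ensemble matches the single network exactly, since its members are identical; the varied seeds and data subsets are what give the diverse ensemble its edge.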

"Ultimately, we wanted to develop a tool that was useful for solving real-world problems," says Babayan.

"People who need to diagnose rare diseases, detect fraud, or value financial instruments don't have access to limitless data because data are prohibitively difficult or expensive to collect, or there are privacy limitations," says Dr. van Anders. "Sufficient training is tailor made for settings like that."

To arrange an interview, contact:

Media Relations Officer | Queen's University
