12/16/2025 | Press release
Researchers at TU Wien have discovered an unexpected connection between two very different areas of artificial intelligence: Large Language Models (LLMs) can help solve logical problems without actually "understanding" them.
Florentina Voboril (photo: © Klaus Ranger)
Anyone who has spent hours struggling with a Sudoku puzzle knows the feeling: you're stuck until suddenly a small hint sets the entire solution in motion. Large Language Models (LLMs) such as ChatGPT can provide exactly this kind of hint, only for problems that are vastly more complex than any puzzle book.
Researchers at TU Wien found that such language models can help other programs solve logical tasks faster and even find better solutions. LLMs cannot actually work through these problems themselves; they cannot execute the corresponding code. Yet they recognize patterns that even experts had previously overlooked. This means that language models may become extremely useful in a domain where they were long considered to be of little help.
The research, recently published in the Journal of Artificial Intelligence Research, was conducted as part of the doctoral program iCAIML, which brings together different methods from artificial intelligence and machine learning.
"To understand why our discovery is so surprising, it helps to take a look at two completely different worlds of artificial intelligence," says Florentina Voboril from the Institute of Logic and Computation at TU Wien, who is currently working on her dissertation in the team of Prof. Stefan Szeider.
There are many logical tasks in which one must choose the best solution from a huge number of possibilities, all according to strict logical rules: deciding which number belongs in a Sudoku cell, for example. There are computational tools that can solve such tasks extremely well today. The problem is formulated in a formal mathematical language; the computer then applies logical rules to reach a result in a transparent, rule-based way. This is known as symbolic AI. Shift schedules for industry, for example, are often generated in this way.
Large Language Models (LLMs) such as ChatGPT or Copilot, however, work in a completely different way. They are not based on fixed, pre-programmed rules; instead, their behavior emerges from the vast amount of data they were trained on. This allows them to generate language, but one cannot subsequently explain precisely why they produced a particular answer. This is known as sub-symbolic AI. This kind of AI has seen a dramatic rise in recent years, but it is generally considered unsuitable for strictly logical tasks.
"We examined how symbolic and sub-symbolic AI can be combined to make use of the strengths of both worlds," says Florentina Voboril. In symbolic AI, one often faces an overwhelming number of options from which to choose the best one: many ways to fill in a Sudoku, many possible chess moves, many ways to create shift schedules.
"Often you cannot simply try them all. That's why it's extremely helpful to have certain rules that eliminate parts of the search space from the start," Voboril explains. "Imagine we're trying to find the shortest path out of a maze. If I already know that certain parts of the maze are not connected to any exit, I can block off those areas and focus on the rest. That way, you find a better solution faster." Symbolic AI works very similarly: additional rules, known as streamliners, can sometimes help reach a result much more quickly.
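The effect of such an extra rule can be shown on a tiny example. The sketch below (illustrative Python, not the researchers' code) brute-forces 3×3 Latin squares, grids in which every row and every column contains each number exactly once. Adding the extra rule "the first row must be in ascending order" shrinks the number of candidates the search accepts from 12 to 2, while still leaving solutions to find:

```python
from itertools import permutations, product

def count_latin_squares(n, streamliner=None):
    # Brute-force search: every row is a permutation of 0..n-1;
    # a candidate is a Latin square if every column is also a permutation.
    count = 0
    rows = list(permutations(range(n)))
    for grid in product(rows, repeat=n):
        if streamliner and not streamliner(grid):
            continue  # the extra rule rejects this candidate immediately
        if all(len(set(col)) == n for col in zip(*grid)):
            count += 1
    return count

# Without a streamliner: the full solution space.
print(count_latin_squares(3))  # 12 Latin squares of order 3

# Streamliner: force the first row into ascending order.
first_row_sorted = lambda g: g[0] == tuple(range(3))
print(count_latin_squares(3, first_row_sorted))  # 2
```

This particular rule happens to be solution-preserving (every Latin square can be reordered to satisfy it); streamliners in general are bolder guesses that may sacrifice some solutions in exchange for a much smaller search space.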
At TU Wien, LLMs have now been used to identify such streamliners. The code normally processed by symbolic AI is fed into an LLM. The LLM does not execute this code; it is not built for that. One might say it cannot truly "understand" the problem. But it can propose additional rules that can be inserted into the code so that the specialized symbolic AI runs faster or produces better results.
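A rough sketch of this loop, with the LLM call and solver check stubbed out (the function names and candidate rules are illustrative assumptions, not the researchers' actual API):

```python
# Hypothetical sketch of the pipeline: feed the constraint model's source
# to an LLM, collect candidate streamliners, and keep only those that a
# solver run confirms are still satisfiable. `ask_llm` and
# `still_satisfiable` are stand-ins for a real LLM API and solver call.

def ask_llm(model_source: str) -> list:
    # Stub: pretend the LLM proposed two extra constraints for the model.
    return ["first row is sorted ascending",  # plausible streamliner
            "all cells equal 0"]              # bad proposal

def still_satisfiable(model_source: str, extra_rule: str) -> bool:
    # Stub: a real implementation would add the rule to the model and
    # run the symbolic solver on small training instances.
    return extra_rule != "all cells equal 0"

model = "<constraint model source>"
accepted = [rule for rule in ask_llm(model)
            if still_satisfiable(model, rule)]
print(accepted)  # only verified candidates are kept for the full search
```

The key point of the design is that the LLM is never trusted directly: every proposed rule is checked empirically by the symbolic solver before it is used, so a wrong proposal costs only a little compute rather than correctness.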
"In this way, we were able to solve certain problems significantly faster than symbolic AI alone had managed so far. For one of these problems, we even set new world records, finding solutions that are better than any previously known ones," says Florentina Voboril.
This opens up an entirely new and surprising area for AI research: two branches of AI that have traditionally been considered separate become stronger and more powerful when used together. In the future, combinations of symbolic and sub-symbolic AI could not only crack research puzzles but also accelerate complex decisions in everyday life, from logistics and shift planning to healthcare.
F. Voboril, V. Ramaswamy, S. Szeider. Generating Streamlining Constraints with Large Language Models. Journal of Artificial Intelligence Research 84 (2025).
F. Voboril, V. Ramaswamy, S. Szeider. Balancing Latin Rectangles with LLM-Generated Streamliners. In: 31st International Conference on Principles and Practice of Constraint Programming (CP 2025), Glasgow, Scotland.
More about the doctoral program iCAIML: https://caiml.org/icaiml/
Dipl.-Ing. Florentina Voboril
Institute for Logic and Computation / iCAIML
TU Wien
florentina.voboril@tuwien.ac.at