Press release | 04/01/2026
Research News / April 01, 2026
The development of AI models in joint projects yields improved results because multiple companies contribute their data. The situation becomes critical, however, if a partner withdraws and requests the return of its data. With federated unlearning, Fraunhofer researchers and Fujitsu Research have developed a method for precisely removing data from decentralized AI models.
If multiple collaborating companies feed data into an AI, the learning model will contain an especially wide variety of data. This improves the quality and reliability of the results it generates. Companies rely on federated, decentralized training approaches to retain data sovereignty. In this approach, the data is not sent to a central server. Instead, it is fed into a local copy of the AI model. The partners then exchange only abstract parameters rather than the actual data. This enables each partner to provide data to the AI without having to disclose it to the other companies.
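The parameter-exchange idea can be made concrete with a minimal sketch of federated averaging (FedAvg-style) on a toy linear-regression task. The setup and function names are illustrative assumptions for this example, not the project's actual implementation: each partner trains a local copy on its private data, and only the resulting parameters are shared and averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One partner's training round on its private data (simple linear regression)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
        w -= lr * grad
    return w

def federated_round(global_w, partner_data):
    """Each partner trains locally; only parameters leave the premises and are averaged."""
    updates = [local_update(global_w, X, y) for X, y in partner_data]
    return np.mean(updates, axis=0)  # average parameters, never raw data

# Three partners with private datasets drawn from the same underlying process
true_w = np.array([2.0, -1.0])
partners = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    partners.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, partners)

print(np.round(w, 2))  # close to the true weights [2.0, -1.0]
```

The key property is visible in `federated_round`: the raw arrays `X` and `y` never leave the loop body that represents a partner's own infrastructure; only the weight vectors are exchanged.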
But there is still a problem: when a company leaves the collaborative project, its data and parameters remain deeply embedded in the AI model. Until now, it has been nearly impossible to extract this data from the "black box" of the AI without compromising the quality of its results, such as predictions or simulations.
In collaboration with industrial partner Fujitsu Research, the Fraunhofer Institute for Software and Systems Engineering ISST in Dortmund has developed a solution: unlearning for decentralized, federated AI collaborative projects. The method goes back through the history of the step-by-step AI training process to the point where the relevant partner introduced its data. Training then resumes from this point, only without the data of the partner that has withdrawn. This ensures a clean sweep of the AI, removing all information and data from the company leaving the collaboration. Because the stored parameters serve as a starting point, retraining is also more efficient than the original training run.
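The rollback idea can be sketched as follows, again on a toy linear-regression setup: the global parameters are checkpointed each round, and unlearning restores the checkpoint taken just before the departing partner joined, then resumes federated training without that partner's data. This is an illustrative assumption-laden sketch, not the Fraunhofer/Fujitsu code.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One partner's local training step (simple linear regression)."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def train(global_w, partners, rounds, history):
    """Federated rounds; snapshot the global parameters before each round."""
    w = global_w
    for _ in range(rounds):
        history.append(w.copy())  # checkpoint, enables later rollback
        w = np.mean([local_update(w, X, y) for X, y in partners], axis=0)
    return w

true_w = np.array([2.0, -1.0])
def make_partner():
    X = rng.normal(size=(50, 2))
    return X, X @ true_w + rng.normal(scale=0.1, size=50)

a, b = make_partner(), make_partner()
history = []
w = train(np.zeros(2), [a, b], rounds=10, history=history)  # partners A and B train
join_round = len(history)                                   # partner C joins here
c = make_partner()
w = train(w, [a, b, c], rounds=10, history=history)

# Unlearning: roll back to the checkpoint taken before C joined,
# then resume training with only the remaining partners' data.
w_unlearned = train(history[join_round], [a, b], rounds=10, history=[])
```

Because retraining starts from an already well-trained checkpoint rather than from zero, far fewer rounds are needed to restore performance than in the original training, which mirrors the efficiency gain described above.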
Fraunhofer ISST research scientist Florian Zimmer explains: "The learning model isn't rebuilt from scratch with the remaining partners' data. Relatively little effort is thus needed to restore the performance and integrity of the AI. Depending on the application, a certain loss in the quality of the results is unavoidable due to the removal of part of the data, but this is compensated for by further AI learning as time goes on."
A possible application for AI based on federated learning and unlearning is the use of machines in the manufacturing industry. If multiple companies use the same model of a milling machine in different ways, they also introduce different data to train the AI. For example, one partner may provide data on failures of the machine's motor, and another on breakage of the milling head.
In practical operation, the AI can thus predict when the motor threatens to overheat or when a milling head is approaching its load limit. This benefits all participating companies. Janosch Haber from project partner Fujitsu Research says: "With previous training approaches, the departure of a partner in a case like this would mean that the model would have to be completely retrained. Until that rebuild was finished, the quality of the AI simulation would be severely impaired, no matter how important the departing partner's data was. Unlearning largely prevents this loss of quality, quickly and efficiently restoring a high-quality simulation model. In general, the departure of a partner has hardly any negative impact."
The federated unlearning method for decentralized AI models developed by Fraunhofer ISST and Fujitsu Research allows companies to engage in collaborative projects without reservations. They can draw on the enormous potential of AI to very efficiently develop high-quality solutions. At the same time, they can rest assured of their ability to withdraw from the collaborative project without having to leave behind their own proprietary data. This also benefits companies that are required to handle data in compliance with regulations such as the General Data Protection Regulation (GDPR).
"Our approach could noticeably increase the use of AI in corporate networks and partnerships. This will also benefit the industry as a whole and technological sovereignty in Germany and in Europe," states Zimmer with conviction.
Experts from Fraunhofer ISST and Fujitsu Research will demonstrate their federated unlearning method at the Hannover Messe (April 20-24, 2026, Hall 11, Booth D33).