Compression efficiency remains central to video coding research, directly impacting the delivery of high-quality video at lower bitrates - a key requirement for virtually all applications, from streaming services to real-time communications. The October 2025 meeting of the Joint Video Experts Team (JVET) marked a significant milestone in the ongoing evolution of video coding technology. The committee convened to review a diverse array of responses to the Call for Evidence, a pivotal step toward propelling video coding technology beyond the capabilities of the current Versatile Video Coding (VVC) standard. The meeting not only highlighted the technical progress made in recent years but also underscored the collaborative spirit that drives innovation in the video coding community.
The Call for Evidence (CfE) is designed to assess the potential of emerging technologies to improve compression efficiency by comparing proposals against the VVC Test Model (VTM) using subjective human evaluation, measured as a Mean Opinion Score (MOS). The CfE process is essential for identifying improvements and gauging the readiness of new tools and approaches before formal proposals are solicited. The results of the CfE were unequivocally positive, with substantial gains demonstrated across multiple submissions. This success laid the foundation for the Call for Proposals, which now invites formal proposals for evaluation in January 2027. The long-standing and proven practice of transitioning from evidence gathering to proposal evaluation reflects JVET's commitment to a methodical, data-driven approach to standardization.
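To illustrate how MOS results are typically summarized, here is a minimal sketch that averages per-clip viewer ratings and attaches a 95% confidence interval; the rating scale and the scores themselves are hypothetical, not data from the CfE.

```python
# Hypothetical sketch of Mean Opinion Score (MOS) aggregation for one
# clip: average the viewer ratings and report a 95% confidence interval.
import statistics

def mos_with_ci(scores):
    """Return (MOS, 95% CI half-width) for one clip's viewer ratings."""
    mean = statistics.mean(scores)
    # Normal approximation: 1.96 times the standard error of the mean.
    ci = 1.96 * statistics.stdev(scores) / len(scores) ** 0.5
    return mean, ci

# Ratings from 20 hypothetical viewers on an assumed 0-10 quality scale.
ratings = [7, 8, 8, 6, 9, 7, 7, 8, 6, 8, 7, 9, 8, 7, 6, 8, 7, 8, 9, 7]
mos, ci = mos_with_ci(ratings)
print(f"MOS = {mos:.2f} +/- {ci:.2f}")
```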
A key trend observed in the CfE submissions was the broad use of Enhanced Compression Model (ECM) tools. These tools originated during the exploration phase that began at the January 2021 JVET meeting, where Qualcomm Technologies introduced proposal JVET-U0100. Presented just over six months after the VVC standard's completion, the proposal demonstrated an 11.5% gain in compression efficiency over VVC in the random-access configuration. It formed the basis for subsequent exploration efforts, and its software was adopted as the reference model for ECM, which functions as a collaborative platform for the joint development of innovative coding tools.
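Objective gains of this kind are conventionally reported as a Bjøntegaard delta rate (BD-rate): the average bitrate difference between a test codec and an anchor at equal quality. The sketch below shows the standard computation via cubic fits of log-rate versus PSNR; the rate/PSNR points are illustrative, not measured results.

```python
# Minimal BD-rate sketch: fit cubic polynomials of log-rate as a function
# of PSNR for anchor and test codecs, integrate both over the overlapping
# quality interval, and convert the average log-rate gap to a percentage.
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average bitrate change (%) of test vs. anchor at equal PSNR."""
    pa = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    pt = np.polyfit(psnr_test, np.log(rate_test), 3)
    # Integrate over the PSNR range covered by both curves.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    ia = np.polyval(np.polyint(pa), [lo, hi])
    it = np.polyval(np.polyint(pt), [lo, hi])
    avg_diff = ((it[1] - it[0]) - (ia[1] - ia[0])) / (hi - lo)
    return (np.exp(avg_diff) - 1) * 100  # negative means bitrate savings

# Illustrative rate (kbps) / PSNR (dB) points, four per codec.
anchor_rate = [1000, 2000, 4000, 8000]; anchor_psnr = [34.0, 36.5, 39.0, 41.5]
test_rate   = [ 880, 1750, 3500, 7000]; test_psnr  = [34.1, 36.6, 39.1, 41.6]
print(f"BD-rate: {bd_rate(anchor_rate, anchor_psnr, test_rate, test_psnr):.1f}%")
```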
Qualcomm submitted an ECM-based response to the Call for Evidence in collaboration with several other companies actively engaged in the ongoing development of ECM. Beyond its CfE response, Qualcomm contributed by releasing optimized software implementations that incorporate recent VTM and ECM coding tools (JVET-AN0271). The release is intended to give the research community and industry stakeholders an enhanced platform for the efficient evaluation of coding techniques.
Exploration models have been instrumental in the advancement of previous standards. For instance, the journey toward VVC began when Qualcomm submitted contribution COM16-C.806 to the ITU-T Video Coding Experts Group (VCEG). The submission included software that achieved a 10.4% coding-efficiency improvement over the HEVC Test Model (HM) in the random-access configuration. Recognizing its significance, the expert community adopted this software as the starting point for the Joint Exploration Model (JEM), which played a critical role in shaping VVC's development, serving as a testbed for new ideas and approaches, many of which were eventually incorporated into the VVC standard. Seeing how collaborative exploration accelerated the development of new compression technologies encouraged us to bring our research and software to the standardization forum once again after VVC was finalized.
The video compression research community has devoted growing attention to deep learning-based approaches, notably neural network intra prediction models and in-loop neural network filtering techniques, including adaptive neural network filters whose parameters can be transmitted as part of the bitstream. These tools have been incorporated into ECM, delivering further significant gains in compression efficiency. The synergy between traditional algorithmic improvements and data-driven machine learning approaches is shaping the future of video coding standards, promising even greater performance for emerging applications such as ultra-high-definition video, immersive media and low-latency streaming. Neural networks can discover and model complex patterns in video data, enabling more efficient compression and better adaptation to diverse content types.
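To make the in-loop filtering idea concrete, here is a minimal sketch, assuming PyTorch and a toy residual CNN rather than the actual ECM filter design: the decoded frame passes through a small network whose output is added back as a correction.

```python
# Toy sketch of an in-loop neural network filter: a small residual CNN
# predicts a correction that refines the reconstructed (decoded) frame.
import torch
import torch.nn as nn

class InLoopFilter(nn.Module):
    """Illustrative residual CNN standing in for an in-loop NN filter."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, reconstructed):
        # Predict a residual correction and add it to the reconstruction.
        return reconstructed + self.body(reconstructed)

# One 64x64 luma block (batch, channel, height, width), values in [0, 1].
block = torch.rand(1, 1, 64, 64)
filtered = InLoopFilter()(block)
print(filtered.shape)  # torch.Size([1, 1, 64, 64])
```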
As video compression technology advances, neural networks are poised to lead the next major transformation by addressing limitations specific to traditional hybrid codecs, which are built on inter/intra prediction and transform coding. In still image compression, neural networks have already proven effective at handling complex regions such as textures, aligning statistical properties rather than merely minimizing mean squared error. They also make it simpler to incorporate metrics based on the Human Visual System during encoding. Because they can be trained on content-specific datasets, neural networks offer efficient, customized compression for unique types of content, without relying on static, hand-crafted algorithms.
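As a rough illustration of training toward statistical alignment rather than pure mean squared error, the sketch below blends a pixel-wise term with a crude local-statistics term inside a rate-distortion objective. The pooling size, weights and data are illustrative assumptions, not any standardized criterion or learned codec's actual loss.

```python
# Illustrative rate-distortion training loss: blend pixel MSE with a
# local-statistics term (matching 8x8 local means and variances) as a
# crude stand-in for texture-statistics and HVS-oriented criteria.
import torch
import torch.nn.functional as F

def perceptual_rd_loss(original, decoded, bits_per_pixel, lam=0.01, alpha=0.5):
    mse = F.mse_loss(decoded, original)
    # Local means via 8x8 average pooling; variances via E[x^2] - E[x]^2.
    mu_o, mu_d = F.avg_pool2d(original, 8), F.avg_pool2d(decoded, 8)
    var_o = F.avg_pool2d(original ** 2, 8) - mu_o ** 2
    var_d = F.avg_pool2d(decoded ** 2, 8) - mu_d ** 2
    stats = F.mse_loss(mu_d, mu_o) + F.mse_loss(var_d, var_o)
    distortion = alpha * mse + (1 - alpha) * stats
    return bits_per_pixel + lam * distortion  # rate-distortion trade-off

x = torch.rand(1, 1, 64, 64)          # toy "original" frame
y = x + 0.05 * torch.randn_like(x)    # toy "decoded" frame
print(perceptual_rd_loss(x, y, bits_per_pixel=torch.tensor(0.3)).item())
```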
Looking ahead, continued cooperation between industry leaders, academic experts and standards organizations will be crucial for driving progress in video coding. The upcoming Call for Proposals evaluation in January 2027 presents an opportunity to push the technology even further, as innovative ideas and solutions undergo thorough testing and refinement. Qualcomm's role, including its proposals and software releases, demonstrates how industry participation can shape the future of standards development.