The TrustLLM framework and benchmark offer a comprehensive system for evaluating the trustworthiness of large language models across six key dimensions. This work reveals that while proprietary models generally exhibit higher trustworthiness, open-source models can also achieve strong performance in specific areas, highlighting challenges like 'over-alignment' and data leakage.
A collaborative white paper coordinated by the Quantum Community Network comprehensively analyzes the current status and future perspectives of Quantum Artificial Intelligence, categorizing its potential into "Quantum for AI" and "AI for Quantum" applications. It proposes a strategic research and development agenda to bolster Europe's competitive position in this rapidly converging technological domain.
A framework named KD-Net enables mono-modal medical image segmentation networks to leverage knowledge from multi-modal networks through generalized knowledge distillation. The approach boosts segmentation accuracy, achieving a Dice score of 71.67 for the enhancing-tumor region on the BraTS 2018 dataset, versus 68.1 for a mono-modal baseline.
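The core idea of distilling a multi-modal teacher into a mono-modal student can be illustrated with a standard soft-target distillation loss (a minimal sketch, not KD-Net's exact formulation; the temperature value and function names here are assumptions):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence from teacher soft targets to student predictions,
    # rescaled by T^2 as in Hinton-style distillation.
    p = softmax(teacher_logits, T)  # teacher (e.g. multi-modal net)
    q = softmax(student_logits, T)  # student (e.g. mono-modal net)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean() * T ** 2)
```

In a segmentation setting the logits would be per-voxel class scores, and this term would typically be combined with the usual supervised (e.g. Dice or cross-entropy) loss on ground-truth labels.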
French researchers from Université Paris-Saclay and onepoint reveal fundamental relationships between pre-training depth and exploration capabilities in language models, introducing a novel "prioritized KL penalty" that significantly improves reinforcement learning fine-tuning efficiency by targeting critical decision points while maintaining model stability.
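A per-token KL penalty against a frozen reference model is the standard ingredient in RL fine-tuning; a "prioritized" variant can be sketched by weighting that penalty per token (a hedged illustration only: the `priority` weighting and function names are assumptions, not the paper's actual scheme):

```python
import numpy as np

def kl_penalized_rewards(rewards, logp_policy, logp_ref, beta=0.1, priority=None):
    """Shape per-token rewards with a KL penalty toward the reference model.

    rewards     : per-token reward signal
    logp_policy : log-probs of sampled tokens under the fine-tuned policy
    logp_ref    : log-probs of the same tokens under the frozen reference
    priority    : optional per-token weights up-weighting the penalty at
                  critical decision points (hypothetical prioritization)
    """
    kl = logp_policy - logp_ref  # standard per-token KL estimate
    if priority is None:
        priority = np.ones_like(kl)
    return rewards - beta * priority * kl
```

With uniform priority this reduces to the usual KL-regularized objective; concentrating weight on high-impact tokens is one plausible reading of "targeting critical decision points".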
A comprehensive survey systematically reviews and categorizes compression techniques for 3D Gaussian Splatting (3DGS), proposing a novel taxonomy of unstructured and structured methods to address the technology's substantial memory footprint. The analysis highlights that unstructured methods achieve significant FPS improvements (4x-7x faster) with good compression (up to 55x), while structured methods provide superior compression ratios (up to 188x) with comparable rendering speeds, charting a path for optimizing 3DGS for resource-constrained applications.
Researchers from Kyung Hee University, Adobe Research, Chung-Ang University, and Télécom Paris introduce Iterative Implicit Neural Representations (I-INRs), a plug-and-play framework that enhances existing Implicit Neural Representations (INRs) by incorporating an iterative refinement process. The method effectively mitigates spectral bias and improves noise robustness, achieving superior signal reconstruction quality across various tasks with minimal additional computational overhead.
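The iterative-refinement idea can be sketched as repeatedly fitting the residual that the previous reconstruction pass left behind (a minimal toy sketch under assumptions; `reconstruct` stands in for one INR fitting pass, and the loop structure is illustrative, not I-INRs' actual algorithm):

```python
import numpy as np

def iterative_refinement(signal, reconstruct, n_iters=3):
    # Each pass models the residual error of the current approximation
    # and adds the correction back, so fine detail missed early on
    # (e.g. due to spectral bias) is recovered in later passes.
    approx = np.zeros_like(signal)
    for _ in range(n_iters):
        residual = signal - approx
        approx = approx + reconstruct(residual)
    return approx
```

Even with a deliberately weak per-pass model (say, one that recovers only half of each residual), the approximation error shrinks geometrically with the number of passes, which is the intuition behind stacking refinement iterations on top of an existing INR.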
Researchers at the University of Trento and Snap Inc. developed an object-agnostic framework for image animation that transfers motion from a driving video to a source image. The method introduces a first-order motion model that learns sparse keypoints alongside local affine transformations, enabling more accurate deformation modeling and producing highly realistic, coherent animations across diverse object categories, significantly outperforming prior methods.