EGVD: Event-Guided Video Diffusion Model for Physically Realistic Large-Motion Frame Interpolation

A novel framework leveraging event camera data with stable video diffusion models for high-quality frame interpolation in challenging scenarios

Ziran Zhang1,2 Xiaohui Li2,3 Yihao Liu2 Yujin Wang2 Yueting Chen1 Tianfan Xue4,2* Shi Guo2*
1Zhejiang University 2Shanghai AI Laboratory 3Shanghai Jiao Tong University 4The Chinese University of Hong Kong
*Corresponding authors

Results

Our approach significantly outperforms existing methods in handling large motion and challenging lighting conditions. The video demonstrates the superiority of EGVD in generating physically realistic intermediate frames, particularly in scenarios with complex motion patterns.

EGVD Teaser
Visual comparisons of our EGVD method against existing approaches for frame interpolation across diverse scenarios

Abstract

Video frame interpolation (VFI) in scenarios with large motion remains challenging due to motion ambiguity between frames. While event cameras can capture high temporal resolution motion information, existing event-based VFI methods struggle with limited training data and complex motion patterns. In this paper, we introduce Event-Guided Video Diffusion Model (EGVD), a novel framework that leverages the powerful priors of pre-trained stable video diffusion models alongside the precise temporal information from event cameras. Our approach features a Multi-modal Motion Condition Generator (MMCG) that effectively integrates RGB frames and event signals to guide the diffusion process, producing physically realistic intermediate frames. We employ a selective fine-tuning strategy that preserves spatial modeling capabilities while efficiently incorporating event-guided temporal information. We incorporate input-output normalization techniques inspired by recent advances in diffusion modeling to enhance training stability across varying noise levels. To improve generalization, we construct a comprehensive dataset combining both real and simulated event data across diverse scenarios. Extensive experiments on both real and simulated datasets demonstrate that EGVD significantly outperforms existing methods in handling large motion and challenging lighting conditions, achieving substantial improvements in perceptual quality metrics (27.4% better LPIPS on Prophesee and 24.1% on BSRGB) while maintaining competitive fidelity measures.
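The abstract does not spell out the event representation or the exact normalization scheme; purely as an illustration, the sketch below (PyTorch) bins a raw event stream into a temporal voxel grid, a representation commonly used in event-based VFI, and standardizes it so the event condition lives on a scale comparable to the diffusion model's inputs. The function name, the bilinear time binning, and the {0, 1} polarity convention are assumptions, not the paper's implementation.

    import torch

    def events_to_voxel_grid(xs, ys, ts, ps, num_bins, height, width):
        """Accumulate events into a temporal voxel grid with bilinear
        weighting along the time axis.

        xs, ys, ts, ps: 1-D tensors of event x, y, timestamps (sorted)
        and polarities in {0, 1}. (Illustrative helper, not from the paper.)
        """
        voxel = torch.zeros(num_bins, height, width)
        # Normalize timestamps to [0, num_bins - 1].
        t_norm = (ts - ts[0]) / max(float(ts[-1] - ts[0]), 1e-9) * (num_bins - 1)
        left = t_norm.floor().long().clamp(0, num_bins - 1)
        right = (left + 1).clamp(0, num_bins - 1)
        w_right = t_norm - left.float()
        idx = ys.long() * width + xs.long()
        pol = ps.float() * 2 - 1  # map {0, 1} polarity to {-1, +1}
        flat = voxel.view(num_bins, -1)
        flat.index_put_((left, idx), pol * (1 - w_right), accumulate=True)
        flat.index_put_((right, idx), pol * w_right, accumulate=True)
        # Standardize non-zero entries so the event condition roughly matches
        # the scale of the RGB inputs fed to the diffusion model.
        mask = voxel != 0
        if mask.any():
            voxel[mask] = (voxel[mask] - voxel[mask].mean()) / (voxel[mask].std() + 1e-6)
        return voxel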

Method

EGVD Framework
Overview of our Event-Guided Video Diffusion Model (EGVD) framework

Our EGVD framework leverages both event camera data and pre-trained stable video diffusion models to achieve high-quality frame interpolation. Its key component is the Multi-modal Motion Condition Generator (MMCG), which integrates RGB frames and event signals to guide the diffusion process.
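The exact MMCG architecture is described in the paper; the module below is only a schematic of the fusion idea stated here: encode the two boundary RGB frames and the per-interval event voxels separately, fuse them, and emit a per-frame motion condition for the video diffusion U-Net. The layer widths, the conv/SiLU stacks, and the tensor shapes are illustrative assumptions.

    import torch
    import torch.nn as nn

    class MotionConditionGenerator(nn.Module):
        """Schematic multi-modal fusion: boundary RGB frames plus event
        voxels in, one condition map per intermediate frame out.
        Layer sizes are illustrative, not the paper's exact MMCG."""

        def __init__(self, event_bins=5, cond_channels=4):
            super().__init__()
            self.rgb_enc = nn.Sequential(
                nn.Conv2d(6, 64, 3, padding=1), nn.SiLU(),  # two RGB frames stacked
                nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
            )
            self.evt_enc = nn.Sequential(
                nn.Conv2d(event_bins, 64, 3, padding=1), nn.SiLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
            )
            self.fuse = nn.Conv2d(128, cond_channels, 3, padding=1)

        def forward(self, frame0, frame1, event_voxels):
            # frame0, frame1: (B, 3, H, W); event_voxels: (B, N, event_bins, H, W),
            # one voxel grid per intermediate frame to be synthesized.
            rgb_feat = self.rgb_enc(torch.cat([frame0, frame1], dim=1))
            conds = []
            for i in range(event_voxels.shape[1]):
                evt_feat = self.evt_enc(event_voxels[:, i])
                conds.append(self.fuse(torch.cat([rgb_feat, evt_feat], dim=1)))
            return torch.stack(conds, dim=1)  # (B, N, cond_channels, H, W)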

Key Contributions

  • A novel Multi-modal Motion Condition Generator (MMCG) that integrates event information into the SVD framework to improve the interpolation of large motions.
  • A two-stage training strategy that first trains the conditioning generator independently, followed by fine-tuning the SVD model to adapt to Event-VFI (see the training sketch after this list).
  • A diverse and comprehensive training dataset that combines real-world and synthetic event-RGB data, improving the generalization ability of our model.
  • Extensive experiments demonstrating that our approach outperforms existing methods, particularly in large-motion and low-light scenarios.
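Under the reading that "selective fine-tuning" means freezing the spatial layers of the SVD U-Net and updating only its temporal layers (as suggested in the abstract), a rough sketch of the two-stage schedule could look as follows. The stage-1 regression target, the optimizers and learning rates, and the name-based selection of temporal parameters are all placeholders rather than the paper's settings.

    import torch
    import torch.nn.functional as F

    def two_stage_training(mmcg, svd_unet, stage1_loader, stage2_loader,
                           diffusion_loss, device="cuda"):
        """Illustrative two-stage schedule; losses and hyper-parameters
        are placeholders, not the paper's settings."""
        # Stage 1: train the condition generator on its own, here by regressing
        # its output toward a precomputed target for each intermediate frame.
        opt1 = torch.optim.AdamW(mmcg.parameters(), lr=1e-4)
        for frame0, frame1, events, target_cond in stage1_loader:
            cond = mmcg(frame0.to(device), frame1.to(device), events.to(device))
            loss = F.l1_loss(cond, target_cond.to(device))
            opt1.zero_grad()
            loss.backward()
            opt1.step()

        # Stage 2: freeze the spatial layers of the SVD U-Net and fine-tune only
        # the temporal layers, selected here by parameter name (this assumes the
        # temporal modules carry "temporal" in their names).
        for name, param in svd_unet.named_parameters():
            param.requires_grad = "temporal" in name
        temporal_params = [p for p in svd_unet.parameters() if p.requires_grad]
        opt2 = torch.optim.AdamW(list(mmcg.parameters()) + temporal_params, lr=1e-5)
        for batch in stage2_loader:
            loss = diffusion_loss(svd_unet, mmcg, batch)  # noise-prediction objective
            opt2.zero_grad()
            loss.backward()
            opt2.step()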
Citation

@article{zhang2025egvd,
  title={EGVD: Event-Guided Video Diffusion Model for Physically Realistic Large-Motion Frame Interpolation},
  author={Zhang, Ziran and Li, Xiaohui and Liu, Yihao and Wang, Yujin and Chen, Yueting and Xue, Tianfan and Guo, Shi},
  journal={arXiv preprint arXiv:2503.20268},
  year={2025}
}