Event-assisted 12-stop HDR Imaging of Dynamic Scene

1Shanghai AI Laboratory, 2Zhejiang University, 3The Chinese University of Hong Kong, 4Huazhong University of Science and Technology
Teaser Image

Using an event camera, our method presents the first attempt at 12-stop HDR imaging for dynamic scenes.

Abstract

High dynamic range (HDR) imaging is a crucial task in computational photography, capturing details across diverse lighting conditions. Traditional HDR fusion methods face limitations in dynamic scenes with extreme exposure differences, as aligning low dynamic range (LDR) frames becomes challenging due to motion and brightness variation. In this work, we propose a novel 12-stop HDR imaging approach for dynamic scenes, leveraging a dual-camera system with an event camera and an RGB camera. The event camera provides temporally dense, high dynamic range signals that improve alignment between LDR frames with large exposure differences, reducing ghosting artifacts caused by motion. In addition, a real-world fine-tuning strategy is proposed to improve the generalization of the alignment module to real-world events. We also introduce a diffusion-based fusion module that incorporates image priors from pre-trained diffusion models to address artifacts in high-contrast regions and minimize errors from the alignment process. To support this work, we developed the ESHDR dataset, the first dataset for 12-stop HDR imaging with synchronized event signals, and validated our approach on both simulated and real-world data. Extensive experiments demonstrate that our method achieves state-of-the-art performance, successfully extending HDR imaging to 12 stops in dynamic scenes.

Main Idea

Teaser Image
How we push the boundaries of dynamic-scene HDR imaging to capture the full 12-stop range (one stop doubles the exposure, so 12 stops span a 2^12 ≈ 4096× exposure ratio):
  • Our method utilizes an event camera (temporally dense, ~140 dB dynamic range) and proposes an explicit event-assisted alignment module to align LDRs with extreme exposure differences (see the first sketch after this list). The alignment module is also fine-tuned on real-world interpolation datasets, which are easier to acquire and widely available, to bridge the gap between simulated and real-world events.
  • We propose a diffusion-based fusion module to merge aligned LDR images with significant exposure differences and generate HDR images (see the second sketch after this list). By leveraging a pre-trained latent stable diffusion model as an image prior, the fusion module mitigates the effects of alignment errors and occlusion boundaries, thereby achieving artifact-free HDR results.
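To make the alignment idea concrete, below is a minimal PyTorch sketch, not the paper's actual network: events are binned into a temporal voxel grid, a small CNN predicts a dense flow field from the reference LDR, a non-reference LDR, and the voxel grid, and the non-reference frame is warped toward the reference. The names (events_to_voxel_grid, EventGuidedAlignment), the network architecture, and the single-scale flow formulation are illustrative assumptions, not the released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate an event stream of shape (N, 4) with rows (x, y, t, polarity)
    into a temporal voxel grid of shape (num_bins, H, W)."""
    voxel = torch.zeros(num_bins, height, width)
    x, y, t, p = events[:, 0].long(), events[:, 1].long(), events[:, 2], events[:, 3]
    # Normalize timestamps to [0, num_bins - 1] and assign each event to a bin.
    t = (t - t.min()) / (t.max() - t.min() + 1e-9) * (num_bins - 1)
    voxel.index_put_((t.round().long(), y, x), p, accumulate=True)
    return voxel

class EventGuidedAlignment(nn.Module):
    """Toy alignment module: predicts a dense flow field from the reference LDR,
    a non-reference LDR, and the event voxel grid, then warps the non-reference
    frame toward the reference exposure."""
    def __init__(self, num_bins=5):
        super().__init__()
        self.flow_net = nn.Sequential(
            nn.Conv2d(3 + 3 + num_bins, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # 2-channel flow (dx, dy)
        )

    def forward(self, ldr_ref, ldr_src, voxel):
        flow = self.flow_net(torch.cat([ldr_ref, ldr_src, voxel], dim=1))
        _, _, h, w = ldr_src.shape
        # Build a normalized sampling grid displaced by the predicted flow.
        ys, xs = torch.meshgrid(
            torch.arange(h, device=flow.device, dtype=flow.dtype),
            torch.arange(w, device=flow.device, dtype=flow.dtype), indexing="ij")
        grid_x = (xs + flow[:, 0]) / (w - 1) * 2 - 1
        grid_y = (ys + flow[:, 1]) / (h - 1) * 2 - 1
        grid = torch.stack([grid_x, grid_y], dim=-1)  # (B, H, W, 2)
        return F.grid_sample(ldr_src, grid, align_corners=True)

In the paper, the alignment module is trained on simulated events and then fine-tuned with real-world interpolation data to narrow the simulation-to-real gap.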
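Similarly, a minimal sketch of the diffusion-based fusion step, again only illustrative: the fused HDR latent is produced by iterative denoising conditioned on the latents of the aligned LDR exposures. The paper uses a pre-trained latent Stable Diffusion model as the image prior; here the pre-trained UNet and VAE are replaced by a toy stand-in so the sketch stays self-contained, and the DDPM-style sampling loop, channel counts, and noise schedule are assumptions.

import torch
import torch.nn as nn

class ToyLatentDenoiser(nn.Module):
    """Stand-in for the pre-trained diffusion UNet: predicts the noise in the
    HDR latent, conditioned on the latents of the aligned LDR exposures."""
    def __init__(self, latent_ch=4, num_ldr=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch * (1 + num_ldr) + 1, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, latent_ch, 3, padding=1),
        )

    def forward(self, noisy_latent, ldr_latents, t_embed):
        return self.net(torch.cat([noisy_latent, ldr_latents, t_embed], dim=1))

@torch.no_grad()
def fuse_ldr_latents(denoiser, ldr_latents, num_steps=50):
    """DDPM-style ancestral sampling of the fused HDR latent, conditioned on the
    concatenated latents of the aligned LDR frames."""
    b, _, h, w = ldr_latents.shape
    betas = torch.linspace(1e-4, 2e-2, num_steps)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
    latent = torch.randn(b, 4, h, w)  # start from pure noise
    for t in reversed(range(num_steps)):
        t_embed = torch.full((b, 1, h, w), t / num_steps)
        eps = denoiser(latent, ldr_latents, t_embed)
        alpha_t, alpha_bar_t = 1.0 - betas[t], alphas_cumprod[t]
        # Posterior mean of the reverse diffusion step (epsilon parameterization).
        latent = (latent - (1 - alpha_t) / (1 - alpha_bar_t).sqrt() * eps) / alpha_t.sqrt()
        if t > 0:
            latent = latent + betas[t].sqrt() * torch.randn_like(latent)
    return latent  # would be decoded to an HDR image by the pre-trained VAE decoder

# Example: fuse the latents of three aligned LDR exposures at 32x32 latent resolution.
denoiser = ToyLatentDenoiser(latent_ch=4, num_ldr=3)
hdr_latent = fuse_ldr_latents(denoiser, torch.randn(1, 12, 32, 32))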
Ablation Image
Ablation study on the fusion and alignment modules, comparing different configurations.

Visual Results

Result Image
Visual results on a synthetic image. To better illustrate both the darkest and brightest regions of a 12-stop HDR image, we apply two tone-mapping styles (Deep and Vibrant) from the HDR tool Photomatix.
Result Image
Visual results on a real-captured image.
Result Image
Visual results on a real-captured image.

BibTeX

@article{guo2024eventhdr,
      title={Event-assisted 12-stop HDR Imaging of Dynamic Scene}, 
      author={Shi Guo and Zixuan Chen and Ziran Zhang and Yutian Chen and Gangwei Xu and Tianfan Xue},
      journal={arXiv preprint arXiv:2412.14705},
      year={2024}
}