RESEARCH PAPERS

Time Lens: Event-Based Video Frame Interpolation

In this work, we introduce Time Lens, a novel method that leverages the complementary advantages of frames and events. We extensively evaluate our method on three synthetic and two real benchmarks, where we show an improvement of up to 5.21 dB in PSNR over state-of-the-art frame-based and event-based methods. We also release a new large-scale dataset of highly dynamic scenes, aimed at pushing the limits of existing methods.
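Event cameras emit sparse (timestamp, x, y, polarity) tuples rather than frames, so learning-based pipelines typically first accumulate events into a fixed-size tensor. A minimal sketch of one common choice, a temporally bilinear voxel grid (the function name and binning scheme are illustrative, not Time Lens's exact representation):

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate events (t, x, y, polarity) into a temporal voxel grid.

    Each event's polarity is bilinearly split between the two nearest
    temporal bins, a common representation for learning on event streams.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    # Normalize timestamps to [0, num_bins - 1].
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    t0 = np.floor(t_norm).astype(int)
    frac = t_norm - t0
    t1 = np.clip(t0 + 1, 0, num_bins - 1)
    # Unbuffered in-place accumulation handles repeated pixel indices.
    np.add.at(voxel, (t0, y, x), p * (1.0 - frac))
    np.add.at(voxel, (t1, y, x), p * frac)
    return voxel
```

The resulting `(num_bins, H, W)` tensor can be fed to a standard CNN alongside the neighboring frames.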

Mamba-FETrack: Frame-Event Tracking via State Space Model

This paper proposes Mamba-FETrack, a novel RGB-Event tracking framework based on the State Space Model (SSM) that achieves high-performance tracking while substantially reducing computational cost. Specifically, we adopt two modality-specific Mamba backbone networks to extract features from RGB frames and Event streams.
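At its core, a state-space backbone applies a linear recurrence over a token sequence. A toy, non-selective sketch of that scan, run separately per modality and late-fused by concatenation (all names and the fusion choice here are illustrative, not Mamba-FETrack's actual architecture):

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal linear state-space scan: h_t = A h_{t-1} + B x_t, y_t = C h_t.

    Toy stand-in for the selective scans in Mamba-style backbones;
    the matrices are fixed (non-selective) here for clarity.
    """
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:
        h = A @ h + B @ x_t
        ys.append(C @ h)
    return np.stack(ys)

def fuse_modalities(rgb_tokens, event_tokens, A, B, C):
    """Run the same SSM over each modality's token sequence, then
    concatenate the final features: a simple late-fusion sketch."""
    f_rgb = ssm_scan(rgb_tokens, A, B, C)[-1]
    f_evt = ssm_scan(event_tokens, A, B, C)[-1]
    return np.concatenate([f_rgb, f_evt])
```

The appeal of the SSM scan is that, unlike attention, its cost grows linearly with sequence length.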

Event-based Vision Contactless Fault Diagnosis With Neuromorphic Computing

This letter presents a novel contactless, cross-domain fault diagnosis method that combines dynamic vision with neuromorphic computing. An event-based camera captures machine vibration states visually, and a specially designed bio-inspired deep transfer spiking neural network (SNN) processes the resulting event streams for feature extraction and fault diagnosis.
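The spiking neurons inside such an SNN can be illustrated with the standard leaky integrate-and-fire model (a generic textbook sketch, not the letter's specific architecture):

```python
def lif_neuron(inputs, tau=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron, the basic unit of an SNN.

    The membrane potential decays by `tau` each step, integrates the
    incoming value, and emits a spike (then resets) on crossing
    `threshold`.
    """
    v = 0.0
    spikes = []
    for i in inputs:
        v = tau * v + i
        if v >= threshold:
            spikes.append(1)
            v = 0.0  # reset after spiking
        else:
            spikes.append(0)
    return spikes
```

Because such neurons only compute when inputs arrive, they pair naturally with the sparse, asynchronous output of an event camera.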

Low-latency automotive vision with event cameras

Here we propose a hybrid event- and frame-based object detector that preserves the advantages of each modality and thus avoids the trade-off between bandwidth and latency that each sensor faces alone. Our method exploits the high temporal resolution and sparsity of events and the rich but low-rate information in standard images to generate efficient, high-rate object detections, reducing perceptual and computational latency.
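The scheduling idea behind such a hybrid detector can be sketched as a two-rate loop: an expensive frame-based pass refreshes the detections at the camera's frame rate, while cheap event-based updates propagate them in between. A toy sketch (the class name and the shift-by-mean-flow update are illustrative, not the paper's method):

```python
from dataclasses import dataclass, field

@dataclass
class HybridDetector:
    """Toy two-rate scheduler for a hybrid frame + event detector.

    Frames arrive at a low rate and trigger a full (expensive)
    detection; events arrive at a high rate and cheaply update the
    most recent detections between frames.
    """
    detections: list = field(default_factory=list)

    def on_frame(self, frame_detections):
        # Expensive frame-based pass: replace the current detections.
        self.detections = list(frame_detections)
        return self.detections

    def on_events(self, mean_flow):
        # Cheap event-based pass: shift boxes (x, y, w, h) by an
        # estimated mean optical flow (dx, dy).
        dx, dy = mean_flow
        self.detections = [(x + dx, y + dy, w, h)
                           for (x, y, w, h) in self.detections]
        return self.detections
```

The low-latency benefit comes from `on_events` being callable at the event rate (kHz), far faster than the frame rate.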

DSEC: A Stereo Event Camera Dataset for Driving Scenarios

We propose DSEC, a new dataset that contains demanding illumination conditions and provides a rich set of sensory data. DSEC offers data from a wide-baseline stereo setup of two color frame cameras and two high-resolution monochrome event cameras. In addition, we collect lidar data and RTK GPS measurements, both hardware synchronized with all camera data.
