Learning Visual Motion Segmentation using Event Surfaces

We evaluate our method on EV-IMO, the state-of-the-art event-based motion segmentation dataset, and compare against a frame-based method proposed by its authors. Our ablation studies show that increasing the event slice width improves accuracy, and reveal how subsampling and edge configurations affect network performance.
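The "event slice width" ablation above refers to how long a time window of events is aggregated into one network input. As a minimal sketch of that idea (the field names and structured-array layout below are a common event-camera convention, not the paper's exact format):

```python
import numpy as np

def slice_events(events, t0, width):
    """Return the events whose timestamps fall in [t0, t0 + width).

    `events` is a structured array with fields x, y, t, p
    (pixel coordinates, timestamp in seconds, polarity).
    """
    mask = (events["t"] >= t0) & (events["t"] < t0 + width)
    return events[mask]

# Toy stream: five events spread over 50 ms.
events = np.array(
    [(3, 4, 0.000, 1), (5, 1, 0.010, -1), (2, 2, 0.021, 1),
     (7, 7, 0.035, -1), (0, 9, 0.049, 1)],
    dtype=[("x", "i4"), ("y", "i4"), ("t", "f8"), ("p", "i4")],
)

# A wider slice aggregates more events into a single input.
print(len(slice_events(events, 0.0, 0.02)))  # 2
print(len(slice_events(events, 0.0, 0.05)))  # 5
```

Widening the slice gives the network more events (and hence more motion context) per input, which is consistent with the accuracy gain the ablation reports.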

Pushing the Limits of Asynchronous Graph-based Object Detection with Event Cameras

In this work, we break this glass ceiling by introducing several architectural choices that allow us to scale the depth and complexity of such models while maintaining low computation. On object detection tasks, our smallest model shows up to 3.7 times lower computation while outperforming state-of-the-art asynchronous methods by 7.4 mAP.

Event Guided Depth Sensing

Our model outperforms feed-forward event-based architectures by a large margin. Moreover, our method does not require reconstructing intensity images from events, showing that training directly on raw events is possible, and both more efficient and more accurate than passing through an intermediate intensity image.

Stereo Event-based Particle Tracking Velocimetry for 3D Fluid Flow Reconstruction

First, we track particles in the two event sequences to estimate their 2D velocities. A stereo-matching step then recovers their 3D positions. These intermediate outputs are fed into an optimization framework that also includes physically plausible regularizers in order to retrieve the 3D velocity field.
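The first two stages of the pipeline above, stereo matching followed by 3D position recovery, can be sketched for the simplified case of a rectified stereo pair. The calibration constants and the finite-difference velocity estimate below are illustrative assumptions, not the paper's actual setup (the paper recovers velocities through a regularized optimization, which is omitted here):

```python
import numpy as np

# Hypothetical calibration for a rectified stereo pair (not from the paper).
FOCAL = 400.0      # focal length in pixels
BASELINE = 0.10    # camera baseline in metres

def triangulate(xl, xr, y):
    """3D point from a rectified stereo match: depth z = f * b / (xl - xr)."""
    disparity = xl - xr
    z = FOCAL * BASELINE / disparity
    return np.array([xl * z / FOCAL, y * z / FOCAL, z])

# One tracked particle, matched across the two cameras at two timestamps.
p0 = triangulate(xl=210.0, xr=190.0, y=120.0)  # position at t0
p1 = triangulate(xl=214.0, xr=192.0, y=121.0)  # position at t0 + dt
dt = 0.01

# Crude per-particle 3D velocity from finite differences (m/s); the paper
# instead feeds such intermediate estimates into a regularized optimization.
velocity = (p1 - p0) / dt
print(velocity)
```

In the full method, many such per-particle estimates would be combined and smoothed by the physically plausible regularizers to produce a coherent 3D velocity field.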