Noise2Image: Noise-Enabled Static Scene Recovery for Event Cameras

This work proposes a method, called Noise2Image, that leverages the illuminance-dependent noise characteristics of the sensor to recover the static parts of a scene, which are otherwise invisible to event cameras. The results show that Noise2Image can robustly recover intensity images solely from noise events, providing a novel approach for capturing static scenes with event cameras without additional hardware.
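
In spirit, the recovery pipeline can be reduced to two steps: accumulate noise events into a per-pixel rate map, then map rates to intensities through a calibrated relation between illuminance and noise-event rate. The sketch below illustrates that idea; the lookup-table inversion, function names, and calibration inputs are our own illustrative assumptions, not the paper's actual reconstruction method.

```python
import numpy as np

def noise_rate_map(events, shape, duration_s):
    """Per-pixel noise-event rate (events/s) over a static capture window.

    `events` is assumed to be an iterable of (x, y, t, polarity) tuples
    recorded while camera and scene are static, so that (nearly) all
    recorded events are noise events.
    """
    counts = np.zeros(shape, dtype=np.float64)
    for x, y, _t, _p in events:
        counts[y, x] += 1
    return counts / duration_s

def rates_to_intensity(rate_map, calib_rates, calib_intensities):
    """Map noise rates to scene intensity via a calibrated 1D curve.

    `calib_rates` must be sorted; such a curve would come from imaging
    known illuminance levels with the same sensor settings (hypothetical
    calibration step, assumed here for illustration).
    """
    return np.interp(rate_map, calib_rates, calib_intensities)
```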

Object Detection with Spiking Neural Networks on Automotive Event Data

In this work, we propose to train spiking neural networks (SNNs) directly on data coming from event cameras to design fast and efficient automotive embedded applications. Indeed, SNNs are more biologically realistic neural networks, where neurons communicate using discrete and asynchronous spikes, a naturally energy-efficient and hardware-friendly operating mode.
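
For context, the basic building block of such networks is a spiking neuron such as the leaky integrate-and-fire (LIF) model: the membrane potential integrates input current, leaks over time, and emits a binary spike (then resets) when it crosses a threshold. Below is a minimal discrete-time LIF sketch in NumPy; the constants and layer size are illustrative, not the paper's hyperparameters.

```python
import numpy as np

def lif_step(v, input_current, v_threshold=1.0, leak=0.9):
    """One discrete-time leaky integrate-and-fire (LIF) update."""
    v = leak * v + input_current                     # leaky membrane integration
    spikes = (v >= v_threshold).astype(np.float32)   # fire where threshold is crossed
    v = v * (1.0 - spikes)                           # hard reset of spiking neurons
    return v, spikes

# Usage: drive a layer of 128 neurons with random event-derived currents.
rng = np.random.default_rng(0)
v = np.zeros(128, dtype=np.float32)
for _ in range(10):                                  # 10 simulation timesteps
    v, spikes = lif_step(v, rng.random(128).astype(np.float32) * 0.3)
```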

Enhancing Visual Place Recognition via Fast and Slow Adaptive Biasing in Event Cameras

This paper introduces feedback control algorithms that automatically tune the bias parameters through two interacting methods: 1) an immediate, on-the-fly fast adaptation of the refractory period, which sets the minimum interval between consecutive events, and 2) a slower adaptation of the remaining bias parameters, applied if the event rate exceeds the specified bounds even after changing the refractory period repeatedly.
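
A minimal sketch of such a two-speed control loop is shown below, assuming the event rate is monitored over fixed intervals; the thresholds, step sizes, and the single `threshold` bias are hypothetical stand-ins, not the paper's actual controller.

```python
def adapt_biases(event_rate, refractory_us, biases,
                 rate_low=1e5, rate_high=1e6,
                 refr_min_us=1, refr_max_us=1000):
    """One iteration of a hypothetical fast/slow bias-control loop.

    Fast path: nudge the refractory period (minimum interval between
    consecutive events) immediately. Slow path: only once the refractory
    period has saturated and the event rate is still out of bounds,
    adjust the other bias parameters. All constants are illustrative.
    """
    if event_rate > rate_high:
        if refractory_us < refr_max_us:
            refractory_us = min(refr_max_us, refractory_us * 2)   # fast: suppress events
        else:
            biases["threshold"] += 1    # slow: raise contrast threshold
    elif event_rate < rate_low:
        if refractory_us > refr_min_us:
            refractory_us = max(refr_min_us, refractory_us // 2)  # fast: allow more events
        else:
            biases["threshold"] -= 1    # slow: lower contrast threshold
    return refractory_us, biases
```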

Event-based Motion Segmentation with Spatio-Temporal Graph Cuts

We develop a method to identify independently moving objects acquired with an event-based camera, i.e., to solve the event-based motion segmentation problem. We cast the problem as an energy minimization one involving the fitting of multiple motion models. We jointly solve two subproblems, namely event cluster assignment (labeling) and motion model fitting.
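
A common way to realize such a joint formulation is alternating optimization: fix the motion models and relabel events, then fix the labels and refit the models. The sketch below illustrates this loop with two simplifications of our own: each motion model is reduced to a linear image-plane trajectory, and the paper's graph-cut labeling step is replaced by a greedy per-event assignment.

```python
import numpy as np

def segment_motions(events, n_models, n_iters=10, seed=0):
    """Alternating sketch of joint event labeling and motion-model fitting.

    `events`: (N, 3) array of (x, y, t). Each model is an offset plus a
    2D velocity fitted by least squares; events are then reassigned to
    the model with the smallest residual.
    """
    rng = np.random.default_rng(seed)
    xy, t = events[:, :2], events[:, 2]
    labels = rng.integers(0, n_models, size=len(events))
    A = np.stack([np.ones_like(t), t], axis=1)          # design matrix [1, t]
    for _ in range(n_iters):
        residuals = np.full((len(events), n_models), np.inf)
        for k in range(n_models):
            mask = labels == k
            if mask.sum() < 2:
                continue                                 # degenerate cluster: skip refit
            # Model fitting: least-squares offset and velocity for cluster k.
            params, *_ = np.linalg.lstsq(A[mask], xy[mask], rcond=None)
            pred = A @ params                            # predicted positions for all events
            residuals[:, k] = np.sum((xy - pred) ** 2, axis=1)
        labels = np.argmin(residuals, axis=1)            # greedy stand-in for graph cut
    return labels
```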

Vista 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles

Here, we present VISTA, an open-source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles. Using high-fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras, enabling the rapid generation of novel viewpoints in simulation and thereby enriching the data available for policy learning with corner cases that are difficult to capture in the physical world.