MUSES: The Multi-Sensor Semantic Perception Dataset for Driving under Uncertainty

Achieving level-5 driving automation in autonomous vehicles necessitates a robust semantic visual perception system capable of parsing data from different sensors across diverse conditions. However, existing semantic perception datasets often lack important non-camera modalities typically used in autonomous vehicles, or they do not exploit such modalities to aid and improve semantic annotations in challenging conditions. To address this, this work introduces MUSES, the MUlti-SEnsor Semantic perception dataset for driving in adverse conditions under increased uncertainty.

SGE: Structured Light System Based on Gray Code with an Event Camera

EE3P: Event-based Estimation of Periodic Phenomena Properties

The paper introduces a novel method for measuring properties of periodic phenomena with an event camera, a device that asynchronously reports brightness changes at independently operating pixels. The approach assumes that for a fast periodic phenomenon, in any spatial window where it occurs, a very similar set of events is generated at a time difference corresponding to the period of the motion.
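The core assumption above, that a periodic phenomenon re-emits a near-identical set of events once per period, suggests a simple estimator: bin the event timestamps into a rate signal and find the lag at which it best correlates with itself. The sketch below is a minimal illustration of that idea on synthetic data, not the paper's actual algorithm; all function names and parameters (`estimate_period`, `bin_width`, `min_period`, `max_period`) are assumptions introduced here.

```python
import numpy as np

def estimate_period(event_times, bin_width=1e-4, min_period=0.005, max_period=0.1):
    """Estimate the period of a periodic event stream.

    Illustrative only: bins event timestamps into a rate signal and
    returns the autocorrelation-peak lag as the period. Parameter names
    and defaults are assumptions, not taken from the paper.
    """
    t = np.asarray(event_times, dtype=float)
    t = t - t.min()
    n_bins = int(np.ceil(t.max() / bin_width)) + 1
    counts, _ = np.histogram(t, bins=n_bins, range=(0.0, n_bins * bin_width))
    counts = counts - counts.mean()  # remove DC so lag 0 does not dominate everything
    # Full autocorrelation; keep non-negative lags only (index n_bins-1 is lag 0).
    ac = np.correlate(counts, counts, mode="full")[n_bins - 1:]
    min_lag = max(1, int(min_period / bin_width))  # skip within-burst correlation
    max_lag = min(len(ac), int(max_period / bin_width))
    peak = min_lag + int(np.argmax(ac[min_lag:max_lag]))
    return peak * bin_width

# Synthetic stream: a jittered burst of 20 events every 10 ms for 1 s.
rng = np.random.default_rng(0)
true_period = 0.01
events = np.sort(np.concatenate(
    [k * true_period + rng.normal(0.0, 2e-4, size=20) for k in range(100)]
))
print(estimate_period(events))  # close to 0.01 s
```

In practice the paper operates on sets of events in spatial windows rather than a global rate signal, so this global autocorrelation is only a coarse stand-in for the windowed similarity matching it describes.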

Recent Event Camera Innovations: A Survey

This paper presents a comprehensive survey of event cameras, tracing their evolution over time. It introduces the fundamental principles of event cameras, compares them with traditional frame cameras, and highlights their unique characteristics and operational differences. The survey covers various event camera models from leading manufacturers, key technological milestones, and influential research contributions.

Learning Visual Motion Segmentation using Event Surfaces

We evaluate our method on the state-of-the-art event-based motion segmentation dataset, EV-IMO, and compare it to a frame-based method proposed by its authors. Our ablation studies show that increasing the event slice width improves accuracy, and they reveal how subsampling and edge configurations affect network performance.