RESEARCH PAPERS

Asynchronous Multi-Object Tracking with an Event Camera

In this paper, the Asynchronous Event Multi-Object Tracking (AEMOT) algorithm is presented for detecting and tracking multiple objects by processing individual raw events asynchronously. AEMOT detects salient event blob features by identifying regions of consistent optical flow using a novel Field of Active Flow Directions built from the Surface of Active Events. Detected features are tracked as candidate objects using the recently proposed Asynchronous Event Blob (AEB) tracker to construct small intensity patches of each candidate object.
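The Surface of Active Events (SAE) underpinning this detector is simple to state: a per-pixel map of the most recent event timestamp, whose local slope encodes apparent motion. The sketch below is a minimal illustration rather than the paper's construction (the window radius, the plane fit, and the Field of Active Flow Directions itself are the authors' design); it estimates a flow direction at each incoming event by fitting a plane to the local timestamp surface.

```python
import numpy as np

H, W = 260, 346                      # assumed sensor resolution
sae = np.full((H, W), -np.inf)       # Surface of Active Events: latest timestamp per pixel

def flow_direction(x, y, t, r=3):
    """Update the SAE with one event and estimate a local flow direction.

    Fits a plane t(x, y) to recent timestamps around the event; the
    timestamp gradient points along the apparent motion of the edge.
    Window radius and the minimum-support check are illustrative.
    """
    sae[y, x] = t
    if not (r <= x < W - r and r <= y < H - r):
        return None                                  # skip the image border
    patch = sae[y - r:y + r + 1, x - r:x + r + 1]
    ys, xs = np.nonzero(np.isfinite(patch))
    if len(xs) < 6:
        return None                                  # too little recent activity
    A = np.column_stack([xs - r, ys - r, np.ones_like(xs)])
    (gx, gy, _), *_ = np.linalg.lstsq(A, patch[ys, xs], rcond=None)
    return np.arctan2(gy, gx)                        # direction of motion (radians)
```

Pixels whose neighbourhoods agree on a direction can then be grouped into candidate blob features and handed to the AEB tracker.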

FRED: The Florence RGB-Event Drone Dataset

The Florence RGB-Event Drone dataset (FRED) is a novel multimodal dataset specifically designed for drone detection, tracking, and trajectory forecasting, combining RGB video and event streams. FRED features more than 7 hours of densely annotated drone trajectories, using five different drone models and including challenging scenarios such as rain and adverse lighting conditions.

RGB-Event Fusion with Self-Attention for Collision Prediction

This paper proposes a neural network framework for predicting the time and position of a collision between an unmanned aerial vehicle and a dynamic object, using RGB and event-based vision sensors. The proposed architecture consists of two separate encoder branches, one per modality, followed by a self-attention fusion stage to improve prediction accuracy. To facilitate benchmarking, the ABCD dataset is leveraged, enabling detailed comparisons of single-modality and fusion-based approaches.
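As a structural illustration, the two-branch-plus-attention design can be written down in a few lines. Everything concrete below (channel widths, three-layer CNN encoders, mean pooling, a three-value head for collision time and position) is an assumption made for the sketch; only the overall shape follows the abstract.

```python
import torch
import torch.nn as nn

class FusionCollisionNet(nn.Module):
    """Illustrative two-branch encoder with self-attention fusion."""

    def __init__(self, d=128, heads=4):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, d, 3, stride=2, padding=1), nn.ReLU(),
            )
        self.rgb_enc = encoder(3)     # RGB frames
        self.evt_enc = encoder(2)     # event representation, e.g. 2 polarity channels
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.head = nn.Linear(d, 3)   # assumed output: (time-to-collision, x, y)

    def forward(self, rgb, evt):
        # Flatten each modality's feature map into a token sequence.
        f_r = self.rgb_enc(rgb).flatten(2).transpose(1, 2)   # (B, N, d)
        f_e = self.evt_enc(evt).flatten(2).transpose(1, 2)   # (B, N, d)
        tokens = torch.cat([f_r, f_e], dim=1)                # joint sequence
        fused, _ = self.attn(tokens, tokens, tokens)         # self-attention fusion
        return self.head(fused.mean(dim=1))                  # pooled prediction

net = FusionCollisionNet()
out = net(torch.randn(1, 3, 128, 128), torch.randn(1, 2, 128, 128))  # -> (1, 3)
```

Letting attention mix RGB and event tokens in one sequence means each modality can attend to the other wherever it is more informative, e.g. events under motion blur.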

The NeuroBench framework for benchmarking neuromorphic computing algorithms and systems

This article presents NeuroBench, a benchmark framework for neuromorphic algorithms and systems. It introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent and hardware-dependent settings.
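Among NeuroBench's hardware-independent metrics is activation sparsity, which rewards models that compute only where there is signal. A simplified stand-in for such a metric, not the library's actual API, looks like this:

```python
import numpy as np

def activation_sparsity(activations):
    """Fraction of zero activations across all layers and timesteps.

    A hardware-independent metric of the kind NeuroBench standardizes
    (alongside footprint, synaptic operations, etc.). This helper is an
    illustrative stand-in, not the framework's own interface.
    """
    total = sum(a.size for a in activations)
    zeros = sum(np.count_nonzero(a == 0) for a in activations)
    return zeros / total

# e.g. per-layer spike tensors collected while running one sample
spikes = [np.random.binomial(1, 0.05, size=(50, 128)) for _ in range(3)]
print(f"activation sparsity: {activation_sparsity(spikes):.3f}")
```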

Looking into the Shadow: Recording a Total Solar Eclipse with High-resolution Event Cameras

This paper presents the first recording of a total solar eclipse with a pair of high-resolution event cameras, with accompanying methodology. A method is proposed to stabilize the recordings, counteracting the manual tripod adjustments required to keep celestial bodies in frame. A high-dynamic-range image of the sun is also generated during the eclipse, showing how event cameras excel at this task compared to traditional CMOS-based cameras.
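The HDR claim rests on a basic property of event cameras: each event is a quantised change in log intensity, so summing events per pixel yields a relative log-intensity image whose dynamic range is not limited by a global exposure. Below is a minimal direct-integration sketch with assumed contrast thresholds; the paper's reconstruction pipeline may differ.

```python
import numpy as np

def integrate_events(events, H, W, c_on=0.2, c_off=0.2):
    """Direct integration of events into a relative log-intensity image.

    Each ON/OFF event changes log intensity by the contrast threshold
    (c_on / c_off are assumed values). This is the textbook principle
    behind event-based HDR imaging, not the paper's exact method.
    """
    logI = np.zeros((H, W))
    for x, y, t, p in events:          # p is +1 (ON) or -1 (OFF)
        logI[y, x] += c_on if p > 0 else -c_off
    return logI                        # relative log intensity, HDR by construction
```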

Motion Segmentation for Neuromorphic Aerial Surveillance

This paper introduces a novel motion segmentation method that leverages self-supervised vision transformers on both event data and optical flow information. The approach eliminates the need for human annotations and reduces dependency on scene-specific parameters.
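To make the grouping idea concrete: pixels that move alike belong to the same object. The toy below clusters per-pixel flow vectors with k-means; the paper instead groups self-supervised transformer features computed from events and flow, so treat this purely as the underlying intuition (k and the iteration count are arbitrary).

```python
import numpy as np

def segment_by_flow(flow, k=2, iters=10, seed=0):
    """Toy motion segmentation: k-means over per-pixel flow vectors.

    flow: (H, W, 2) optical-flow field. Pixels with similar motion get
    the same label; k, iters, and the initialisation are illustrative.
    """
    H, W, _ = flow.shape
    v = flow.reshape(-1, 2)
    rng = np.random.default_rng(seed)
    centers = v[rng.choice(len(v), size=k, replace=False)]
    for _ in range(iters):
        d = ((v[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):                      # recompute non-empty centers
            if np.any(labels == j):
                centers[j] = v[labels == j].mean(axis=0)
    return labels.reshape(H, W)
```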

CoSEC: A Coaxial Stereo Event Camera Dataset for Autonomous Driving

This paper introduces hybrid coaxial event-frame devices to build a multimodal system and proposes the Coaxial Stereo Event Camera (CoSEC) dataset for autonomous driving. The multimodal system first uses a microcontroller to achieve time synchronization and then spatially calibrates the different sensors, performing intra- and inter-calibration of the stereo coaxial devices.

M3ED: Multi-Robot, Multi-Sensor, Multi-Environment Event Dataset

This paper presents M3ED, the first multi-sensor event camera dataset focused on high-speed dynamic motions in robotics applications. M3ED provides high-quality synchronized and labeled data from multiple platforms, including ground vehicles, legged robots, and aerial robots, operating in challenging conditions such as driving along off-road trails, navigating through dense forests, and performing aggressive flight maneuvers.

Real-time event simulation with frame-based cameras

This work proposes simulation methods that improve the performance of event simulation by two orders of magnitude (making them real-time capable) while remaining competitive in the quality assessment.
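The core of frame-based event simulation is a per-pixel threshold test in log-intensity space, which vectorises well and is where much of the claimed speedup lives. A minimal sketch follows, with an assumed contrast threshold C, no sub-frame interpolation or noise model, and at most one event per pixel per frame pair.

```python
import numpy as np

def frames_to_events(curr, ref, C=0.15, eps=1e-6):
    """Emit events where log intensity moved at least C from the reference.

    curr: current grayscale frame. ref: per-pixel log-intensity level at
    the last emitted event (initialise with np.log(first_frame + eps)).
    Returns (x, y, polarity) triples and the updated reference.
    Constants are illustrative.
    """
    logc = np.log(curr.astype(np.float64) + eps)
    diff = logc - ref
    pol = np.where(diff >= C, 1, np.where(diff <= -C, -1, 0))
    fired = pol != 0
    ref = np.where(fired, logc, ref)        # reset reference at firing pixels
    ys, xs = np.nonzero(fired)
    return list(zip(xs, ys, pol[ys, xs])), ref
```

Real-time simulators refine this basic loop with sub-frame brightness interpolation and sensor noise models, but the thresholding principle is the same.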

Object Tracking with an Event Camera

This paper analyzes the synth-to-real domain shift in event data, i.e., the gap arising between simulated events obtained from synthetic renderings and those captured with a real camera on real images.

A Monocular Event-Camera Motion Capture System

This work builds an event-based structured light (SL) system consisting of a laser point projector and an event camera, and devises a spatial-temporal coding strategy that realizes depth encoding in dual domains through a single shot.

Vision événementielle Omnidirectionnelle : Théorie et Applications

In recent years, the use of event cameras has surged in computer vision and robotics, and these sensors are behind a growing number of research projects on, for example, autonomous vehicles.

Object Detection Method with Spiking Neural Network Based on DT-LIF Neuron and SSD

This paper proposes an object detection method with an SNN based on a Dynamic Threshold Leaky Integrate-and-Fire (DT-LIF) neuron and the Single Shot multibox Detector (SSD). First, a DT-LIF neuron is designed that can dynamically adjust its threshold according to the cumulative membrane potential, driving the spike activity of the deep network and improving inference speed.
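A minimal sketch of such a neuron is given below, with the caveat that the exact adaptation rule and constants are the paper's design; the ones here are stand-ins chosen so that strongly driven neurons fire earlier.

```python
import numpy as np

def dt_lif_step(v, u, x, tau=0.9, theta0=1.0, alpha=0.05):
    """One step of a dynamic-threshold LIF layer (vectorised sketch).

    v: membrane potentials, u: cumulative potentials, x: input currents.
    The firing threshold is lowered as potential accumulates, so busy
    neurons spike sooner; rule and constants are illustrative, not the
    paper's.
    """
    v = tau * v + x                          # leaky integration of input
    u = u + np.maximum(v, 0.0)               # cumulative membrane potential
    theta = theta0 / (1.0 + alpha * u)       # threshold drops with drive
    spike = (v >= theta).astype(v.dtype)     # fire where potential crosses it
    v = v * (1.0 - spike)                    # hard reset where spiking
    return v, u, spike
```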

Asynchronous Optimisation for Event-based Visual Odometry

This paper focuses on event-based visual odometry (VO). While existing event-driven VO pipelines have adopted continuous-time representations to asynchronously process event data, they either assume a known map, restrict the camera to planar trajectories, or integrate other sensors into the system. Towards map-free event-only monocular VO in SE(3), we propose an asynchronous structure-from-motion optimisation back-end.
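The back-end's job, whatever its scheduling, is to shrink residuals of the standard structure-from-motion form: a landmark, mapped through an SE(3) pose and the camera model, should land on its measurement. A minimal residual is sketched below (pinhole model assumed; the asynchronous, event-driven update schedule that is the paper's contribution is not reproduced here).

```python
import numpy as np

def reprojection_residual(rvec, tvec, X, uv, fx, fy, cx, cy):
    """Pinhole reprojection residual for one landmark under an SE(3) pose.

    rvec/tvec: axis-angle rotation and translation, X: 3D landmark,
    uv: 2D measurement. A back-end minimises sums of such residuals
    over poses and landmarks.
    """
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:                                   # Rodrigues' rotation formula
        k = rvec / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    Xc = R @ X + tvec                       # landmark in the camera frame
    u = fx * Xc[0] / Xc[2] + cx             # pinhole projection
    v = fy * Xc[1] / Xc[2] + cy
    return np.array([u - uv[0], v - uv[1]])
```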

Target-free Extrinsic Calibration of Event-LiDAR Dyad using Edge Correspondences

This paper proposes a novel method to calibrate the extrinsic parameters between a dyad of an event camera and a LiDAR without the need for a calibration board or other equipment. Our approach takes advantage of the fact that when an event camera is in motion, changes in reflectivity and geometric edges in the environment trigger numerous events, which can also be captured by LiDAR.
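One generic way to exploit that shared edge structure, offered here as an illustration rather than the paper's algorithm: precompute a distance transform of an event edge map, then score candidate extrinsics by how close projected LiDAR edge points land to event edges.

```python
import numpy as np

def edge_alignment_cost(R, t, lidar_edges, event_edge_dist, K):
    """Cost for candidate extrinsics (R, t): project LiDAR edge points
    into the event camera and read a precomputed distance transform of
    the event edge map (smaller = better aligned). A generic target-free
    objective in the spirit of the paper; the correspondence and
    optimisation details are the paper's own.
    """
    Xc = (R @ lidar_edges.T).T + t          # LiDAR edge points -> camera frame
    Xc = Xc[Xc[:, 2] > 0.1]                 # keep points in front of the camera
    uv = (K @ Xc.T).T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective division
    H, W = event_edge_dist.shape
    u = np.clip(uv[:, 0].astype(int), 0, W - 1)
    v = np.clip(uv[:, 1].astype(int), 0, H - 1)
    return event_edge_dist[v, u].mean()     # mean distance to nearest event edge
```

An optimiser over (R, t), whether a coarse grid search or a derivative-free method, then drives this cost down.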

A New Stereo Fisheye Event Camera for Fast Drone Detection and Tracking

This paper presents a new compact vision sensor consisting of two fisheye event cameras mounted back-to-back, which offers a full 360-degree view of the surrounding environment. We describe the optical design, projection model, and practical calibration from the incoming stream of events of the novel stereo camera, called SFERA.
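For intuition about such a lens, the equidistant fisheye model is the common starting point: image radius grows linearly with the angle from the optical axis, so a roughly 180-degree hemisphere fits on one sensor and the back-to-back pair covers the full sphere. Whether SFERA uses this exact model is not stated in the abstract, so take the sketch as illustrative.

```python
import numpy as np

def fisheye_project(X, f, cx, cy):
    """Equidistant fisheye projection: r = f * theta.

    X: 3D point in the camera frame, f: focal length, (cx, cy):
    principal point. Points with theta > pi/2 lie behind this lens and
    would fall to the second, back-to-back sensor.
    """
    x, y, z = X
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta                           # equidistant mapping
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```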

eTraM: Event-based Traffic Monitoring Dataset

Event cameras offer high temporal resolution and efficiency but remain underutilized in static traffic monitoring. We present eTraM, a first-of-its-kind event-based dataset with 10 hours of traffic data, 2M annotations, and eight participant classes. Evaluated with RVT, RED, and YOLOv8, eTraM highlights the potential of event cameras for real-world applications.

SEVD: Synthetic Event-based Vision Dataset for Ego and Fixed Traffic Perception

This paper evaluates the dataset using state-of-the-art event-based (RED, RVT) and frame-based (YOLOv8) methods for traffic participant detection tasks and provides baseline benchmarks for assessment. Additionally, the authors conduct experiments to assess the synthetic event-based dataset's generalization capabilities.

Falcon ODIN: an event-based camera payload

This paper describes the mission design and objectives of Falcon ODIN, along with ground-based testing of all four cameras. Falcon ODIN contains two event-based cameras (EBCs) and two traditional framing cameras, along with mirrors mounted on azimuth-elevation rotation stages that allow the field of regard of the EBCs to be steered.
