RESEARCH PAPERS

Stereo Event-based Particle Tracking Velocimetry for 3D Fluid Flow Reconstruction

First, we track particles in each of the two event sequences to estimate their 2D velocities in the image plane. A stereo-matching step is then performed to retrieve their 3D positions. These intermediate outputs are incorporated into an optimization framework that also includes physically plausible regularizers in order to recover the 3D velocity field.
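
To make the last step concrete, here is a rough sketch of fitting a gridded 3D velocity field to sparse per-particle velocities with a divergence-free penalty, one common example of a physically plausible regularizer for incompressible flow. The grid size, weights, nearest-cell data term, and plain gradient descent are illustrative assumptions, not the paper's solver.

```python
import numpy as np

def fit_velocity_field(pts, vels, shape=(16, 16, 16), lam=0.1, iters=200, lr=0.1):
    """pts: (N, 3) particle positions in [0, 1)^3; vels: (N, 3) 3D velocities."""
    U = np.zeros(shape + (3,))  # velocity field sampled on a regular grid
    cells = tuple((pts * np.array(shape)).astype(int).clip(0, np.array(shape) - 1).T)
    for _ in range(iters):
        grad = np.zeros_like(U)
        # Data term: pull cells containing particles toward their measured velocities.
        np.add.at(grad, cells, U[cells] - vels)
        # Regularizer: penalize the divergence du/dx + dv/dy + dw/dz (incompressibility).
        div = sum(np.gradient(U[..., i], axis=i) for i in range(3))
        for i in range(3):  # adjoint of central differences is (approximately) its negative
            grad[..., i] -= lam * np.gradient(div, axis=i)
        U -= lr * grad  # plain gradient descent on data term + regularizer
    return U
```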

Fast Image Reconstruction with an Event Camera

Previous works rely on hand-crafted spatial and temporal smoothing techniques to reconstruct images from events. We propose a novel neural network architecture for video reconstruction from events that is smaller (38k vs. 10M parameters) and faster (10 ms vs. 30 ms) than the state of the art, with minimal impact on performance.

TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset

We provide ground truth poses from a motion capture system at 120 Hz during the beginning and end of each sequence, which can be used for trajectory evaluation. TUM-VIE includes challenging sequences where state-of-the-art visual SLAM algorithms either fail or result in large drift.

Event-based Visual Odometry on Non-Holonomic Ground Vehicles

As demonstrated on both simulated and real data, our algorithm achieves accurate and robust estimates of the vehicle’s instantaneous rotational velocity, and thus results that are comparable to the delta rotations obtained by frame-based sensors under normal conditions. We furthermore significantly outperform the more traditional alternatives in challenging illumination scenarios.

Table tennis ball spin estimation with an event camera

Event cameras do not suffer as much from motion blur, thanks to their high temporal resolution. Moreover, the sparse nature of the event stream mitigates the communication-bandwidth limitations that many frame-based cameras face. To the best of our knowledge, we present the first method for table tennis spin estimation using an event camera. We use ordinal time surfaces to track the ball and then isolate the events generated by the logo on the ball.
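
Time surfaces are a standard event representation: each pixel stores the timestamp of its most recent event. An ordinal variant replaces the raw timestamps with their rank order, making the surface invariant to the absolute time scale and event rate. A minimal sketch of that idea (the paper's exact construction may differ):

```python
import numpy as np

def ordinal_time_surface(events, height, width):
    """events: (N, 3) rows of (x, y, t), sorted by timestamp t."""
    ts = np.full((height, width), -np.inf)
    x, y, t = events[:, 0].astype(int), events[:, 1].astype(int), events[:, 2]
    ts[y, x] = t                        # later events overwrite earlier ones
    active = np.isfinite(ts)            # pixels that received at least one event
    surface = np.zeros((height, width))
    ranks = ts[active].argsort().argsort()       # rank of each active timestamp
    surface[active] = (ranks + 1) / ranks.size   # normalized ranks in (0, 1]
    return surface
```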

TimeReplayer: Unlocking the Potential of Event Cameras for Video Interpolation

The pioneering work Time Lens introduced event cameras to video interpolation by designing optical devices to collect a large amount of paired training data of high-speed frames and events, which is too costly to scale. To fully unlock the potential of event cameras, this paper proposes a novel TimeReplayer algorithm to interpolate videos captured by commodity cameras with events.

Deep Learning for Event-based Vision: A Comprehensive Survey and Benchmarks

We conduct benchmark experiments for the existing methods in several representative research directions, namely image reconstruction, deblurring, and object recognition, to identify critical insights and open problems. Finally, we discuss the remaining challenges and offer new perspectives to inspire further research.

Real-Time Face & Eye Tracking and Blink Detection using Event Cameras

This paper proposes a novel method to simultaneously detect and track faces and eyes for driver monitoring. A unique, fully convolutional recurrent neural network architecture is presented. To train this network, a synthetic event-based dataset, called Neuromorphic HELEN, is simulated with accurate bounding-box annotations.
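
For orientation, the sketch below shows a generic building block of the kind fully convolutional recurrent architectures stack, a ConvGRU cell in PyTorch; this is an illustration of the genre, not the paper's actual network.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Convolutional GRU cell: recurrence that preserves spatial structure."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv_zr = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)
        self.conv_h = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)

    def forward(self, x, h):
        # Update (z) and reset (r) gates computed from input and previous state.
        z, r = torch.sigmoid(self.conv_zr(torch.cat([x, h], 1))).chunk(2, 1)
        # Candidate state uses the reset-gated previous state.
        h_cand = torch.tanh(self.conv_h(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_cand  # blend old state and candidate
```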

Tracking-Assisted Object Detection with Event Cameras

Lastly, we propose a spatio-temporal feature aggregation module to enrich the latent features and a consistency loss to increase the robustness of the overall pipeline. We conduct comprehensive experiments to verify that our method retains still objects while discarding truly occluded ones.

Detection and Tracking With Event Based Sensors

The MSMO algorithm uses per-event velocities to build an average of the scene and filter out dissimilar events. This work presents a study of the velocity values of the events and explains why an average-based velocity filter is ultimately insufficient for lightweight MSMO detection and tracking of objects with an EBS camera.

Multi-Bracket High Dynamic Range Imaging with Event Cameras

In this paper, we propose the first multi-bracket HDR pipeline combining a standard camera with an event camera. Our results show better overall robustness when using events, with improvements in PSNR of up to 5 dB on synthetic data and up to 0.7 dB on real-world data.

EvUnroll: Neuromorphic Event Based Rolling Shutter Image Correction

We further propose datasets captured by a high-speed camera and an RS-Event hybrid camera system for training and testing our network. Experimental results on both public and proposed datasets show a systematic performance improvement compared to state-of-the-art methods.

A Point-image fusion network for event-based frame interpolation

Temporal information in event streams plays a critical role in event-based video frame interpolation as it provides temporal context cues complementary to images. Most previous event-based methods first transform the unstructured event data to structured data formats through voxelisation, and then employ advanced CNNs to extract temporal information.
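
The voxelisation mentioned here usually bins events into a fixed number of temporal channels with bilinearly weighted contributions, producing a dense tensor a CNN can consume. A common variant looks roughly like this (a sketch, not necessarily this paper's exact formulation):

```python
import numpy as np

def event_voxel_grid(events, height, width, bins=5):
    """events: (N, 4) rows of (x, y, t, p) with polarity p in {-1, +1}."""
    vox = np.zeros((bins, height, width))
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    # Normalize timestamps to [0, bins - 1] and split each event's polarity
    # between its two nearest temporal bins (bilinear weighting in time).
    tn = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (bins - 1)
    lo = np.floor(tn).astype(int)
    hi = np.minimum(lo + 1, bins - 1)
    w_hi = tn - lo
    np.add.at(vox, (lo, y, x), p * (1 - w_hi))
    np.add.at(vox, (hi, y, x), p * w_hi)
    return vox
```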

eWand: A calibration framework for wide baseline event-based camera systems

To overcome calibration limitations, we propose eWand, a new method that uses blinking LEDs inside opaque spheres instead of a printed or displayed pattern. Our method provides a faster, easier-to-use extrinsic calibration approach that maintains high accuracy for both event- and frame-based cameras.

Concept Study for Dynamic Vision Sensor Based Insect Monitoring

In this concept study, the required processing steps are discussed and suitable processing methods are suggested. On the basis of a small dataset, a clustering- and filtering-based labeling approach is proposed as a promising option for preparing larger DVS insect monitoring datasets.

EventLFM: Event Camera integrated Fourier Light Field Microscopy for Ultrafast 3D imaging

We introduce EventLFM, a straightforward and cost-effective system that overcomes these challenges by integrating an event camera with Fourier light field microscopy (LFM), a state-of-the-art single-shot 3D wide-field imaging technique. We further develop a simple and robust event-driven LFM reconstruction algorithm that can reliably reconstruct 3D dynamics from the unique spatiotemporal measurements captured by EventLFM.

Memory-Efficient Fixed-Length Representation of Synchronous Event Frames for Very-Low-Power Chip Integration

Experimental evaluation on a public dataset demonstrates that the proposed fixed-length coding framework provides at least twice the compression ratio of the raw EF representation, and performance close to that of variable-length video coding standards and state-of-the-art variable-length image codecs, for the lossless compression of ternary EFs generated at frequencies below 1 kHz.
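
For intuition only (this is not the paper's codec), a fixed-length code for ternary symbols can pack five pixels from {-1, 0, +1} into a single byte, since 3^5 = 243 <= 256, i.e. 1.6 bits per pixel versus an assumed raw byte-per-pixel event frame:

```python
import numpy as np

def pack_ternary(frame):
    """frame: 2D int array with values in {-1, 0, +1}."""
    trits = (frame.ravel() + 1).astype(np.uint8)          # map to {0, 1, 2}
    trits = np.concatenate([trits, np.zeros((-trits.size) % 5, np.uint8)])  # pad to multiple of 5
    weights = 3 ** np.arange(5, dtype=np.uint16)          # base-3 place values
    return (trits.reshape(-1, 5).astype(np.uint16) @ weights).astype(np.uint8)

def unpack_ternary(codes, shape):
    weights = 3 ** np.arange(5, dtype=np.uint16)
    trits = codes.astype(np.uint16)[:, None] // weights % 3   # recover base-3 digits
    n = shape[0] * shape[1]
    return trits.ravel()[:n].astype(np.int8).reshape(shape) - 1
```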

Event-based Background-Oriented Schlieren

This paper presents a novel technique for perceiving air convection using events and frames, providing the first theoretical analysis that connects event data and schlieren. We formulate the problem as a variational optimization problem, combining the linearized event generation model with a physically motivated parameterization that estimates the temporal derivative of the air density.
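
The linearized event generation model referred to here is the standard one from the event-vision literature: an event of polarity p fires once the log-brightness change at a pixel reaches the contrast threshold C, and for small time intervals that change is approximated via brightness constancy,

```latex
\Delta L(\mathbf{x}, t) \doteq L(\mathbf{x}, t) - L(\mathbf{x}, t - \Delta t) = p\,C,
\qquad
\Delta L(\mathbf{x}, t) \approx -\nabla L(\mathbf{x}, t) \cdot \mathbf{v}(\mathbf{x})\,\Delta t,
```

where L is the log brightness, v the image-plane motion field, and Delta t the time since the last event at the same pixel; the paper couples a relation of this kind to its parameterization of the air density.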

On-orbit optical detection of lethal non-trackable debris

Resident space objects in the size range of 0.1 mm–3 cm are not currently trackable, yet they carry enough kinetic energy to be lethal to spacecraft. The assessment of small orbital debris, which potentially poses a risk to most space missions, requires the combination of a large sensor area and large time coverage.

G2N2: Lightweight event stream classification with GRU graph neural networks

We benchmark our model against other event-graph and convolutional neural network based approaches on the challenging DVS-Lip dataset (spoken word classification). We find that our method not only outperforms state-of-the-art approaches of similar model size, but also reduces the number of computational operations per second by 81% relative to the convolutional models.
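
In spirit, such a model alternates graph message passing with GRU-style node updates over an event graph. A minimal generic layer in PyTorch (an illustration, not the paper's architecture) could look like this:

```python
import torch
import torch.nn as nn

class GraphGRULayer(nn.Module):
    """One message-passing step followed by a GRU update of each node state."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)   # transform features sent along edges
        self.gru = nn.GRUCell(dim, dim)  # fuse aggregated messages into node state

    def forward(self, h, edge_index):
        """h: (N, dim) node features; edge_index: (2, E) long tensor of (src, dst)."""
        src, dst = edge_index
        agg = torch.zeros_like(h).index_add_(0, dst, self.msg(h[src]))
        return self.gru(agg, h)
```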

Live Demonstration: Integrating Event Based Hand Tracking Into TouchFree Interactions

To explore the potential of event cameras, Ultraleap have developed a prototype stereo camera using two Prophesee IMX636ES sensors. To go from event data to hand positions, the events are first aggregated into event frames. These are then consumed by a hand-tracking model that outputs 28 joint positions per hand, expressed relative to the camera.

X-Maps: Direct Depth Lookup for Event-Based Structured Light Systems

We present a new approach to direct depth estimation for Spatial Augmented Reality (SAR) applications using event cameras. These dynamic vision sensors are a great fit to be paired with laser projectors for depth estimation in a structured light approach. Our key contributions involve a conversion of the projector time map into a rectified X-map, capturing x-axis correspondences for incoming events and enabling direct disparity lookup without any additional search.
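
The direct-lookup idea can be sketched as follows (a hedged simplification; the paper's X-map is more elaborate): with a rectified camera-projector pair and a scanning laser projector, an event's timestamp within the projection cycle identifies the projector column, so disparity requires no search. The focal length, baseline, and linear time-to-column mapping below are illustrative assumptions.

```python
import numpy as np

def build_xmap(proj_cols, time_bins=1024):
    # Lookup table from time bin within the projection cycle to projector
    # column, assuming a simple linear left-to-right scan.
    return (np.arange(time_bins) / time_bins * (proj_cols - 1)).astype(np.int32)

def event_depth(x_cam, t_us, xmap, cycle_us, f_px=1000.0, baseline_m=0.1):
    """Depth for one event at camera column x_cam and timestamp t_us (microseconds)."""
    t_bin = int((t_us % cycle_us) / cycle_us * len(xmap))
    x_proj = xmap[min(t_bin, len(xmap) - 1)]
    disparity = x_cam - x_proj            # rectified, so rows already correspond
    return f_px * baseline_m / disparity if disparity > 0 else np.inf
```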

Monocular Event-Based Vision for Obstacle Avoidance with Quadrotor

We present the first events-only static-obstacle avoidance method for a quadrotor with just an onboard, monocular event camera. By leveraging depth prediction as an intermediate step in our learning framework, we can pre-train a reactive obstacle avoidance events-to-control policy in simulation, and then fine-tune the perception component with limited events-depth real-world data to achieve dodging in indoor and outdoor settings.

Event-Based Motion Magnification

In this work, we propose a dual-camera system consisting of an event camera and a conventional RGB camera for video motion magnification, providing temporally-dense information from the event stream and spatially-dense data from the RGB images. This innovative combination enables a broad and cost-effective amplification of high-frequency motions.

Cell detection with convolutional spiking neural network for neuromorphic cytometry

Our previous work demonstrated the early development of neuromorphic imaging cytometry, evaluating its feasibility in overcoming the data redundancy, fluorescence sensitivity, and throughput limitations of conventional frame-based imaging systems. Herein, we adopted a convolutional spiking neural network (SNN) combined with the YOLOv3 model (SNN-YOLO) to perform cell classification and detection on label-free samples under neuromorphic vision.

Event-Based RGB Sensing With Structured Light

We introduce a method to detect full RGB events using a monochrome event camera (EC) aided by a structured light projector. We combine the benefits of ECs and projection-based techniques to allow depth and color detection of static or moving objects with a commercial TI LightCrafter 4500 projector and a monocular monochrome EC, paving the way for frameless RGB-D sensing applications.

Event-Based Video Frame Interpolation With Cross-Modal Asymmetric Bidirectional Motion Fields

We propose a novel event-based VFI framework with cross-modal asymmetric bidirectional motion field estimation. Our EIF-BiOFNet utilizes each valuable characteristic of the events and images for direct estimation of inter-frame motion fields without any approximation methods. We develop an interactive attention-based frame synthesis network to efficiently leverage the complementary warping-based and synthesis-based features.

Neuromorphic Event-Based Facial Expression Recognition

Recently, event cameras have shown broad applicability across several computer vision fields, especially for tasks that require high temporal resolution. In this work, we investigate the use of such data for emotion recognition by presenting NEFER, a dataset for Neuromorphic Event-based Facial Expression Recognition.

Faces in Event Streams (FES): An Annotated Face Dataset for Event Cameras

The Faces in Event Streams dataset contains 689 minutes of recorded event streams and 1.6 million annotated faces with bounding boxes and five-point facial landmarks. This paper presents the dataset and corresponding models for detecting faces and facial landmarks directly from event-stream data.
