Live Demonstration: Integrating Event Based Hand Tracking Into TouchFree Interactions

To explore the potential of event cameras, Ultraleap have developed a prototype stereo camera using two Prophesee IMX636ES sensors. To go from event data to hand positions, the events are first aggregated into event frames, which are then consumed by a hand tracking model that outputs 28 joint positions for each hand with respect to the camera.
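The aggregation step can be illustrated with a minimal sketch. The sensor size, polarity handling, and two-channel layout below are illustrative assumptions, not details of Ultraleap's implementation:

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate a batch of events into a two-channel event frame.

    events: (N, 4) array of (x, y, t, polarity), polarity in {-1, +1}.
    Returns a (2, height, width) frame, one channel per polarity.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    pol = (events[:, 3] > 0).astype(int)  # 0 = negative, 1 = positive
    np.add.at(frame, (pol, y, x), 1.0)    # handles repeated pixels correctly
    return frame

# Example: three events on a hypothetical 4x4 sensor
ev = np.array([[0, 0, 0.001, +1],
               [1, 2, 0.002, -1],
               [0, 0, 0.003, +1]])
frame = events_to_frame(ev, 4, 4)  # frame[1, 0, 0] == 2.0
```

A dense frame like this can then be fed to a conventional convolutional hand tracking model, trading some of the event stream's temporal resolution for compatibility with standard architectures.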

X-Maps: Direct Depth Lookup for Event-Based Structured Light Systems

We present a new approach to direct depth estimation for Spatial Augmented Reality (SAR) applications using event cameras. These dynamic vision sensors are a natural fit for pairing with laser projectors for depth estimation in a structured light approach. Our key contribution is the conversion of the projector time map into a rectified X-map, which captures x-axis correspondences for incoming events and enables direct disparity lookup without any additional search.
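The direct-lookup idea rests on the fact that a raster-scanning laser projector lights each column at a known time, so an event's timestamp identifies the projector column that caused it. The sketch below uses an idealised linear time-to-column model with made-up scan parameters; the actual X-Maps method precomputes a rectified 2D lookup map indexed by time and row, which this simplification omits:

```python
# Hypothetical scan parameters (illustrative, not from the paper)
PROJ_WIDTH = 640          # projector columns
FRAME_PERIOD = 1.0 / 60   # one full horizontal scan per 1/60 s

def time_to_proj_x(t, frame_start):
    """Map an event timestamp to the projector column lit at that instant,
    assuming an idealised linear horizontal scan."""
    phase = ((t - frame_start) % FRAME_PERIOD) / FRAME_PERIOD
    return phase * PROJ_WIDTH

def disparity(event_x, event_t, frame_start):
    """Direct disparity lookup: timestamp -> projector x, no stereo search."""
    return time_to_proj_x(event_t, frame_start) - event_x

# Example: event at camera column 100, a quarter of the way through the scan
d = disparity(100, 0.25 * FRAME_PERIOD, 0.0)  # projector x = 160
```

With disparity in hand, depth follows from the usual triangulation relation for a calibrated, rectified projector–camera pair.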

Monocular Event-Based Vision for Obstacle Avoidance with Quadrotor

We present the first events-only static-obstacle avoidance method for a quadrotor with just an onboard, monocular event camera. By leveraging depth prediction as an intermediate step in our learning framework, we can pre-train a reactive events-to-control obstacle avoidance policy in simulation, and then fine-tune the perception component with limited real-world event-depth data to achieve dodging in indoor and outdoor settings.
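The key structural idea is the modular decomposition: a perception module maps event features to depth, a policy maps depth to a control command, and only perception is updated on real data while the simulation-trained policy stays frozen. The toy sketch below uses linear modules and gradient descent purely to illustrate that decomposition; the dimensions, data, and training scheme are assumptions, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for pre-trained modules (shapes are arbitrary assumptions)
W_perception = rng.normal(size=(8, 16))   # events -> depth, pre-trained in sim
W_policy = rng.normal(size=(1, 8))        # depth -> control, frozen after sim

def perception(event_feat):
    return W_perception @ event_feat      # predicted depth (8 values)

def policy(depth):
    return W_policy @ depth               # steering command (1 value)

# Fine-tune ONLY the perception module on limited real event->depth pairs,
# leaving the control policy untouched.
X_real = rng.normal(size=(16, 20))        # 20 real event-feature samples
D_real = rng.normal(size=(8, 20))         # matching ground-truth depth
init_loss = np.mean((W_perception @ X_real - D_real) ** 2)
lr = 0.01
for _ in range(200):
    err = W_perception @ X_real - D_real
    W_perception -= lr * (err @ X_real.T) / X_real.shape[1]
final_loss = np.mean((W_perception @ X_real - D_real) ** 2)

cmd = policy(perception(X_real[:, 0]))    # end-to-end: events -> control
```

Because the depth interface between the two modules is fixed, improving perception on real data transfers directly to the frozen policy, which is what makes the limited-data fine-tuning viable.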

Event-Based Motion Magnification

In this work, we propose a dual-camera system consisting of an event camera and a conventional RGB camera for video motion magnification, providing temporally-dense information from the event stream and spatially-dense data from the RGB images. This innovative combination enables a broad and cost-effective amplification of high-frequency motions.

Cell detection with convolutional spiking neural network for neuromorphic cytometry

Our previous work demonstrated the early development of neuromorphic imaging cytometry, evaluating its feasibility in resolving conventional frame-based imaging systems' limitations in data redundancy, fluorescence sensitivity, and throughput. Herein, we adopt a convolutional spiking neural network (SNN) combined with the YOLOv3 model (SNN-YOLO) to perform cell classification and detection on label-free samples under neuromorphic vision.