On-orbit optical detection of lethal non-trackable debris

Resident space objects in the 0.1 mm–3 cm size range are not currently trackable, yet carry enough kinetic energy to be lethal to spacecraft. Assessing this small orbital debris, which potentially poses a risk to most space missions, requires combining a large sensor area with long temporal coverage.
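As a rough illustration of why both factors matter: the expected number of detections scales as flux × area × time, so a short Python sketch with loudly placeholder numbers (not measured values from this work) makes the trade-off concrete.

flux = 1e-5          # debris crossings per m^2 per year (illustrative placeholder)
area_m2 = 1.0        # effective sensor cross-section in m^2
years = 5.0          # observation time
expected_detections = flux * area_m2 * years
print(expected_detections)  # 5e-05: either the sensor area or the time coverage must grow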

G2N2: Lightweight event stream classification with GRU graph neural networks

We benchmark our model against other event-graph and convolutional neural network based approaches on the challenging DVS-Lip dataset (spoken-word classification). We find not only that our method outperforms state-of-the-art approaches of similar model size, but also that it reduces the number of computational operations per second by 81% relative to the convolutional models.
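For context on the architecture family, the following is a minimal PyTorch sketch of one GRU-based graph message-passing step, assuming event-graph nodes carry (x, y, t, polarity) features; it illustrates the general GRU graph neural network technique, not the authors' actual G2N2 implementation.

import torch
import torch.nn as nn

class GRUGraphLayer(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.msg = nn.Linear(feat_dim, hidden_dim)     # transform neighbour features into messages
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)  # fuse aggregated messages into node state

    def forward(self, x, h, edge_index):
        # x: (N, feat_dim) node features; h: (N, hidden_dim) node states
        # edge_index: (2, E) source/target indices of directed edges
        src, dst = edge_index
        messages = self.msg(x[src])          # (E, hidden_dim)
        agg = torch.zeros_like(h)
        agg.index_add_(0, dst, messages)     # sum incoming messages per target node
        return self.gru(agg, h)              # GRU update of each node's state

# Toy usage: 5 event nodes with (x, y, t, polarity) features, 4 directed edges
x = torch.randn(5, 4)
h = torch.zeros(5, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
h = GRUGraphLayer(4, 16)(x, h, edge_index)
print(h.shape)  # torch.Size([5, 16])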

Live Demonstration: Integrating Event Based Hand Tracking Into TouchFree Interactions

To explore the potential of event cameras, Ultraleap have developed a prototype stereo camera using two Prophesee IMX636ES sensors. To go from event data to hand positions, the events are aggregated into event frames. These frames are then consumed by a hand-tracking model, which outputs 28 joint positions for each hand with respect to the camera.
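A minimal sketch of the aggregation step, assuming events arrive as (x, y, t, polarity) tuples at the IMX636's 1280×720 resolution; the polarity-signed accumulation is one common illustrative choice, not necessarily Ultraleap's actual pipeline.

import numpy as np

def events_to_frame(events, width=1280, height=720):
    # events: (N, 4) array of (x, y, t, polarity), with polarity in {0, 1}
    frame = np.zeros((height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    sign = np.where(events[:, 3] > 0, 1.0, -1.0)  # ON events add, OFF events subtract
    np.add.at(frame, (y, x), sign)                # unbuffered add handles repeated pixels
    return frame

# Toy usage: two ON events and one OFF event
ev = np.array([[10, 20, 0.001, 1], [10, 20, 0.002, 1], [11, 20, 0.003, 0]])
print(events_to_frame(ev)[20, 10:12])  # [ 2. -1.]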

X-Maps: Direct Depth Lookup for Event-Based Structured Light Systems

We present a new approach to direct depth estimation for Spatial Augmented Reality (SAR) applications using event cameras. These dynamic vision sensors are a natural fit for pairing with laser projectors in a structured light approach to depth estimation. Our key contribution is the conversion of the projector time map into a rectified X-map that captures x-axis correspondences for incoming events, enabling direct disparity lookup without any additional search.
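The sketch below illustrates the direct-lookup idea under simplifying assumptions: a rectified camera-projector pair and a projector that scans each row linearly in time, so the X-map can be written analytically. In the paper the X-map would come from calibration; all names here are illustrative.

import numpy as np

def build_x_map(height, width, proj_width):
    # Analytic stand-in for a calibrated, rectified X-map: at normalized scan
    # time u in [0, 1), the laser illuminates projector column u * proj_width.
    u = np.linspace(0.0, 1.0, width, endpoint=False)
    return np.tile(u * proj_width, (height, 1))   # (height, width) projector columns

def event_depth(x_cam, y, t_norm, x_map, focal_px, baseline_m):
    # Look up the projector column from the event's row and timestamp, then
    # triangulate: depth = f * b / disparity. No search over candidates needed.
    col = int(t_norm * (x_map.shape[1] - 1))
    x_proj = x_map[y, col]
    disparity = x_cam - x_proj
    return focal_px * baseline_m / disparity

x_map = build_x_map(720, 1280, 1280)
print(event_depth(x_cam=640, y=100, t_norm=0.25, x_map=x_map,
                  focal_px=1000.0, baseline_m=0.1))  # ~0.31 m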

Monocular Event-Based Vision for Obstacle Avoidance with Quadrotor

We present the first events-only static-obstacle avoidance method for a quadrotor equipped with just an onboard, monocular event camera. By leveraging depth prediction as an intermediate step in our learning framework, we pre-train a reactive events-to-control obstacle avoidance policy in simulation and then fine-tune the perception component with limited real-world events-depth data, achieving obstacle dodging in both indoor and outdoor settings.
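A minimal PyTorch sketch of the two-stage idea, with all shapes, layer sizes, and names assumed purely for illustration: a perception network predicts depth from an event frame, a control head maps depth to a velocity command, and during real-world fine-tuning only the perception component is updated.

import torch
import torch.nn as nn

class EventsToControl(nn.Module):
    def __init__(self):
        super().__init__()
        self.perception = nn.Sequential(              # event frame -> depth map
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
        self.control = nn.Sequential(                 # depth map -> 3-D velocity command
            nn.Flatten(), nn.Linear(64 * 64, 64), nn.ReLU(), nn.Linear(64, 3),
        )

    def forward(self, event_frame):
        depth = self.perception(event_frame)          # intermediate depth prediction
        return self.control(depth), depth

model = EventsToControl()

# Fine-tuning step: freeze the simulation-trained control head and adapt the
# perception network on a real events-depth pair (synthetic stand-ins here).
for p in model.control.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(model.perception.parameters(), lr=1e-4)
events = torch.randn(1, 2, 64, 64)                    # ON/OFF event-count channels
real_depth = torch.rand(1, 1, 64, 64)
_, pred_depth = model(events)
loss = nn.functional.l1_loss(pred_depth, real_depth)
loss.backward()
opt.step()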