This research introduces a novel approach to Stereo Hybrid Event-Frame Disparity Estimation that leverages the complementary strengths of event and frame-based cameras. By combining the two modalities, significant improvements in depth-estimation accuracy were achieved, enabling more robust and reliable 3D perception systems.
In this paper, the local histograms are normalised to produce probability distributions. Once these distributions are obtained, the optical flow is estimated using methods from probability theory, in particular methods based on the Fisher–Rao metric.
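To make the core ingredient concrete, the sketch below computes the Fisher–Rao geodesic distance between two normalised histograms via the standard square-root embedding onto the unit sphere. This is a minimal illustration of the metric itself, not the paper's full flow-estimation pipeline; the histogram values are made up for the example.

```python
import numpy as np

def fisher_rao_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Fisher-Rao geodesic distance between two discrete probability
    distributions: d(p, q) = 2 * arccos(sum_i sqrt(p_i * q_i))."""
    bc = np.sum(np.sqrt(p * q))     # Bhattacharyya coefficient
    bc = np.clip(bc, 0.0, 1.0)      # guard against floating-point rounding
    return 2.0 * np.arccos(bc)

# Example: normalise two local histograms (values are illustrative only)
# and compare them, mirroring the normalisation step described above.
h1 = np.array([3, 5, 2, 0, 1], dtype=float)
h2 = np.array([2, 6, 1, 1, 0], dtype=float)
p, q = h1 / h1.sum(), h2 / h2.sum()
print(fisher_rao_distance(p, q))
```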
This paper presents a real-time method to detect and track multiple mobile ground robots using event cameras. The method uses density-based spatial clustering of applications with noise (DBSCAN) to detect the robots and a single k-dimensional (k-d) tree to accurately keep track of them as they move in an indoor arena.
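A minimal sketch of this detect-then-associate pattern is shown below, assuming events are reduced to (x, y) pixel coordinates. The parameters `eps`, `min_samples`, and `max_dist` are illustrative assumptions, not values from the paper; the paper's actual tracking logic may differ.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import cKDTree

def detect_robots(events_xy: np.ndarray, eps: float = 5.0,
                  min_samples: int = 20) -> np.ndarray:
    """Cluster event coordinates with DBSCAN; each cluster centroid is
    treated as one detected robot. Noise events (label -1) are ignored."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(events_xy)
    return np.array([events_xy[labels == k].mean(axis=0)
                     for k in set(labels) if k != -1])

def associate(prev_centroids: np.ndarray, new_centroids: np.ndarray,
              max_dist: float = 15.0):
    """Match new detections to existing tracks using a k-d tree built on
    the previous centroids (nearest-neighbour data association)."""
    tree = cKDTree(prev_centroids)
    dists, idx = tree.query(new_centroids)
    return [(i, int(j)) for i, (d, j) in enumerate(zip(dists, idx))
            if d <= max_dist]   # (new detection, matched track) pairs
```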
To address the cost of dense processing, this paper introduces Sparse-E2VID, an architecture that processes event data in a sparse format. With Sparse-E2VID, inference time is reduced to 55 ms at 720 × 1280 resolution, which is 30% faster than FireNet. Additionally, Sparse-E2VID reduces the computational cost by 98% compared to FireNet+, while also improving image quality.
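The sketch below illustrates the general idea of a sparse event representation: only pixels that actually fired are stored and passed downstream. The (y, x, polarity) layout is an assumption for illustration; it is not Sparse-E2VID's actual input format, and the network itself is not shown.

```python
import torch

def events_to_sparse(events: torch.Tensor,
                     height: int = 720, width: int = 1280):
    """Pack an event stream into a sparse COO tensor so a sparse network
    only touches active pixels.

    `events` is an (N, 3) tensor of (y, x, polarity), polarity in {-1, +1};
    this layout is a made-up convention for the example.
    """
    indices = events[:, :2].long().t()   # (2, N) pixel coordinates
    values = events[:, 2].float()        # signed polarity as the feature
    sparse = torch.sparse_coo_tensor(indices, values, (height, width))
    return sparse.coalesce()             # sum duplicate events per pixel

# Example: three events, two at the same pixel, leaving 2 active sites.
ev = torch.tensor([[10.0, 20.0, 1.0], [10.0, 20.0, -1.0], [300.0, 640.0, 1.0]])
grid = events_to_sparse(ev)
print(grid.values().numel())   # number of active pixels after coalescing
```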