This paper compares two deep-learning approaches to processing event data for pedestrian detection: one converts the events into video-frame representations processed by conventional convolutional neural networks, while the other operates directly on the event stream with asynchronous sparse convolutional neural networks.
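As a minimal sketch of the first, frame-based representation (assuming events arrive as (x, y, t, p) tuples; the two-channel polarity encoding shown here is one common choice, not necessarily the paper's exact pipeline):

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate (x, y, t, p) events into a 2-channel frame.

    Channel 0 counts positive-polarity events per pixel, channel 1
    negative ones. This is one common event-to-frame representation;
    the paper's exact encoding may differ.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, _t, p in events:
        channel = 0 if p > 0 else 1
        frame[channel, y, x] += 1.0
    return frame

# Hypothetical usage: events from a 640x480 sensor over one time window.
events = [(10, 20, 0.001, 1), (10, 20, 0.002, -1), (300, 200, 0.003, 1)]
frame = events_to_frame(events, height=480, width=640)
print(frame.shape)  # (2, 480, 640) -> ready for a standard CNN
```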
This paper introduces a novel minimal 5-point solver that jointly estimates line parameters and linear camera velocity projections, which can be fused into a single averaged linear velocity when multiple lines are considered.
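As an illustrative sketch of the fusion step (assuming each line's solver output has been lifted to a full linear-velocity estimate; the function name and equal-weight averaging are assumptions, not the paper's exact formulation):

```python
import numpy as np

def fuse_velocities(per_line_velocities):
    """Average per-line linear-velocity estimates into a single 3-vector.

    per_line_velocities: iterable of 3D velocity estimates, one per line.
    Equal weighting is assumed here; a real pipeline might weight each
    estimate by the confidence of its minimal-solver solution.
    """
    v = np.asarray(list(per_line_velocities), dtype=np.float64)
    return v.mean(axis=0)

# Hypothetical estimates from three lines observed by the event camera.
estimates = [np.array([0.10, 0.02, 0.98]),
             np.array([0.12, -0.01, 1.01]),
             np.array([0.09, 0.03, 0.97])]
print(fuse_velocities(estimates))  # fused linear velocity
```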
This work introduces the YCB-Ev dataset, which contains synchronized RGB-D frames and event data, enabling the evaluation of 6DoF object pose estimation algorithms across these modalities. The dataset provides ground-truth 6DoF poses for the same 21 YCB objects used in the YCB-Video (YCB-V) dataset, allowing cross-dataset evaluation of algorithm performance.
This paper proposes MoveEnet, a human pose estimation system that takes events from a camera as input and estimates the 2D pose of the human agent in the scene. The final system can be attached to any event camera, regardless of resolution.
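A minimal sketch of how such resolution independence could work, by rescaling event coordinates from the sensor resolution to a fixed network input size (the function name and target size are hypothetical; the paper's actual preprocessing may differ):

```python
def rescale_events(events, sensor_wh, target_wh=(256, 256)):
    """Map event pixel coordinates from the sensor resolution to a fixed
    network input resolution, so one model serves any event camera.

    events: iterable of (x, y, t, p) tuples in sensor coordinates.
    sensor_wh / target_wh: (width, height) pairs.
    """
    sw, sh = sensor_wh
    tw, th = target_wh
    sx, sy = tw / sw, th / sh
    return [(min(int(x * sx), tw - 1), min(int(y * sy), th - 1), t, p)
            for x, y, t, p in events]

# Hypothetical usage: a 640x480 sensor mapped to a 256x256 model input.
scaled = rescale_events([(639, 479, 0.01, 1)], sensor_wh=(640, 480))
print(scaled)  # [(255, 255, 0.01, 1)]
```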
To explore the potential of event cameras in the above-mentioned challenging cases, this paper proposes EvTTC, the first multi-sensor dataset focusing on time-to-collision (TTC) estimation under high-relative-speed scenarios. EvTTC comprises data collected with both standard cameras and event cameras, covering a variety of potential collision scenarios in daily driving and involving multiple collision objects.
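For context, time-to-collision is commonly defined as the remaining distance to an object divided by the closing speed; a minimal sketch (the function name is illustrative and not part of the dataset's toolkit):

```python
def time_to_collision(distance_m, closing_speed_mps):
    """Estimate time-to-collision (TTC) in seconds.

    distance_m: current distance to the object in meters.
    closing_speed_mps: relative approach speed in m/s (positive when closing).
    Returns float('inf') when the objects are not approaching.
    """
    if closing_speed_mps <= 0.0:
        return float("inf")
    return distance_m / closing_speed_mps

# Example: an object 20 m away, closing at 8 m/s -> 2.5 s to impact.
print(time_to_collision(20.0, 8.0))  # 2.5
```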