This paper proposes a novel, computationally efficient regularizer to mitigate event collapse in the Contrast Maximization (CMax) framework. The regularizer is theoretically grounded in geometric principles of motion-field deformation, measuring the area rate of change along point trajectories.
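A minimal sketch of the underlying geometric idea, assuming a dense per-pixel flow field on a regular grid (the function name, grid layout, and thresholding are illustrative, not the paper's implementation): the local area rate of change along point trajectories is approximated by the divergence of the flow, so strong area contraction can be penalized alongside the contrast objective.

```python
import numpy as np

def area_rate_penalty(flow):
    """Hedged sketch: penalize area contraction of a dense flow field.

    flow: (H, W, 2) array of per-pixel displacements (u, v).
    The divergence du/dx + dv/dy approximates the local area rate of
    change along point trajectories; strongly negative values indicate
    the contraction associated with event collapse.
    """
    du_dy, du_dx = np.gradient(flow[..., 0])  # u = horizontal component
    dv_dy, dv_dx = np.gradient(flow[..., 1])  # v = vertical component
    divergence = du_dx + dv_dy
    # Penalize only contraction (negative area rate of change).
    return np.mean(np.maximum(-divergence, 0.0))

# Illustrative usage: add the penalty to a contrast-maximization objective,
# e.g. total_loss = -contrast(warped_events) + lam * area_rate_penalty(flow)
```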
This paper compares two deep-learning approaches to processing event data for pedestrian detection: a frame-based representation processed by conventional convolutional neural networks, and asynchronous sparse convolutional neural networks operating directly on the event stream.
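A hedged sketch of how a frame-based representation of events can be built for a conventional CNN detector (the function name, channel layout, and polarity convention are assumptions, not the paper's exact encoding):

```python
import numpy as np

def events_to_frame(xs, ys, ps, height, width):
    """Hedged sketch: accumulate events into a 2-channel count image.

    xs, ys: event pixel coordinates; ps: polarities in {-1, +1}.
    Channel 0 counts positive events, channel 1 counts negative events.
    A frame like this can be fed to a standard CNN detector, whereas an
    asynchronous sparse CNN would instead consume the raw event stream.
    """
    xs = np.asarray(xs, dtype=np.int64)
    ys = np.asarray(ys, dtype=np.int64)
    ps = np.asarray(ps)
    frame = np.zeros((2, height, width), dtype=np.float32)
    np.add.at(frame[0], (ys[ps > 0], xs[ps > 0]), 1.0)
    np.add.at(frame[1], (ys[ps < 0], xs[ps < 0]), 1.0)
    return frame
```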
This paper introduces a novel minimal 5-point solver that jointly estimates line parameters and projections of the linear camera velocity; when multiple lines are considered, these projections can be fused into a single, averaged linear velocity.
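A hedged sketch of the fusion step only, assuming each minimal solve already yields a full 3-vector estimate of the linear velocity (the function name and input format are illustrative, not the paper's solver):

```python
import numpy as np

def fuse_linear_velocities(per_line_velocities):
    """Hedged sketch: fuse per-line linear-velocity estimates by averaging.

    per_line_velocities: (N, 3) array with one velocity estimate per line
    recovered from the minimal solver.
    """
    v = np.asarray(per_line_velocities, dtype=np.float64)
    return v.mean(axis=0)
```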
This work introduces the YCB-Ev dataset, which contains synchronized RGB-D frames and event data, enabling the evaluation of 6DoF object pose estimation algorithms on these modalities. The dataset provides ground-truth 6DoF object poses for the same 21 YCB objects used in the YCB-Video (YCB-V) dataset, allowing cross-dataset evaluation of algorithm performance.
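A hedged sketch of the kind of evaluation such ground-truth poses enable, using the Average Distance of Model points (ADD) metric common in the YCB-V literature (the function name and inputs are illustrative; YCB-Ev's actual evaluation protocol is not specified here):

```python
import numpy as np

def add_error(R_est, t_est, R_gt, t_gt, model_points):
    """Hedged sketch: ADD pose error for 6DoF evaluation.

    R_est, R_gt: (3, 3) rotation matrices; t_est, t_gt: (3,) translations.
    model_points: (N, 3) 3D points sampled from the object model.
    Returns the mean distance between corresponding transformed points.
    """
    p_est = model_points @ R_est.T + t_est
    p_gt = model_points @ R_gt.T + t_gt
    return np.linalg.norm(p_est - p_gt, axis=1).mean()
```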
This paper proposes MoveEnet, a Human Pose Estimation system that takes events from a camera as input and estimates the 2D pose of the human agent in the scene. The final system can be attached to any event camera, regardless of resolution.
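A minimal sketch of what camera-agnostic operation might look like in practice, assuming event coordinates are rescaled from the sensor resolution to a fixed network input size (the function name and interface are assumptions, not MoveEnet's actual pipeline):

```python
import numpy as np

def rescale_events(xs, ys, sensor_hw, target_hw):
    """Hedged sketch: map event coordinates from any sensor resolution to a
    fixed network input resolution, so the same 2D pose network can be
    attached to cameras of different sizes.

    xs, ys: event pixel coordinates; sensor_hw, target_hw: (H, W) tuples.
    Predicted keypoints can be mapped back with the inverse scale factors.
    """
    sh, sw = sensor_hw
    th, tw = target_hw
    xs_out = np.clip(np.floor(np.asarray(xs) * tw / sw), 0, tw - 1).astype(np.int64)
    ys_out = np.clip(np.floor(np.asarray(ys) * th / sh), 0, th - 1).astype(np.int64)
    return xs_out, ys_out
```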