This work introduces the YCB-Ev dataset, which contains synchronized RGB-D frames and event data, enabling the evaluation of 6DoF object pose estimation algorithms across these modalities. The dataset provides ground-truth 6DoF poses for the same 21 YCB objects used in the YCB-Video (YCB-V) dataset, allowing cross-dataset evaluation of algorithm performance.
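For context, 6DoF pose estimates on YCB-V-style ground truth are commonly scored with the Average Distance of model points (ADD) metric. The sketch below illustrates that metric and is not the dataset's official tooling; the function name is illustrative.

```python
import numpy as np

def add_metric(R_est, t_est, R_gt, t_gt, model_points):
    """Average Distance of model points (ADD) between an estimated
    and a ground-truth 6DoF pose, in the model's units.

    R_*: (3, 3) rotation matrices; t_*: (3,) translations;
    model_points: (N, 3) points sampled on the object model.
    """
    pts_est = model_points @ R_est.T + t_est
    pts_gt = model_points @ R_gt.T + t_gt
    return np.linalg.norm(pts_est - pts_gt, axis=1).mean()

# A pose is typically accepted when ADD is below 10% of the object diameter.
```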
This paper proposes MoveEnet, a human pose estimation system that takes events from a camera as input and estimates the 2D pose of the human agent in the scene. The final system can be attached to any event camera, regardless of its resolution.
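One common way to make an event-based network resolution-agnostic is to accumulate each event batch into a fixed-size count image before inference. The sketch below illustrates that idea only; it is not MoveEnet's actual input pipeline, and all names are illustrative.

```python
import numpy as np

def events_to_count_image(xs, ys, ps, sensor_hw, out_hw=(256, 256)):
    """Accumulate a batch of events (x, y, polarity) into a fixed-size
    two-channel count image, independent of sensor resolution."""
    H, W = sensor_hw
    img = np.zeros((2, H, W), dtype=np.float32)
    # Channel 0 counts positive events, channel 1 negative events.
    np.add.at(img[0], (ys[ps > 0], xs[ps > 0]), 1.0)
    np.add.at(img[1], (ys[ps <= 0], xs[ps <= 0]), 1.0)
    # Nearest-neighbour resize to the network input size, so the same
    # model can consume events from cameras of any resolution.
    oh, ow = out_hw
    row_idx = np.arange(oh) * H // oh
    col_idx = np.arange(ow) * W // ow
    return img[:, row_idx][:, :, col_idx]
```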
To explore the potential of event cameras in the above-mentioned challenging cases, this paper proposes EvTTC, the first multi-sensor dataset focusing on time-to-collision (TTC) tasks in high-relative-speed scenarios. EvTTC consists of data collected with standard cameras and event cameras, covering a variety of potential collision scenarios in daily driving and involving multiple collision objects.
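For background, TTC can be estimated from image-plane measurements alone: for an object approaching at constant speed, TTC ≈ s / (ds/dt), where s is the object's apparent size. A minimal sketch of that relation, assuming two size observations dt seconds apart (this is not the EvTTC evaluation code):

```python
def ttc_from_scale(size_t0, size_t1, dt):
    """Estimate time-to-collision from the growth of an object's
    apparent size between two observations dt seconds apart.
    TTC ~ s / (ds/dt) for an object approaching at constant speed."""
    ds_dt = (size_t1 - size_t0) / dt
    if ds_dt <= 0:
        return float('inf')  # object is not approaching
    return size_t1 / ds_dt
```

Note the finite-difference derivative makes this an approximation; the instantaneous form recovers distance divided by closing speed exactly.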
This paper proposes a novel spatio-temporal Vision Transformer model that uses Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA) to enhance the accuracy of Action Unit classification from event streams.
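LSA, as introduced for small-data Vision Transformers, replaces the fixed softmax scale with a learnable temperature and masks out each token's similarity to itself, sharpening attention over the remaining tokens. A minimal single-head NumPy sketch of that attention step (SPT, which concatenates spatially shifted copies of the input before patch embedding, is omitted; this is not the paper's implementation):

```python
import numpy as np

def locality_self_attention(Q, K, V, temperature):
    """Locality Self-Attention (LSA): scaled dot-product attention
    with a learnable temperature and the diagonal (each token's
    self-similarity) masked out.

    Q, K, V: (N, d) arrays for N tokens; temperature: scalar > 0.
    """
    scores = Q @ K.T / temperature
    np.fill_diagonal(scores, -np.inf)  # diagonal masking: ignore self
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

Because the diagonal is masked, each output token is a mixture of the *other* tokens' values only.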
Eoptic, Inc., a leader in advanced imaging and optics systems integration, and Prophesee, the global pioneer in neuromorphic vision systems, today announced a strategic collaboration to integrate high-speed event detection into Eoptic’s innovative and flexible prismatic sensor module. By combining Eoptic’s Cambrian Edge imaging platform with Prophesee’s cutting-edge, event-based Metavision® sensors, the partnership aims to tackle real-time imaging challenges and open new frontiers in dynamic visual processing.