In recent years, the use of event cameras has surged in computer vision and robotics, and these sensors are driving a growing number of research projects on, for example, autonomous vehicles.
This paper proposes an object detection method with SNNs based on the Dynamic Threshold Leaky Integrate-and-Fire (DT-LIF) neuron and the Single Shot multibox Detector (SSD). First, a DT-LIF neuron is designed that dynamically adjusts its threshold according to the cumulative membrane potential, driving the spike activity of the deep network and improving inference speed.
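The abstract only states that the DT-LIF threshold is adjusted from the cumulative membrane potential; the exact adaptation rule is not given. A minimal sketch of one plausible reading, with an assumed rule that raises the threshold in proportion to the accumulated potential (the constants `leak`, `alpha`, and `v_th_base` are illustrative, not from the paper):

```python
import numpy as np

def dt_lif_step(v, i_in, leak=0.9, alpha=0.1, v_th_base=1.0):
    """One step of a Dynamic Threshold LIF neuron (illustrative sketch).

    The adaptation rule is an assumption: v_th grows with the
    accumulated membrane potential, so strongly driven neurons
    need a larger potential to fire again.
    """
    v = leak * v + i_in                            # leaky integration of input current
    v_th = v_th_base + alpha * np.maximum(v, 0.0)  # dynamic threshold (assumed rule)
    spike = (v >= v_th).astype(float)              # fire where potential crosses threshold
    v = v * (1.0 - spike)                          # reset membrane potential after a spike
    return v, v_th, spike
```

A single strong input then fires immediately and resets, while the raised threshold damps repeated firing, which matches the stated goal of shaping spike activity in deep layers.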
This paper focuses on event-based visual odometry (VO). While existing event-driven VO pipelines have adopted continuous-time representations to asynchronously process event data, they either assume a known map, restrict the camera to planar trajectories, or integrate other sensors into the system. Towards map-free event-only monocular VO in SE(3), we propose an asynchronous structure-from-motion optimisation back-end.
This paper proposes a novel method to calibrate the extrinsic parameters between a dyad of an event camera and a LiDAR without the need for a calibration board or other equipment. Our approach takes advantage of the fact that when an event camera is in motion, changes in reflectivity and geometric edges in the environment trigger numerous events, which can also be captured by LiDAR.
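The core observation above is that scene edges trigger events and are also visible in LiDAR data, so the extrinsics can be found by aligning the two. A hedged sketch of that idea, not the paper's actual objective: score a candidate extrinsic `(R, t)` by projecting LiDAR edge points into the camera and summing the event counts at the projected pixels (all names and the pinhole model with intrinsics `K` are assumptions for illustration):

```python
import numpy as np

def edge_alignment_score(edge_pts_lidar, event_img, R, t, K):
    """Score an extrinsic guess (R, t): higher means projected LiDAR
    edge points land on event-denser pixels (illustrative sketch).

    edge_pts_lidar: (N, 3) edge points in the LiDAR frame.
    event_img: 2-D histogram of event counts over the image plane.
    K: 3x3 camera intrinsic matrix (assumed known).
    """
    p_cam = edge_pts_lidar @ R.T + t            # LiDAR frame -> camera frame
    p_cam = p_cam[p_cam[:, 2] > 1e-6]           # keep points in front of the camera
    uv = p_cam @ K.T                            # pinhole projection (homogeneous)
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = event_img.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return float(event_img[v[ok], u[ok]].sum())
```

Maximizing such a score over `(R, t)` would then recover the extrinsics without a calibration board, in the spirit of the method described.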
In this article, we propose a prototype event-based stereo pipeline for 3D reconstruction and tracking of a moving camera. The 3D reconstruction module relies on DSI ("disparity space image") fusion, while the tracking module uses time surfaces as anisotropic distance fields to estimate the camera pose.
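A time surface of the kind the tracking module relies on stores, per pixel, the timestamp of the most recent event, exponentially decayed relative to a reference time. A minimal sketch (the decay constant `tau` and the `(x, y, t)` event layout are assumptions; the paper's anisotropic distance-field construction on top of this is not reproduced here):

```python
import numpy as np

def time_surface(events, shape, t_ref, tau=0.03):
    """Exponentially decayed time surface (illustrative sketch).

    events: array of (x, y, t) rows; each pixel keeps the timestamp of
    its most recent event, decayed relative to t_ref, yielding values
    in (0, 1] near recent activity and 0 where no event occurred.
    """
    last_t = np.full(shape, -np.inf)            # -inf marks "no event yet"
    for x, y, t in events:
        last_t[int(y), int(x)] = max(last_t[int(y), int(x)], t)
    surf = np.exp((last_t - t_ref) / tau)       # recent events -> values near 1
    surf[~np.isfinite(last_t)] = 0.0            # pixels with no events stay 0
    return surf
```

Such surfaces emphasize recently active contours, which is what makes them usable as distance-field-like images for pose estimation.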