This paper presents the implementation details and experimental validation of a relatively low-cost motion capture system for multi-quadrotor motion planning using an event camera. The real-time multi-quadrotor detection and tracking tasks are performed using the You-Only-Look-Once (YOLOv5) deep learning network and a k-dimensional (k-d) tree, respectively.
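The k-d tree step can be pictured as matching each new detection to the nearest previously tracked quadrotor position. A minimal pure-Python sketch (not the paper's implementation; the 2-D tree and point coordinates here are illustrative) might look like:

```python
import math

def dist(a, b):
    # Euclidean distance between two 2-D image-plane points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_kdtree(points, depth=0):
    # Recursively build a 2-d tree over (x, y) detection centroids.
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, depth=0, best=None):
    # Standard k-d tree nearest-neighbour descent with backtracking.
    if node is None:
        return best
    point = node["point"]
    if best is None or dist(point, target) < dist(best, target):
        best = point
    axis = depth % 2
    diff = target[axis] - point[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, depth + 1, best)
    # Search the far side only if the splitting plane is closer than the best so far.
    if abs(diff) < dist(best, target):
        best = nearest(far, target, depth + 1, best)
    return best

# Hypothetical usage: associate a fresh YOLOv5 detection with the closest track.
tracks = [(0, 0), (5, 5), (10, 0)]           # last known quadrotor centroids
tree = build_kdtree(tracks)
match = nearest(tree, (4, 4))                # new detection centroid
```

In practice a library tree such as `scipy.spatial.cKDTree` would serve the same purpose; the point is that nearest-neighbour association keeps tracking cost low as the number of quadrotors grows.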
This paper concerns research on the motion of a load carried by a rotary crane. For this purpose, a laboratory crane model was designed in SolidWorks, and numerical simulations were performed using the Motion module. The developed laboratory model is a scaled equivalent of the real Liebherr LTM 1020 crane.
This paper proposes to use event cameras with bio-inspired silicon sensors, which are sensitive to radiance changes, to recover precise radiance values. It shows that, under active lighting conditions, the triggering frequency of the event signal varies linearly with the radiance value.
This paper aims to increase the performance of spiking neural networks for event-data processing, in order to design intelligent automotive algorithms that are fast and energy-efficient.
This research introduces a novel approach to Stereo Hybrid Event-Frame Disparity Estimation, leveraging the complementary strengths of event and frame-based cameras. By combining the two modalities, significant improvements in depth-estimation accuracy were achieved, enabling more robust and reliable 3D perception systems.