In this paper, object detection for event-based cameras (EBCs) is addressed, as their sparse and asynchronous data pose challenges for conventional image analysis. The I2EvDet framework bridges mainstream image detectors with temporal event data. Using a simple image-like representation, a Real-Time Detection Transformer (RT-DETR) achieves performance comparable to specialized EBC methods. A latent-space adaptation transforms image-based detectors into event-based models with minimal architectural modifications. The resulting EvRT-DETR reaches state-of-the-art performance on Gen1 and 1Mpx/Gen4 benchmarks, providing an efficient and generalizable approach for event-based object detection.
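The "image-like representation" mentioned above generally means accumulating the asynchronous event stream into dense per-pixel counts that a standard image detector can consume. A minimal sketch of one such conversion, assuming a simple two-channel (OFF/ON polarity) count histogram rather than the paper's exact encoding:

```python
import numpy as np

def events_to_frame(x, y, p, height, width):
    """Accumulate a batch of events into a 2-channel image-like tensor:
    channel 0 counts negative-polarity (OFF) events, channel 1 positive (ON)."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    # np.add.at handles repeated indices, so multiple events per pixel stack up
    np.add.at(frame, (p.astype(int), y, x), 1.0)
    return frame

# toy event stream: four events at two pixel locations
x = np.array([1, 1, 2, 2])
y = np.array([0, 0, 3, 3])
p = np.array([1, 1, 0, 1])  # polarity: 0 = OFF, 1 = ON
f = events_to_frame(x, y, p, height=4, width=4)
```

Such a tensor can be fed to a frame-based detector like RT-DETR with only the input channels adapted.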
In this paper, an event-camera-based visual teach-and-repeat system is presented, enabling robots to autonomously follow previously demonstrated paths by comparing current sensory input with recorded trajectories. Conventional frame-based cameras limit responsiveness due to fixed frame rates, introducing latency in control. The method casts event matching as frequency-domain cross-correlation, so localization reduces to fast Fourier-space operations and runs at over 300 Hz. By leveraging binary event frames and image compression, localization accuracy is maintained while computational speed is increased. Experiments with a Prophesee EVK4 HD camera on an AgileX Scout Mini robot demonstrate successful navigation over more than 4000 m, achieving absolute trajectory errors (ATEs) below 24 cm with high-frequency control updates.
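Fourier-domain cross-correlation is what makes matching at such rates feasible: a spatial correlation over the whole frame reduces to an element-wise product of FFTs. A minimal sketch, assuming binary event frames and a pure-translation model (not the paper's full pipeline):

```python
import numpy as np

def fft_shift_estimate(a, b):
    """Estimate the 2-D circular translation that maps frame b onto frame a,
    via the peak of the FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # convert peak indices to signed shifts (wrap-around aware)
    return tuple(i - s if i > s // 2 else i for i, s in zip(peak, corr.shape))

# toy example: a sparse binary event frame and a shifted copy of it
rng = np.random.default_rng(0)
b = (rng.random((32, 32)) > 0.8).astype(float)
a = np.roll(b, shift=(3, -5), axis=(0, 1))
dy, dx = fft_shift_estimate(a, b)  # recovers the (3, -5) shift
```

The two forward FFTs and one inverse FFT cost O(N log N) regardless of the search range, which is why the full-frame match stays cheap enough for kilohertz-class update rates.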
Prophesee’s event-based vision for XR wearables is transforming smart glasses, delivering lightweight, high-performance devices with improved usability, low power consumption, precise eye tracking, and foveated rendering. Featured in Imaging & Machine Vision Europe, this technology addresses key XR challenges and enables more immersive, intuitive, and energy-efficient experiences.
CenturyArks’ SilkyEvCam HD Module pairs the high-speed IMX636 HD event sensor, developed jointly by Sony and Prophesee, with a MIPI interface, delivering ultra-fast, low-latency event-based vision in a compact, lightweight design suited to robotics, automotive, AGVs, and more.
In this paper, an experimental imaging flow cytometer using an event-based CMOS camera is presented, with data processed by adaptive feedforward and recurrent spiking neural networks. PMMA (poly(methyl methacrylate)) particles flowing in a microfluidic channel are classified, and analysis of experimental data shows that spiking recurrent networks, including LSTM and GRU variants, achieve high accuracy by leveraging temporal dependencies. Adaptation mechanisms in lightweight feedforward spiking networks further improve performance. This work provides a roadmap for neuromorphic-assisted biomedical applications, improving classification accuracy while maintaining low latency and sparsity.
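The adaptation mechanism mentioned above typically refers to a spike-triggered increase in a neuron's effective firing threshold, which makes the neuron respond to input changes rather than sustained levels. A minimal sketch of an adaptive leaky integrate-and-fire neuron, with illustrative parameters (not the paper's model):

```python
import numpy as np

def adaptive_lif(inputs, tau_m=10.0, tau_a=40.0, beta=0.2, v_th=1.0):
    """Simulate an adaptive leaky integrate-and-fire neuron.
    Each spike raises an adaptation variable `a`, which in turn raises the
    effective threshold, so the firing rate decays under constant drive."""
    v, a, spikes = 0.0, 0.0, []
    for x in inputs:
        v = v * np.exp(-1.0 / tau_m) + x   # leaky membrane integration
        if v >= v_th + beta * a:           # compare against adapted threshold
            spikes.append(1)
            v = 0.0                        # reset membrane potential
            a += 1.0                       # strengthen adaptation
        else:
            spikes.append(0)
        a *= np.exp(-1.0 / tau_a)          # adaptation decays slowly
    return spikes

# constant drive: the spike rate visibly slows as adaptation builds up
spikes = adaptive_lif([0.3] * 200)
```

Under constant input the inter-spike interval grows over time, which is the temporal sensitivity that helps the lightweight feedforward networks described above.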