This paper presents a novel approach that combines a photonic neuromorphic spiking computing scheme with a bio-inspired event-based image sensor. Designed for real-time processing of sparse image data, the system uses a time-delayed spiking extreme learning machine implemented via a two-section laser. Tested on high-throughput imaging flow cytometry data, it classifies artificial particles of varying sizes with 97.1% accuracy while reducing the parameter count by a factor of 6.25 compared to conventional neural networks. These results highlight the potential of fast, low-power event-based neuromorphic systems for biomedical analysis, environmental monitoring, and smart sensing.
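The defining property of an extreme learning machine is that only a linear readout is trained on top of a fixed nonlinear projection. The sketch below illustrates that principle numerically; the laser-based time-delayed projection of the paper is replaced here by a fixed random map, and the data, layer sizes, and ridge parameter are illustrative assumptions rather than the paper's configuration.

```python
# Minimal numerical sketch of an extreme-learning-machine (ELM) readout.
# In the paper the hidden projection is realised physically by a two-section
# laser; here it is a fixed random nonlinear map, and the data are synthetic
# stand-ins (all sizes are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden, n_classes = 64, 256, 3      # illustrative sizes
X = rng.normal(size=(1000, n_inputs))           # stand-in event features
y = rng.integers(0, n_classes, size=1000)       # stand-in particle labels
Y = np.eye(n_classes)[y]                        # one-hot targets

# Fixed (untrained) random hidden layer -- the ELM projection.
W_in = rng.normal(size=(n_inputs, n_hidden))
H = np.tanh(X @ W_in)

# Only the linear readout is trained, via ridge regression (closed form).
lam = 1e-3
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)

pred = np.argmax(H @ W_out, axis=1)
print("training accuracy:", (pred == y).mean())
```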
In this paper, a new compact vision sensor consisting of two fisheye event cameras mounted back to back is presented, offering a full 360-degree view of the surrounding environment. The optical design and projection model of the novel stereo camera, called SFERA, are described, together with a practical calibration procedure driven by the incoming event streams. Its potential for real-time target tracking is evaluated using a Bayesian estimator adapted to the sphere’s geometry. Real-world experiments with a prototype comprising two synchronized Prophesee EVK4 cameras and a DJI Mavic Air 2 quadrotor demonstrate the system’s effectiveness for aerial surveillance.
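As an illustration of Bayesian estimation on the sphere, the following sketch runs one predict/update/resample cycle of a particle filter whose state is a unit bearing vector. The random-walk motion model, the von Mises-Fisher likelihood, and all noise parameters are assumptions made for illustration, not the estimator actually used with SFERA.

```python
# Minimal sketch of a Bayesian (particle-filter) tracker on the unit sphere,
# estimating a target bearing from omnidirectional measurements.
# Dynamics, likelihood, and noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

n_particles = 500
particles = normalize(rng.normal(size=(n_particles, 3)))   # directions on S^2
weights = np.full(n_particles, 1.0 / n_particles)

def predict(particles, sigma=0.05):
    """Diffuse each particle with a small perturbation (random-walk motion model)."""
    return normalize(particles + sigma * rng.normal(size=particles.shape))

def update(particles, weights, z, kappa=50.0):
    """Reweight with a von Mises-Fisher likelihood around the observed bearing z."""
    w = weights * np.exp(kappa * (particles @ z - 1.0))
    return w / w.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One illustrative measurement: a bearing extracted from the event stream.
z = normalize(np.array([0.2, 0.1, 0.97]))
particles = predict(particles)
weights = update(particles, weights, z)
particles, weights = resample(particles, weights)

estimate = normalize((weights[:, None] * particles).sum(axis=0))
print("estimated bearing:", estimate)
```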
In this paper, the Asynchronous Event Multi-Object Tracking (AEMOT) algorithm is presented for detecting and tracking multiple objects by processing individual raw events asynchronously. AEMOT detects salient event blob features by identifying regions of consistent optical flow using a novel Field of Active Flow Directions built from the Surface of Active Events. Detected features are tracked as candidate objects using the recently proposed Asynchronous Event Blob (AEB) tracker, and a small intensity patch is constructed for each candidate object.
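A minimal sketch of the underlying data structure, assuming the common formulation in which the Surface of Active Events stores the latest timestamp at each pixel and a normal-flow direction is read off its local spatial gradient. The window size and finite-difference scheme are illustrative, and the Field of Active Flow Directions itself (which aggregates such directions into blob detections) is not reproduced here.

```python
# Minimal sketch of a Surface of Active Events (SAE) and a per-pixel
# normal-flow direction from its local timestamp gradient.
# Window size and finite differences are illustrative assumptions.
import numpy as np

H, W = 240, 320
sae = np.full((H, W), -np.inf)          # latest event timestamp per pixel

def update_sae(x, y, t):
    sae[y, x] = t

def flow_direction(x, y):
    """Unit normal-flow direction from the local SAE gradient, or None."""
    if not (1 <= x < W - 1 and 1 <= y < H - 1):
        return None
    patch = sae[y - 1:y + 2, x - 1:x + 2]
    if not np.isfinite(patch).all():
        return None                      # not enough neighbouring events yet
    gt_x = (patch[1, 2] - patch[1, 0]) / 2.0   # d t / d x
    gt_y = (patch[2, 1] - patch[0, 1]) / 2.0   # d t / d y
    g = np.array([gt_x, gt_y])
    n = np.linalg.norm(g)
    return g / n if n > 0 else None      # timestamps grow along the motion

# Synthetic vertical edge sweeping to the right at one pixel per millisecond.
for step, x_edge in enumerate(range(100, 112)):
    for y in range(115, 126):
        update_sae(x_edge, y, 1e-3 * step)

print(flow_direction(105, 120))          # ~[1, 0]: motion towards +x
```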
The Florence RGB-Event Drone dataset (FRED) is a novel multimodal dataset specifically designed for drone detection, tracking, and trajectory forecasting, combining RGB video and event streams. FRED features more than 7 hours of densely annotated drone trajectories, recorded with five different drone models and covering challenging scenarios such as rain and adverse lighting conditions.
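A dataset of this kind pairs each RGB frame with the slice of the event stream that falls inside its frame interval. The sketch below shows one way to perform that pairing by timestamp; the array layout, the 30 fps frame rate, and the synthetic data are assumptions made for illustration and do not reflect FRED's actual file format.

```python
# Minimal sketch of aligning an event stream with RGB frames by timestamp.
# Data layout and frame rate are illustrative assumptions, not FRED's format.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: event timestamps (seconds) and RGB frame timestamps.
event_t = np.sort(rng.uniform(0.0, 1.0, size=10_000))
frame_t = np.arange(0.0, 1.0, 1.0 / 30.0)

def events_for_frame(event_t, frame_t, i):
    """Return the indices of events falling inside the i-th frame interval."""
    t0 = frame_t[i]
    t1 = frame_t[i + 1] if i + 1 < len(frame_t) else np.inf
    lo = np.searchsorted(event_t, t0, side="left")
    hi = np.searchsorted(event_t, t1, side="left")
    return np.arange(lo, hi)

idx = events_for_frame(event_t, frame_t, 10)
print(f"frame 10 spans {len(idx)} events")
```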
This paper proposes a neural network framework for predicting the time and position of a collision between an unmanned aerial vehicle and a dynamic object, using RGB and event-based vision sensors. The proposed architecture consists of two separate encoder branches, one per modality, whose outputs are fused via self-attention to improve prediction accuracy. To facilitate benchmarking, the ABCD dataset is leveraged, enabling detailed comparisons of single-modality and fusion-based approaches.
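A minimal PyTorch sketch of such a two-branch design, assuming per-modality convolutional encoders whose token sequences are concatenated and mixed by a self-attention layer before a regression head predicts time-to-collision and collision position. The layer sizes, event representation, and output parameterisation are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a two-branch RGB/event network with self-attention fusion.
# Encoder sizes, the 2-channel event representation, and the 3-value output
# (time-to-collision, x, y) are illustrative assumptions.
import torch
import torch.nn as nn

class TwoBranchCollisionNet(nn.Module):
    def __init__(self, dim=128, n_heads=4):
        super().__init__()
        # One lightweight convolutional encoder per modality.
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 5, stride=4, padding=2), nn.ReLU(),
                nn.Conv2d(32, dim, 5, stride=4, padding=2), nn.ReLU(),
            )
        self.rgb_enc = encoder(3)       # RGB frame
        self.evt_enc = encoder(2)       # e.g. positive/negative event counts
        self.fuse = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                               batch_first=True)
        self.head = nn.Linear(dim, 3)   # (time-to-collision, x, y)

    def forward(self, rgb, evt):
        # Flatten each feature map into a token sequence and concatenate them,
        # so self-attention can mix information across the two modalities.
        tok_rgb = self.rgb_enc(rgb).flatten(2).transpose(1, 2)
        tok_evt = self.evt_enc(evt).flatten(2).transpose(1, 2)
        tokens = torch.cat([tok_rgb, tok_evt], dim=1)
        fused = self.fuse(tokens).mean(dim=1)     # pooled joint representation
        return self.head(fused)

model = TwoBranchCollisionNet()
rgb = torch.randn(1, 3, 128, 128)       # dummy RGB frame
evt = torch.randn(1, 2, 128, 128)       # dummy event representation
print(model(rgb, evt).shape)            # torch.Size([1, 3])
```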