The Faces in Event Streams dataset contains 689 minutes of recorded event streams and 1.6 million faces annotated with bounding boxes and five-point facial landmarks. This paper presents the dataset and corresponding models for detecting faces and facial landmarks directly from event-stream data.
In this paper, we present an asynchronous linear filter architecture, fusing event and frame camera data, for HDR video reconstruction and spatial convolution that exploits the advantages of both sensor modalities. The key idea is the introduction of a state that directly encodes the integrated or convolved image information and that is updated asynchronously as each event or each frame arrives from the camera.
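To make the key idea concrete, here is a minimal single-pixel sketch of such an asynchronous state update, not the authors' implementation: the state holds a filtered log intensity that decays toward the most recent frame value, while each incoming event applies an instantaneous step. The decay rate `alpha` and per-event contrast step `c` are assumed parameters for illustration.

```python
import math

class AsyncPixelFilter:
    """Toy per-pixel asynchronous linear filter fusing events and frames."""

    def __init__(self, alpha=2.0, c=0.1):
        self.alpha = alpha   # decay rate toward the frame estimate (1/s), assumed
        self.c = c           # per-event log-intensity step, assumed
        self.state = 0.0     # filtered log intensity (the fused state)
        self.frame = 0.0     # latest frame log intensity
        self.t = 0.0         # timestamp of the last update

    def _decay(self, t):
        # Continuous-time first-order decay of the state toward the frame value.
        dt = t - self.t
        w = math.exp(-self.alpha * dt)
        self.state = w * self.state + (1.0 - w) * self.frame
        self.t = t

    def on_event(self, t, polarity):
        # Asynchronous update: bring the state to time t, then apply the event step.
        self._decay(t)
        self.state += self.c * (1 if polarity > 0 else -1)

    def on_frame(self, t, log_intensity):
        # A new frame replaces the value the state decays toward.
        self._decay(t)
        self.frame = log_intensity

f = AsyncPixelFilter()
f.on_frame(0.0, 1.0)     # frame arrives at t=0 with log intensity 1.0
f.on_event(0.1, +1)      # positive event 100 ms later
print(f.state)
```

Because every update is driven by an arrival timestamp rather than a fixed clock, the state can be queried at any time, which is what allows video reconstruction at arbitrary rates from the fused stream.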
We show that our proposed pipeline achieves improved accuracy over state-of-the-art visual odometry for stereo event-based cameras, while running in real time on a standard CPU with low-resolution cameras. To the best of our knowledge, this is the first published visual-inertial odometry for stereo event-based cameras.
In this paper, we propose a novel approach by integrating a bio-inspired event camera into the unsupervised video deraining pipeline, which enables us to capture high temporal resolution information and model complex rain characteristics. Specifically, we first design an end-to-end learning-based network consisting of two modules, the asymmetric separation module and the cross-modal fusion module.
Our work achieves highly consistent outputs with a widely adopted flow cytometer (CytoFLEX) in detecting microparticles. Moreover, the capacity of an event-based photosensor to register fluorescent signals was evaluated by recording 6 µm fluorescein isothiocyanate-labeled particles under different lighting conditions, revealing superior performance compared to a standard photosensor.