In this work, we propose a dual-camera system consisting of an event camera and a conventional RGB camera for video motion magnification, providing temporally dense information from the event stream and spatially dense data from the RGB images. This combination enables broad, cost-effective amplification of high-frequency motions.
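To make the amplification idea concrete, the sketch below illustrates Lagrangian-style motion magnification by warping an RGB frame along an amplified motion field. It assumes a dense per-pixel flow has already been estimated from the event stream; the function name, the magnification factor `alpha`, and the nearest-neighbour sampling are illustrative simplifications, not the paper's learned pipeline.

```python
# Minimal sketch, assuming a dense motion field `flow` (H x W x 2)
# estimated from events between two RGB frames (an assumption here).
import numpy as np

def magnify_motion(frame: np.ndarray, flow: np.ndarray, alpha: float = 10.0) -> np.ndarray:
    """Warp `frame` (H x W x 3, float) backward along -alpha * flow,
    which visually amplifies the underlying motion by roughly (1 + alpha)."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Each output pixel samples from where it "came from" under amplified motion.
    src_x = np.clip(xs - alpha * flow[..., 0], 0, w - 1)
    src_y = np.clip(ys - alpha * flow[..., 1], 0, h - 1)
    # Nearest-neighbour sampling keeps the sketch dependency-free;
    # a real pipeline would use bilinear interpolation.
    return frame[src_y.round().astype(int), src_x.round().astype(int)]
```

The event stream matters here because it supplies motion estimates at a far higher rate than the RGB frames, which is what allows high-frequency motions to be resolved before amplification.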
Our previous work demonstrated the early development of neuromorphic imaging cytometry and evaluated its feasibility in addressing the limitations of conventional frame-based imaging systems: data redundancy, limited fluorescence sensitivity, and compromised throughput. Herein, we adopted a convolutional spiking neural network (SNN) combined with the YOLOv3 model (SNN-YOLO) to perform cell classification and detection on label-free samples under neuromorphic vision.
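As a rough illustration of the spiking backbone such a detector builds on, the sketch below implements a convolutional layer with leaky integrate-and-fire (LIF) dynamics in PyTorch. The layer sizes, decay constant, threshold, and hard reset are illustrative assumptions, not the paper's SNN-YOLO configuration.

```python
# Minimal sketch of a convolutional LIF spiking layer; the kind of
# block that could feed a YOLO-style detection head (assumed setup).
import torch
import torch.nn as nn

class LIFConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, decay: float = 0.9, v_th: float = 1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.decay, self.v_th = decay, v_th

    def forward(self, x_seq):  # x_seq: (T, B, C, H, W) spike/event tensor
        v, spikes = None, []
        for x in x_seq:                        # iterate over time steps
            i = self.conv(x)                   # synaptic input current
            v = i if v is None else self.decay * v + i  # leaky integration
            s = (v >= self.v_th).float()       # fire on threshold crossing
            v = v * (1.0 - s)                  # hard reset of fired neurons
            spikes.append(s)
        return torch.stack(spikes)             # (T, B, out_ch, H, W) spike train

# Usage: 10 time bins of 2-channel (ON/OFF polarity) event frames.
events = (torch.rand(10, 1, 2, 64, 64) < 0.05).float()
out = LIFConv(2, 16)(events)
```

The two input channels here correspond to event polarities, a common way to present neuromorphic camera output to a convolutional SNN.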
We introduce a method to detect full-color (RGB) events using a monochrome event camera (EC) aided by a structured light projector. Combining the benefits of ECs and projection-based techniques, our approach enables depth and color detection of static or moving objects with a commercial TI LightCrafter 4500 projector and a monocular monochrome EC, paving the way for frameless RGB-D sensing applications.
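One way such color recovery can work is by exploiting the fact that DLP projectors time-multiplex their red, green, and blue illumination, so an event's timestamp within the projection cycle indicates which color channel caused it. The sketch below assumes equal-length R/G/B slots in a known cycle; the slot layout and cycle length are simplifying assumptions, not the measured timing of the LightCrafter 4500.

```python
# Minimal sketch, assuming the projector cycles R, G, B in equal,
# known time slots (an assumption; real DLP timing is more complex).
import numpy as np

def split_events_by_color(t_us: np.ndarray, cycle_us: float = 3000.0) -> np.ndarray:
    """Assign each event to the R, G, or B slot of the projection cycle
    from its timestamp phase. Returns an index array (0=R, 1=G, 2=B)."""
    phase = np.mod(t_us, cycle_us) / cycle_us   # position within one cycle
    return np.minimum((phase * 3).astype(int), 2)

# Usage: events at various microsecond timestamps.
t = np.array([100.0, 1200.0, 2500.0, 3100.0])
print(split_events_by_color(t))  # -> [0 1 2 0]
```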
We propose a novel event-based video frame interpolation (VFI) framework with cross-modal asymmetric bidirectional motion field estimation. Our EIF-BiOFNet exploits the complementary characteristics of events and images to estimate inter-frame motion fields directly, without any approximation methods. We further develop an interactive attention-based frame synthesis network to efficiently leverage the complementary warping-based and synthesis-based features.
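To illustrate the warping-based half of such a synthesis stage, the sketch below backward-warps the two boundary frames to the target time using the estimated bidirectional motion fields and blends the results. The fixed blending weight and function names are assumptions for illustration; the paper instead fuses warping- and synthesis-based features with an attention network.

```python
# Minimal sketch of warping-based interpolation, assuming flows
# `flow_t0` and `flow_t1` from the target time to each boundary frame.
import torch
import torch.nn.functional as F

def backward_warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `frame` (B,C,H,W) with `flow` (B,2,H,W) via bilinear sampling."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame)   # (2,H,W) pixel grid
    coords = grid.unsqueeze(0) + flow    # where each target pixel samples from
    # Normalise pixel coordinates to [-1, 1] as grid_sample expects.
    coords[:, 0] = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords[:, 1] = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(frame, coords.permute(0, 2, 3, 1), align_corners=True)

def interpolate(frame0, frame1, flow_t0, flow_t1, w0: float = 0.5) -> torch.Tensor:
    """Blend the two boundary frames warped to the target time."""
    return w0 * backward_warp(frame0, flow_t0) + (1 - w0) * backward_warp(frame1, flow_t1)
```

Estimating the two flows asymmetrically from events and images, rather than approximating them from a single symmetric flow, is what distinguishes the direct estimation the paper describes.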
Recently, event cameras have shown broad applicability in several computer vision fields, especially in tasks that require high temporal resolution. In this work, we investigate the use of such data for emotion recognition by presenting NEFER, a dataset for Neuromorphic Event-based Facial Expression Recognition.