Welcome to the Prophesee Research Library, where academic innovation meets the world’s most advanced event-based vision technologies.
We have brought together groundbreaking research from scholars who are pushing the boundaries with Prophesee Event-based Vision technologies to inspire collaboration and drive forward new breakthroughs in the academic community.
Introducing the Prophesee Research Library, the largest curated collection of academic papers leveraging Prophesee event-based vision.
Together, let’s reveal the invisible and shape the future of Computer Vision.
Interpolation-Based Event Visual Data Filtering Algorithms
In this paper, we propose a method for filtering event data that removes approximately 99% of noise while preserving the majority of the valid signal. Four algorithms are introduced, all built on a matrix of infinite impulse response (IIR) filters.
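The four algorithms themselves are detailed in the paper; as a hedged illustration of the general idea only, the sketch below implements a single per-pixel first-order IIR “activity” filter that keeps an event when its 3×3 neighborhood shows recent activity. The class name, decay constant, and threshold are all hypothetical, not the authors’ implementation.

```python
import numpy as np

class IIREventFilter:
    """Toy per-pixel IIR activity filter for event denoising (illustrative only)."""

    def __init__(self, width, height, tau=10_000.0, threshold=0.5):
        self.state = np.zeros((height, width))   # per-pixel IIR filter state
        self.last_t = np.zeros((height, width))  # last update time per pixel (us)
        self.tau = tau                           # decay time constant (us), assumed
        self.threshold = threshold               # hypothetical keep/drop threshold

    def process(self, x, y, t):
        """Return True if the event at (x, y, t) is classified as signal."""
        y0, y1 = max(y - 1, 0), min(y + 2, self.state.shape[0])
        x0, x1 = max(x - 1, 0), min(x + 2, self.state.shape[1])
        # Lazily decay the 3x3 neighborhood states to time t (read-only).
        dt = t - self.last_t[y0:y1, x0:x1]
        decayed = self.state[y0:y1, x0:x1] * np.exp(-dt / self.tau)
        keep = decayed.max() > self.threshold
        # Fold this event into its own pixel's IIR state.
        self.state[y, x] = self.state[y, x] * np.exp(-(t - self.last_t[y, x]) / self.tau) + 1.0
        self.last_t[y, x] = t
        return keep
```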
Event-Based Shape from Polarization with Spiking Neural Networks
This paper investigates event-based shape from polarization using Spiking Neural Networks (SNNs), introducing the Single-Timestep and Multi-Timestep Spiking UNets for effective and efficient surface normal estimation.
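For readers unfamiliar with SNNs, the snippet below shows the standard leaky integrate-and-fire (LIF) update from which spiking architectures such as the paper’s Spiking UNets are composed; the parameters are generic textbook values, not those used in the paper.

```python
import numpy as np

def lif_step(v, x, beta=0.9, v_th=1.0):
    """One LIF timestep over an array of neurons: leak, integrate, fire.
    v: membrane potentials, x: input currents (both np.ndarray)."""
    v = beta * v + x                      # leaky integration
    spikes = (v >= v_th).astype(v.dtype)  # emit a spike where threshold is crossed
    v = v - spikes * v_th                 # soft reset of spiking neurons
    return v, spikes

# A Multi-Timestep network iterates this update over T steps;
# a Single-Timestep variant runs it once per input.
v = np.zeros(4)
v, s = lif_step(v, x=np.array([0.5, 1.2, 0.1, 2.0]))
```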
Low-Complexity Lossless Coding of Asynchronous Event Sequences for Low-Power Chip Integration
This paper introduces a groundbreaking low-complexity lossless compression method for encoding asynchronous event sequences, designed for efficient memory usage and low-power hardware integration.
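The paper’s codec is designed for low-power hardware; purely as a hedged baseline showing what low-complexity, entropy-coder-free event compression can look like, here is a delta-plus-varint scheme. This is not the authors’ format.

```python
def encode_varint(value: int) -> bytes:
    """LEB128-style variable-length encoding of a non-negative integer."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        out.append(byte | 0x80 if value else byte)
        if not value:
            return bytes(out)

def encode_events(events):
    """Losslessly delta-encode a time-sorted (t, x, y, p) event sequence.
    Monotone timestamps become small deltas, which varints store compactly."""
    out = bytearray()
    prev_t = 0
    for t, x, y, p in events:
        out += encode_varint(t - prev_t)
        out += encode_varint(x)
        out += encode_varint(y)
        out.append(1 if p > 0 else 0)  # polarity as a single byte
        prev_t = t
    return bytes(out)
```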
Event-Based Shape From Polarization
This paper tackles the speed-resolution trade-off using event cameras. Event cameras are efficient, high-speed vision sensors that asynchronously measure changes in brightness intensity with microsecond resolution.
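As background on how such sensors produce data, the sketch below simulates the standard per-pixel event-generation model: an event of polarity ±1 fires whenever log intensity drifts by a contrast threshold C from its value at the last event. The threshold value here is illustrative.

```python
import numpy as np

def generate_events(intensities, timestamps, C=0.2):
    """Simulate one pixel: emit (t, polarity) events whenever log intensity
    moves by the contrast threshold C since the last event."""
    events = []
    ref = np.log(intensities[0])  # log intensity at the last event
    for I, t in zip(intensities[1:], timestamps[1:]):
        while abs(np.log(I) - ref) >= C:
            s = 1 if np.log(I) > ref else -1
            ref += s * C              # step the reference level
            events.append((t, s))
    return events
```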
Neuromorphic Seatbelt State Detection for In-Cabin Monitoring with Event Cameras
This paper applies event cameras to in-cabin monitoring, detecting whether a seatbelt is fastened or unfastened from event data and demonstrating the suitability of neuromorphic sensing for driver and occupant monitoring systems.
Evaluating Image-Based Face and Eye Tracking with Event Cameras
This paper demonstrates the viability of applying conventional face- and eye-tracking algorithms to event-based data converted into a frame format, while preserving the unique benefits of event cameras.
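A minimal sketch of the kind of event-to-frame conversion involved (the paper’s exact representation may differ): accumulate signed polarities into an image that standard trackers can consume. The clipping bound is an assumption for illustration.

```python
import numpy as np

def events_to_frame(events, width, height, clip=5):
    """Accumulate (x, y, t, p) events into an 8-bit frame so conventional
    face/eye-tracking algorithms can run on event data."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, p in events:
        frame[y, x] += 1 if p > 0 else -1   # signed polarity accumulation
    frame = np.clip(frame, -clip, clip)     # limit hot-pixel dominance
    return ((frame + clip) * (255.0 / (2 * clip))).astype(np.uint8)
```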
FEATURED PAPERS
LOW-LATENCY AUTOMOTIVE VISION WITH EVENT CAMERAS
University of Zurich
Advanced driver assistance systems using RGB cameras face a bandwidth–latency trade-off. Event cameras, which measure intensity changes asynchronously, offer high temporal resolution and sparsity, reducing these requirements. However, existing event-camera algorithms either lack accuracy or sacrifice efficiency. This paper proposes a hybrid object detector that combines event- and frame-based data, leveraging the advantages of both modalities to achieve efficient, high-rate object detection with reduced latency. Pairing a 20 fps RGB camera with an event camera matches the latency of a 5,000 fps camera at the bandwidth of a 45 fps camera, without compromising accuracy. This method paves the way for efficient and robust perception in challenging edge-case scenarios.
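The detector itself is a learned network; the sketch below only illustrates the interleaving that yields high-rate output, with `detect` and `update` as hypothetical stand-ins for the two stages rather than the paper’s architecture.

```python
def hybrid_loop(frames, event_batches_between_frames, detect, update):
    """Yield detections at high rate: accurate frame-based detections at the
    (low) frame rate, refreshed between frames from sparse event batches.
    `detect` and `update` are user-supplied callables (hypothetical)."""
    for i, frame in enumerate(frames):
        detections = detect(frame)                  # accurate, ~20 fps
        yield detections
        for batch in event_batches_between_frames[i]:
            detections = update(detections, batch)  # fast, event-driven refresh
            yield detections
```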
EVENTPS: REAL-TIME PHOTOMETRIC STEREO USING AN EVENT CAMERA
Peking University, Shanghai Jiao Tong University, The University of Tokyo, National Institute of Informatics
This paper introduces EventPS, a novel approach to real-time photometric stereo using an event camera. Capitalizing on the exceptional temporal resolution, dynamic range, and low bandwidth of event cameras, EventPS estimates surface normals solely from radiance changes, significantly enhancing data efficiency. EventPS integrates seamlessly with both optimization-based and deep-learning-based photometric stereo techniques, offering a robust solution for non-Lambertian surfaces. Extensive experiments validate its effectiveness and efficiency compared to frame-based counterparts. Our algorithm runs at over 30 fps in real-world scenarios, unleashing the potential of EventPS in time-sensitive and high-speed downstream applications.
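A hedged sketch of the optimization-based idea under a simplified Lambertian model: an event of polarity s at contrast threshold C implies I(l2) = exp(s*C) * I(l1), so with shading I = rho * max(n · l, 0) each event contributes a linear constraint (l2 - exp(s*C)*l1) · n = 0 on the normal n, recoverable as a null vector via SVD. Function names and the threshold value below are illustrative.

```python
import numpy as np

def normal_from_events(light_pairs, polarities, C=0.3):
    """Estimate a unit surface normal from event-derived ratio constraints.
    light_pairs: (l1, l2) unit light directions bracketing each event;
    polarities: +1/-1 event polarity; C: contrast threshold (assumed)."""
    A = np.array([l2 - np.exp(s * C) * l1
                  for (l1, l2), s in zip(light_pairs, polarities)])
    _, _, vt = np.linalg.svd(A)    # null vector = right singular vector
    n = vt[-1]                     # of the smallest singular value
    n /= np.linalg.norm(n)
    return n if n[2] > 0 else -n   # orient the normal toward the camera
```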
DSEC: A STEREO EVENT CAMERA DATASET FOR DRIVING SCENARIOS
University of Zurich, ETH Zurich
Autonomous driving has advanced significantly with corporate funding, yet it still struggles in challenging illumination conditions such as night, sunrise, and sunset, where standard cameras are pushed to their limits in low-light and high-dynamic-range scenarios. To address these challenges, this paper introduces DSEC, a new dataset captured in such demanding illumination conditions. It provides a rich set of sensory data from a wide-baseline stereo setup of two color frame cameras and two high-resolution monochrome event cameras, along with lidar and RTK GPS measurements, all hardware-synchronized with the camera data. DSEC is notable for its high-resolution event cameras, which excel in temporal resolution and dynamic range. Comprising 53 sequences in varied lighting, the dataset provides ground-truth disparity for developing and evaluating event-based stereo algorithms, and it is the first high-resolution, large-scale stereo dataset with event cameras.