Welcome to the Prophesee Research Library, where academic innovation meets the world’s most advanced event-based vision technologies.
We have brought together groundbreaking research from scholars who are pushing the boundaries with Prophesee event-based vision technologies, to inspire collaboration and drive new breakthroughs in the academic community.
Introducing the Prophesee Research Library, the largest curated collection of academic papers leveraging Prophesee event-based vision.
Together, let’s reveal the invisible and shape the future of Computer Vision.
Widefield Diamond Quantum Sensing with Neuromorphic Vision Sensors
In this paper, a neuromorphic vision sensor encodes fluorescence changes in diamonds into spikes for optically detected magnetic resonance. This reduces data volume and latency while providing wide dynamic range, high temporal resolution, and an excellent signal-to-background ratio, improving widefield quantum sensing performance. Experiments with an off-the-shelf event camera demonstrate significant temporal-resolution gains while maintaining precision comparable to specialized frame-based approaches, and the technology successfully monitors dynamically modulated laser heating of gold nanoparticles, providing new insights for high-precision, low-latency widefield quantum sensing.
An Event-Based Perception Pipeline for a Table Tennis Robot
Contactless Cardiac Pulse Monitoring Using Event Cameras
In this paper, the contact-free reconstruction of an individual’s cardiac pulse signal from an event-based recording of the face is investigated using a supervised convolutional neural network (CNN) model. An end-to-end model is trained to extract the cardiac signal from a two-dimensional representation of the event stream, with model performance evaluated on the accuracy of the calculated heart rate. Experimental results confirm that physiological cardiac information in the facial region is effectively preserved, and models trained on higher-FPS event frames outperform standard camera results.
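The two-dimensional representation mentioned above is typically obtained by accumulating the asynchronous event stream into fixed-duration frames. The paper does not publish its code here; the following is a minimal sketch of that general accumulation step, assuming a stream given as plain coordinate, timestamp, and polarity arrays (all names hypothetical):

```python
import numpy as np

def events_to_frames(x, y, t, p, sensor_hw, n_frames):
    """Accumulate an event stream into a stack of 2D frames.

    x, y : pixel coordinates of each event
    t    : timestamps (any monotonic unit)
    p    : polarity of each event (+1 / -1), summed per pixel
    """
    h, w = sensor_hw
    frames = np.zeros((n_frames, h, w), dtype=np.float32)
    # Assign each event to one of n_frames equal-duration time bins.
    span = np.ptp(t) + 1e-9  # avoid division by zero for a single timestamp
    bins = np.clip(((t - t.min()) / span * n_frames).astype(int), 0, n_frames - 1)
    # Sum polarities per (bin, pixel); np.add.at handles repeated indices.
    np.add.at(frames, (bins, y, x), p)
    return frames

# Tiny synthetic stream: 4 events on a 4x4 sensor, split into 2 frames.
x = np.array([0, 1, 2, 3]); y = np.array([0, 0, 1, 1])
t = np.array([0.0, 0.1, 0.6, 0.9]); p = np.array([1, -1, 1, 1])
frames = events_to_frames(x, y, t, p, (4, 4), 2)
print(frames.shape)  # (2, 4, 4)
```

Shorter accumulation windows give higher-rate frame stacks, which is what "higher FPS" refers to in the summary above.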
Dynamic Vision-Based Non-Contact Rotating Machine Fault Diagnosis with EViT
In this paper, a dynamic vision-based non-contact machine fault diagnosis method is proposed using the Eagle Vision Transformer (EViT). The architecture incorporates Bi-Fovea Self-Attention and Bi-Fovea Feedforward Network mechanisms to process asynchronous event streams while preserving temporal precision. EViT achieves exceptional fault diagnosis performance across diverse operational conditions through multi-scale spatiotemporal feature analysis, adaptive learning, and transparent decision pathways. Validated on rotating machine monitoring data, this approach bridges bio-inspired vision processing with industrial requirements, providing new insights for predictive maintenance in smart manufacturing environments.
A magnetically levitated conducting rotor with ultra-low rotational damping circumventing eddy loss
In this paper, a conducting rotor is levitated diamagnetically in high vacuum, achieving extremely low rotational damping by minimizing eddy-current losses. Experiments and simulations reveal that at higher pressures, gas collisions are the main source of damping, while at low pressures, minor asymmetries in the setup cause residual eddy damping. Analytic calculations confirm that, under ideal symmetric conditions, steady eddy currents can vanish. This approach enables ultra-low-loss rotors suitable for high-precision gyroscopes, pressure sensors, and tests in fundamental physics.
BiasBench: A reproducible benchmark for tuning the biases of event cameras
In this paper, an experimental imaging flow cytometer using an event-based CMOS camera is presented, with data processed by adaptive feedforward and recurrent spiking neural networks. PMMA particles flowing in a microfluidic channel are classified, and analysis of experimental data shows that spiking recurrent networks, including LSTM and GRU models, achieve high accuracy by leveraging temporal dependencies. Adaptation mechanisms in lightweight feedforward spiking networks further improve performance. This work provides a roadmap for neuromorphic-assisted biomedical applications, enhancing classification while maintaining low latency and sparsity.
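The spiking networks described above exploit temporal dependencies in the event stream through stateful neurons. As a minimal illustration of that principle, and not code from the paper, here is a single leaky integrate-and-fire (LIF) neuron, the basic building block of such networks (parameter values are illustrative):

```python
import numpy as np

def lif_neuron(inputs, tau=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron over a discrete input sequence.

    The membrane potential decays by factor tau each step, integrates the
    input, and emits a spike (resetting to zero) when it crosses threshold.
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v = tau * v + x          # leak, then integrate the new input
        if v >= threshold:
            spikes.append(1)
            v = 0.0              # hard reset after spiking
        else:
            spikes.append(0)
    return spikes

# Three sub-threshold inputs accumulate until the neuron fires once.
print(lif_neuron([0.4, 0.4, 0.4, 0.0, 0.9]))  # [0, 0, 1, 0, 0]
```

Because the spike depends on the recent input history through the decaying membrane state, even this single unit is sensitive to temporal structure, which recurrent spiking models (LSTM- and GRU-like, as in the summary above) extend with learned gating.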