PROPHESEE reveals the world’s first event-based vision sensor in an industry-standard package.
Now ready for mass deployment in your machine vision systems.
METAVISION® SENSOR, PACKAGED
Prophesee’s third-generation Metavision® sensor is now available in an industry-standard package. For the first time, Event-Based Vision can be integrated into existing systems in a light and efficient way.
640×480 VGA Event-Based sensor
Package: 13×15 mm mini PBGA
PIXEL INTELLIGENCE
Bringing intelligence to the very edge
Inspired by the human retina, each pixel at the heart of Prophesee’s patented Event-Based Metavision sensors embeds its own intelligence processing, enabling it to activate independently and trigger events.
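In practice, this asynchronous, per-pixel activation produces a sparse stream of events rather than frames. Below is a minimal illustrative sketch of what such an event stream can look like; the field names and types are assumptions for illustration, not the sensor’s actual output format.

```python
# Illustrative sketch only: a minimal representation of the kind of output an
# event-based pixel produces. Field names and types are assumptions, not the
# sensor's actual data format.
from dataclasses import dataclass

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t: int          # timestamp, microseconds
    polarity: int   # +1 = brightness increase, -1 = brightness decrease

# Each pixel fires only when it detects a local change in illuminance, so the
# sensor output is a sparse, asynchronous stream of such events rather than frames.
stream = [
    Event(x=120, y=45, t=1_000_001, polarity=+1),
    Event(x=121, y=45, t=1_000_017, polarity=-1),
]
```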
SPEED
>10k fps Time-Resolution Equivalent
There is no frame-rate tradeoff anymore. Take full advantage of events over frames and reveal the invisible hidden in hyper-fast, fleeting scene dynamics.
DYNAMIC RANGE
>120dB Dynamic Range
Achieve high robustness even in extreme lighting conditions. With Metavision sensors you can now see details perfectly, from pitch dark to blinding brightness in the same scene, at any speed.
LOW LIGHT
0.08 lx Low-Light Cutoff
Sometimes the darkest areas hold the clearest insights. Metavision enables you to capture events where light is almost non-existent, down to 0.08 lx.
DATA EFFICIENCY
10 to 1000x less data
With each pixel reporting only when it senses movement, Metavision sensors generate on average 10 to 1000x less data than traditional image-based sensors.
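To make the order of magnitude concrete, here is a back-of-the-envelope comparison; the frame rate, event rate and event size below are assumed values for illustration, not measured figures.

```python
# Back-of-the-envelope comparison under assumed numbers (not measured figures):
# a high-speed VGA frame stream vs. an event stream for the same scene.
frame_width, frame_height = 640, 480
bytes_per_pixel = 1                  # assume 8-bit monochrome frames
fps = 1000                           # assume a high-speed frame-based camera
frame_bytes_per_second = frame_width * frame_height * bytes_per_pixel * fps

events_per_second = 1_000_000        # assumed event rate for a moderately dynamic scene
bytes_per_event = 8                  # assumed encoded event size
event_bytes_per_second = events_per_second * bytes_per_event

print(f"frame-based: {frame_bytes_per_second / 1e6:.0f} MB/s")
print(f"event-based: {event_bytes_per_second / 1e6:.0f} MB/s")
print(f"reduction:   ~{frame_bytes_per_second / event_bytes_per_second:.0f}x")
```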
POWER EFFICIENCY
3nW/event
The Metavision sensor’s pixel independence and overall architecture enable new levels of power efficiency, with just 3 nW/event and 26 mW at the sensor level.
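Since the event-driven part of the power budget scales with activity, the quoted per-event figure translates into only a few milliwatts even at high event rates. The event rates in the short calculation below are assumed values used purely for illustration.

```python
# Illustrative arithmetic only: event-driven power at the quoted 3 nW/event,
# evaluated for assumed (not measured) event rates.
nw_per_event = 3
for events_per_second in (100_000, 1_000_000, 10_000_000):
    dynamic_mw = events_per_second * nw_per_event * 1e-9 * 1e3  # nW -> W -> mW
    print(f"{events_per_second:>10,} events/s -> ~{dynamic_mw:.1f} mW of event-driven power")
```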
BUILT FOR THE INDUSTRY 4.0 REVOLUTION
A new category of Machine Vision to unlock a new performance dimension in speed, precision and endurance for Ultra-High Speed counting, Vibration Measurement, Kinematic Monitoring and more.
>1,000 obj/s Throughput
1% Motion Period Irregularity Detection
1 pixel Minimal Amplitude Detection
REVEAL THE INVISIBLE
BETWEEN THE FRAMES
FRAME-BASED VISION
In a traditional frame-based sensor, the whole sensor array is triggered at a pre-defined rhythm, regardless of the actual scene’s dynamics.
This leads to the acquisition of large volumes of raw, undersampled or redundant data.
On the LEFT, a simulation of Frame-Based Vision running at 10 fps. This approach leverages traditional cinema techniques and records a succession of static images to represent movement. Between these images there is nothing; the system is blind by design.
On the RIGHT, the same scene recorded using Event-Based Vision. There is no gap between the frames, because there are no frames anymore. Instead, there is a continuous stream of essential information, driven dynamically by movement, pixel by pixel.
This drastically reduces the power, latency and data processing requirements imposed by traditional frame-based systems.
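The contrast can be sketched in a few lines of code: the same brightness signal, sampled as frames at a fixed rate, versus converted into events whenever it changes by more than a contrast threshold. The signal, threshold and time steps below are toy assumptions used purely to illustrate the principle, not the sensor’s actual circuit behaviour.

```python
# Conceptual toy sketch (not the sensor's actual circuit behaviour): one pixel
# observing a 500 Hz flicker, read out as frames vs. converted into events.
import math

def brightness(t_us: int) -> float:
    """Toy brightness signal for one pixel over time (microseconds)."""
    return 128 + 100 * math.sin(2 * math.pi * t_us / 2_000)  # 500 Hz flicker

# Frame-based: sample every 10 ms; the system is blind to anything in between.
frames = [(t, brightness(t)) for t in range(0, 20_000, 10_000)]

# Event-based: emit an event each time the signal moves past an assumed contrast threshold.
threshold = 20.0
events = []
reference = brightness(0)
for t in range(0, 20_000, 10):           # fine time steps stand in for continuous time
    level = brightness(t)
    while abs(level - reference) >= threshold:
        polarity = 1 if level > reference else -1
        events.append((t, polarity))
        reference += polarity * threshold

# The two frames see the same brightness value and miss the flicker entirely,
# while the event stream captures every swing of the 500 Hz dynamics.
print(f"{len(frames)} frames vs. {len(events)} events over the same 20 ms")
```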
ADVANCED TOOLKIT
With every Metavision sensor purchase comes complimentary access to an advanced toolkit composed of an online portal, drivers, a data player and an SDK.
We are sharing an advanced toolkit so you can start building your own vision.
METAVISION INTELLIGENCE SUITE
Experience first-hand the new performance standards set by Event-Based Vision by interacting with more than 95 algorithms, 67 code samples and 11 ready-to-use applications, the industry’s widest selection available to date.
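As a hedged sketch of what getting started can look like, the snippet below reads an event stream slice by slice using the Metavision SDK’s Python bindings. The module path, class name, constructor arguments and event field names are assumptions based on the SDK’s documented interface and may differ in your installed version.

```python
# Hedged sketch only: reading a recorded event stream with the Metavision SDK
# Python bindings. Names and signatures below are assumptions and may differ
# from the SDK version you install.
from metavision_core.event_io import EventsIterator

# Iterate over a RAW recording in slices of 10 ms of events
# (the file name is a placeholder).
mv_iterator = EventsIterator(input_path="recording.raw", delta_t=10_000)

for events in mv_iterator:
    if events.size == 0:
        continue
    # Each slice is a structured array of events with pixel coordinates,
    # polarity and microsecond timestamps.
    print(f"{events.size} events between t={events['t'][0]} us and t={events['t'][-1]} us")
```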