Introducing Metavision® Intelligence Suite, the most comprehensive Event-Based Vision software toolkit to date

THE MOST COMPREHENSIVE EVENT-BASED VISION SOFTWARE SUITE

95 algorithms, 67 code samples and 11 ready-to-use applications in total, so you can either start your discovery or build your Event-Based Vision product.

OPEN SOURCE ARCHITECTURE

Metavision Intelligence Suite is based on an open source architecture, unlocking full interoperability between our software and hardware devices and enabling a fast-growing Event-Based community.  

LEADING EVENT-BASED MACHINE LEARNING TOOLKIT

Build your advanced Event-Based ML network leveraging the most performant object detector to date (spotlighted at NeurIPS 2020), the largest HD event dataset, an image-to-event simulation module, training, inference, grading features and more.

6 MODULE FAMILIES

With a wide range of machine vision fields covered (Machine Learning, Computer Vision, camera calibration, high-performance applications and more), the tool you are looking for is here.

EXTENSIVE DOCUMENTATION

With 270+ pages of regularly updated content on docs.prophesee.ai, more than 20 Jupyter notebooks, reference data and extensive guidelines, get a head start on your product development.

GET RESULTS IN MINUTES

We took over 6 years to perfect the largest collection of pre-built pipelines, extensive datasets, code samples, GUI tools and more, so you could get results in minutes.

TRY

New to Event-Based Vision and want to try before you buy? Download ESSENTIALS, a free evaluation version of the Metavision Intelligence Suite. Experience all key features, without time constraints.

BUY

Get the full PROFESSIONAL experience with a licensing plan designed to support your business strategy, including access to our support helpdesk, advanced add-ons and source code.

MODULES & APPS

TOOLS

HARDWARE

MODULES & APPS

MACHINE LEARNING – DETECTION INFERENCE

Unlock the potential of Event-Based machine learning with a set of dedicated tools providing everything you need to start running Deep Neural Networks (DNNs) on events. Leverage our pretrained automotive model written in PyTorch, and experiment with live detection & tracking using our C++ pipeline. Use our comprehensive Python library to design your own networks.

Pretrained network trained on an automotive dataset with 15 hours of recordings and 23 million labels

Live detection and tracking @100Hz

MACHINE LEARNING – DETECTION TRAINING

Train your own Object Detection application with our ready-to-use training framework. Experiment with multiple pre-built Event-Based tensor representations and training network topologies suited for event-based data.

4 pre-built tensor representations

Automated HDF5 dataset generation 

Comprehensive training toolbox, including tailor-made preprocessing, DataLoader, NN architecture, visualization tools and more
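To make the idea of an event tensor representation concrete, here is a minimal sketch of one common choice, a per-polarity event-count histogram, in plain NumPy. The function name is illustrative, not part of the SDK:

```python
import numpy as np

def events_to_histogram(x, y, p, height, width):
    """Accumulate events into a 2-channel count histogram (one channel
    per polarity) -- one of the simplest event tensor representations."""
    hist = np.zeros((2, height, width), dtype=np.float32)
    np.add.at(hist, (p, y, x), 1.0)  # scatter-add one count per event
    return hist

# Three synthetic events: two positive at (x=3, y=2), one negative at (x=1, y=0)
x = np.array([3, 3, 1])
y = np.array([2, 2, 0])
p = np.array([1, 1, 0])
h = events_to_histogram(x, y, p, height=4, width=5)
```

A stack of such histograms over successive time slices is a typical input to an event-based detection network.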

MACHINE LEARNING – EVENT SIMULATOR

Bridge frame-based and event-based worlds with our Event Simulator. Generate synthetic data to augment your dataset, and partially reuse existing references.

Off-the-shelf, ready-to-use Event simulator
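The principle behind such simulators can be sketched in a few lines: compare the log intensity of consecutive frames and emit an event wherever the change crosses a contrast threshold. A toy illustration (our own simplified version, not the module's actual implementation):

```python
import numpy as np

def frames_to_events(frame0, frame1, threshold=0.2, eps=1e-6):
    """Toy frame-to-event conversion: emit an event wherever the
    log-intensity change between two frames exceeds the contrast
    threshold; the sign of the change gives the polarity."""
    delta = np.log(frame1 + eps) - np.log(frame0 + eps)
    ys, xs = np.nonzero(np.abs(delta) >= threshold)
    polarities = (delta[ys, xs] > 0).astype(np.int8)
    return xs, ys, polarities

f0 = np.full((4, 4), 0.5)
f1 = f0.copy()
f1[1, 2] = 1.0   # this pixel brightens -> positive event
f1[3, 0] = 0.1   # this pixel darkens  -> negative event
xs, ys, ps = frames_to_events(f0, f1)
```

Real simulators additionally interpolate between frames to produce realistic microsecond timestamps, which this sketch omits.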

MACHINE LEARNING – DETECTION KPI

Evaluate your detection performance with our Object Detection KPI toolkit in line with the latest COCO API.

mAP, mAR and their variants included
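At the core of these detection metrics is intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch of that criterion (plain Python, not the toolkit's API):

```python
def box_iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes --
    the matching criterion behind COCO-style mAP/mAR."""
    x1 = max(a[0], b[0])
    y1 = max(a[1], b[1])
    x2 = min(a[2], b[2])
    y2 = min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

iou = box_iou([0, 0, 10, 10], [5, 0, 15, 10])  # two half-overlapping boxes
```

mAP then averages precision over recall levels and IoU thresholds (0.50 to 0.95 in the COCO protocol), per class.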

MACHINE LEARNING – OPTICAL FLOW INFERENCE

Predict optical flow from Event-Based data leveraging our pretrained flow model, customized data loader, and collections of loss functions and visualization tools to set up your flow inference pipeline.

Self-supervised Flownet architectures 

Lightweight model

MACHINE LEARNING – OPTICAL FLOW TRAINING

No ground truth? Leverage our self-supervised architecture. Train your Optical Flow application with our custom-built FlowNet training framework. Experiment with 4 pre-built Flow Networks tailor-made for your Event-based data.

Comprehensive flow training toolbox, including different network topologies, various loss functions and visualization mode

VIBRATION MONITORING

Monitor vibration frequencies continuously, remotely, with pixel precision, by tracking the temporal evolution of every pixel in a scene. For each event, the pixel coordinates, the polarity of the change and the exact timestamp are recorded, thus providing a global, continuous understanding of vibration patterns.

From 1Hz to kHz range

1 Pixel Accuracy
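As a rough illustration of the principle: a pixel observing a vibrating edge fires approximately one same-polarity event per oscillation period, so its frequency can be estimated from event timestamps alone. This is our own simplified sketch, not the module's actual algorithm:

```python
def pixel_frequency(timestamps_us):
    """Estimate a pixel's vibration frequency (Hz) from the timestamps
    (microseconds) of its successive same-polarity events, assuming
    one such event per oscillation period."""
    periods = [t1 - t0 for t0, t1 in zip(timestamps_us, timestamps_us[1:])]
    mean_period_us = sum(periods) / len(periods)
    return 1e6 / mean_period_us

# A pixel firing every 10 ms corresponds to a 100 Hz vibration
ts = [0, 10_000, 20_000, 30_000, 40_000]
f = pixel_frequency(ts)
```

Running this per pixel yields the kind of dense, per-pixel frequency map the module produces.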

SPATTER MONITORING

Track small particles with spatter-like motion. Thanks to the high time resolution and dynamic range of our Event-Based Vision sensor, small particles can be tracked in the most difficult and demanding environments.

Up to 200kHz tracking frequency (5µs time resolution)

Simultaneous XYT tracking of all particles

HIGH-SPEED COUNTING

Count objects at unprecedented speeds with high accuracy, generating less data and without any motion blur. Objects are counted as they pass through the field of view, triggering each pixel independently as the object goes by.

>1,000 Obj/s. Throughput

>99.5% Accuracy @1,000 Obj/s.
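The counting principle can be sketched simply: events on a virtual counting line arrive in dense bursts as each object passes, so bursts separated by a sufficient time gap are counted as distinct objects. This is an illustrative simplification, not the module's actual algorithm:

```python
def count_objects(line_event_times_us, gap_us=1_000):
    """Count objects crossing a virtual line: each passing object
    produces a dense burst of events on the line's pixels, so bursts
    separated by more than `gap_us` count as distinct objects."""
    count = 0
    last_t = None
    for t in sorted(line_event_times_us):
        if last_t is None or t - last_t > gap_us:
            count += 1  # gap exceeded: a new object has arrived
        last_t = t
    return count

# Three bursts of line events -> three objects counted
times = [0, 50, 120, 10_000, 10_040, 25_000]
n = count_objects(times)
```

Because each pixel timestamps changes independently at microsecond resolution, this works at object rates far beyond what frame exposure times allow.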

PARTICLE SIZE MONITORING

Control, count and measure the size of objects moving at very high speed in a channel or a conveyor. Get instantaneous quality statistics in your production line, to control your process.

Up to 500,000 pixels/second speed 

99.9% Counting precision

OBJECT TRACKING

Track moving objects in the field of view. Leverage the low data-rate and sparse information provided by event-based sensors to track objects with low compute power.

Continuous tracking in time: no more “blind spots” between frame acquisitions

Native segmentation: analyze only motion, ignore the static background
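A minimal illustration of why event-based tracking is cheap: each incoming batch of sparse events simply nudges a track's position, so compute scales with the event rate rather than the frame size. This is our own toy sketch, not the module's algorithm:

```python
def update_track(track_xy, event_xs, event_ys, alpha=0.1):
    """Minimal event-driven tracker update: move the track centroid
    toward the mean position of the new events (exponential smoothing).
    Static background produces no events, so it is ignored for free."""
    mean_x = sum(event_xs) / len(event_xs)
    mean_y = sum(event_ys) / len(event_ys)
    tx, ty = track_xy
    return (tx + alpha * (mean_x - tx), ty + alpha * (mean_y - ty))

track = (10.0, 10.0)
track = update_track(track, [20, 22], [10, 10])  # object drifting right
```

Each update is a handful of arithmetic operations per event batch, which is what enables tracking on low compute power.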

CALIBRATION

Deploy your applications in real-life environments and control all the optical parameters of your event-based systems. Calibrate your cameras and adjust the focus with a suite of pre-built tools. Extend to your specific needs, and connect to standard calibration routines using our algorithmic bricks. 

Lens focus assessment 

Automatic intrinsics camera calibration 

 

EDGELET TRACKING

Track 3D edges and/or fiducial markers for your AR/VR application. Benefit from the high temporal resolution of events to increase the accuracy and robustness of your edge tracking application.

Automated 3D object detection with geometrical prior 

3D object real-time tracking 

 

OPTICAL FLOW

Rediscover this fundamental computer vision building block, but with an event twist. Understand motion much more efficiently, through continuous pixel-by-pixel tracking rather than sequential frame-by-frame analysis.

17x less power compared to traditional image-based approaches 

Get features only on moving objects

XYT VISUALIZATION

Discover the power of time – space continuity for your application by visualizing your data with our XYT viewer.

See between the frames

Zoom in time and understand motion in the scene

 

DATA RATE VISUALIZATION

Understand the process of event generation over time, visualize data and generate plots with the power of Python.

Ready-to-use Python environment

Interface with recordings & live cameras

ULTRA SLOW MOTION

Slow down time, down to the time-resolution equivalent of over 200,000 frames per second, live, while generating orders of magnitude less data than traditional approaches. Understand the finest motion dynamics hiding in ultra-fast and fleeting events.

Up to 200,000 fps (time resolution equivalent)

TOOLS

Metavision Studio is the perfect tool to start with, whether you own an EVK or not.

It features a Graphical User Interface allowing anyone to visualize and record data streamed by PROPHESEE-compatible Event-Based Vision systems.

It also enables you to read provided event datasets to deepen your understanding of Event-Based Vision.

 

Read, stream, tune events

Comprehensive Graphical User Interface

Main features
  • Data visualization from Prophesee-compatible Event-Based Vision systems
  • Control of data visualization (accumulation time, fps)
  • Data recording
  • Replay recorded data
  • Export to AVI video
  • Control, saving and loading of sensors settings

Metavision Designer is a tool that allows engineers to interconnect components very easily for fast prototyping of Event-Based Vision applications.

It consists of a rich set of libraries, Python APIs and code examples built for quick and efficient integration and testing.

Metavision Designer is built to help engineers who desire to quantify the benefits of Event-based Vision applications.

 

Create a functional prototype in minutes

34 components, 11 Python samples, 7 Jupyter notebook tutorials

Main features

 

#!/usr/bin/python

"""
Simple script to estimate and display a sparse optical flow.
"""

import argparse

from metavision_hal import DeviceDiscovery
from metavision_designer_engine import Controller
from metavision_designer_core import HalDeviceInterface, CdProducer, ImageDisplayCV
from metavision_designer_cv import SpatioTemporalContrast, SparseOpticalFlow, FlowFrameGenerator


def parse_args():
    """Parse input arguments."""
    parser = argparse.ArgumentParser(description='Sparse optical flow sample.')
    parser.add_argument('-i', '--input-raw-file', dest='input_path',
                        help='Path to input RAW file. If not specified, the camera live stream is used.')
    return parser.parse_args()


def main():
    """Main"""
    args = parse_args()

    # Open a live camera if no file is provided
    from_file = False
    if not args.input_path:
        device = DeviceDiscovery.open("")
        if device is None:
            print('Error: Failed to open a camera.')
            return 1
    else:
        device = DeviceDiscovery.open_raw_file(args.input_path)
        from_file = True

    # Create HalDeviceInterface
    interface = HalDeviceInterface(device)

    # Read CD events from the interface
    prod_cd = CdProducer(interface)

    # Add event filtering
    stc_filter = SpatioTemporalContrast(prod_cd, 40000)
    stc_filter.set_name("STC filter")

    # Create a sparse optical flow filter
    sparse_flow = SparseOpticalFlow(stc_filter, SparseOpticalFlow.Tuning.FAST_OBJECTS)
    sparse_flow.set_name("Sparse Optical Flow")

    # Generate a graphical representation of the events with the flow
    flow_frame_generator = FlowFrameGenerator(prod_cd, sparse_flow, True)
    flow_frame_generator.set_name("Flow frame generator")

    # Display the generated image
    flow_display = ImageDisplayCV(flow_frame_generator, False)
    flow_display.set_name("Flow display")

    # Create the controller
    controller = Controller()

    # Register the components
    controller.add_device_interface(interface)
    controller.add_component(prod_cd)
    controller.add_component(stc_filter)
    controller.add_component(sparse_flow)
    controller.add_component(flow_frame_generator)
    controller.add_component(flow_display)

    # Set up rendering at 25 frames per second
    controller.add_renderer(flow_display, Controller.RenderingMode.SimulationClock, 25.)
    controller.enable_rendering(True)

    # Set controller parameters for running
    controller.set_slice_duration(10000)
    controller.set_batch_duration(100000)
    do_sync = from_file  # synchronize on timestamps only when replaying a file

    # Start the camera
    if not from_file:
        device.get_i_device_control().start()

    # Start the streaming of events
    i_events_stream = device.get_i_events_stream()
    i_events_stream.start()

    # Main loop
    cnt = 0
    while not controller.are_producers_done():
        controller.run(do_sync)

        if controller.get_last_key_pressed() == ord('q'):
            break

        if cnt % 10 == 0:
            controller.print_stats(False)

        cnt += 1

    return 0


if __name__ == "__main__":
    import sys
    sys.exit(main())

 

Metavision SDK is the largest set of Event-Based Vision algorithms accessible to date. High-performance algorithms are available via APIs, ready to go to production with Event-Based Vision applications.

Algorithms are coded in C++ and available as pre-compiled Windows and Linux binaries in the free license version. The SDK's C++ source code can be purchased under the commercial license.

 

Develop high-performance Event-Based Vision solutions

25 C++ code samples, 44 Python classes, 51 algorithms, 26 Python samples, 16 tutorials and 11 ready-to-use applications

Main features
  • C++ APIs for highly efficient applications and runtime execution
  • Source access possible to port on custom architecture
  • Extensive documentation / code examples / training & learning material available
  • Runs natively on Linux and Windows
  • Compatible with Prophesee vision systems and "Powered by Prophesee" partner products
  • A complete API that allows you to explore the full potential of Event-Based Vision in just a few lines of code

/**********************************************************************************************************************
 * Copyright (c) Prophesee S.A.                                                                                       *
 *                                                                                                                    *
 * Licensed under the Apache License, Version 2.0 (the "License");                                                    *
 * you may not use this file except in compliance with the License.                                                   *
 * You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0                                 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed   *
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.                      *
 * See the License for the specific language governing permissions and limitations under the License.                 *
 **********************************************************************************************************************/

// This code sample demonstrates how to use the Metavision SDK. The goal of this sample is to create a simple event
// counter and display by introducing some basic concepts of the Metavision SDK.

#include <metavision/sdk/driver/camera.h>
#include <metavision/sdk/core/utils/cd_frame_generator.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// this class will be used to analyze the events
class EventAnalyzer {
public:
    // class variables to store global information
    int global_counter                 = 0; // this will track how many events we processed
    Metavision::timestamp global_max_t = 0; // this will track the highest timestamp we processed

    // this function will be associated to the camera callback
    // it is used to compute statistics on the received events
    void analyze_events(const Metavision::EventCD *begin, const Metavision::EventCD *end) {
        std::cout << "----- New callback! -----" << std::endl;

        // time analysis
        // Note: events are ordered by timestamp in the callback, so the first event will have the lowest timestamp and
        // the last event will have the highest timestamp
        Metavision::timestamp min_t = begin->t;     // get the timestamp of the first event of this callback
        Metavision::timestamp max_t = (end - 1)->t; // get the timestamp of the last event of this callback
        global_max_t = max_t; // events are ordered by timestamp, so the current last event has the highest timestamp

        // counting analysis
        int counter = 0;
        for (const Metavision::EventCD *ev = begin; ev != end; ++ev) {
            ++counter; // increasing local counter
        }
        global_counter += counter; // increase global counter

        // report
        std::cout << "There were " << counter << " events in this callback" << std::endl;
        std::cout << "There were " << global_counter << " total events up to now." << std::endl;
        std::cout << "The current callback included events from " << min_t << " up to " << max_t << " microseconds."
                  << std::endl;

        std::cout << "----- End of the callback! -----" << std::endl;
    }
};

// main loop
int main(int argc, char *argv[]) {
    Metavision::Camera cam;       // create the camera
    EventAnalyzer event_analyzer; // create the event analyzer

    if (argc >= 2) {
        // if we passed a file path, open it
        cam = Metavision::Camera::from_file(argv[1]);
    } else {
        // open the first available camera
        cam = Metavision::Camera::from_first_available();
    }

    // add the event callback. This callback will be called periodically to provide access to the most recent events
    cam.cd().add_callback([&event_analyzer](const Metavision::EventCD *ev_begin, const Metavision::EventCD *ev_end) {
        event_analyzer.analyze_events(ev_begin, ev_end);
    });

    // get camera resolution
    int camera_width  = cam.geometry().width();
    int camera_height = cam.geometry().height();

    // create a frame generator for visualization
    // this will get the events from the callback and accumulate them in a cv::Mat
    Metavision::CDFrameGenerator cd_frame_generator(camera_width, camera_height);

    // this callback tells the camera to pass the events to the frame generator, who will then create the frame
    cam.cd().add_callback(
        [&cd_frame_generator](const Metavision::EventCD *ev_begin, const Metavision::EventCD *ev_end) {
            cd_frame_generator.add_events(ev_begin, ev_end);
        });

    const int fps       = 25; // event-based cameras do not have a frame rate, but we need one for visualization
    const int wait_time = static_cast<int>(std::round(1.f / fps * 1000)); // how much we should wait between two frames
    cv::Mat cd_frame;                                                     // frame where events will be accumulated
    const std::string window_name = "Metavision SDK Get Started";

    // this function is used to tell the frame generator what to do with the frame and how often to generate it
    cd_frame_generator.start(
        fps, [&cd_frame](const Metavision::timestamp &ts, const cv::Mat &frame) { frame.copyTo(cd_frame); });

    cv::namedWindow(window_name, cv::WINDOW_GUI_EXPANDED);
    cv::resizeWindow(window_name, camera_width, camera_height);

    // start the camera
    cam.start();

    // keep running while the camera is on or the video is not finished
    while (cam.is_running()) {
        // display the frame if it's not empty
        if (!cd_frame.empty()) {
            cv::imshow(window_name, cd_frame);
        }

        // if the user presses the `q` key, quit the loop
        if ((cv::waitKey(wait_time) & 0xff) == 'q') {
            break;
        }
    }

    // the video is finished or the user wants to quit, stop the camera.
    cam.stop();

    // print the global statistics
    const double length_in_seconds = event_analyzer.global_max_t / 1000000.0;
    std::cout << "There were " << event_analyzer.global_counter << " events in total." << std::endl;
    std::cout << "The total duration was " << length_in_seconds << " seconds." << std::endl;
    if (length_in_seconds >= 1) { // no need to print this statistic if the video was too short
        std::cout << "There were " << event_analyzer.global_counter / length_in_seconds
                  << " events per second on average." << std::endl;
    }
}

 

HARDWARE

BUY COMPATIBLE HARDWARE

Prophesee Evaluation Kits and packaged sensors are fully compatible with the Metavision Intelligence Suite, as is Century Arks' SilkyEvCam - Powered by Prophesee.

 

BUILD COMPATIBLE HARDWARE

Through an extensive partnership program, Prophesee enables vision equipment manufacturers to build their own Event-Based Vision products.

Contact us to develop your own hardware or integrate your vision system with Metavision Intelligence.

FAQ

Do I need to buy an EVK to start?

You don't necessarily need an Evaluation Kit or Event-Based Vision equipment to start your discovery. You can start with Metavision Studio and interact with the provided recordings first.

Which OSes are supported?

The Metavision Intelligence Suite supports Ubuntu 18.04 / 20.04 and Windows 10 64-bit. For other OS compatibility, contact us.

Which platforms are supported?

Metavision Intelligence Pro is provided in source form with build instructions for Windows and Linux. Metavision Essentials is provided in binary form for PC (x86-64 architecture).

What can I do with a free vs. paid license?

With a free evaluation license, you can use all key features of the Metavision Intelligence Suite. A paid license is required if you want to sell a commercial application, or to gain access to source code, advanced add-ons or support.

Which Event-Based Vision hardware is supported?

The software suite is compatible with Prophesee Metavision sensors and Evaluation Kits. It can also operate with compatible third party products.

 

Do you provide Python examples?

Yes, we provide Python sample code: https://docs.prophesee.ai/stable/samples.html

TRY or BUY?

PROFESSIONAL

Get the full experience with a licensing plan designed to support your business strategy
  • Paid Commercial License
  • Access to extended software tools: Studio, Designer, SDK + additional software add-ons 
  • Access to public online documentation + Prophesee Knowledge Center + advanced support packages + training workshops
  • Access to advanced modules: High-Speed Counting, Spatter Monitoring, Vibration Monitoring, Object Tracking, Optical Flow, Ultra-Slow Motion, Visualization tools, Jet Monitoring, Particle Size Monitoring, Edgelet Tracking, Machine Learning Detection Inference and KPI + Detection Training + Event Simulator + Optical Flow Inference + Optical Flow Training
  • Access to source code
  • Commercial rights

Don't miss a bit,

follow us to be the first to know

✉️ Join Our Newsletter