Introducing Metavision® Intelligence Suite, the most comprehensive Event-Based Vision software toolkit to date

 

Composed of Player, Designer and the SDK, the Metavision Intelligence suite provides the set of tools you need, whether you are just discovering Event-Based Vision or already building machine vision products.

Experience first-hand the new performance standards set by Event-Based Vision by interacting with 62 algorithms, 54 code samples and 11 ready-to-use applications: the industry's widest selection available to date.

TRY

New to Event-Based Vision and want to try before you buy? Download ESSENTIALS, the free evaluation version of the Metavision Intelligence suite, and experience all key features without time constraints.

BUY

Get the full PROFESSIONAL experience with a licensing plan designed to support your business strategy, including access to our support helpdesk, advanced add-ons and source code.

I am starting my journey with Event-Based Vision. I am looking to better understand what events are and experiment with datasets.

I need a way to quickly evaluate the benefits of Event-Based Vision by building application prototypes using a rich Python API, datasets, live execution.

I am looking for a powerful set of C++ tools and modules to optimize and productize Event-Based Vision solutions.

DISCOVER

Metavision Player is the perfect tool to start with, whether you own an EVK or not.

It features a Graphical User Interface allowing anyone to visualize and record data streamed by PROPHESEE-compatible Event-Based Vision systems.

It also enables you to read provided event datasets to deepen your understanding of Event-Based Vision.

 

Read, stream, tune events

Comprehensive Graphical User Interface

Main features
  • Data visualization from Prophesee-compatible Event-Based Vision systems
  • Control of data visualization (accumulation time, fps)
  • Data recording
  • Replay of recorded data
  • Export to AVI video
  • Control, saving and loading of sensor settings

PROTOTYPE

Metavision Designer is a tool that lets engineers easily interconnect components for fast prototyping of Event-Based Vision applications.

It consists of a rich set of libraries, Python APIs and code examples built for quick and efficient integration and testing.

Metavision Designer is built for engineers who want to quantify the benefits of Event-Based Vision in their applications.

Create a functional prototype in minutes

34 components, 11 Python samples, 7 Jupyter notebook tutorials

Main features

 

#!/usr/bin/python

"""
Simple script to estimate and display a sparse optical flow.
"""

import argparse
import sys

from metavision_hal import DeviceDiscovery
from metavision_designer_engine import Controller
from metavision_designer_core import HalDeviceInterface, CdProducer, ImageDisplayCV
from metavision_designer_cv import SpatioTemporalContrast, SparseOpticalFlow, FlowFrameGenerator


def parse_args():
    """Parse input arguments."""
    parser = argparse.ArgumentParser(description='Sparse optical flow sample.')
    parser.add_argument('-i', '--input-raw-file', dest='input_path',
                        help='Path to input RAW file. If not specified, the camera live stream is used.')
    return parser.parse_args()


def main():
    """Main"""
    args = parse_args()

    # Open a live camera if no file is provided
    from_file = False
    if not args.input_path:
        device = DeviceDiscovery.open("")
        if device is None:
            print('Error: Failed to open a camera.')
            return 1
    else:
        device = DeviceDiscovery.open_raw_file(args.input_path)
        from_file = True

    # Create HalDeviceInterface
    interface = HalDeviceInterface(device)

    # Read CD events from the interface
    prod_cd = CdProducer(interface)

    # Add event filtering
    stc_filter = SpatioTemporalContrast(prod_cd, 40000)
    stc_filter.set_name("STC filter")

    # Create a sparse optical flow filter
    sparse_flow = SparseOpticalFlow(stc_filter, SparseOpticalFlow.Tuning.FAST_OBJECTS)
    sparse_flow.set_name("Sparse Optical Flow")

    # Generate a graphical representation of the events with the flow
    flow_frame_generator = FlowFrameGenerator(prod_cd, sparse_flow, True)
    flow_frame_generator.set_name("Flow frame generator")

    # Display the generated image
    flow_display = ImageDisplayCV(flow_frame_generator, False)
    flow_display.set_name("Flow display")

    # Create the controller
    controller = Controller()

    # Register the components
    controller.add_device_interface(interface)
    controller.add_component(prod_cd)
    controller.add_component(stc_filter)
    controller.add_component(sparse_flow)
    controller.add_component(flow_frame_generator)
    controller.add_component(flow_display)

    # Set up rendering at 25 frames per second
    controller.add_renderer(flow_display, Controller.RenderingMode.SimulationClock, 25.)
    controller.enable_rendering(True)

    # Set controller parameters for running
    controller.set_slice_duration(10000)
    controller.set_batch_duration(100000)
    # Synchronize with the recording's timestamps when replaying a file
    do_sync = from_file

    # Start the camera when streaming live
    if not from_file:
        device.get_i_device_control().start()

    # Start the streaming of events
    device.get_i_events_stream().start()

    # Main loop
    cnt = 0
    while not controller.are_producers_done():
        controller.run(do_sync)

        # Quit on 'q'
        if controller.get_last_key_pressed() == ord('q'):
            break

        if cnt % 10 == 0:
            controller.print_stats(False)

        cnt += 1

    return 0


if __name__ == "__main__":
    sys.exit(main())

 


DEVELOP

Metavision SDK is the largest set of Event-Based Vision algorithms accessible to date. High-performance algorithms are available via APIs, ready to take Event-Based Vision applications to production.

Algorithms are written in C++ and shipped as pre-compiled Windows and Linux binaries in the free license version. The SDK's C++ source code can be purchased under a commercial license.

Develop high-performance Event-Based Vision solutions

21 C++ code samples, 41 algorithms and 10 ready-to-use applications

Main features
  • C++ APIs for highly efficient applications and runtime execution
  • Source access available to port to custom architectures
  • Extensive documentation / code examples / training & learning material available
  • Runs natively on Linux and Windows
  • Compatible with Prophesee vision systems and "Powered by Prophesee" partner products

#include <iostream>
#include <opencv2/highgui/highgui.hpp>

#include <metavision/sdk/driver/camera.h>
#include <metavision/sdk/base/events/event_cd.h>
#include <metavision/sdk/core/utils/cd_frame_generator.h>

// this class will be used to analyze the events
class EventAnalyzer {
public:
    // class variables to store global information
    int global_counter                 = 0; // this will track how many events we processed
    Metavision::timestamp global_max_t = 0; // this will track the highest timestamp we processed

    // this function will be associated to the camera callback
    // it is used to compute statistics on the received events
    void analyze_events(const Metavision::EventCD *begin, const Metavision::EventCD *end) {
        std::cout << "----- New callback! -----" << std::endl;

        // time analysis
        // Note: events are ordered by timestamp in the callback, so the first event will have the lowest timestamp
        // and the last event will have the highest timestamp
        Metavision::timestamp min_t = begin->t;     // get the timestamp of the first event of this callback
        Metavision::timestamp max_t = (end - 1)->t; // get the timestamp of the last event of this callback
        global_max_t = max_t; // events are ordered by timestamp, so the current last event has the highest timestamp

        // counting analysis
        int counter = 0;
        for (const Metavision::EventCD *ev = begin; ev != end; ++ev) {
            ++counter; // increase the local counter
        }
        global_counter += counter; // increase the global counter

        // report
        std::cout << "There were " << counter << " events in this callback" << std::endl;
        std::cout << "There were " << global_counter << " total events up to now." << std::endl;
        std::cout << "The current callback included events from " << min_t << " up to " << max_t << " microseconds."
                  << std::endl;

        std::cout << "----- End of the callback! -----" << std::endl;
    }
};

// main loop
int main(int argc, char *argv[]) {
    Metavision::Camera cam;       // create the camera
    EventAnalyzer event_analyzer; // create the event analyzer

    if (argc >= 2) {
        // if we passed a file path, open it
        cam = Metavision::Camera::from_file(argv[1]);
    } else {
        // open the first available camera
        cam = Metavision::Camera::from_first_available();
    }

    // add the event callback. This callback will be called periodically to provide access to the most recent events
    cam.cd().add_callback([&event_analyzer](const Metavision::EventCD *ev_begin, const Metavision::EventCD *ev_end) {
        event_analyzer.analyze_events(ev_begin, ev_end);
    });

    // get camera resolution
    int camera_width  = cam.geometry().width();
    int camera_height = cam.geometry().height();

    // create a frame generator for visualization
    // this will get the events from the callback and accumulate them in a cv::Mat
    Metavision::CDFrameGenerator cd_frame_generator(camera_width, camera_height);

    // this callback tells the camera to pass the events to the frame generator, which will then create the frame
    cam.cd().add_callback(
        [&cd_frame_generator](const Metavision::EventCD *ev_begin, const Metavision::EventCD *ev_end) {
            cd_frame_generator.add_events(ev_begin, ev_end);
        });

    int fps       = 25;         // event cameras do not have a frame rate, but we need one for visualization
    int wait_time = 1000 / fps; // how long we should wait between two frames, in milliseconds
    cv::Mat cd_frame;           // the cv::Mat where the events will be accumulated
    std::string window_name = "Metavision SDK Get Started";

    // this function is used to tell the frame generator what to do with the frame and how often to generate it
    cd_frame_generator.start(
        fps, [&cd_frame](const Metavision::timestamp &ts, const cv::Mat &frame) { frame.copyTo(cd_frame); });

    cv::namedWindow(window_name, cv::WINDOW_GUI_EXPANDED);
    cv::resizeWindow(window_name, camera_width, camera_height);

    // start the camera
    cam.start();

    // keep running while the camera is on or the video is not finished
    while (cam.is_running()) {
        // display the frame if it's not empty
        if (!cd_frame.empty()) {
            cv::imshow(window_name, cd_frame);
        }

        // if the user presses the `q` key, quit the loop
        if ((cv::waitKey(wait_time) & 0xff) == 'q') {
            break;
        }
    }

    // the video is finished or the user wants to quit, stop the camera.
    cam.stop();

    // print the global statistics
    float length_in_seconds = event_analyzer.global_max_t / 1000000.0;
    std::cout << "There were " << event_analyzer.global_counter << " events in total." << std::endl;
    std::cout << "The total duration was " << length_in_seconds << " seconds." << std::endl;
    if (length_in_seconds >= 1) { // no need to print this statistic if the video was too short
        std::cout << "There were " << event_analyzer.global_counter / (event_analyzer.global_max_t / 1000000.0)
                  << " events per second on average." << std::endl;
    }
}

 


 

APPLICATIONS & ALGORITHMS

XYT VISUALIZATION

Discover the power of time-space continuity for your application by visualizing your data with our XYT viewer.

See between the frames

Zoom in time and understand motion in the scene

DATA RATE VISUALIZATION

Understand the process of event generation over time, visualize data and generate plots with the power of Python.

Ready-to-use Python environment

Interface with recordings & live cameras

ULTRA SLOW MOTION

Slow down time, down to the time-resolution equivalent of over 200,000 frames per second, live, while generating orders of magnitude less data than traditional approaches. Understand the finest motion dynamics hiding in ultra-fast and fleeting events.

Up to 200,000 fps (time resolution equivalent)

OBJECT TRACKING

Track moving objects in the field of view. Leverage the low data-rate and sparse information provided by event-based sensors to track objects with low compute power.

Continuous tracking in time: no more “blind spots” between frame acquisitions

Native segmentation: analyze only motion, ignore the static background
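As a rough illustration of why event-based tracking can run on low compute (a sketch under our own assumptions, not the SDK's tracking algorithm): with a static camera the background generates no events, so a time slice of events is already a segmentation of the moving object, and a simple centroid of the eventful pixels gives its position.

```python
import numpy as np

def track_centroid(events_xy):
    """Position estimate for one time slice: the mean of eventful pixel
    coordinates. With a static camera the background emits no events,
    so only the moving object contributes to the slice."""
    return events_xy.mean(axis=0)

# Hypothetical time slice: 200 events scattered around an object at (40, 25)
rng = np.random.default_rng(0)
slice_xy = rng.normal(loc=(40.0, 25.0), scale=1.0, size=(200, 2))
cx, cy = track_centroid(slice_xy)
print(round(cx), round(cy))  # -> 40 25
```

Repeating this per slice yields a continuous trajectory with no blind spots between frame acquisitions.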

OPTICAL FLOW

Rediscover this fundamental computer vision building block, with an event twist. Understand motion much more efficiently, through continuous pixel-by-pixel tracking rather than sequential frame-by-frame analysis.

17x less power compared to traditional image-based approaches 

Get features only on moving objects
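One common event-based flow idea, shown here as a minimal sketch (not necessarily the SparseOpticalFlow algorithm used in the suite), works on the "time surface": a per-pixel map of the last event's timestamp. For a moving edge, the spatial gradient of this surface points along the motion and has magnitude 1/|v|, so velocity can be recovered as v = g / |g|².

```python
import numpy as np

def normal_flow(time_surface):
    """Estimate per-pixel normal flow from a time surface (timestamps in µs).
    The gradient g of the time surface satisfies |g| = 1/|v|, so v = g / |g|^2."""
    gy, gx = np.gradient(time_surface)
    mag2 = gx**2 + gy**2
    with np.errstate(divide="ignore", invalid="ignore"):
        vx = np.where(mag2 > 0, gx / mag2, 0.0)
        vy = np.where(mag2 > 0, gy / mag2, 0.0)
    return vx, vy

# Synthetic time surface: a vertical edge sweeping right at 0.2 px/µs (t = x / v)
x = np.arange(64, dtype=float)
ts = np.tile(x / 0.2, (64, 1))
vx, vy = normal_flow(ts)
print(vx[32, 32])  # ≈ 0.2 px/µs
```

Because only eventful pixels need updating, such schemes naturally produce features only on moving objects.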

HIGH-SPEED COUNTING

Count objects at unprecedented speeds with high accuracy, while generating less data and without any motion blur. Objects are counted as they pass through the field of view, triggering each pixel independently as they go by.

>1,000 obj/s throughput

>99.5% accuracy @ 1,000 obj/s
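As a toy illustration of the counting principle described above (a sketch under our own assumptions, not the SDK's counting application): events at a counting line arrive in bursts, one burst per passing object, so objects can be counted by detecting quiet gaps between bursts.

```python
import numpy as np

def count_objects(event_ts_us, gap_us=1000):
    """Count activity bursts separated by quiet gaps longer than gap_us.
    Each burst corresponds to one object crossing the counting line."""
    ts = np.sort(np.asarray(event_ts_us, dtype=float))
    if ts.size == 0:
        return 0
    # a new object starts wherever the inter-event gap exceeds the threshold
    return 1 + int(np.count_nonzero(np.diff(ts) > gap_us))

# three simulated objects, each producing a 200 µs burst of 50 events
bursts = [np.random.uniform(s, s + 200, 50) for s in (0, 5000, 10000)]
events = np.concatenate(bursts)
print(count_objects(events))  # -> 3
```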

VIBRATION MONITORING

Monitor vibration frequencies continuously, remotely, with pixel precision, by tracking the temporal evolution of every pixel in a scene. For each event, the pixel coordinates, the polarity of the change and the exact timestamp are recorded, thus providing a global, continuous understanding of vibration patterns.

From 1 Hz to kHz range

1 Pixel Accuracy
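Since each event carries pixel coordinates, polarity and an exact timestamp, a vibration frequency can be read directly from the event timing at a single pixel. Here is a minimal illustration with synthetic data (our own simplified model, not the suite's vibration monitoring algorithm):

```python
import numpy as np

def estimate_frequency(timestamps_us):
    """Estimate a vibration frequency from the timestamps (in µs) of
    same-polarity events at one pixel: the median inter-event interval
    approximates one vibration period."""
    dt = np.diff(np.sort(np.asarray(timestamps_us, dtype=float)))
    return 1e6 / np.median(dt)

# Hypothetical pixel watching a 120 Hz vibration for one second:
# one positive-contrast event per period (simplified model)
f_true = 120.0
t_pos = np.arange(0.0, 1e6, 1e6 / f_true)
print(round(estimate_frequency(t_pos), 3))  # -> 120.0
```

Running this per pixel yields the pixel-precise frequency map described above.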

SPATTER MONITORING

Track small particles with spatter-like motion. Thanks to the high time resolution and dynamic range of our Event-Based Vision sensor, small particles can be tracked in the most difficult and demanding environment.

Up to 200kHz tracking frequency (5µs time resolution)

Simultaneous XYT tracking of all particles

CALIBRATION

Deploy your applications in real-life environments and control all the optical parameters of your event-based systems. Calibrate your cameras and adjust the focus with a suite of pre-built tools. Extend them to your specific needs and connect to standard calibration routines using our algorithmic bricks.

Lens focus assessment

Automatic intrinsic camera calibration

 

MACHINE LEARNING

Unlock the potential of Event-Based machine learning with a set of dedicated tools providing everything you need to start executing Deep Neural Networks (DNNs) on events. Leverage our pretrained automotive model written in PyTorch, experiment with live detection & tracking using our C++ pipeline, and use our comprehensive Python library to design your own networks.

Pretrained network trained on an automotive dataset with 15 hours of recordings and 23M labels

Live detection and tracking @100Hz

PRODUCTS

BUY COMPATIBLE HARDWARE

Prophesee Evaluation Kits and Packaged Sensors are fully compatible with the Metavision Intelligence suite, as are "Powered by Prophesee" partner products.

 

PACKAGED SENSOR | EVALUATION KIT

BUILD COMPATIBLE HARDWARE

Through an extensive partnership program, Prophesee enables vision equipment manufacturers to build their own Event-Based Vision products.

Contact us to develop your own hardware or integrate your vision system with Metavision Intelligence.

FAQ

Do I need to buy an EVK to start?

You don’t necessarily need an Evaluation Kit or Event-Based Vision equipment to start your discovery. You could start with Metavision Player and interact with provided datasets and algorithms first.

Which OSes are supported?

Metavision Intelligence suite supports Linux Ubuntu 18.04 and Windows 10 64-bit. For other OS compatibilities, contact us.

What can I do with a free vs. paid license?

With a free evaluation license, you can use all key features of the Metavision Intelligence Suite. A paid license is required if you want to sell a commercial application, or to gain access to source code, advanced add-ons or support.

Which Event-Based Vision hardware is supported?

The software suite is compatible with Prophesee Metavision sensors and Evaluation Kits. It can also operate with compatible third-party products.

 

Do you provide Python examples?

Yes, we provide Python sample code with the Metavision Designer software.

TRY or BUY?

PROFESSIONAL

Get the full experience with a licensing plan designed to support your business strategy.
  • Paid Commercial License
  • Access to the extended software suite: Player, Designer, SDK + additional software add-ons
  • Access to public online documentation, the Prophesee Knowledge Center, advanced support packages and training workshops
  • Access to advanced add-ons: High-Speed Counting, Spatter Monitoring, Vibration Monitoring, Object Tracking, Optical Flow, Ultra-Slow Motion, Visualization tools + Machine Learning
  • Access to source code
  • Commercial rights

Don’t miss a bit,

follow us to be the first to know

✉️ Join Our Newsletter