This paper explores the use of an event camera as an add-on to traditional magneto-optical Kerr effect (MOKE) microscopy to improve the temporal resolution of magnetic-domain observation. The event camera raises temporal resolution to 1 µs, enabling both real-time monitoring and post-processing of fast magnetic dynamics. A proof-of-concept feedback-control experiment achieved a latency of just 25 ms, highlighting the potential for dynamic materials research. Limitations of current event cameras for this application are also discussed.
This paper presents a hybrid image- and event-based architecture for detecting dim space objects in geosynchronous orbit using dynamic vision sensing. By combining conventional frame-based feature extractors with point-cloud feature extractors such as PointNet, the approach improves detection performance in scenes with high background activity. An event-based imaging simulator is also developed for model training and sensor-parameter optimization, demonstrating improved recall for dim objects in challenging conditions.
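The point-cloud branch of such an architecture relies on PointNet's core property: a shared per-point MLP followed by a symmetric max-pool yields a global feature that is invariant to the ordering of the events. A minimal NumPy sketch of that idea follows; the layer sizes, random weights, and the (x, y, t) input layout are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def pointnet_global_feature(points, w1, w2):
    """Sketch of PointNet's core idea: a shared per-point MLP followed by
    a symmetric max-pool, giving a permutation-invariant global feature.
    points: (N, 3) array of event coordinates, e.g. (x, y, t) -- hypothetical."""
    h = np.maximum(points @ w1, 0.0)   # shared MLP layer 1 (ReLU)
    h = np.maximum(h @ w2, 0.0)        # shared MLP layer 2 (ReLU)
    return h.max(axis=0)               # max over points: order-invariant

rng = np.random.default_rng(0)
w1 = rng.standard_normal((3, 16))      # toy weights, not trained
w2 = rng.standard_normal((16, 32))
events = rng.standard_normal((100, 3))  # toy event point cloud
feat = pointnet_global_feature(events, w1, w2)

# Shuffling the events leaves the global feature unchanged.
shuffled = events[rng.permutation(100)]
assert np.allclose(feat, pointnet_global_feature(shuffled, w1, w2))
```

The max-pool is what makes the extractor suitable for raw event streams, which arrive as unordered sets rather than regular image grids.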
This paper introduces a dataset from a subterranean (SubT) environment, captured with state-of-the-art sensors including RGB, RGB-D, event-based, and thermal cameras, along with 2D/3D lidars, IMUs, and UWB positioning systems. Synchronized raw data is provided in ROS message format, enabling evaluation of navigation, localization, and mapping algorithms.
This paper introduces EVIMO2, a new event-camera dataset that improves on the popular EVIMO dataset by providing more data, from better cameras, in more complex scenarios. Like its predecessor, EVIMO2 provides labels in the form of per-pixel ground-truth depth and segmentation as well as camera and object poses.
This paper presents the implementation details and experimental validation of a relatively low-cost motion capture system for multi-quadrotor motion planning using an event camera. Real-time multi-quadrotor detection and tracking are performed with the deep-learning network You Only Look Once (YOLOv5) and a k-dimensional (k-d) tree, respectively.
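The tracking step in such a pipeline can be sketched as nearest-neighbor data association: previous track positions are indexed in a k-d tree, and each new detection is matched to its nearest track within a gating radius. The sketch below uses SciPy's `KDTree`; the function name, the gating threshold, and the toy coordinates are illustrative assumptions, and the YOLOv5 detector that would supply the centers is outside the snippet.

```python
import numpy as np
from scipy.spatial import KDTree

def associate_detections(prev_centers, new_centers, max_dist=50.0):
    """Match each new detection to the nearest previous track center
    using a k-d tree. Returns (track_index, detection_index) pairs;
    detections beyond max_dist are left unmatched (e.g. new tracks)."""
    tree = KDTree(prev_centers)           # index previous track positions
    dists, idx = tree.query(new_centers)  # nearest neighbor per detection
    return [(int(i), j)
            for j, (d, i) in enumerate(zip(dists, idx))
            if d <= max_dist]

prev = np.array([[100.0, 120.0], [300.0, 310.0], [50.0, 400.0]])
new = np.array([[104.0, 118.0], [298.0, 315.0], [500.0, 500.0]])
print(associate_detections(prev, new))  # third detection exceeds the gate
```

Building the tree makes each lookup logarithmic in the number of tracks, which is what keeps the association step cheap enough for real-time multi-quadrotor operation.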