
Robust Visual-Inertial Sensor Fusion For Navigation, Localization, Mapping, And 3D Reconstruction

Technology Benefits
Uses integrated inertial and vision measurements
Improved robustness and performance
Focuses on handling outliers
Technology Applications
Augmented and virtual reality
Robotics
Autonomous vehicles and flying robots
Indoor localization in GPS-denied areas
Ego-motion estimation
Detailed Technology Description
Researchers led by Professor Stefano Soatto have developed a novel sensor fusion system that integrates inertial and vision measurements to estimate the 3D position and orientation of a sensor platform, along with a point-cloud model of the 3D world surrounding it. This invention offers better robustness and performance than other top-performing VINS schemes, such as Google Tango, at the same computational footprint. This unique technology addresses the problem of inferring ego-motion of a sensor platform from visual and inertial measurements, focusing on handling outliers.
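The listing does not disclose the algorithmic details, but the general idea of fusing inertial propagation with vision updates while rejecting outliers can be illustrated with a minimal, simplified sketch. The following Python/NumPy example is an assumption for illustration only, not the patented method: the planar state model, the function names (imu_propagate, visual_update), and the chi-square gating threshold are hypothetical choices made for brevity.

```python
# Minimal sketch (NOT the UCLA method): an EKF-style visual-inertial fusion
# loop with Mahalanobis gating to discard outlier feature measurements.
# State x = [px, py, vx, vy, theta]; planar, small-angle model for brevity.
import numpy as np

GATE_CHI2 = 5.99  # 95% chi-square threshold for a 2-D residual (assumption)

def imu_propagate(x, P, accel, gyro, dt, Q):
    """Propagate position/velocity/heading with one IMU reading."""
    p, v, th = x[0:2], x[2:4], x[4]
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    p_new = p + v * dt
    v_new = v + R @ accel * dt          # body-frame accel rotated to world
    th_new = th + gyro * dt             # scalar yaw rate
    x_new = np.concatenate([p_new, v_new, [th_new]])
    F = np.eye(5)
    F[0:2, 2:4] = np.eye(2) * dt        # position depends on velocity
    return x_new, F @ P @ F.T + Q

def visual_update(x, P, z, landmark, R_meas):
    """Update with one tracked feature; reject it if the innovation fails the
    chi-square gate, i.e. it disagrees with the inertially predicted motion."""
    h = landmark - x[0:2]               # predicted relative position (rotation
                                        # ignored to keep the sketch short)
    H = np.zeros((2, 5))
    H[:, 0:2] = -np.eye(2)
    y = z - h                           # innovation
    S = H @ P @ H.T + R_meas
    if y @ np.linalg.solve(S, y) > GATE_CHI2:
        return x, P, False              # outlier: discard, keep estimate
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(5) - K @ H) @ P, True
```

The gating step is the point of the sketch: each visual measurement is tested against the inertially propagated prediction before it is allowed to influence the state, so features belonging to moving objects or corrupted tracks are ignored rather than averaged in.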
*Abstract
UCLA researchers in the Computer Science Department have invented a novel model for a visual-inertial system (VINS) for navigation, localization, mapping, and 3D reconstruction applications.
*IP Issue Date
May 19, 2016
*Principal Investigators

Name: Stefano Soatto

Department:


Name: Konstantine Tsotsos

Department:

Application Number
20160140729
Other Information

Background

Vision-augmented navigation, or VINS, is central to augmented and virtual reality, robotics, autonomous vehicles, and navigation in mobile phones. The future growth of these applications depends on reliable navigation in dynamic environments, so improving these systems is important. Current methods rely on low-level processing of visual data for 3D motion estimation. However, much of that processing yields little usable information: easily 60–90% of the sparse features selected and tracked across frames are inconsistent with a single rigid motion due to illumination effects, occlusions, and independently moving objects. These effects are global to the scene, while low-level processing is local to the image, so it is not realistic to expect significant improvements in the vision front-end. Instead, it is critical for algorithms utilizing vision to leverage other sensory modalities, such as inertial sensing.
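As a concrete illustration of why inertial data helps flag these inconsistent features, the short sketch below uses gyroscope-predicted image motion to separate feature tracks that agree with the camera's own rotation from those that do not. The first-order rotational flow model, the pixel tolerance, and the function names (predicted_flow, inlier_mask) are assumptions for illustration only, not taken from the patent.

```python
# Illustrative only: flag feature tracks inconsistent with the platform's own
# motion by comparing measured track displacement against the image motion
# predicted from the gyroscope (rotation-only, distant-scene approximation).
import numpy as np

def predicted_flow(pts, gyro, dt, focal):
    """Image-plane flow induced purely by camera rotation (first-order model
    for a calibrated camera); pts are pixel coords centered at the principal
    point, gyro is a length-3 angular-rate array [wx, wy, wz]."""
    wx, wy, wz = gyro * dt
    x, y = pts[:, 0] / focal, pts[:, 1] / focal
    du = focal * (wx * x * y - wy * (1.0 + x * x) + wz * y)
    dv = focal * (wx * (1.0 + y * y) - wy * x * y - wz * x)
    return np.stack([du, dv], axis=1)

def inlier_mask(prev_pts, curr_pts, gyro, dt, focal, tol_px=3.0):
    """Keep tracks whose measured displacement roughly matches the inertially
    predicted displacement; the rest are treated as outliers caused by
    occlusions, independently moving objects, or lighting artifacts."""
    measured = curr_pts - prev_pts
    predicted = predicted_flow(prev_pts, gyro, dt, focal)
    return np.linalg.norm(measured - predicted, axis=1) < tol_px
```

In practice a full system would also account for translation and feature depth, but even this rotation-only check shows how an inertial prior lets the estimator discard the large fraction of tracks that do not fit a single rigid motion.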


Related Materials

E. S. Jones and S. Soatto. Visual-Inertial Navigation, Mapping and Localization: A Scalable Real-Time Causal Approach. The International Journal of Robotics Research. 2011.
K. Tsotsos, A. Chiuso, and S. Soatto. Robust Inference for Visual-Inertial Sensor Fusion. 2015 IEEE International Conference on Robotics and Automation. 2015.


Additional Technologies by these Inventors


Tech ID/UC Case

27401/2015-346-0


Related Cases

2015-346-0

Country/Region
United States

For more information, please click here.