
Spatio-Temporally Consistent Video Disparity Estimation Algorithm

Technology Advantages
The proposed method shows superior speed, accuracy, and consistency compared to state-of-the-art algorithms. The method remains resilient even under large amounts of noise. Furthermore, we have illustrated that our method can, in general, refine the results of any disparity estimation technique suffering from impulsive noise or estimation errors.
Technology Applications
Our method can be used as a post-processing step to refine noisy disparity estimates, or to extend image-based estimation techniques to video.
Detailed Technology Description
We present a novel stereo video disparity estimation method. The proposed method is a two-stage algorithm. In the first stage, initial disparity maps are computed on a frame-by-frame basis. In the second stage, the initial estimates are treated as a space-time volume. By setting up an ℓ1-norm minimization problem with a novel three-dimensional total variation regularization, spatial smoothness and temporal consistency are handled simultaneously.
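The second stage can be illustrated with a minimal sketch. This is not the patented solver; it is a generic gradient-descent approximation of the smoothed ℓ1-data / 3D total-variation objective, with illustrative parameter names (`lam`, `eps`, `step`) chosen here for the example:

```python
import numpy as np

def tv3d_refine(volume, lam=1.0, eps=0.05, step=0.01, iters=800):
    """Refine a noisy disparity space-time volume of shape (T, H, W) by
    approximately minimizing  ||f - d||_1 + lam * TV3D(f),
    with the absolute value smoothed (Charbonnier) so plain gradient
    descent can be used. d is the stack of initial per-frame estimates."""
    d = volume.astype(np.float64)
    f = d.copy()
    for _ in range(iters):
        # smoothed gradient of the l1 data-fidelity term
        g = (f - d) / np.sqrt((f - d) ** 2 + eps ** 2)
        # anisotropic 3D TV gradient: forward differences along t, y, x
        for ax in range(3):
            diff = np.diff(f, axis=ax)
            w = diff / np.sqrt(diff ** 2 + eps ** 2)  # smoothed sign of the difference
            pad = [(0, 0)] * 3
            pad[ax] = (1, 0)
            wp = np.pad(w, pad)  # contribution from the backward neighbor
            pad[ax] = (0, 1)
            wm = np.pad(w, pad)  # contribution from the forward neighbor
            g += lam * (wp - wm)
        f -= step * g
    return f
```

Applied to a piecewise-constant volume corrupted by impulsive noise, the regularizer pulls isolated outliers toward their spatial and temporal neighbors while the ℓ1 data term keeps uncorrupted pixels pinned, which is the behavior the method exploits.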
*Abstract
From a rectified stereo image pair, the task of view synthesis is to generate images from any viewpoint along the baseline. The main difficulty of the problem is how to fill occluded regions. We present a new method for view synthesis that is both fast and accurate. Occlusions are filled using color and disparity information to produce consistent pixel estimates.
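The view-synthesis task above can be sketched with a simple baseline: forward-warp one view along the baseline using its disparity map, resolve overlaps with a z-buffer (nearer disparity wins), and fill disoccluded holes from neighboring pixels. This is a generic illustration, not the patented occlusion-filling scheme; the function name and the `alpha` viewpoint parameter are invented for the example:

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Warp `left` to a virtual viewpoint at fraction alpha in [0, 1]
    along the baseline (alpha=1 reaches the right view), using the
    convention that left pixel x corresponds to right pixel x - d."""
    h, w = disparity.shape
    out = np.zeros_like(left, dtype=np.float64)
    depth = np.full((h, w), -np.inf)   # z-buffer keyed on disparity
    filled = np.zeros((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.round(xs - alpha * disparity).astype(int)
    valid = (xt >= 0) & (xt < w)
    for y, x, xv, d in zip(ys[valid], xs[valid], xt[valid], disparity[valid]):
        if d > depth[y, xv]:           # nearer pixel wins
            depth[y, xv] = d
            out[y, xv] = left[y, x]
            filled[y, xv] = True
    # crude hole filling: propagate the nearest synthesized pixel inward,
    # which tends to copy background into disoccluded regions
    for y in range(h):
        for x in range(w - 2, -1, -1):
            if not filled[y, x] and filled[y, x + 1]:
                out[y, x] = out[y, x + 1]
                filled[y, x] = True
        for x in range(1, w):
            if not filled[y, x] and filled[y, x - 1]:
                out[y, x] = out[y, x - 1]
                filled[y, x] = True
    return out
```

With accurate, consistent disparity maps (the focus of this technology), the holes left after warping shrink to thin disocclusion bands, which is why disparity quality dominates the quality of the synthesized views.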

Our goal is to present a systematic method by which we generate accurate and spatio-temporally consistent disparity maps from complex stereo video sequences. We leverage the strengths of current state-of-the-art image-based techniques, but, in addition, we explicitly enforce the consistency of estimates in both space and time by treating the video as a space-time volume corrupted by noise. In so doing, we provide an algorithm that has the capability of refining arbitrary image-based disparity estimation techniques and, at the same time, extending them to the video domain.
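Treating the video as a noise-corrupted space-time volume leads to an ℓ1-TV minimization. The following is a sketch of the standard form of such an objective; the exact weighting and solver in the patented method may differ, and the symbols (μ for the data-fidelity weight, β for the axis weights) are notational assumptions:

```latex
\min_{f}\; \mu \,\| f - d \|_{1} \;+\; \| f \|_{\mathrm{TV3}},
\qquad
\| f \|_{\mathrm{TV3}} \;=\; \sum_{x,y,t}
\sqrt{\beta_x^2 (\nabla_x f)^2 + \beta_y^2 (\nabla_y f)^2 + \beta_t^2 (\nabla_t f)^2},
```

where d is the stack of initial frame-by-frame disparity maps, f is the refined space-time volume, and the β weights trade off spatial smoothness against temporal consistency.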
*IP Issue Date
May 23, 2017
*Principal Investigators

Name: Ho Chan

Department:


Name: Ramsin Khoshabeh

Department:


Name: Truong Nguyen

Department:

Supplementary Information
Inventor: NGUYEN, Truong | CHAN, Ho | KHOSHABEH, Ramsin
Priority Number: WO2013173282A1
IPC Current: H04N001300 | H04N001500
Assignee Applicant: The Regents of the University of California
Title: VIDEO DISPARITY ESTIMATE SPACE-TIME REFINEMENT METHOD AND CODEC | PROCÉDÉ D'AFFINAGE SPATIO-TEMPOREL D'ESTIMATION DE DISPARITÉ VIDÉO ET CODEC
Summary: Method for disparity estimation of stereo video data by a video codec (claimed) for three-dimensional video processing.
Novelty: Method for disparity estimation of stereo video data by video codec, involves grouping initial disparity estimates into space-time volume, and reducing error in disparity in space-time volume to refine initial disparity estimates
Primary Category
Information and Communications Technology / Telecommunications
Subcategory
Image Processing
Application Number
9659372
Others

Tech ID/UC Case
23982/2011-137-0

Related Cases
2011-137-0

Country/Region
United States
