Sentence Directed Video Object Codetection
Researchers at Purdue University have developed a new method of video object codetection that uses audio and visual cues in a video to interpret and detect objects. The technology detects objects without any pre-learning: an algorithm first generates sentences describing an object's appearance and movement by comparing subtle differences against a static background, then produces and displays a bounding box around the object while it is present in the video frame. This approach makes object detection more robust, detecting more objects, faster and more accurately, than previous methods.
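The background-comparison step described above can be illustrated with a minimal frame-differencing sketch: pixels that differ from a static background beyond a threshold are grouped into a single bounding box. This is an illustrative assumption, not the actual Purdue algorithm; the function name, grayscale-pixel representation, and threshold value are all hypothetical.

```python
def bounding_box(background, frame, threshold=20):
    """Return (x_min, y_min, x_max, y_max) covering the pixels that differ
    from the static background by more than `threshold`, or None if no
    pixel changed enough. Frames are 2D lists of grayscale intensities."""
    changed = [
        (r, c)
        for r, row in enumerate(frame)
        for c, value in enumerate(row)
        if abs(value - background[r][c]) > threshold
    ]
    if not changed:
        return None  # nothing moved against the static background
    ys = [r for r, _ in changed]
    xs = [c for _, c in changed]
    return (min(xs), min(ys), max(xs), max(ys))


# Toy usage: a 5x5 black background with two bright "object" pixels.
background = [[0] * 5 for _ in range(5)]
frame = [row[:] for row in background]
frame[1][2] = 255
frame[3][3] = 255
print(bounding_box(background, frame))  # (2, 1, 3, 3)
```

A real system would also connect regions across frames and filter noise; this sketch only shows the per-frame differencing idea.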
Objects do not need to be pre-learned
Increased accuracy
Detects objects of nearly any size
Detects multiple objects simultaneously
Works with fast object movement and motion blur
Surveillance
Security
Autonomous vehicles
Facial recognition
Computer vision
Medical imaging
Robotics
Jeffrey Siskind, Purdue Electrical and Computer Engineering
United States
