Vision-Based Brain-Computer Interface for Selecting Novel Objects
- Breakthrough object and command selection technique
- Substantial reduction in BCI inputs from the user and in time to complete tasks
- Users achieved an average accuracy of 93.33%
Enabling users with impaired physical function to control everyday objects in their environment
USF inventors have developed a noninvasive brain-computer interface (BCI) for controlling a robotic system. The BCI and robotic system enable patients to carry out activities of daily living (ADLs) by focusing their attention on a screen that displays a task workspace and stimuli corresponding to the activities or actions to perform. The stimuli generate signals in the brain that the BCI system can analyze and process. As a result, the BCI can detect human intention and command a robot to execute a task without any muscle movement from the user. The interface reduces the cognitive load on the user as well as the time required to complete tasks. The functionality of the interface consists of:
1) Segmenting a scene image for use with the BCI
2) Enabling the user to select segments
3) Identifying objects in the selected segments
4) Enabling the user to select an action to be carried out on the object
5) Performing the selected action
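The listing does not disclose implementation details, but the five-step workflow above can be pictured as a simple control loop. The sketch below is purely illustrative: every function name (segment_scene, read_bci_choice, identify_object, execute_action) is a hypothetical placeholder standing in for the inventors' actual segmentation, BCI decoding, object recognition, and robot control components.

```python
# Illustrative sketch of the five-stage selection workflow described above.
# All names here are hypothetical placeholders, not the actual USF implementation.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Segment:
    """A region of the scene image that can be presented as a BCI stimulus."""
    segment_id: int
    label: str = "unknown"


def run_selection_workflow(
    segment_scene: Callable[[], List[Segment]],
    read_bci_choice: Callable[[List[str]], int],
    identify_object: Callable[[Segment], str],
    execute_action: Callable[[str, Segment], None],
    actions: List[str],
) -> None:
    # 1) Segment the scene image into candidate regions.
    segments = segment_scene()

    # 2) Present the segments as stimuli and decode which one the user
    #    attended to from the BCI signal.
    chosen_idx = read_bci_choice([f"segment {s.segment_id}" for s in segments])
    target = segments[chosen_idx]

    # 3) Identify the object contained in the chosen segment.
    target.label = identify_object(target)

    # 4) Present the available actions for that object and decode the choice.
    action = actions[read_bci_choice(actions)]

    # 5) Command the robot to carry out the selected action on the object.
    execute_action(action, target)


if __name__ == "__main__":
    # Stand-in callbacks: keyboard input replaces real BCI decoding, and a
    # print statement replaces the robot command.
    demo_segments = [Segment(0), Segment(1), Segment(2)]
    run_selection_workflow(
        segment_scene=lambda: demo_segments,
        read_bci_choice=lambda options: int(input(f"Choose index from {options}: ")),
        identify_object=lambda seg: "cup",
        execute_action=lambda act, seg: print(f"Robot: {act} the {seg.label}"),
        actions=["grasp", "move", "pour"],
    )
```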
USA
