Visual and Spatial Robotic Task Planning Using an Augmented Reality Authoring Interface
- Summary
- Researchers at Purdue University have developed a lightweight, spatially situated approach to robot task planning. The technology combines augmented reality (AR) interfaces, robot assistants (RAs), and interactive Internet of Things (IoT) devices so that robots and machines can be programmed from a smartphone. Holding the device, the user moves around the environment to directly and spatially author the robot's navigation path and interactive functions, with room-level navigation and no need for an external tracking system.
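The authoring workflow described above — walking through a space with a handheld device and dropping pose-based waypoints, each optionally tagged with an interactive function — can be sketched as a simple data structure. This is an illustrative sketch only; the field names (`x`, `y`, `yaw`, `action`) and the `author` method are hypothetical, not part of the actual Purdue system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Waypoint:
    # Pose sampled from the handheld device's AR tracking (hypothetical fields).
    x: float
    y: float
    yaw: float
    action: Optional[str] = None  # optional interactive function at this point

@dataclass
class TaskPlan:
    waypoints: List[Waypoint] = field(default_factory=list)

    def author(self, x: float, y: float, yaw: float,
               action: Optional[str] = None) -> None:
        # Called as the user walks the path with the phone;
        # each confirmation drops a spatially situated waypoint.
        self.waypoints.append(Waypoint(x, y, yaw, action))

plan = TaskPlan()
plan.author(0.0, 0.0, 0.0)                       # start of the path
plan.author(2.5, 1.0, 90.0, action="pick_up")    # waypoint with a task attached
print(len(plan.waypoints))  # 2
```

Because the poses come from the phone's own AR tracking, the authored path lives directly in the robot's workspace coordinates rather than in an abstract map the user must mentally translate.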
- Technology Benefits
- Direct visual authoring
- Easy to install
- Ready to use
- Technology Application
- Robotics
- Internet of Things
- Detailed Technology Description
- Karthik Ramani, C Design Lab, Purdue Mechanical Engineering
- Countries
- United States
- Application No.
- None
- *Abstract
- None
- *Background
- Task planning for mobile robots is an important topic in robotics. Many previous works have adopted an augmented reality (AR) interface for robot task planning and animation authoring because it bridges the digital authoring interface with the physical world. However, they all rely on an external computer vision approach, i.e., image markers or object-recognition methods, to localize the robot during manipulation and navigation. Being able to program a robot to perform a series of location- and time-based sequential tasks has the potential to greatly help in daily life.
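The "series of location- and time-based sequential tasks" mentioned above could be represented as an ordered list of steps that a robot executes one after another. The sketch below is a minimal illustration under assumed names — `navigate`, `perform`, and the `(location, action, dwell)` tuple format are hypothetical, not the listing's actual interface.

```python
import time

# Hypothetical plan: ordered (location, action, dwell_seconds) steps.
plan = [
    ("kitchen", "fetch_cup", 0),
    ("table", "deliver_cup", 2),
]

def execute(plan, navigate, perform, sleep=time.sleep):
    """Run each step in order: drive to the location, run the
    interactive function, then optionally wait before the next step."""
    log = []
    for location, action, dwell in plan:
        navigate(location)   # robot drives to the authored location
        perform(action)      # runs the interactive function
        if dwell:
            sleep(dwell)     # time-based pause between steps
        log.append((location, action))
    return log

# Stubbed navigation/action callbacks, so the sketch runs anywhere.
log = execute(plan,
              navigate=lambda loc: None,
              perform=lambda act: None,
              sleep=lambda s: None)
print(log)  # [('kitchen', 'fetch_cup'), ('table', 'deliver_cup')]
```

Keeping the plan as plain data separates authoring (done spatially in AR) from execution (done by the robot), which is what lets a handheld interface produce plans the robot can replay later.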
- *IP Issue Date
- None
- *IP Type
- Provisional
- *Stage of Development
- Proof of Concept
- *Web Links
- Purdue Office of Technology Commercialization
- Purdue Innovation and Entrepreneurship
- Karthik Ramani
- C Design Lab
- Purdue Mechanical Engineering
- Country/Region
- USA