Visual and Spatial Robotic Task Planning Using an Augmented Reality Authoring Interface
- Summary
- Researchers at Purdue University have developed a lightweight, spatially situated approach to robot task planning. The technology combines AR interfaces, robot assistants (RAs), and interactive Internet of Things (IoT) devices so that robots and machines can be programmed with a smartphone. By moving around with the handheld device, the user directly and spatially authors the robot's room-level navigation path and interactive functions, with no external tracking system required.
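To make the "walk around and author the path" idea concrete, here is a minimal sketch of capturing waypoints from the handheld device's pose stream. All names and the pose format are illustrative assumptions, not the actual system's API; the listing does not describe its implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    # Hypothetical 2D pose sampled from the handheld device's AR tracking.
    x: float
    y: float

@dataclass
class NavigationPath:
    # Waypoints authored by the user walking through the room.
    waypoints: list = field(default_factory=list)

    def record(self, pose: Pose, min_spacing: float = 0.5) -> None:
        # Keep only poses at least `min_spacing` metres from the last
        # kept waypoint, so the authored path stays sparse enough for
        # the robot to follow.
        if not self.waypoints:
            self.waypoints.append(pose)
            return
        last = self.waypoints[-1]
        dist = ((pose.x - last.x) ** 2 + (pose.y - last.y) ** 2) ** 0.5
        if dist >= min_spacing:
            self.waypoints.append(pose)

# Simulate the user walking a 2 m straight line, sampled every 0.2 m.
path = NavigationPath()
for i in range(11):
    path.record(Pose(x=i * 0.2, y=0.0))
print(len(path.waypoints))  # 4 waypoints kept (at roughly 0, 0.6, 1.2, 1.8 m)
```

The spacing threshold is a design choice: sampling every raw device pose would produce a noisy, dense path, while thinning to a fixed spacing yields waypoints a navigation stack can consume directly.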
- Technical Advantages
- Direct visual authoring
- Easy to install
- Ready to use
- Technical Applications
- Robotics
- Internet of Things
- 詳細技術說明
- Karthik Ramani, C Design Lab, Purdue Mechanical Engineering
- *Abstract
- None
- *Background
- Task planning for mobile robots is an important topic in robotics. Many previous works have adopted an Augmented Reality (AR) interface for robot task planning and animation authoring because AR bridges the digital authoring interface with the physical world. However, they all rely on an external computer-vision approach, i.e., image markers or object-recognition methods, to localize the robot during manipulation and navigation. Being able to program a robot to perform a series of location- and time-based sequential tasks has the potential to greatly help in daily life.
- *IP Issue Date
- None
- *IP Type
- Provisional
- *Stage of Development
- Proof of Concept
- *Web Links
- Purdue Office of Technology Commercialization
- Purdue Innovation and Entrepreneurship
- Karthik Ramani
- C Design Lab
- Purdue Mechanical Engineering
- Country
- United States
- Application Number
- None
- Country/Region
- United States

For more information, click here.