
Deep-Learning Accelerator And Analog Neuromorphic Computation With CMOS-Compatible Charge-Trap-Transistor (CTT) Technique

Technology Advantages
High speed
Low power consumption
Compatible with existing commercial manufacturing processes
Low design costs
Technology Applications
Deep neural network accelerators
Image/voice recognition
Artificial intelligence
Detailed Technology Description
UCLA researchers from the Department of Electrical Engineering have developed a charge-trap-transistor (CTT) based computing architecture for neural network applications. The design increases computing speed, reduces power consumption, and is compatible with existing commercial manufacturing processes. The inventors designed a CTT-based embedded non-volatile memory (eNVM) solution built on high-k metal-gate (HKMG) transistors to increase computation speed and energy efficiency in deep-learning accelerator applications. Because certain information stored on-chip in a convolutional neural network (CNN) is read and reused repeatedly during computation, the fast-read, slow-write characteristics of CTT-based eNVM fit a CNN accelerator well. Additionally, using CTT-based eNVM in deep-learning hardware architectures requires no new manufacturing materials or processes, making them fully compatible with existing CMOS chips. They could therefore be readily commercialized for high-performance, low-power computing in artificial intelligence applications.
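The fast-read, slow-write fit described above can be illustrated with a small sketch. The toy model below (all names and parameters are illustrative assumptions, not details of the UCLA design) treats an eNVM array as a write-once-read-many weight store: weights are programmed once into a few discrete charge-trap conductance levels, then read on every inference pass, so reads dominate writes by orders of magnitude.

```python
import numpy as np

class CttWeightArray:
    """Toy write-once-read-many weight store: slow programming, fast analog reads.

    Hypothetical model for illustration only; level count and quantization
    scheme are assumptions, not the published device parameters.
    """

    def __init__(self, weights, levels=16):
        # Programming ("slow write"): quantize each weight to a small set of
        # discrete conductance levels, as a charge-trap cell would store it.
        w_max = np.abs(weights).max()
        step = 2 * w_max / (levels - 1)
        self.conductance = np.round(weights / step) * step
        self.writes = weights.size   # each cell is programmed exactly once
        self.reads = 0

    def mac(self, inputs):
        # Inference ("fast read"): one read of every cell yields the whole
        # multiply-accumulate along the array's bit-lines.
        self.reads += self.conductance.size
        return inputs @ self.conductance

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 10))        # weights of one small layer
layer = CttWeightArray(w)

# In a CNN accelerator the same stored weights serve every input:
for _ in range(1000):
    x = rng.normal(size=64)
    y = layer.mac(x)

print(layer.writes, layer.reads)     # 640 writes vs 640000 reads
```

The read count grows with every inference while the write count stays fixed at programming time, which is why a memory that reads quickly but writes slowly is a natural match for storing CNN weights on-chip.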
*Abstract
UCLA researchers from the Department of Electrical Engineering have invented a charge-trap-transistor-based computing architecture for neural network applications.
*Principal Investigators

Name: Mau-Chung Frank Chang

Department:


Name: Li Du

Department:


Name: Yuan Du

Department:

Other

Background

Deep neural networks have wide applications in industries such as machine vision, voice recognition, and artificial intelligence, all of which are billion-dollar markets growing rapidly each year. Current commercial computing processors (CPUs, GPUs, and accelerators) can hardly keep up with the increasing computation demands of complex deep neural network algorithms. Design limits in energy efficiency and on-chip memory density hinder the expansion of computing capability in existing commercial processors.

In recent years, analog computing engines have shown promise in increasing computing speed and energy efficiency and in decreasing design costs. However, these devices require new materials and additional manufacturing steps that are not supported by major CMOS foundries, making them incompatible with existing commercial CMOS chips.


Additional Technologies by these Inventors


Tech ID/UC Case

29381/2018-003-0


Related Cases

2018-003-0

Country/Region
United States
