
Reducing Computational Complexity of Training Algorithms for Artificial Neural Networks

Technical Advantages
- Faster training times (10x or more)
- Reduced computational cost
- Can train larger and more complex artificial neural networks
- Can be used with current typical computing technology
- Allows ANNs to become more widely applicable
Technical Applications
Any application that uses artificial neural networks:
- Face & speech recognition
- Autonomous driving
- Diagnostics using electronic biomedical data
- Data mining
- Biometric security
- Financial forecasting
- Predictive coding
Detailed Technical Description
Researchers in UCLA's Department of Chemistry and Biochemistry have developed a novel mathematical theorem to rapidly train large-scale artificial neural networks (ANNs). Their algorithm prevents the exponential increase of computational cost with the size of the ANN. As a proof of concept, ANNs were trained on a variety of benchmark applications using steepest descent, standard second-order methods, other state-of-the-art methods, and this novel method. Their algorithm consistently completed training at least 10x faster than the other methods. The increased efficiency enables training networks with higher complexity and more neurons than is currently possible with existing training algorithms and computational technology.
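
The listing does not disclose the UCLA algorithm itself. Purely as a point of reference for the comparison above, the sketch below implements the steepest-descent baseline it is benchmarked against: a plain gradient-descent training loop for a tiny feedforward network. All names, data, and hyperparameters here are illustrative assumptions, not part of the patented method.

```python
# Illustrative baseline only: steepest-descent (gradient-descent) training of a
# tiny one-hidden-layer network on a toy regression task. This is NOT the UCLA
# algorithm; it shows the conventional method it is compared against.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))     # toy inputs
y = np.sin(3 * X)                         # toy targets

W1 = rng.normal(scale=0.5, size=(1, 16))  # input -> hidden weights
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))  # hidden -> output weights
b2 = np.zeros(1)

lr = 0.1                                  # steepest-descent step size
for step in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)              # hidden activations
    y_hat = h @ W2 + b2                   # network output
    err = y_hat - y
    loss = np.mean(err ** 2)

    # backward pass: gradients of the mean-squared error
    g_out = 2 * err / len(X)
    gW2 = h.T @ g_out
    gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)   # tanh derivative
    gW1 = X.T @ g_h
    gb1 = g_h.sum(axis=0)

    # steepest-descent update: step against the gradient
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"final loss: {loss:.4f}")
```

Each update here costs on the order of one gradient evaluation, which is why steepest descent remains usable on large networks even though, as the Background below notes, it converges poorly on them.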
Abstract
Researchers at UCLA have developed a novel mathematical theorem to revolutionize the training of large-scale artificial neural networks (ANNs).
Principal Investigators

Name: Louis Bouchard

Department:


Name: Khalid Youssef

Department:

Other

Background

Artificial neural networks (ANNs) have gained popularity in recent years due to their exceptional performance and applicability to a wide array of machine learning applications. ANNs digitally mimic the structure and behavior of brain tissue by creating an interconnected network of simple processing units, termed neurons.

The size of the ANN increases with the complexity of the application and the desired degree of accuracy. Pivotal applications such as medical image diagnosis, biometric security, and self-driving cars are extremely complex and demand a high degree of accuracy, so the ANNs behind them are large and must undergo extensive training before they can perform their tasks.

The current gold standard, known as steepest descent, is ineffective at training large-scale networks. Second order methods can train ANNs much more effectively, but their use is limited to small to medium-sized networks due to limits in computational technology. Effective training of large-scale ANNs will have immense effects on the advancement of artificial intelligence.
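
One way to see why second-order methods stall on large networks: a Newton-style update needs the full Hessian of the loss, whose storage grows quadratically, and whose factorization grows cubically, with the number of weights, whereas a steepest-descent step grows only linearly. The back-of-the-envelope sketch below uses an assumed weight count purely for illustration; the figures are not from the listing.

```python
# Back-of-the-envelope comparison (illustrative assumptions, not from the listing):
# steepest descent touches each of the N weights once per step, while a naive
# Newton step must build and factor an N x N Hessian.
N = 10_000_000                      # assumed weight count of a "large" ANN

grad_memory_bytes = N * 8           # one float64 gradient entry per weight
hessian_memory_bytes = N * N * 8    # full Hessian: N^2 float64 entries

print(f"gradient storage : {grad_memory_bytes / 1e9:.2f} GB")
print(f"Hessian storage  : {hessian_memory_bytes / 1e15:.2f} PB")

# Rough operation counts per update step:
#   steepest descent ~ O(N)   (one gradient-sized update)
#   naive Newton     ~ O(N^3) (solving the N x N linear system)
print(f"gradient step ops ~ {N:.1e}")
print(f"Newton step ops   ~ {N**3:.1e}")
```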


Additional Technologies by these Inventors


Tech ID/UC Case

28860/2017-730-0


Related Cases

2017-730-0

Country/Region
United States
