This is a 30-page master's thesis from the University of Toronto (author: Gregory Koch).
The process of learning good features for machine learning applications can be very computationally expensive and may prove difficult in cases where little data is available. A prototypical example of this is the one-shot learning setting, in which we must correctly make predictions given only a single example of each new class. In this paper, we explore a method for learning siamese neural networks which employ a unique structure to naturally rank similarity between inputs. Once a network has been tuned, we can then capitalize on powerful discriminative features to generalize the predictive power of the network not just to new data, but to entirely new classes from unknown distributions. Using a convolutional architecture, we are able to achieve strong results which exceed those of other deep learning models with near state-of-the-art performance on one-shot classification tasks.