The scarcity of training data has long been a constraining factor in many machine learning problems, making one-shot learning one of the most intriguing ideas in the field. One-shot learning aims to learn the necessary objective information from only one or a few training examples. In neural networks, this is generally accomplished through a suitable objective (loss) function and an embedding-extraction architecture. In this paper, we discuss metric-based deep learning architectures for one-shot learning, such as siamese neural networks [10], and present a method to improve their accuracy using Kafnets (kernel-based non-parametric activation functions for neural networks) [17] by learning finer embeddings in relatively fewer epochs. Using kernel activation functions, we achieve strong results that exceed those of ReLU-based deep learning models in terms of embedding structure, loss convergence, and accuracy. The project code with results can be found at https://github.com/shruti-jadon/Siamese-Network-for-One-shot-Learning.
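To make the two ingredients concrete, the sketch below shows, in PyTorch, a kernel activation function (a learnable mixture of Gaussian kernels over a fixed dictionary, following the Kafnet formulation of [17]) dropped into a small siamese embedding network trained with a contrastive loss. This is a minimal illustration under common assumptions, not the paper's exact implementation; the layer sizes, hyperparameter names (`dict_size`, `boundary`), and the choice of contrastive loss are illustrative, and the authoritative code lives in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KAF(nn.Module):
    """Kernel activation function (cf. [17]): each neuron's nonlinearity is a
    learnable mixture of Gaussian kernels evaluated on a fixed dictionary of
    points. Parameter names here are illustrative, not the paper's exact API."""
    def __init__(self, num_units, dict_size=20, boundary=4.0):
        super().__init__()
        # Fixed dictionary of sample points, shared across all neurons.
        d = torch.linspace(-boundary, boundary, dict_size)
        self.register_buffer("dictionary", d.view(1, 1, -1))
        # Kernel bandwidth derived from dictionary spacing (a common heuristic).
        step = 2 * boundary / (dict_size - 1)
        self.gamma = 1.0 / (2 * step ** 2)
        # Learnable mixing coefficients: one set per neuron.
        self.alpha = nn.Parameter(torch.randn(1, num_units, dict_size) * 0.3)

    def forward(self, s):
        # s: (batch, num_units); compare each activation to every dictionary point.
        k = torch.exp(-self.gamma * (s.unsqueeze(-1) - self.dictionary) ** 2)
        return (k * self.alpha).sum(dim=-1)

class SiameseNet(nn.Module):
    """Twin embedding network: both inputs pass through the same weights."""
    def __init__(self, in_dim=784, emb_dim=64):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, 256)
        self.act1 = KAF(256)  # kernel activation in place of ReLU
        self.fc2 = nn.Linear(256, emb_dim)

    def embed(self, x):
        return self.fc2(self.act1(self.fc1(x)))

    def forward(self, x1, x2):
        return self.embed(x1), self.embed(x2)

def contrastive_loss(e1, e2, y, margin=1.0):
    # y = 1 for same-class pairs, 0 for different-class pairs.
    d = F.pairwise_distance(e1, e2)
    return (y * d.pow(2) + (1 - y) * F.relu(margin - d).pow(2)).mean()
```

Because the KAF's mixing coefficients are learned alongside the network weights, the activation shape itself adapts to the data, which is one intuition for why such models can produce finer embeddings than a fixed ReLU in the same number of epochs.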