Integrating prior knowledge to build transformer models

Abstract

Large Artificial General Intelligence models are currently attracting intense attention. However, the black-box problem of Artificial Intelligence (AI) models remains unsolved and is especially pressing in the medical domain, so transparent and reliable AI models that work with small datasets are urgently needed. To build a trustworthy AI model with small data, we propose a prior knowledge-integrated transformer model. We first acquire prior knowledge using Shapley Additive exPlanations (SHAP) from various pre-trained machine learning models, and then use that prior knowledge to construct transformer models, which we compare against the Feature Tokenization Transformer model and other classification models. We tested the proposed model on three open datasets and one non-open public dataset in Japan to confirm the feasibility of the methodology. Our results confirm that knowledge-integrated transformer models outperform general transformer models by about 1%. We also observed that the self-attention weights across factors in the proposed transformer models are nearly identical, a finding that warrants further investigation. Moreover, we hope this research inspires future work on transparent small AI models.
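Since the abstract only sketches the pipeline, a minimal illustration may help: the snippet below is a sketch of one plausible reading, not the authors' actual code. It fits a baseline model, extracts mean |SHAP| values as per-feature prior weights, and injects them as fixed multiplicative scales on the feature tokens of a small FT-Transformer-style classifier. The class name PriorKnowledgeFT, the token-scaling scheme, and all hyperparameters are illustrative assumptions.

```python
# Hedged sketch of a prior-knowledge-integrated transformer for tabular data.
# Assumptions (not from the paper): GradientBoostingClassifier as the
# pre-trained model, mean |SHAP| as the prior, multiplicative token scaling.
import numpy as np
import shap
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingClassifier

def shap_feature_weights(X, y):
    """Mean |SHAP| per feature from a pre-trained model, normalized to sum to 1."""
    model = GradientBoostingClassifier().fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)          # (n_samples, n_features)
    importance = np.abs(shap_values).mean(axis=0)
    return importance / importance.sum()

class PriorKnowledgeFT(nn.Module):
    """FT-Transformer-style classifier whose feature tokens are scaled by prior weights."""
    def __init__(self, n_features, prior_weights, d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        # One learnable token (and bias) per numeric feature, as in FT-Transformer.
        self.tokenizer = nn.Parameter(torch.randn(n_features, d_model))
        self.bias = nn.Parameter(torch.zeros(n_features, d_model))
        # Prior knowledge enters as fixed, non-trainable weights on the tokens.
        self.register_buffer("prior", torch.tensor(prior_weights, dtype=torch.float32))
        self.cls = nn.Parameter(torch.randn(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)

    def forward(self, x):                            # x: (batch, n_features)
        tokens = x.unsqueeze(-1) * self.tokenizer + self.bias
        tokens = tokens * self.prior.unsqueeze(-1)   # inject the SHAP-derived prior
        tokens = torch.cat([self.cls.expand(x.size(0), -1, -1), tokens], dim=1)
        return self.head(self.encoder(tokens)[:, 0])  # classify from the [CLS] token
```

In this sketch the prior affects only the token magnitudes before self-attention; the paper's actual integration mechanism, which it benchmarks against the plain Feature Tokenization Transformer, may differ.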

Citation (APA)

Jiang, P., Obi, T., & Nakajima, Y. (2024). Integrating prior knowledge to build transformer models. International Journal of Information Technology (Singapore), 16(3), 1279–1292. https://doi.org/10.1007/s41870-023-01635-7
