An Artificial Intelligence Security Framework

Abstract

With the accelerating construction and application of artificial intelligence worldwide, the security risks of AI infrastructure, design and development, and integration applications are becoming increasingly prominent. Major countries have advanced AI security governance by formulating AI ethical norms, improving laws and regulations, and strengthening industry management. The AI security technology system is an essential part of AI security governance: it provides critical support for implementing AI ethical norms and meeting legal and regulatory requirements, and it is an important guarantee for the healthy and orderly development of the AI industry. Addressing the absence of a widely adopted AI security framework and focusing on the most prominent current AI security risks, this article proposes an AI security framework that covers AI security goals, graded AI security capabilities, and AI security technology and management systems. We hope it provides a useful reference for the community to improve the safety and protection capabilities of artificial intelligence.

Citation (APA)
Jing, H., Wei, W., Zhou, C., & He, X. (2021). An Artificial Intelligence Security Framework. In Journal of Physics: Conference Series (Vol. 1948). IOP Publishing Ltd. https://doi.org/10.1088/1742-6596/1948/1/012004
