Strategies, Policies, and Standards in the EU Towards a Roadmap for Robust and Trustworthy AI Certification

  • Sharkov G
  • Todorova C
  • Varbanov P

Abstract

In recent years, governments in the EU member states have put increasing effort into managing the scope and speed of the socio-technical transformations driven by rapid advances in Artificial Intelligence (AI). With the expanding deployment of AI in autonomous transportation, healthcare, defense, and surveillance, the topic of ethical and secure AI is coming to the forefront. However, even against the backdrop of a growing body of technical advancement and knowledge, the governance of AI-intensive technologies remains a work in progress, facing numerous challenges in balancing the ethical, legal, and societal aspects of AI technologies on the one hand against the investment, financial, and technological aspects on the other. Guaranteeing and providing access to reliable AI is a necessary prerequisite for the sound development of the sector. One way to approach this challenge is through governance and certification. This article discusses initiatives supporting a better understanding of the magnitude and depth of AI adoption. Given the numerous ethical concerns posed by unstandardized AI, it further explains why certification and governance of AI are a milestone for the reliability and competitiveness of technological solutions.

Citation (APA)

Sharkov, G., Todorova, C., & Varbanov, P. (2021). Strategies, Policies, and Standards in the EU Towards a Roadmap for Robust and Trustworthy AI Certification. Information & Security: An International Journal, 50, 11–22. https://doi.org/10.11610/isij.5030
