AI and Its Risks in Android Smartphones: A Case of Google Smart Assistant


Abstract

This paper highlights the risks of AI in Android smartphones. To this end, we perform a risk analysis of Google Smart Assistant, a state-of-the-art, AI-powered smartphone app, and assess the transparency of its risk communication to users and of its implementation. Android users rely on the transparency of an app’s description and Permission requirements to evaluate its risks, and many risk evaluation models weigh the same factors when calculating app threat scores. Likewise, various risk evaluation models and malware detection methods for Android apps use an app’s Permissions and API usage to assess its behavior. Therefore, in our risk analysis, we assess Description-to-Permissions fidelity and Functions-to-API-Usage fidelity in Google Smart Assistant. We compare the Permission and API usage of Google Smart Assistant with those of four leading smart assistants and discover that Google Smart Assistant has unusual Permission requirements and sensitive API usage. Our risk analysis finds a lack of transparency in the risk communication and implementation of Google Smart Assistant. This lack of transparency may make it impossible for users to assess the risks of the app, and it also renders some state-of-the-art app risk evaluation models and malware detection methods ineffective.
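The peer comparison described in the abstract can be sketched in code. The following is an illustrative sketch only, not the authors' model: it flags Android Permissions that an app requests but that none of a set of peer apps request, as a rough proxy for detecting "unusual permission requirements". The peer names and permission sets are hypothetical examples; the permission strings are standard Android Permission identifiers.

```python
# Hypothetical peer apps and the Android Permissions they declare.
# (Illustrative data only -- not taken from the paper.)
PEER_PERMISSIONS = {
    "assistant_a": {"android.permission.RECORD_AUDIO",
                    "android.permission.INTERNET"},
    "assistant_b": {"android.permission.RECORD_AUDIO",
                    "android.permission.INTERNET",
                    "android.permission.ACCESS_FINE_LOCATION"},
}

def unusual_permissions(app_perms, peers):
    """Return Permissions the app requests that no peer app requests."""
    baseline = set().union(*peers.values())
    return app_perms - baseline

# Permissions declared by the app under analysis (hypothetical).
app = {"android.permission.RECORD_AUDIO",
       "android.permission.INTERNET",
       "android.permission.READ_SMS"}

print(sorted(unusual_permissions(app, PEER_PERMISSIONS)))
# → ['android.permission.READ_SMS']
```

A real analysis would extract the declared Permissions from each app's manifest and would also compare sensitive API usage, as the paper's Functions-to-API-Usage fidelity check does; this sketch only shows the set-difference idea behind the peer comparison.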

Citation (APA)

Elahi, H., Wang, G., Peng, T., & Chen, J. (2019). AI and Its Risks in Android Smartphones: A Case of Google Smart Assistant. In Communications in Computer and Information Science (Vol. 1123 CCIS, pp. 341–355). Springer. https://doi.org/10.1007/978-981-15-1304-6_27
