CVII: Enhancing Interpretability in Intelligent Sensor Systems via Computer Vision Interpretability Index

Abstract

In the realm of intelligent sensor systems, the growing dependence on Artificial Intelligence (AI) applications has heightened the importance of interpretability. This is particularly critical for opaque models such as Deep Neural Networks (DNNs), whose decisions must be understood not only for ethical and regulatory compliance but also to foster trust in AI-driven outcomes. This paper introduces the Computer Vision Interpretability Index (CVII), a framework designed to emulate human cognitive processes in vision-related tasks. It addresses the challenge of quantifying interpretability, a property that is inherently subjective and varies across domains. The CVII is evaluated using a range of computer vision models applied to the COCO (Common Objects in Context) dataset, a widely recognized benchmark in the field. The findings establish a robust correlation between image interpretability, model selection, and CVII scores. This research contributes to enhancing interpretability both for human comprehension and within intelligent sensor applications. By promoting transparency and reliability in AI-driven decision-making, the CVII framework empowers stakeholders to harness the full potential of AI technologies.
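As a rough illustration of the kind of evaluation the abstract describes, the sketch below runs a pretrained object detector over COCO images and aggregates a per-image score. The paper's actual CVII formula is not reproduced here: cvii_score is a hypothetical placeholder, and the model choice, file paths, and confidence threshold are assumptions for the sake of the example.

```python
# Minimal sketch, assuming a local COCO copy and torchvision >= 0.13.
# `cvii_score` is a stand-in placeholder, NOT the published CVII formula.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image


def cvii_score(detections, score_threshold=0.5):
    """Placeholder per-image score: fraction of confident detections
    weighted by their mean confidence."""
    scores = detections["scores"]
    if scores.numel() == 0:
        return 0.0
    confident = scores[scores >= score_threshold]
    coverage = confident.numel() / scores.numel()
    mean_conf = confident.mean().item() if confident.numel() else 0.0
    return coverage * mean_conf


# One of several computer vision models one might compare under the index.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Assumed paths into a local COCO val2017 download.
image_paths = ["coco/val2017/000000000139.jpg"]

per_image_scores = []
with torch.no_grad():
    for path in image_paths:
        img = to_tensor(Image.open(path).convert("RGB"))
        detections = model([img])[0]  # dict with boxes, labels, scores
        per_image_scores.append(cvii_score(detections))

print("Mean placeholder score:", sum(per_image_scores) / len(per_image_scores))
```

Repeating the loop for different detectors and comparing the aggregated scores is the sort of model-versus-score correlation the paper reports, though the published index itself is more elaborate than this placeholder.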

References

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 770-778).

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.

Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., & Zitnick, C. L. (2014). Microsoft COCO: Common objects in context. In European Conference on Computer Vision (ECCV) (pp. 740-755).

Citation (APA)

Mohammadi, H., Thirunarayan, K., & Chen, L. (2023). CVII: Enhancing Interpretability in Intelligent Sensor Systems via Computer Vision Interpretability Index. Sensors, 23(24). https://doi.org/10.3390/s23249893
