Background: Automated program repair and other bug-fixing approaches are gaining attention in the software engineering community. Automation shows promise in reducing the cost of fixing bugs. However, many developers are reluctant to accept machine-generated patches into their codebases.

Aims: To contribute to the scientific understanding and empirical investigation of human trust and perception with regard to automation in software maintenance.

Method: We design and conduct an eye-tracking study investigating how developers' trust varies as a function of code provenance (i.e., the author or source of a patch). We systematically vary provenance while controlling for patch quality.

Results: In our study of ten participants, overall visual code scanning and the distribution of attention differed across identical code patches labeled as human- vs. machine-written. Participants looked more at the source code for human-labeled patches and more at the tests for machine-labeled patches. Participants judged human-labeled patches to have better readability and coding style. However, participants were more comfortable assigning a critical task to an automated program repair tool.

Conclusion: We find significant differences in code review behavior based on trust as a function of patch provenance, and we find that eye tracking can reveal important differences that other methods miss. Our results may inform the subsequent design and analysis of automated repair techniques to increase developers' trust and, consequently, the deployment of such techniques.
Citation: Bertram, I., Hong, J., Huang, Y., Weimer, W., & Sharafi, Z. (2020). Trustworthiness perceptions in code review: An eye-tracking study. In International Symposium on Empirical Software Engineering and Measurement. IEEE Computer Society. https://doi.org/10.1145/3382494.3422164