Abstract
Medical devices that use artificial intelligence (AI) will be an essential part of the healthcare work environment of the future, changing the way practitioners make decisions. To make this technology a team player in the clinical setting, we need to characterize the interaction between practitioners and AI technology. However, the characteristics of this environment must be carefully considered in the design of user interfaces for medical devices that recommend actions or present analytic insights from predictive algorithms: algorithms that interpret historical data in order to predict future events. In this paper, we describe five themes that should inform future user interface design, and we highlight areas for research that can shed light on use cases specific to implementation in healthcare.

Healthcare is a unique work environment, serving many types of users with varied experience and needs. The setting is known to be frequently hectic and highly demanding, with many uncertainties and a high cost for unintended incidents (Cook and Woods, 1994). Designers working in this domain must take these characteristics into consideration and plan their user-centered design process accordingly. To support this process, research should fill knowledge gaps about the expected interaction between practitioners and medical devices, and provide design guidance on how to safely support these human operators. The documented knowledge and experience of medical device design emphasize the need to base the design of new technology on knowledge that is specific to practitioners and the setting in which they work. The cognitive strategies practitioners apply have unique characteristics that require attention and dedicated solutions when designing a system that might become an integral part of their work procedures (Bitan et al., 2019).

The challenge of defining the user interface characteristics for medical devices before we fully understand the capabilities and risks of the AI predictive algorithms that could be embedded in them requires us to examine several questions. We describe five themes that designers, purchasers, implementers, accreditors, and regulators should consider when designing next-generation medical devices that incorporate predictive algorithms. While some of these themes are well known and documented in design methods, it is important to rethink them from the unique perspective of medical devices that use predictive algorithms.

1. Clinician attitude toward technology. Practitioners' attitudes toward new technologies, and their current role as the only decision-makers in the clinical setting, are challenges affecting their work environment (Teach and Shortliffe, 1981). Preliminary findings from clinician interviews in intensive care units indicate that experience is a major parameter affecting the level of trust in automated systems. Thus, we can expect that clinicians' experience, professional role, and culture will affect their collaboration with recommendation systems. Most currently available predictive algorithms are perceived as a "black box," in that the justification for their assessments and recommendations is completely opaque; it is hard to imagine that most practitioners would be willing to use a "black box" without tools and methods of support while building trust in these new technologies. Learning from other domains suggests the importance of transparency and flexibility in order to build such trust and to maintain a sense of control and situational awareness with what is displayed on the user interface (Choi and Ji, 2015; Hallbert and Logan, 2013). In our interviews, physicians also indicated the importance of moving from a "black box" to a "clear box": presenting detailed information about the parameters an AI system takes into consideration before providing its recommendation, and providing tools that support "what-if" capabilities (manipulating parameters in order to learn how they affect possible outcomes) to increase trust in the system.
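To make the "what-if" idea concrete, the sketch below shows one way a user interface back end could recompute a predicted risk after a clinician manipulates a single input parameter. This is a minimal sketch assuming a scikit-learn-style risk model; the feature set, the synthetic data, and the what_if helper are hypothetical illustrations, not drawn from the paper or any real device.

```python
# Minimal sketch of a "what-if" tool, assuming a scikit-learn-style risk
# model. Feature names, synthetic data, and the what_if helper are all
# hypothetical illustrations, not part of any real device interface.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic vitals: heart rate, systolic blood pressure, lactate.
FEATURES = ["heart_rate", "systolic_bp", "lactate"]
X = rng.normal(loc=[85.0, 120.0, 1.5], scale=[15.0, 20.0, 0.8], size=(500, 3))
y = (X[:, 2] + 0.02 * X[:, 0] + rng.normal(size=500) > 3.6).astype(int)

model = LogisticRegression().fit(X, y)

def what_if(model, patient, feature, new_value):
    """Predicted risk before and after manipulating a single parameter."""
    baseline = model.predict_proba(patient.reshape(1, -1))[0, 1]
    modified = patient.copy()
    modified[FEATURES.index(feature)] = new_value
    changed = model.predict_proba(modified.reshape(1, -1))[0, 1]
    return baseline, changed

patient = np.array([110.0, 95.0, 2.8])
before, after = what_if(model, patient, "lactate", 1.2)
print(f"risk now: {before:.2f}; risk if lactate were 1.2: {after:.2f}")
```

The same pattern extends naturally to sweeping a parameter across a range and plotting how the prediction responds, which is closer to the interactive "clear box" display the interviewed physicians described.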
2. Previous experience. Interaction with new tools is affected by a user's prior experience with similar products. For example, practitioners might wonder how these predictive algorithms differ from the decision support systems introduced in the 1980s and 1990s, which suffered from limitations that pushed them into a narrow niche and prevented wider implementation (Elwyn et al., 2013). Explaining that these algorithms can learn and improve over time, and that they can provide specific predictions tailored to detailed patient profiles, might not be enough. We should provide practitioners with features that demonstrate the advantages of the new technology. Again, transparency into a "clear box" is needed in order to understand the algorithms' outcomes. Tools that allow investigating how changes to parameters might change assessments and recommendations are needed as part of a user interface that supports practitioners with prior negative experiences in achieving sufficient confidence to collaborate with an AI-based system.

3. Biased data. Prediction models are based on data, but differences between work environments in healthcare might generate varied data sets that are not generalizable. Hospitals use different medical devices and varied treatment protocols, so data collected in one hospital or one type of care environment might not generalize well to another hospital or care environment, as the sketch below illustrates. In addition, the accuracy of healthcare technological systems is inherently constrained due
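The cross-site generalization concern can be made concrete with a small, entirely synthetic simulation: a model fit on one hospital's records is applied at a second site whose blood pressure monitors read consistently higher. The make_site helper, the calibration offset, and all numbers here are illustrative assumptions, not results from the paper.

```python
# Illustrative sketch of cross-site generalization failure, on synthetic data.
# The make_site helper and the calibration offset are hypothetical assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_site(n, device_offset=0.0):
    """One hospital's records; device_offset mimics a BP monitor calibration gap."""
    bp_true = rng.normal(120.0, 20.0, n)
    lactate = rng.normal(1.5, 0.8, n)
    # The outcome depends on true physiology, identical at both sites.
    y = (lactate - 0.02 * bp_true + rng.normal(size=n) > -0.6).astype(int)
    bp_recorded = bp_true + device_offset  # what the chart actually stores
    return np.column_stack([bp_recorded, lactate]), y

X_a, y_a = make_site(2000)                    # training hospital
X_b, y_b = make_site(2000, device_offset=25)  # different monitors

model = LogisticRegression().fit(X_a, y_a)
for name, X, y in [("training site", X_a, y_a), ("external site", X_b, y_b)]:
    mean_pred = model.predict_proba(X)[:, 1].mean()
    print(f"{name}: mean predicted risk {mean_pred:.2f} vs observed rate {y.mean():.2f}")
```

Note that a uniform shift like this preserves the model's ranking of patients while breaking its calibration, so a model can look acceptable on discrimination metrics yet still mislead at a new site; this is one reason site-specific validation is needed before deployment.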