Spirometry requires the patient to cooperate and perform the manoeuvre 'right' for reliable results. Algorithms for assessing test quality, as well as educational recommendations for personnel, are defined in guidelines. We compared the quality of forced spirometry tests performed by technicians with little or no previous experience of spirometry, using spirometry systems with different modes of feedback. In both cases, the technician received general feedback on the screen based on ATS/ERS guidelines, such as 'exhale faster' and 'exhale longer'. The major difference was whether the quality grading of the complete session was available simultaneously on screen or only in the printed report afterwards. Two parts of the same population-based study (LifeGene), the pilot (LG1) and the first part (LG2) of the subsequent study, were compared retrospectively. In LG1 (on-screen grading), approved examination quality was achieved for 88% of each technician's first 10 subjects, compared with 70% in LG2 (printed grading afterwards). The corresponding values after 40 subjects were 94% in LG1 and 73% in LG2, and after the first ten subjects there was no apparent quality improvement in either LG1 or LG2. The quality in LG1 is among the highest reported in the literature, even though the technicians were relatively inexperienced. We conclude that on-screen grading, in addition to general technical quality feedback, is powerful in enhancing spirometry test session quality.
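For readers unfamiliar with session-level grading, the sketch below illustrates how such a rule could combine within-manoeuvre acceptability with between-manoeuvre repeatability in the spirit of the ATS/ERS criteria. It is a minimal illustration only: the grade labels, the 0.150 L and 0.200 L thresholds, and the Manoeuvre and grade_session names are assumptions for the example, not the grading algorithm used in the LifeGene spirometry systems.

    from dataclasses import dataclass

    @dataclass
    class Manoeuvre:
        fev1: float       # forced expiratory volume in 1 s (L)
        fvc: float        # forced vital capacity (L)
        acceptable: bool  # met within-manoeuvre acceptability criteria

    def grade_session(manoeuvres: list[Manoeuvre]) -> str:
        """Grade a complete session from its individual manoeuvres.

        Illustrative rule: three or more acceptable manoeuvres whose two
        best FEV1 and FVC values agree within 0.150 L earn an 'A'; looser
        agreement or fewer acceptable manoeuvres earn lower grades.
        """
        accepted = [m for m in manoeuvres if m.acceptable]
        if len(accepted) < 2:
            return "D" if accepted else "F"

        # Repeatability: difference between the two largest FEV1 and FVC values
        fev1_sorted = sorted((m.fev1 for m in accepted), reverse=True)
        fvc_sorted = sorted((m.fvc for m in accepted), reverse=True)
        repeatability = max(fev1_sorted[0] - fev1_sorted[1],
                            fvc_sorted[0] - fvc_sorted[1])

        if len(accepted) >= 3 and repeatability <= 0.150:
            return "A"
        if repeatability <= 0.150:
            return "B"
        if repeatability <= 0.200:
            return "C"
        return "D"

Displaying such a grade on screen after every manoeuvre, rather than only in a printed report, is the feedback difference between LG1 and LG2 examined in the study.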
Qvarfordt, M., Anderson, M., & Svartengren, M. (2018). Quality and learning aspects of the first 9000 spirometries of the LifeGene study. npj Primary Care Respiratory Medicine, 28(1). https://doi.org/10.1038/s41533-018-0073-y