Analysis of error detection in EPID-based IMRT pre-treatment QA

Abstract

A common approach to IMRT pre-treatment quality assurance is to capture images of IMRT fields with no patient or phantom in the beam and compare these to images predicted by the treatment planning system. This is done prior to treatment in order to detect and correct file transfer and beam delivery errors that would affect the quality of treatment. In many institutions, the difference between measured and predicted images is quantified using gamma indices with 3%/3mm criteria. This work: (i) examines repeated electronic portal imaging device (EPID) images of an IMRT field; (ii) evaluates output variations, alignment errors and measurement 'noise'; and (iii) uses these data with receiver operating characteristic (ROC) analysis to determine the size of IMRT delivery errors that can be reliably detected. The gamma index with 3%/3mm acceptance criteria is shown to be sub-optimal: dose differences must be ≥5% in a 24mm×24mm area to be detected 95% of the time, whereas a pixel intensity difference (PID) test reliably detects ≥2% dose deviations in the same area. Excluding high-gradient regions has little effect on the detection ability of gamma, but allows a PID test to detect 2% dose deviations in 5mm×5mm areas.
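To make the two comparison metrics concrete, the following is a minimal Python sketch (not taken from the paper) of a brute-force 2D global gamma evaluation with 3%/3mm criteria alongside a simple PID test. The array names, pixel size, thresholds, and the toy images in the usage example are all illustrative assumptions.

```python
# Minimal sketch of the two image-comparison tests discussed above:
# a global gamma index (Low et al. 1998 formulation) and a pixel
# intensity difference (PID) test. All parameter values are assumptions.
import numpy as np

def gamma_index(measured, predicted, pixel_mm=1.0,
                dose_crit=0.03, dist_crit_mm=3.0):
    """Brute-force 2D global gamma. Dose criterion is a fraction of the
    predicted maximum; distance criterion is in mm. Returns gamma per pixel.
    Edge wrap-around from np.roll is ignored for brevity."""
    norm = predicted.max()
    search = int(np.ceil(dist_crit_mm / pixel_mm))  # search radius in pixels
    gamma = np.full(measured.shape, np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            dist2 = (dy * pixel_mm) ** 2 + (dx * pixel_mm) ** 2
            if dist2 > dist_crit_mm ** 2:
                continue  # outside the distance-to-agreement search radius
            shifted = np.roll(np.roll(predicted, dy, axis=0), dx, axis=1)
            dose2 = ((measured - shifted) / (dose_crit * norm)) ** 2
            gamma = np.minimum(gamma,
                               np.sqrt(dose2 + dist2 / dist_crit_mm ** 2))
    return gamma

def pid_test(measured, predicted, tol=0.02):
    """PID test: flag pixels whose dose difference exceeds `tol`,
    expressed as a fraction of the predicted maximum."""
    diff = np.abs(measured - predicted) / predicted.max()
    return diff > tol

# Usage: toy uniform field with a deliberate 5% error in one region.
predicted = np.ones((50, 50))
measured = predicted.copy()
measured[20:30, 20:30] *= 1.05
print("gamma pass rate:", np.mean(gamma_index(measured, predicted) <= 1.0))
print("PID failing pixels:", pid_test(measured, predicted).sum())
```

On this toy example the 5% error region fails gamma (gamma ≈ 1.67 > 1 at 3%/3mm), and the PID test flags the same pixels at a 2% tolerance, loosely mirroring the detection thresholds the abstract reports.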

Citation (APA):

Gordon, J. J., Gardner, J., Wang, S., & Siebers, J. V. (2009). Analysis of error detection in EPID-based IMRT pre-treatment QA. In IFMBE Proceedings (Vol. 25, pp. 519–522). Springer Verlag. https://doi.org/10.1007/978-3-642-03474-9_145
