Using depth sensor cameras such as the Kinect, together with highly customizable software development frameworks and artificial intelligence methodologies, offers significant opportunities across a variety of applications, including undergraduate science, technology, engineering, and math (STEM) education; professional or military training simulation; and individually tailored cultural and media arts immersion. Designing a participatory educational experience in which users actively interface with the subject matter in a practical, experiential manner can enhance their ability to learn and retain the presented information. Such natural gesture user interfaces have potential for broad application in disciplines ranging from systems engineering education to process simulation. This paper discusses progress on the development of testing environments for interactive educational methods in conjunction with artificially intelligent systems that can adjust the educational experience to the individual user. Users are identified through depth sensor skeletal tracking, allowing the experience to adapt based on the nature and effectiveness of each interaction. © 2012 Published by Elsevier B.V.
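The abstract's core mechanism, identifying individual users from depth-sensor skeletal tracking so the experience can adapt to them, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the joint names, coordinates, stored profiles, and the limb-length matching heuristic are all assumptions chosen for clarity. A Kinect-style tracker reports 3-D joint positions; limb lengths computed from those joints are roughly pose-invariant, so they can serve as a simple biometric signature for recognizing a returning user.

```python
import math

# Hypothetical joint positions (x, y, z) in meters, as a Kinect-style
# skeletal tracker might report them; names and values are illustrative.
skeleton = {
    "shoulder_left": (-0.20, 1.40, 2.0),
    "elbow_left":    (-0.25, 1.10, 2.0),
    "wrist_left":    (-0.28, 0.85, 2.0),
    "hip_center":    ( 0.00, 0.95, 2.0),
    "head":          ( 0.00, 1.65, 2.0),
}

def dist(a, b):
    """Euclidean distance between two 3-D joint positions."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def limb_features(sk):
    """Limb-length features; roughly invariant to pose, so usable as a
    simple biometric signature for re-identifying a returning user."""
    return {
        "upper_arm": dist(sk["shoulder_left"], sk["elbow_left"]),
        "forearm":   dist(sk["elbow_left"], sk["wrist_left"]),
        "torso":     dist(sk["head"], sk["hip_center"]),
    }

def identify(features, profiles, tolerance=0.05):
    """Match a feature vector against stored user profiles by smallest
    total limb-length difference; returns None if no profile is close."""
    best, best_err = None, tolerance * len(features)
    for name, stored in profiles.items():
        err = sum(abs(features[k] - stored[k]) for k in features)
        if err < best_err:
            best, best_err = name, err
    return best

# Hypothetical stored profile from a previous session.
profiles = {"alice": {"upper_arm": 0.30, "forearm": 0.25, "torso": 0.70}}
print(identify(limb_features(skeleton), profiles))  # → alice
```

Once a user is recognized, the application could load that user's session history and adjust difficulty or content accordingly; an unrecognized skeleton would trigger creation of a new profile.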
Moriarty, B., Lennon, E., DiCola, F., Buzby, K., Manzella, M., & Hromada, E. (2012). Utilizing depth based sensors and customizable software frameworks for experiential application. In Procedia Computer Science (Vol. 12, pp. 200–205). Elsevier B.V. https://doi.org/10.1016/j.procs.2012.09.054