2D images can be used to capture food intake data in nutrition studies. Estimates of food volume from these images are required for nutrient analysis. Although 3D image capture is possible, it is not commonplace. Additionally, nutrition studies often require multiple food images taken by non-expert users, typically on mobile phones because of their convenience. Current approaches that estimate 3D volume from 2D images are restricted by the need for prescribed camera placement, image metadata analysis, and/or significant computational resources. A new method is presented that combines 2D image capture and automated 3D scene projection with manual placement and resizing of wire mesh objects. 2D images containing a reference object are taken on low-specification mobile phones. 3D scene projection is calculated by twinning a cuboid in 3D space to the reference object in the 2D image. A manually selected 3D wire mesh object is then positioned over the target food item and manually transformed to improve accuracy. The virtual wire mesh object is then projected into the 3D scene and the volume of the target food item is calculated. The whole process is computationally light and runs in real time as an app on a standard Apple iPad. Based on a user study with 60 participants, experimental evaluations of volume estimates over regular-shaped and ground truth food items demonstrate that this approach provides acceptable accuracy. We demonstrate that the accuracy of estimates can be increased by combining multiple independent estimates.
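To make the two computational steps in the abstract concrete, the sketch below illustrates (a) recovering the camera pose by relating a reference object of known physical size to its corners in the 2D image, and (b) computing the volume of a closed wire mesh once it has been placed and scaled in the scene. This is not the authors' implementation: the function names, the credit-card-sized reference dimensions, the placeholder camera intrinsics, and the use of OpenCV's solvePnP are illustrative assumptions; the published app runs on iOS and may use a different formulation.

```python
# Minimal sketch (assumed, not the paper's code) of reference-based pose
# estimation and wire-mesh volume computation.
import numpy as np
import cv2


def pose_from_reference(image_corners_px, card_w_mm=85.6, card_h_mm=54.0,
                        camera_matrix=None, dist_coeffs=None):
    """Estimate camera rotation/translation by 'twinning' a planar reference
    object (here a credit-card-sized rectangle, an assumed choice) to its
    four corner pixels in the 2D image."""
    object_points = np.array([
        [0.0, 0.0, 0.0],
        [card_w_mm, 0.0, 0.0],
        [card_w_mm, card_h_mm, 0.0],
        [0.0, card_h_mm, 0.0],
    ], dtype=np.float64)
    if camera_matrix is None:
        # Placeholder intrinsics; a real app would calibrate the device camera.
        camera_matrix = np.array([[1000.0, 0.0, 960.0],
                                  [0.0, 1000.0, 540.0],
                                  [0.0, 0.0, 1.0]])
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  np.asarray(image_corners_px, dtype=np.float64),
                                  camera_matrix, dist_coeffs)
    return ok, rvec, tvec


def mesh_volume(vertices, faces):
    """Volume of a closed triangular mesh via signed tetrahedra
    (divergence theorem). `vertices` is (N, 3); `faces` is (M, 3) with
    consistent outward winding."""
    v = np.asarray(vertices, dtype=np.float64)
    a, b, c = v[faces[:, 0]], v[faces[:, 1]], v[faces[:, 2]]
    signed = np.einsum('ij,ij->i', a, np.cross(b, c)) / 6.0
    return abs(signed.sum())


if __name__ == "__main__":
    # Unit cube as a stand-in for a placed, scaled wire-mesh object.
    verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                      [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)
    faces = np.array([[0, 2, 1], [0, 3, 2],   # bottom
                      [4, 5, 6], [4, 6, 7],   # top
                      [0, 1, 5], [0, 5, 4],   # front
                      [1, 2, 6], [1, 6, 5],   # right
                      [2, 3, 7], [2, 7, 6],   # back
                      [3, 0, 4], [3, 4, 7]])  # left
    print(mesh_volume(verts, faces))  # -> 1.0
```

The signed-tetrahedra formula is one standard way to obtain a volume from a closed mesh; once the user's manual scaling and positioning of the mesh are expressed in the recovered scene coordinates, the same computation yields the food volume estimate in real-world units.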
CITATION STYLE
Smith, S. P., Adam, M. T. P., Manning, G., Burrows, T., Collins, C., & Rollo, M. E. (2022). Food Volume Estimation by Integrating 3D Image Projection and Manual Wire Mesh Transformations. IEEE Access, 10, 48367–48378. https://doi.org/10.1109/ACCESS.2022.3171584