Snap-N-Eat: Food Recognition and Nutrition Estimation on a Smartphone

Citation

Zhang, W., Yu, Q., Siddiquie, B., Divakaran, A., & Sawhney, H. (2015). “Snap-n-Eat”: food recognition and nutrition estimation on a smartphone. Journal of Diabetes Science and Technology, 9(3), 525-533. doi: 10.1177/1932296815582222

Abstract

We present snap-n-eat, a mobile food recognition system. The system can recognize food and estimate the calorific and nutritional content of foods automatically without any user intervention. To identify food items, the user simply snaps a photo of the food plate. The system detects the salient region, crops the image, and subtracts the background accordingly. Hierarchical segmentation is performed to segment the image into regions. We then extract features at different locations and scales and classify these regions into different kinds of foods using a linear support vector machine classifier. In addition, the system determines the portion size, which is then used to estimate the calorific and nutritional content of the food present on the plate. Previous approaches have mostly worked with images captured in a lab setting or have required additional user input (e.g., user-cropped bounding boxes). Our system achieves automatic food detection and recognition in real-life settings containing cluttered backgrounds. When multiple food items appear in an image, our system can identify them and estimate their portion sizes simultaneously. We implemented this system as both an Android smartphone application and a web service. In our experiments, we achieved above 85% accuracy when detecting 15 different kinds of foods.

Keywords: food recognition; mobile food recognition; nutrition estimation; visual food recognition.
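The abstract describes a pipeline of region segmentation, feature extraction, linear SVM classification, and portion-scaled nutrition lookup. The snippet below is a minimal sketch of the classification and nutrition-estimation stages only, not the authors' implementation: the pooled features, the toy nutrition table, and the helper names (pool_region_features, estimate_calories) are hypothetical, and scikit-learn's LinearSVC stands in for whatever linear SVM the system actually uses.

```python
# Hypothetical sketch: pool local descriptors from one segmented food region,
# classify the region with a linear SVM, and scale a per-100 g nutrition value
# by the estimated portion size.
import numpy as np
from sklearn.svm import LinearSVC

# Illustrative nutrition table (kcal per 100 g); values are placeholders.
NUTRITION_PER_100G = {"rice": 130, "chicken": 239, "broccoli": 34}
LABELS = list(NUTRITION_PER_100G)

def pool_region_features(descriptors: np.ndarray) -> np.ndarray:
    """Pool the local descriptors of one segmented region into a single
    fixed-length feature vector (mean pooling here, for illustration)."""
    return descriptors.mean(axis=0)

# Train the linear SVM on pooled region features (random placeholder data).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 128))             # 300 regions, 128-D features
y_train = rng.integers(0, len(LABELS), size=300)  # integer class labels
clf = LinearSVC(C=1.0).fit(X_train, y_train)

def estimate_calories(region_descriptors: np.ndarray, portion_grams: float):
    """Classify one region and scale its calorie value by portion size."""
    feature = pool_region_features(region_descriptors).reshape(1, -1)
    label = LABELS[int(clf.predict(feature)[0])]
    kcal = NUTRITION_PER_100G[label] * portion_grams / 100.0
    return label, kcal

# Example: a region with 50 local descriptors and an estimated 150 g portion.
region = rng.normal(size=(50, 128))
print(estimate_calories(region, portion_grams=150.0))
```

In the paper's setting, the training features would come from images of real food regions and the portion size would be estimated from the image rather than supplied by hand; the sketch only shows how the predicted label and portion estimate combine into a calorie figure.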
