Smartphones monitor food portions for better nutrition

December 21, 2012 in Medicine & Health / Health

(Medical Xpress)—Family feasts, office parties and celebrations with friends leave many people wondering how to control how much they indulge during the holiday season. The answer may be forthcoming, as researchers in LSU's College of Engineering and the Pennington Biomedical Research Center are working to help control food intake through smartphone technology.

Advocated by the Louisiana Technology Council, Bahadir Gunturk, associate professor in the LSU School of Electrical Engineering and Computer Science, and Corby Martin, associate professor at the Pennington Biomedical Research Center, have been collaborating on a computer vision system that estimates food intake from pictures captured on a smartphone and sent over a mobile network.

With a holiday plate at a table setting, mobile phones can be used to take pictures of food and send the images to a computer program. Data is then gathered from the images, and nutritional information is reported based on the processed image.

"This technology is helping to solve a fundamental problem in health promotion – accurately determining what people eat in natural conditions," Martin said.

Martin and Gunturk's automated image analysis system takes digital food images as input and provides intake/nutrition information as output by using a large database of sample images with known weights and nutritional data to compare against. The system consists of a number of image processing steps, including segmentation, pattern classification and volume estimation.
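
The three-stage pipeline described above can be sketched in a few lines. This is a toy illustration only: the function names, feature tuples, and the miniature food database are assumptions for the example, not the researchers' actual system.

```python
# Toy sketch of the described pipeline: segment the plate into food regions,
# classify each region against a database of known foods, then convert the
# region's surface area into volume and nutrition figures.

def segment_plate(image):
    """Stand-in for plate detection: here 'image' is a dict whose
    'food_regions' key already lists the segmented food regions."""
    return image["food_regions"]

def classify_region(region, database):
    """Pick the database food whose (color, texture) features are closest
    to the region's features (nearest neighbor on toy feature tuples)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda food: dist(region["features"], database[food]["features"]))

def estimate_nutrition(region, food, database):
    """Turn surface area into volume via the food's linear coefficient,
    then volume into calories via a per-food energy density."""
    entry = database[food]
    volume = entry["area_to_volume"] * region["area_cm2"]
    return {"food": food, "volume_ml": volume, "kcal": volume * entry["kcal_per_ml"]}

def analyze(image, database):
    results = []
    for region in segment_plate(image):
        food = classify_region(region, database)
        results.append(estimate_nutrition(region, food, database))
    return results

# Invented miniature database of known foods with feature tuples and
# conversion factors (illustrative numbers only).
DB = {
    "green beans": {"features": (0.2, 0.8), "area_to_volume": 0.5, "kcal_per_ml": 0.3},
    "donut":       {"features": (0.9, 0.3), "area_to_volume": 1.2, "kcal_per_ml": 3.5},
}

plate = {"food_regions": [{"features": (0.85, 0.35), "area_cm2": 50.0}]}
report = analyze(plate, DB)
```

In the real system each of these stand-ins is a substantial image-processing step; the sketch only shows how their inputs and outputs chain together.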

"The remote food photography idea provides an effective means of analyzing eating habits in free-living people," Gunturk said. "While a trained person may sit and analyze the pictures manually, our work helps us to get the results faster."

The segmentation step detects the plate using shape information and sections the food regions inside a plate in a given photograph. The classification step identifies the food type in a food region by comparing its color and texture features against the same features of known food images in a database. Specifically, a specialist classifier network is trained for each food type to give a positive or a negative output depending on whether an input image patch matches the food.
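
The "specialist per food type" idea can be modeled as a family of binary yes/no classifiers. The class below is a hedged sketch: it replaces the trained classifier network with a simple prototype-distance check, and the feature tuples and threshold are invented for the example.

```python
# One "specialist" per food type: given a small image patch's features,
# it answers positive (match) or negative (no match) for its food.
# Here a specialist is just a distance-to-prototype test, standing in
# for the trained classifier network described in the article.

class Specialist:
    def __init__(self, food, prototype, threshold):
        self.food = food
        self.prototype = prototype   # mean (color, texture) features of this food
        self.threshold = threshold   # max squared distance counted as a match

    def vote(self, patch_features):
        d = sum((x - y) ** 2 for x, y in zip(patch_features, self.prototype))
        return d <= self.threshold   # True = positive vote

donut_specialist = Specialist("donut", prototype=(0.9, 0.3), threshold=0.05)
beans_specialist = Specialist("green beans", prototype=(0.2, 0.8), threshold=0.05)

patch = (0.88, 0.32)  # features extracted from a donut-like patch
```

With this setup, the donut specialist accepts the patch and the green-beans specialist rejects it, which is exactly the positive/negative behavior the voting step relies on.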

A food region is divided into small patches and run through the specialist networks; the number of positive votes for each specialist is tallied and sorted. For instance, if the food is a donut, the "green beans" specialist will give very few, if any, positive votes, whereas the "donut" specialist will yield a high percentage of positive votes. The volume estimation step analyzes the surface area of a given food in the image and cross-references this information with a database to estimate the volume of food after correcting for the viewpoint and distance of the camera. Testing has shown that food tends to have a linear relationship between surface area and volume, so the amount of food can be estimated.
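
The voting step above can be sketched directly: cut the region into patches, let every specialist vote on every patch, and rank foods by vote count. As before, specialists are modeled here as prototype-distance checks with invented numbers; this is an illustration of the tallying logic, not the authors' implementation.

```python
# Tally positive votes per food over all patches of a region and
# return foods ranked by vote count, highest first.

def tally_votes(patches, specialists):
    counts = {}
    for food, (prototype, threshold) in specialists.items():
        votes = 0
        for patch in patches:
            d = sum((x - y) ** 2 for x, y in zip(patch, prototype))
            if d <= threshold:        # this specialist's positive vote
                votes += 1
        counts[food] = votes
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

# Invented specialists: (prototype features, match threshold) per food.
SPECIALISTS = {
    "donut":       ((0.9, 0.3), 0.05),
    "green beans": ((0.2, 0.8), 0.05),
}

# Patches sampled from a donut region: most look donut-like, one is ambiguous.
region_patches = [(0.88, 0.32), (0.91, 0.28), (0.9, 0.31), (0.5, 0.5)]
ranking = tally_votes(region_patches, SPECIALISTS)
```

Here the "donut" specialist collects three positive votes and "green beans" none, so the donut label wins the region, mirroring the example in the text.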

Obesity is a growing problem in the world today, and much of the nutrition research being conducted requires detailed intake and expenditure information. Using technology, Gunturk and Martin's research takes on the challenge of providing an accurate estimate of people's food intake through their smartphones.

"This technology promises to significantly advance delivery of health promotion programs, including weight management treatment," Martin said.

Provided by Louisiana State University
