Obesity classification from facial images using deep learning
Siddiqui, Hera
Advisors: Rattani, Ajita; Cure Vellojin, Laila N.; Woods, Nikki Keene; Lewis, Rhonda K.; Twomey, Janet M.; Smith-Campbell, Betty; Hill, Twyla J.
Siddiqui, H. 2021. Obesity classification from facial images using deep learning. In Proceedings: 17th Annual Symposium on Graduate Research and Scholarly Projects. Wichita, KS: Wichita State University
INTRODUCTION: Obesity is a serious health problem that is on the rise both in the United States and globally. Obesity is frequently defined using the clinical Body Mass Index (BMI), a ratio of weight to height squared. Overweight individuals have a BMI between 25 and 30, and those with a BMI over 30 are classified as obese. Obesity can lead to heart disease, type 2 diabetes, and many other serious health conditions. Self-diagnostic face-based solutions are being investigated for obesity classification and monitoring.

PURPOSE: To classify obesity status from facial images using deep learning-based convolutional neural networks (CNNs).

METHODS: The four CNNs used in this study (VGG16, ResNet-50, DenseNet121, and MobileNetV2) were pre-trained on three public datasets (ImageNet, VGGFace, and VGGFace2). Using these CNNs, we extracted deep features from the FIW-BMI and VisualBMI datasets, both annotated with BMI information. The deep features from 8,298 images in the FIW-BMI dataset, along with their BMI values, were then used to train a Support Vector Classification (SVC) classifier. The trained SVC model was tested on 4,206 different images from the VisualBMI dataset for validation.

RESULTS: CNNs pre-trained on the ImageNet dataset obtained an initial accuracy (percentage of correct obese and non-obese classifications) in the range of 64% to 72%. Accuracies of 84% to 86% were obtained using CNNs pre-trained on the VGGFace dataset, and 86% accuracy was obtained by concatenating features from the pre-trained (VGGFace) and fine-tuned (FIW-BMI) models. ResNet-50 pre-trained on the VGGFace2 dataset obtained an accuracy of 91% when features from the original images were used, and 92% when features from the original image were fused with features from the horizontally flipped image. This fused-feature model achieved Sensitivity, Specificity, and Precision of 0.90, 0.94, and 0.95, respectively. Its Mean Absolute Error (MAE) in predicting BMI is 3.16, and its area under the curve (AUC) is 0.97.
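The pipeline described above (deep features from a pre-trained CNN, fused with features from the horizontally flipped image, then classified with an SVC) can be sketched as follows. This is a minimal illustration, not the study's actual code: the CNN feature extractor is replaced by a synthetic `fake_deep_features` stand-in, and all dimensions, class labels, and data are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic stand-in for the deep features a pre-trained CNN
# (e.g., ResNet-50) would produce for a batch of face images.
# Real features would be ~2048-dimensional; 128 is used here for brevity.
N_PER_CLASS, DIM = 100, 128

def fake_deep_features(n, shift):
    # Two loosely separated Gaussian clusters standing in for the
    # non-obese (shift=0.0) and obese (shift=0.8) feature distributions.
    return rng.normal(loc=shift, scale=1.0, size=(n, DIM))

# Features extracted from the original face images.
X_orig = np.vstack([fake_deep_features(N_PER_CLASS, 0.0),
                    fake_deep_features(N_PER_CLASS, 0.8)])
# Features from the horizontally flipped images (simulated here as a
# slightly perturbed copy); fusion is done by concatenation.
X_flip = X_orig + rng.normal(scale=0.1, size=X_orig.shape)
X_train = np.hstack([X_orig, X_flip])          # fused feature vector
y_train = np.array([0] * N_PER_CLASS + [1] * N_PER_CLASS)  # 1 = obese

# Train the SVC on the fused deep features, as in the described method.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
train_acc = clf.score(X_train, y_train)
```

In the actual study, `X_train` would come from FIW-BMI images and a held-out `X_test` from VisualBMI; the concatenation step mirrors the reported fusion of original and horizontally flipped image features.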
CONCLUSION: Obesity can be predicted from facial images using deep learning models with promising accuracy. SVC models trained on deep features extracted from models pre-trained on the VGGFace2 dataset performed better than those using models pre-trained on ImageNet. ResNet-50 (pre-trained on VGGFace2) obtained the highest accuracy, 92%, by combining features from the original image and the horizontally flipped image. These models, when deployed on smartphones, can help individuals monitor their obesity status, BMI, and weight changes.
Presented to the 17th Annual Symposium on Graduate Research and Scholarly Projects (GRASP) held online, Wichita State University, April 2, 2021.
Research completed in the Department of Electrical Engineering and Computer Science, College of Engineering; Department of Industrial, Systems and Manufacturing Engineering, College of Engineering; Department of Public Health Sciences, College of Health Professions; Department of Psychology, Fairmount College of Liberal Arts and Sciences; School of Nursing, College of Health Professions; Department of Sociology, Fairmount College of Liberal Arts and Sciences
- EECS Graduate Student Conference Papers
- ISME Graduate Student Conference Papers
- NUR Graduate Student Conference Papers
- PHS Graduate Student Conference Papers
- Proceedings 2021: 17th Annual Symposium on Graduate Research and Scholarly Projects
- PSY Graduate Student Conference Papers
- SOC Graduate Student Conference Papers