This application uses transfer learning to build a cereal box image classifier on top of Google's InceptionV3 model with TensorFlow. InceptionV3 is a pre-trained image classifier that recognizes the 1000 image classes of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 dataset, and Google makes it freely available for adaptation. With transfer learning, a pre-trained model such as InceptionV3 serves as the starting point for a custom classifier that recognizes images outside the original 1000 classes. Training a recognition model from scratch is no trivial task, so taking advantage of a pre-trained model drastically cuts development time and opens the door for products to incorporate object recognition technology. This app is a demo to that effect. It is trained to classify the following cereals: apple cinnamon cheerios, berry kix, chocolate cheerios, cocoa puffs, honeycomb, kelloggs all bran, kelloggs corn pops, kelloggs frosted flakes, kelloggs rice krispies, multi grain cheerios, and sugar crisp.
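A minimal Keras sketch of the transfer-learning setup described above: InceptionV3 is loaded without its original 1000-class head, frozen, and topped with a new softmax layer for the cereal classes. The class count of 11 matches the list above; everything else (input size, optimizer, head design) is an illustrative assumption, not necessarily the exact pipeline this app uses.

```python
import tensorflow as tf

NUM_CLASSES = 11  # the eleven cereals listed above

# Load InceptionV3 without its 1000-class ImageNet head.
# In real use, pass weights="imagenet" to download the pre-trained weights;
# weights=None is used here only so the sketch runs offline.
base = tf.keras.applications.InceptionV3(
    include_top=False,
    weights=None,
    input_shape=(299, 299, 3),  # InceptionV3's standard input size
    pooling="avg",              # global average pooling -> 2048-d feature vector
)
base.trainable = False  # freeze the pre-trained feature extractor

# New classification head, trained only on the cereal box images.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```

Only the small dense head is trained, so the model can reach useful accuracy with a few dozen photos per cereal instead of the millions of images InceptionV3 was originally trained on.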
Later, I envision expanding the set of classified products and adding speech recognition (using the Speech API) to call out items during grocery shopping, with the aim of easing shopping or enabling it for the visually impaired. That is the concept of the final product.