BlindSight uses cutting-edge artificial intelligence to see the world as humans do. Twin neural networks mimic the brain's visual processing and language production centres, converting images from your phone's camera into spoken words and helping blind and partially sighted people navigate the world around them. No internet connection is required: BlindSight's AI runs entirely on your phone's hardware.
Developed using Google's TensorFlow and based on the model detailed in:
"Show and Tell: Lessons learned from the 2015 MSCOCO Image Captioning Challenge."
Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan.
IEEE Transactions on Pattern Analysis and Machine Intelligence (2016).
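For the curious, the model pairs an image-understanding network with a sentence-generating network: the first compresses a photo into a feature vector, and the second turns that vector into words, one at a time. The toy sketch below illustrates only that data flow; the weights, vocabulary, and update rule are invented placeholders, not the app's trained CNN+LSTM model.

```python
import numpy as np

# Toy stand-in for the two-network pipeline: an "encoder" maps an image to a
# feature vector, and a "decoder" greedily emits one word at a time from it.
# All parameters here are random placeholders, not trained weights.

rng = np.random.default_rng(seed=42)
VOCAB = ["<end>", "a", "dog", "on", "the", "grass"]
FEATURE_DIM = 8

W_ENC = rng.standard_normal((16, FEATURE_DIM))          # fake encoder weights
W_DEC = rng.standard_normal((FEATURE_DIM, len(VOCAB)))  # fake decoder weights

def encode(image):
    """Stand-in for the CNN encoder: image -> fixed-size feature vector."""
    return np.tanh(image.flatten() @ W_ENC)

def decode(features, max_words=6):
    """Stand-in for the LSTM decoder: greedily pick the next word until <end>."""
    words, state = [], features
    for _ in range(max_words):
        idx = int(np.argmax(state @ W_DEC))
        if VOCAB[idx] == "<end>":
            break
        words.append(VOCAB[idx])
        state = np.tanh(state + W_DEC[:, idx])  # crude recurrent state update
    return " ".join(words)

caption = decode(encode(rng.standard_normal((4, 4))))
```

In the real model the decoder is a trained LSTM conditioned on CNN features, and word choice uses a learned softmax rather than this crude projection; the sketch only shows the image-to-feature-to-words shape of the computation.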
Disclaimer: The technology behind BlindSight is new and remains experimental. The app is not a medical device; it will often be wrong and should not be used in place of proper assistive devices.