Can you please tell us what algorithm you used for this? Right now we are working on stereo vision to get exact data.
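For context, the vOICe's published image-to-sound mapping scans the image column by column from left to right over about a second, with vertical position mapped to pitch (higher rows sound higher) and pixel brightness mapped to loudness. The sketch below is a rough illustration of that idea only; the function name, frequency range, and timing parameters are my own assumptions, not the app's actual implementation.

```python
import numpy as np

def image_to_soundscape(img, duration=1.05, sr=22050, f_lo=500.0, f_hi=5000.0):
    """Rough vOICe-style mapping: scan columns left to right over
    `duration` seconds; each row drives one sine whose frequency
    encodes height (top = high pitch) and whose amplitude encodes
    pixel brightness. All parameter values are illustrative."""
    img = np.asarray(img, dtype=float)
    rows, cols = img.shape
    # Exponentially spaced frequencies; row 0 (top) gets the highest pitch.
    freqs = f_hi * (f_lo / f_hi) ** (np.arange(rows) / (rows - 1))
    samples_per_col = int(duration * sr / cols)
    out = np.zeros(samples_per_col * cols)
    t0 = 0
    for c in range(cols):
        t = (t0 + np.arange(samples_per_col)) / sr          # time axis for this column
        amps = img[:, c][:, None]                           # brightness = amplitude
        out[t0:t0 + samples_per_col] = (amps * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        t0 += samples_per_col
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out                  # normalize to [-1, 1]
```

A stereo-vision pipeline could feed a depth map into the same kind of mapping (e.g. nearer = louder), but that is a design choice, not something the comment above confirms.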
I have been a user of the vOICe for several years. It is the only way to get vision without surgery. It has allowed me to see what I cannot touch, such as the interaction of light and shadows on water, or a rising geyser in Iceland. I can also do other things like read the address of my house when I approach it, see where I am going, and navigate a busy office. Many people have asked me how long the vOICe takes to learn. The answer is that I am still learning. You can get usable results in about 15 minutes of training. I do not mean that you will begin to interpret complex soundscapes, but you can learn to track the changes in sounds that indicate your environment has changed, and if you use lights, you can track whether they are on or off. That is handy if you are living with sighted people, and also helps reduce your electricity bill. The built-in OCR is an immediate win, which is how I read my house number.
Simply amazing. I'm not blind, but I can navigate a bit with it on and my eyes closed. Just a great example of where technology is taking us.
v2.15: Latest OCR result, barcode result and spoken photo label automatically copied to system clipboard. Contrast refinement. Continuous hi-res OCR option. Barcode detection.
v2.14: Experimental speech recognition labeling after taking a photograph (headphones required) => labels logged in the /vOICe/photolog.txt file.
v2.13: Added support for Eye-D Pro app. Added support for sharing images with The vOICe.