By: Akshata T.
Year: 2022
School: Aliso Niguel High School
Grade: 11
Science Teacher: Robert Jansen
Project “Anja” (Sanskrit for “the third eye”) is meant to help visually impaired people recognize day-to-day objects. The student used a Raspberry Pi, a Pi Camera, an external speaker, and Python code to make four capabilities (visual recognition, artificial intelligence, text-to-speech, and speech-to-action) work together.
The Raspberry Pi camera identifies objects, and a speaker reads out each object’s name. The student tested the device with everyday objects (e.g., an orange, a donut, a keyboard, a clock, a cellphone) and used a pre-trained object library for the current phase of the project. Image processing was done with the OpenCV computer vision library, and Pico served as the text-to-speech converter. To keep the device portable and avoid draining the battery, the student configured a wake-up word that activates the object recognition sequence. In the next phase, the student plans to integrate TensorFlow-based machine learning to further improve accuracy and expand the device’s capabilities.
In summary, the student exceeded the goal of creating a cost-effective digital assistant for visually challenged individuals. The device, “Anja,” is a portable object recognition device that does not need an internet connection to work. In addition, Anja can be continuously updated to enhance its visual recognition library and become more effective. It can also respond in multiple languages, making it even more useful across the globe. There is always room for improvement, and the student will continue enhancing Anja, including integrating TensorFlow to make it even more effective.