AI-Powered Assistive Aid for the Visually Impaired

By: Katherine H.
Year: 2022
School: Woodbridge High
Grade: 10
Science Teacher: Crystal Cooper

Global studies highlight a pressing issue: an estimated 253 million people worldwide live with visual impairment, a number projected to triple by 2050. Traditional aids such as Braille, guide dogs, and white canes, though long indispensable, lack versatility and adaptability. In response to this challenge, Katherine developed a groundbreaking prototype that combines AI object detection, text recognition algorithms, and ultrasonic ranging into an effective vision aid.

Katherine’s vision aid prototype was built around a Raspberry Pi 4 single-board computer equipped with a camera, an ultrasonic sensor, headphones, and other peripherals. At its core, the device runs object detection and text recognition models through modified open-source Python scripts, allowing it to recognize objects and printed text in the camera’s live video feed, along the lines of the sketch below.
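The write-up does not name the specific model or scripts Katherine modified, but a typical detection loop on a Raspberry Pi pairs OpenCV camera capture with a quantized TensorFlow Lite SSD model. The model file name and output ordering below are assumptions for illustration, not her exact implementation.

    import cv2
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="detect.tflite")  # hypothetical model file
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    _, height, width, _ = input_details[0]["shape"]

    cap = cv2.VideoCapture(0)  # Pi camera exposed as /dev/video0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Detection models are trained on RGB; OpenCV captures BGR.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        resized = cv2.resize(rgb, (width, height))
        interpreter.set_tensor(input_details[0]["index"], np.expand_dims(resized, 0))
        interpreter.invoke()
        # Typical SSD output ordering: boxes, class indices, confidence scores.
        boxes = interpreter.get_tensor(output_details[0]["index"])[0]
        classes = interpreter.get_tensor(output_details[1]["index"])[0]
        scores = interpreter.get_tensor(output_details[2]["index"])[0]
        for box, cls, score in zip(boxes, classes, scores):
            if score > 0.5:
                print(f"class {int(cls)} at {box} ({score:.0%})")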

Beyond recognizing objects and text, the modified code incorporates text-to-speech synthesis, so all output reaches the user as audio. Through the headphones, users hear the type and location of each detected object, along with recognized words and sentences, in real time.
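As a minimal sketch of the text-reading path, the snippet below pairs Tesseract OCR (via pytesseract) with the pyttsx3 speech library, which can drive eSpeak on a Raspberry Pi. Both library choices and the snapshot file name are assumptions; the write-up does not identify the engines behind Katherine’s scripts.

    import cv2
    import pytesseract
    import pyttsx3

    engine = pyttsx3.init()

    frame = cv2.imread("snapshot.jpg")  # hypothetical frame grabbed from the video feed
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # OCR is more reliable on grayscale
    text = pytesseract.image_to_string(gray).strip()
    if text:
        engine.say(text)  # speak the recognized words through the headphones
        engine.runAndWait()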

Katherine extended the prototype by integrating an ultrasonic sensor, which determines the distance to an object by timing the gap between emitting a sound pulse and receiving its echo. Because the pulse travels to the object and back, the one-way distance is half the round-trip time multiplied by the speed of sound (about 343 m/s in air); a 34 ms round trip, for example, corresponds to roughly 5.8 m. This feature lets the device tell the user how far away each detected object is.
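The sensor model and wiring are not given in the write-up; the sketch below assumes an HC-SR04-style sensor with its trigger on GPIO 23 and echo on GPIO 24, read through the RPi.GPIO library.

    import time
    import RPi.GPIO as GPIO

    TRIG, ECHO = 23, 24  # hypothetical pin assignments
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TRIG, GPIO.OUT)
    GPIO.setup(ECHO, GPIO.IN)

    def measure_cm():
        # Send a 10-microsecond trigger pulse to start a measurement.
        GPIO.output(TRIG, True)
        time.sleep(0.00001)
        GPIO.output(TRIG, False)
        # The echo pin stays high for the duration of the round trip.
        start = end = time.time()
        while GPIO.input(ECHO) == 0:
            start = time.time()
        while GPIO.input(ECHO) == 1:
            end = time.time()
        # Sound travels about 34300 cm/s; halve for the one-way distance.
        return (end - start) * 34300 / 2

    print(f"{measure_cm():.1f} cm")
    GPIO.cleanup()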

During rigorous testing, the prototype showed impressive performance. It recognized 15 sample words in an average of 2.5 seconds, read 15 sample sentences with 100% accuracy in an average of 6.4 seconds, and detected 24 of 30 test objects (80%) in an average of 2.9 seconds. Notably, it detected large objects such as cars from up to 584 cm away, demonstrating both responsiveness and accuracy.

Overall, the prototype’s real-time speech feedback achieved 80% accuracy, exceeding the 75% design criterion, with an average response time of 2.86 seconds. By verbalizing object names, distances, and relative positions, the device provides quick, accurate assistance, proving its potential as an advanced vision aid.
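To illustrate how such a spoken sentence might be assembled from the detection and ranging outputs, here is a hedged sketch; the phrasing and the left/ahead/right split are illustrative choices, not Katherine’s exact output format.

    def describe(label, box, distance_cm, frame_width):
        # box is (ymin, xmin, ymax, xmax) in normalized [0, 1] coordinates,
        # as returned by the detection sketch above.
        center_x = (box[1] + box[3]) / 2 * frame_width
        if center_x < frame_width / 3:
            side = "to your left"
        elif center_x > 2 * frame_width / 3:
            side = "to your right"
        else:
            side = "ahead"
        return f"{label}, {distance_cm / 100:.1f} meters, {side}"

    # Example: a car detected on the right of a 640-pixel-wide frame, 584 cm away.
    print(describe("car", (0.2, 0.6, 0.8, 0.9), 584, 640))
    # -> car, 5.8 meters, to your right

A sentence built this way could then be handed to the same text-to-speech engine shown earlier, closing the loop from camera and sensor input to audio guidance.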