A blind student at the University of Michigan named Yousef asked my friend Alex for help getting to his class several months ago, when Senator Tim Kaine came to speak on campus. There were tents and vehicles all around the Diag, our main thoroughfare between classes, and Yousef couldn't navigate the changed environment without bumping into obstacles. His cane was not a perfect tool.
He said new environments were almost impossible for him to navigate without getting lost or banging himself up, even with his white cane. But in an age of autonomous cars and 3D sound, shouldn't blind people be able to use technology to detect the objects around them?
What it does
We use a Kinect to build a realtime 3D map of the immediate environment and create a sound map of the objects in it. Based on where objects such as walls and people are located in relation to you, we ping each object with a sound in your 3D audio environment that reflects its proximity and direction. This lets you track multiple objects around you and successfully navigate from point A to point B without sight.
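To make the proximity-and-direction idea concrete, here is a minimal sketch in Python (the project itself was built in C# for Unity) of how an object's position relative to the wearer might be mapped to a ping's direction and loudness. The function name, the 4-meter cutoff, and the linear volume falloff are illustrative assumptions, not the tuned values from the project.

```python
import math

def ping_parameters(obj_x, obj_z, max_range=4.0):
    """Map an object's position (meters, wearer at origin, +z forward)
    to a distance, an azimuth, and a ping volume.

    Hypothetical sketch: the real project tuned these mappings in Unity.
    """
    distance = math.hypot(obj_x, obj_z)
    # 0 degrees = straight ahead, positive = to the wearer's right.
    azimuth = math.degrees(math.atan2(obj_x, obj_z))
    # Closer objects ping louder; beyond max_range they fall silent.
    volume = max(0.0, 1.0 - distance / max_range)
    return distance, azimuth, volume

# An object 1 m ahead and 1 m to the right:
d, az, vol = ping_parameters(1.0, 1.0)
```

In practice the azimuth would drive stereo or spatialized placement of the ping, and the volume (or ping rate) would signal urgency as an obstacle gets closer.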
How it was built
We used the Microsoft Kinect to build a realtime map of the objects in front of the wearer. The Kinect's IR sensor gave us the depth of everything in the scene, and we fed those readings into our algorithm to determine where potential obstacles were. We then passed the resulting coordinates into Unity, which assigns each object a sound and renders the 3D audio the wearer uses to hear where obstacles are and avoid them.
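As an illustration of the obstacle-detection step, here is a simplified Python sketch (the real pipeline was in C#) that scans a Kinect-style depth frame for columns containing enough close pixels to count as an obstacle. The 1.5 m threshold, the minimum pixel count, and the column-wise grouping are hypothetical simplifications, not the project's actual algorithm.

```python
import numpy as np

def find_obstacles(depth_mm, threshold_mm=1500, min_pixels=4):
    """Flag columns of a depth frame that contain a nearby obstacle.

    depth_mm: 2D array of per-pixel depths in millimeters (0 = no reading,
    as the Kinect reports for invalid pixels).
    Returns a list of (column_index, nearest_depth_mm) tuples.
    """
    obstacles = []
    for col in range(depth_mm.shape[1]):
        column = depth_mm[:, col]
        # Keep valid readings that are closer than the threshold.
        near = column[(column > 0) & (column < threshold_mm)]
        if near.size >= min_pixels:  # enough close pixels to trust it
            obstacles.append((col, int(near.min())))
    return obstacles

# Toy 6x4 frame: a far wall at 3 m, with an obstacle in column 1 at ~0.8 m.
frame = np.full((6, 4), 3000)
frame[1:5, 1] = 800
nearby = find_obstacles(frame)
```

Each flagged column would then be converted to a 3D coordinate and handed to the audio stage to be voiced as a ping.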
Challenges I ran into
The Kinect produces a fairly rough 3D map, and porting it into Unity was not an easy task; it took most of our time. Mapping the 3D sound onto the Kinect's readings also didn't work initially, because we couldn't get the data into Unity and were trying to do it manually, without much luck. Eventually we got the pipeline working and the 3D sound map up and running, making the project finally come together.
Accomplishments that I’m proud of
We made something that can help real people, like Yousef! We also overcame a bunch of obstacles involving incompatible systems and new languages, and we learned sound design along the way.
What I learned
I learned enough sound design to make something you can bear hearing for extended periods of time. I learned Unity, C#, and Visual Studio, and basically converted from Apple to Microsoft for a while in order to execute on this empowering idea.
What’s next for Sound Sense
I would like to add a couple more features to make it even more useful for the visually impaired community. The first is taking a picture and sending it to DeepMind to tell the user what is in front of them. The second is maps integration, so users always know the direction they should be heading.
Try it out
Personal Highlights and Accomplishments
For this project, I am especially proud of several accomplishments:
- I conducted a literature search on binaural beats and on how to programmatically control a sound source for 3-D sound.
- I used C# to integrate information from the Kinect's camera and infrared sensors to create a depth map.
- I designed and implemented algorithms to convert the depth map into a 3-D sound map for navigation.
- I performed user testing with volunteers and measured product performance qualitatively and quantitatively.
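The level-difference cue behind placing a sound source in 3-D can be sketched with a standard constant-power panning law, shown here in Python purely for illustration; in the project itself, Unity's audio engine performed the actual spatialization.

```python
import math

def constant_power_pan(azimuth_deg):
    """Left/right channel gains for a source at the given azimuth,
    where -90 = hard left, 0 = center, +90 = hard right.

    Constant-power panning keeps the total perceived energy
    (left^2 + right^2) constant as the source moves, which is why
    it is a common panning law in audio engines.
    """
    # Map azimuth [-90, 90] degrees onto a pan angle [0, 90] degrees.
    theta = math.radians((azimuth_deg + 90.0) / 2.0)
    left = math.cos(theta)
    right = math.sin(theta)
    return left, right
```

A centered source gets equal gains of about 0.707 in each ear, while a source at -90 degrees plays entirely in the left channel; distance cues such as volume and ping rate would be layered on top of this.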