It took months of analyzing, categorizing and converting more than 2 million YouTube sound files to develop See Sound, a smart device that alerts folks with hearing loss to potential dangers inside and outside their homes through a mobile app.
"It's been amazing to discover what we can do as an agency—building the machine learning model, training it, hardware engineering, product design and branding, toxic fine particle and epoxy fume inhalation," says Tim Hawkey, creative chief at FCB Health Network shop Area 23, which devised the product with software startup Wavio.
Hawkey believes See Sound was well worth the effort, as it addresses a demand in the deaf community for technology with a robust audio library, unlike most current offerings that recognize just a few sounds. See Sound responds to 75 cues in all, including breaking glass, beeping microwaves, loud thuds, police sirens and gunfire. It's no stretch to say that in some cases, the device could mean the difference between life and death.
Last week, the Innovation Jury at Cannes Lions awarded See Sound its Grand Prix, a distinction that should help generate buzz as the product's creators attempt to secure funds for mass production.
Hawkey chatted with Muse about how the project came together and discussed plans for taking the visionary device to market:
Muse: What inspired See Sound, and how did Area 23 hook up with Wavio?
Tim Hawkey: There is an ACD team at Area 23, Corinne and Kristen. Kristen's husband is deaf, and the creative team worked on an innovation that would solve a lot of problems in their household. Recently, for example, they had to pay to repair significant water damage in their home because Kristen's husband could not hear the water running. The agency team was socially connected to the Wavio team through their ties to the Rochester Institute of Technology. Rochester has one of the largest deaf populations in the country, and RIT is the hotbed for deaf innovation.
Why is this product necessary?
See Sound is a vital necessity to the deaf community for a couple of reasons. First, there is a phenomenon that many take for granted called situational awareness. The sounds around us orient us in our world, and they alert us to danger. But if you're deaf, if you didn't see it, it didn't happen. Ask any deaf person. The CEO of Wavio told me that his friend's young daughter died tragically when no one heard her struggling in the pool. Current products are extremely limited. There are single-sound devices that can recognize a doorbell or a phone ringing, but nothing that can distinguish [among multiple] sounds. Additionally, current products are low-accuracy, expensive and ugly as hell. They all scream "handicap."
How did you hit on using YouTube for the machine-learning process?
The YouTube data set was not our first approach. Originally, we envisioned training our machine-learning model with sounds we would generate, and creating a user-generated campaign to have people "donate" their sounds to the model. We learned the hard way that this was not going to be feasible because we would need literally millions of sound samples to create a model with acceptable accuracy.
It turns out the answer was in the millions and millions of videos that are already on YouTube. Once we had our source of data locked down, the model was trained over the course of several months on the Google/Udacity TensorFlow platform. The more sound clips you add, the smarter the machine-learning model gets. Even though the data was already organized as an open-source platform, it was still incredibly laborious to train our machine-learning models on it. Our model essentially had to be exposed to every single sound in each categorized data set, and that took a lot of time and effort. Ultimately, the product worked much better than we anticipated, and with every rapid iteration we were able to improve performance—from accuracy rating, to speed of response, to number of sounds, to user experience.
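The interview doesn't detail the model itself, but the pipeline Hawkey describes, extracting features from labeled audio clips and training a classifier over many sound categories, can be sketched in miniature. The toy below uses NumPy only; the two class names, the synthetic tones, the band-energy features and the nearest-centroid classifier are all hypothetical stand-ins for the real TensorFlow model and its YouTube-derived training data:

```python
import numpy as np

rng = np.random.default_rng(0)

def tone(freq_hz, n=2048, sr=8000):
    # Synthetic stand-in for a labeled audio clip: a noisy sine wave.
    t = np.arange(n) / sr
    return np.sin(2 * np.pi * freq_hz * t) + 0.05 * rng.standard_normal(n)

def band_features(wave, frame=256, n_bands=8):
    # Frame the waveform, average the frames' magnitude spectra, then pool
    # the spectrum into coarse frequency bands (a crude cousin of the
    # mel-spectrogram features audio classifiers typically use).
    frames = np.array([wave[i:i + frame]
                       for i in range(0, len(wave) - frame + 1, frame // 2)])
    spectrum = np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)
    return np.log1p([band.sum() for band in np.array_split(spectrum, n_bands)])

# Hypothetical training set: two of See Sound's cues, five "clips" each.
train = {
    "glass_break":    [band_features(tone(3100)) for _ in range(5)],
    "microwave_beep": [band_features(tone(800)) for _ in range(5)],
}
centroids = {label: np.mean(feats, axis=0) for label, feats in train.items()}

def classify(wave):
    # Nearest-centroid classifier: return the label whose average feature
    # vector is closest to this clip's features.
    f = band_features(wave)
    return min(centroids, key=lambda label: np.linalg.norm(centroids[label] - f))

print(classify(tone(3200)))  # a clip near 3.1 kHz lands on "glass_break"
```

The real system replaces each piece with something far heavier: a deep network instead of centroids, learned spectrogram features instead of band sums, and, per the interview, millions of categorized YouTube clips across 75 sound classes instead of synthetic tones.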
Are you still loading files to the database?
We will always be updating our model with additional sounds and improving the accuracy of the sounds that are already on the platform.
What was your biggest challenge during development?
The toughest challenge we have is one that we have yet to overcome. This product is not yet commercially available. We're ready to go into manufacturing. But to manufacture at scale, we're looking for a $4 million investment from VCs and other sources. To be honest, it has been tough for Wavio, a group of four deaf men with no prior products, to get a serious VC meeting in Silicon Valley. But even getting on the Innovation shortlist has changed things and opened doors. And winning the Innovation Grand Prix will change these guys' lives forever. We already have investor meetings set up for next week.
Any idea on the price point?
We are trying to keep the retail price per See Sound unit around $99, so users can afford to have four or five units per home. We already have deep relationships and endorsements from the National Association of the Deaf. We will also be able to activate the deaf community on day one. A number of distributors have sent us letters of intent to sell the product as soon as we have it in hand. Even Amazon is asking why See Sound is not on Amazon already.
But here's the real kicker: The See Sound device will ultimately be fully covered for deaf customers under the Americans with Disabilities Act. So, our ability to activate this reimbursement strategy will push the product into the stratosphere and into every deaf home in America.