Voice Forensics in Artificial Intelligence – Who Knew?

We’ve all heard various things about AI, or artificial intelligence. My own picture was that AI was being developed from the information collected through search engines and then social media, and my position has always been that the big breakthroughs in AI would come from companies like Google.

In fact, the personal data Google has accumulated over the past 20 years allows it to make intuitive decisions, to a very high degree of accuracy, about the correct response to just about any query. And why not? We ask Google so many questions every day, and the search giant analyzes our responses to its offerings. All in the name of marketing?

Well, as enlightened about AI as I may have thought I was through occupational exposure, I never considered data capture for AI in any other form. Clearly, I erred in my ignorance.

It turns out that an even bigger collection of human data is being gathered and used in artificial intelligence.

Voice forensic data offers far more usable information about humans. Voice forensics is apparently so powerful that, according to Rita Singh of Carnegie Mellon, the voice alone can be analyzed to produce an accurate 3D rendering of the speaker’s face. Further, voice forensics will soon be employed in health diagnostics and is already being used by some police departments to root out criminality.

To give you an idea of just how big the potential data sets could be: it is estimated that over 700 centuries of speech are spoken every day over cellphones alone. When you add in online communications like podcasts, YouTube video uploads and Skype, the numbers start to double.
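To put that figure in perspective, here is a back-of-the-envelope conversion of 700 centuries of daily speech into raw storage terms. The 12 kbit/s codec bitrate is my assumption (typical narrowband mobile voice codecs run in roughly that range); everything else is plain unit conversion:

```python
# Rough scale of "700 centuries of speech per day" (the figure cited above).
CENTURIES_PER_DAY = 700
HOURS_PER_YEAR = 365.25 * 24
BITRATE_BPS = 12_000  # assumed voice-codec bitrate, bits per second

hours_of_speech = CENTURIES_PER_DAY * 100 * HOURS_PER_YEAR
seconds_of_speech = hours_of_speech * 3600
petabytes_per_day = seconds_of_speech * BITRATE_BPS / 8 / 1e15

print(f"{hours_of_speech:,.0f} hours of speech per day")
print(f"~{petabytes_per_day:.1f} PB/day at {BITRATE_BPS // 1000} kbit/s")
```

That works out to over 600 million hours of speech per day, yet compressed at a typical phone-call bitrate it is only a few petabytes — a volume well within the reach of a large data-center operator, which is what makes the retention question below worth asking.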

The question isn’t “are they listening to our conversations?” We know that they are; they arrogantly told us so. No, the question is: are they retaining our conversations?

http://mlsp.cs.cmu.edu/people/rsingh/index.php