Extracting Meaning from Sound — Computer Scientists and Hearing Scientists Come Together Right Now

Machines that listen to us, hear us, and act on what they hear are becoming common in our homes. So far, however, they are interested only in what we say, not how we say it, where we say it, or what other sounds they hear. Richard Lyon describes where we go from here.


Based on positive experiences of marrying auditory front ends to machine-learning back ends, and watching others do the same, I am optimistic that we will see an explosion of sound-understanding applications in coming years. At the same time, however, I see too many half-baked attempts that ignore important properties of sound and hearing, and that expect the machine learning to make up for poor front ends. This is one . . . → Read More: Extracting Meaning from Sound — Computer Scientists and Hearing Scientists Come Together Right Now
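As a rough illustration of the front-end-plus-back-end pattern Lyon describes (not his method or any particular system), here is a minimal Python sketch: a log-mel spectrogram stands in for an auditory front end, and a plain logistic-regression classifier stands in for the machine-learning back end. The sample rate, the synthetic clips, and the library choices (librosa, scikit-learn) are all assumptions made for illustration.

# A minimal sketch of the "auditory front end + machine-learning back end"
# pattern: a log-mel spectrogram (a crude stand-in for a richer auditory
# model) feeds a generic classifier. The data and labels are synthetic.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

SR = 16000  # sample rate in Hz (assumption for this sketch)

def front_end(waveform, sr=SR):
    """Log-mel spectrogram, averaged over time, as a fixed-length feature vector."""
    mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)
    return log_mel.mean(axis=1)  # one 64-dimensional summary per clip

# Synthetic one-second "clips": white noise vs. a 440 Hz tone,
# standing in for two classes of sound to be recognized.
rng = np.random.default_rng(0)
clips, labels = [], []
for _ in range(40):
    noise = rng.standard_normal(SR).astype(np.float32)
    tone = np.sin(2 * np.pi * 440 * np.arange(SR) / SR).astype(np.float32)
    clips += [noise, tone]
    labels += [0, 1]

X = np.stack([front_end(c) for c in clips])
y = np.array(labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)  # the "back end"
print("training accuracy:", clf.score(X, y))

The point of the split is that a richer auditory model can be dropped in exactly where front_end() sits, without touching the learning machinery behind it.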

Machine learning helps computers predict near-synonyms

Choosing the best word or phrase for a given context from among candidate near-synonyms, such as “slim” and “skinny”, is something that human writers, with some experience, do naturally; for computers, however, a choice at this level of granularity is a difficult selection problem.

Researchers from Macquarie University in Australia have published an article in the journal Natural Language Engineering, investigating whether machine learning could be used to re-predict a human author’s particular choice among near-synonyms – a task known as the lexical gap problem.

They used a supervised machine-learning approach in which the weights of different document features are learned computationally. Using this approach, the computers were able to predict . . . → Read More: Machine learning helps computers predict near-synonyms
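To make that setup concrete, here is a minimal sketch of supervised near-synonym prediction: bag-of-words features of the surrounding context are given learned weights, and the model predicts which candidate the author chose. The tiny training set, the feature representation, and the classifier below are illustrative assumptions, not the researchers’ actual data or model.

# A minimal sketch: represent the context around a blanked-out near-synonym
# as bag-of-words features, learn per-feature weights, and predict which
# candidate ("slim" vs. "skinny") the author used. The training examples
# are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (context with the target word blanked out, word the author actually used)
train = [
    ("the laptop has a ___ aluminium profile", "slim"),
    ("she slipped the ___ folder into her bag", "slim"),
    ("the stray cat looked ___ and underfed", "skinny"),
    ("he wore ___ jeans to the concert", "skinny"),
]
contexts = [c for c, _ in train]
choices = [w for _, w in train]

model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(contexts, choices)

print(model.predict(["a ___ stray dog wandered past"]))  # likely "skinny"
print(model.predict(["a ___ aluminium phone case"]))     # likely "slim"

In the toy example the weights simply pick up which context words co-occur with which choice; the published work learns weights over richer document features, but the supervised recipe is the same.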