A research team at Carnegie Mellon University has developed a technology that lets smart speakers recognize when someone is speaking to them, without the need for wake words such as "OK Google" or "Alexa".
While the voice assistant on a smart speaker is convenient, you have to say the wake word every time. Google has been testing a way to operate the Nest Hub without a wake word by using its ultrasonic sensor, but that approach requires the sensor and only works when the user is close to the device.
The Carnegie Mellon team focused on the direction of sound as a way to ease this inconvenience. They built a machine learning model that uses the frequency content of speech to recognize whether a voice is aimed directly at the smart speaker or has arrived indirectly, for example after reflecting off a wall.
With this approach, no wake word is needed: the speaker itself can determine whether an utterance is a command meant for it. The method is reportedly lightweight enough to run on the device alone, without sending the audio to the cloud for analysis each time.
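The article does not include the team's implementation, but the underlying idea can be illustrated with a minimal sketch: speech aimed directly at a microphone retains more high-frequency energy than speech that reaches it after reflecting off surfaces, so a small spectral feature vector fed to a lightweight classifier can guess whether an utterance was directed at the device. The feature choices, thresholds, and training data below are assumptions for illustration, not the researchers' actual method.

```python
# Hedged sketch: classify "directed at device" vs. "directed elsewhere"
# from simple spectral-balance features. Not the CMU team's code.
import numpy as np
from scipy.signal import stft
from sklearn.linear_model import LogisticRegression

SAMPLE_RATE = 16_000  # assumed microphone sample rate


def direction_features(audio: np.ndarray, sr: int = SAMPLE_RATE) -> np.ndarray:
    """Return a tiny feature vector describing the spectral balance of a clip."""
    _, _, spec = stft(audio, fs=sr, nperseg=512)
    power = np.abs(spec) ** 2
    freqs = np.fft.rfftfreq(512, d=1 / sr)
    low = power[freqs < 1_000].sum()          # low-frequency energy
    high = power[freqs >= 4_000].sum()        # high-frequency energy
    total = power.sum() + 1e-12
    centroid = (freqs[:, None] * power).sum() / total  # spectral centroid
    return np.array([high / (low + 1e-12), centroid])


def train_classifier(clips: list, labels: list) -> LogisticRegression:
    """Fit a small classifier on clips labelled 1 (toward device) or 0 (away)."""
    X = np.stack([direction_features(c) for c in clips])
    model = LogisticRegression()
    model.fit(X, labels)
    return model


def is_directed_at_device(model: LogisticRegression, clip: np.ndarray) -> bool:
    """Predict whether a new clip was spoken toward the device."""
    return bool(model.predict(direction_features(clip).reshape(1, -1))[0])
```

Because the features are just a couple of numbers per utterance and the classifier is linear, this kind of pipeline can plausibly run on-device without cloud processing, which matches the lightweight claim above.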
The system currently reaches about 90% accuracy, but there are practical drawbacks. Commands must always be spoken toward the device, which can be awkward while doing something else and feels somewhat unnatural for how smart speakers are typically used. Noisy environments are another challenge: the approach does not work well in settings such as restaurants or parties with many people talking at once.
The technology is not limited to wake-word-free smart speakers. Built into a hearing aid, it could selectively amplify only the voices directed at the wearer. Related information can be found here.