Social Robots and modeling children’s pronunciation
Today, we get to tell you about one of the most cutting-edge use cases of the Speechace API. There is something magical about encountering applications of your technology that you never imagined, and this one has a special place in our hearts.
Last year, a group of researchers from MIT Media Lab reached out requesting access to the Speechace API. When we listened to the use case we were blown away by the ambitiousness of the end-goal and the multiple disciplines that had to come together to make this work.
The main concept is deceptively simple: a robot sits across from a child, playing a vocabulary game on a shared tablet. The tablet displays an image, and each player, child or robot, races to “ring in” and say the word. A player wins a point for saying it correctly, and after multiple rounds the player with the most points wins. The learning objective is to let the child demonstrate words they already know and to teach them new ones; the competitive nature of the game is designed to keep the child engaged.
Behind the scenes, however, the robot records and evaluates the child’s speech (using the Speechace API) and continuously infers and updates an internal student model. That model drives the robot’s decision of whether or not to ring in, and the choice of words progressively introduced in the game. The whole experience is carefully orchestrated to meet the learning objective.
The end result is a game driven entirely by the student model and the corresponding active learning procedure. The aim is to help the child demonstrate existing knowledge and learn new words while keeping the game interesting and appropriately challenging.
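To make the idea concrete, here is a minimal Python sketch of how a student model could drive both the robot’s ring-in decisions and the choice of the next word. Every name, threshold, and update rule here is our own illustrative assumption, not the method from the paper.

```python
class StudentModel:
    """Tracks an estimate of the child's mastery of each word (hypothetical sketch)."""

    def __init__(self, words, prior=0.5, rate=0.3):
        self.mastery = {w: prior for w in words}  # 0..1 mastery estimate per word
        self.rate = rate                          # learning rate for updates

    def update(self, word, score):
        # Nudge the mastery estimate toward the observed pronunciation
        # score (e.g. a 0..1 value returned by a scoring API).
        p = self.mastery[word]
        self.mastery[word] = p + self.rate * (score - p)

    def next_word(self):
        # Active-learning-style choice: pick the word whose mastery
        # estimate is closest to 0.5, i.e. the most uncertain one.
        return min(self.mastery, key=lambda w: abs(self.mastery[w] - 0.5))

    def robot_should_ring_in(self, word):
        # Let the child answer words they likely know; the robot rings in
        # (modeling the word) when estimated mastery is low. The 0.4
        # threshold is arbitrary.
        return self.mastery[word] < 0.4


model = StudentModel(["apple", "banana", "giraffe"])
model.update("giraffe", 0.1)            # poor pronunciation observed
model.update("apple", 0.9)              # good pronunciation observed
print(model.next_word())                # → banana (most uncertain word)
print(model.robot_should_ring_in("giraffe"))  # → True
```

In a real system, the update rule would be replaced by whatever inference the student model actually uses; the point is only that observations feed the model, and the model feeds both game decisions.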
This week the paper was presented at AAMAS 2018 (the International Conference on Autonomous Agents and Multiagent Systems) and made public. Section 5 of the paper contains an independent evaluation of Speechace against ground-truth human ratings, in which Speechace achieved a favorable area under the curve (AUC) of 0.81 on a binary classification task of children’s pronunciation.
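For readers unfamiliar with the metric: AUC is the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, so 0.81 means the scorer ranks a correct pronunciation above an incorrect one about 81% of the time. The short sketch below computes AUC from scratch on made-up data (the labels and scores are purely illustrative, not from the paper).

```python
def auc(labels, scores):
    """Pairwise-ranking AUC: fraction of (positive, negative) pairs where
    the positive example is scored higher; ties count as half."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up example: 1 = humans rated the pronunciation correct.
labels = [1, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2]
print(auc(labels, scores))  # ≈ 0.83 (5 of 6 pairs ranked correctly)
```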
We would like to congratulate Sam, Huili, Safinah, Michael, and Cynthia on the publication of the paper, and to thank them for using Speechace and for the endorsement and acknowledgment given to it in their paper. We look forward to continuing to support their efforts and their future use of the Speechace API.
The paper is available at http://ifaamas.org/Proceedings/aamas2018/pdfs/p1658.pdf
The Speechace Team